Tonality_classicalDB/ClassicalDB_musicbrainz_metadata.ipynb
###Markdown
Tonality classicalDB musicbrainz metadata
The audio files in the Tonality classicalDB dataset are proprietary, so we are going to upload MusicBrainz metadata for each audio file to Zenodo instead. The chromaprint is computed with the Essentia library.
###Code
!pip install git+https://github.com/MTG/mirdata.git@Pedro/classicalDB
!pip install librosa
# essentia and pyacoustid provide the essentia.standard and acoustid modules imported below
!pip install essentia
!pip install pyacoustid
import mirdata
classicalDB = mirdata.Dataset('tonality_classicalDB')
classicalDB.download()
from google.colab import drive
drive.mount('/content/gdrive')
!cp -r "/content/gdrive/MyDrive/classicalDB" /root/mir_datasets/classicalDB
# classicalDB.validate()
import json
import os
import numpy as np
import sys
import essentia.standard as estd
import acoustid
os.system("mkdir /root/mir_datasets/classicalDB/musicbrainz_metadata")
def save_json(dictionary, name_file):
with open(name_file, 'w') as fp:
json.dump(dictionary, fp, sort_keys=True, indent=4)
for k, track in classicalDB.load_tracks().items():
print("Computing " + track.title + "....", end='')
audio = estd.MonoLoader(filename=track.audio_path, sampleRate=44100)()
client = 'xxxxxx' # This is not a valid key. Use your key.
request = acoustid.match(client, track.audio_path)
similar_mbids = []
for score, recording_id, title, artist in acoustid.match(client, track.audio_path):
similar_mbids.append({'score': score, 'mbid': recording_id, 'title': title, 'artist': artist})
results = {
'chromaprint': estd.Chromaprinter()(audio),
'similar_mbids': similar_mbids
}
save_json(results, os.path.splitext(track.audio_path.replace('/audio/', '/musicbrainz_metadata/'))[0] + '.json')
print("OK")
###Output
Computing 01-Allegro__Gloria_in_excelsis_Deo_in_D_Major - D....OK
Computing 01-BWV_565_Toccata___Toccata_D_minor - Dm....OK
Computing 01-Bach_BWV525_GMajor - G....OK
Computing 01-Brahms_Violin_Concerto_in_D_Major - D....OK
Computing 01-Chopin_Piano_Concerto_No._1_in_E_minor_I__Allegro_maestoso_risoluto - Em....OK
Computing 01-Concerto_en_La_Majeur_pour_deux_Violons_A_Scordatura_et_basse_continue._Affetuoso - A....OK
Computing 01-Concerto_for_violin_and_orchestra_in_A_minor_BWV_1041 - Am....OK
Computing 01-Concerto_n.4_G_Major_BWV_1049___1_Allegro - G major - G....OK
Computing 01-Concierto_n_1_F_Major-_1_Allegro - F major - F....OK
Computing 01-F_major_op._18_no._1_-_Allegro_con_brio - F....OK
Computing 01-I._String_Quartet_No.16_KV_428_Eb_major__Allegro_ma_non_troppo - Eb....OK
Computing 01-I._String_Quartet_No.20_KV_499_D_Major__Allegretto - D....OK
Computing 01-I._String_Quintet_No.22_KV_589_Bb_Major__Allegro - Bb....OK
Computing 01-I._String_Quintet_No._5_Larghetto-Allegro_-_D_Major_KV_593 - D....OK
Computing 01-Ludwig_Van_Beethoven_-_Symphony_N_9_In_Re_Minor_Op_125-allegro_ma_non_troppo - Dm....OK
Computing 01-Mozart__KV_448_Allegro_con_spiritu_D_Major - D....OK
Computing 01-No._1_b-moll_op._9_No._1___Larghetto - Bbm....OK
Computing 01-No._3_in_C_major_op._59 - C....OK
Computing 01-Op_132_(A_min)_Assai_sostenuto_-_Allegro - Am....OK
Computing 01-Peter_Ilyich_Tchaikovsky___Piano_Concerto_No_1_in_B_flat_minor_Op_23_-_I._Allegro_non_troppo_e_molto_maestoso - Bbm....OK
Computing 01-Prelude_-_Suite_Nr._1_G-major_BWV.1007 - G....OK
Computing 01-Prelude_and_Fugue_No._13_F-sharp_major_BWV_882_-_1._Praeludium - F#....OK
Computing 01-Prelude_and_Fugue_No._1_in_C_major_BWV_870_-_1._Praeludium - C....OK
Computing 01-Quartet__E-flat_major_Op._127__I._Maestoso_-_Allegro - Eb....OK
Computing 01-Quartet_in_B-flat_major_Op._130__I.__Adagio_ma_non_troppo_-_Allegro - Bb....OK
Computing 01-Quartet_in_F-major_(Hess_34)_-_Transcription_of_Piano_Sonata_Op_14._No.1_-_1 - F....OK
Computing 01-Quartett_c-moll_op_18_No._4_in_C_minor_en_ut_mineur_1 - Cm....OK
Computing 01-Requiem_Mass_in_D_minor._K.626-Introitus_-_Requiem_aeternam - Dm....OK
Computing 01-Sonata_for_Piano__Cello_in_A_Major_Op._69_1._Allegro_ma_non_tanto - A....OK
Computing 01-Sonate_B_-Dur_RV_47 - Bb....OK
Computing 01-Sonate_B_Dur_RV46 - Bb....OK
Computing 01-Spring_-_Concerto_#1_in_E_major_-_1_-_Allegro - E....OK
Computing 01-String_Quartet_No.14_in_G_major_K._387_-_I._Allegro_vivace_assai - G....OK
Computing 01-String_Quartet_No._10_in_C_major_K._170_-_I._Andante - C....OK
Computing 01-String_Quartet_No.__1_in_G_major_K._80_(83f)_-_I._Adagio - G....OK
Computing 01-String_Quartet_No.__6_in_B_flat_major_K._159_-_I._Andante - Bb....OK
Computing 01-String_Quintet_No._1_in_B_flat_KV174_-_I._Allegro - Bb....OK
Computing 01-Suite_#1_in_C_Major-_I__Ouverture - C....OK
Computing 01-Telemann_TWV41C5_CMajor - C....OK
Computing 01-Toccata__Fugue_in_D_minor_BWV_565__Toccata - Dm....OK
Computing 01-Tomaso_Albinoni___Adagio_in_Sol_minore - Gm....OK
Computing 01-Twelve_Variations_in_F_Major_on_the_Mozart_theme__Ein_Madchen_oder_Weibchen__Theme._Allegretto - F....OK
Computing 01-brahms-string_quartet_nr.3_in_b_flat_opus_67_vivace-aaf - Bb....OK
Computing 01-carl_nielsen-symphony_no._1_in_g_minor_op._7_-_i._allegro_orgoglioso - Gm....OK
Computing 01-glenn_gould-invention_1_in_c_major-aaf - C....OK
Computing 01-jacques_loussier_trio--toccata_in_c_major_(toccata) - C....OK
Computing 01-mozart_-_sonata_in_a_k._331-twc - A....OK
Computing 01-nocturnes_op._9_no._1_in_b-flat_minor-ond - Bbm....OK
Computing 01-pachelbel_-_canon_in_d_major-xmas - D....OK
Computing 01-royal_philharmonic_collection-sonata_in_c_minor_kv457 - Cm....OK
Computing 01-seiji_ozawa_-_berliner_philharmoniker_-_symphony_no_1_in_d_major_op_25_-_1__allegro - D....OK
Computing 01-sibelius-violin_concerto_in_d_minor_op.47_allegro_moderato - Dm....OK
Computing 01-sinphonie_9_opus_95_b_minor_-_anton_dvorak-krema - Bm....OK
Computing 01-the_swingle_singers--fugue_in_d_minor - Dm....OK
Computing 01_chopin_-_nocturne_in_c-sharp_minor - C#m....OK
Computing 01_d._scarlatti_-_sonata_in_d_minor - Dm....OK
Computing 02-Antonio_Vivaldi___Concerto_in_Do_per_due_trombettearchi(Allegro-Largo-Allegro) - C....OK
Computing 02-No._2_Es-dur_op._9_No._2___Andante - Eb....OK
Computing 02-Variation_I_F_Major - F....OK
Computing 02-frederic_chopin--waltz_in_c-sharp_minor_op.64_no.2 - C#m....OK
Computing 02-glenn_gould-sinfonia_1_in_c_major-aaf - C....OK
Computing 02-mozart_-_sinfonia_concertante_in_e-flat-twc - Eb....OK
Computing 02-nocturnes_op._9_no._2_in_e-flat_minor-ond - Ebm....OK
Computing 02-royal_philharmonic_collection-no_2_in_e_minor - Em....OK
Computing 02-royal_philharmonic_collection-symphony_40_in_g_minor - Gm....OK
Computing 02_chopin_-_nocturne_in_e_minor_op._72_no._1 - Em....OK
Computing 03-BWV_534_Prelude___Prelude_F_minor - Fm....OK
Computing 03-J.S._Bach___Konzert_für_Oboe__Violine_C-moll_BWV_1060_-_Allegro - Cm....OK
Computing 03-No._3_H-dur_op._9_No._3___Allegretto - B....OK
Computing 03-Prelude_and_Fugue_No._14_F-sharp_minor_BWV_883_-_1._Praeludium - F#m....OK
Computing 03-Prelude_and_Fugue_No._2_in_C_minor_BWV_871_-_1._Praeludium - Cm....OK
Computing 03-Toccata_Adagio__Fugue_in_C_major_BWV_564__Toccata - C....OK
Computing 03-Variation_II_F_Major - F....OK
Computing 03-frederic_chopin--prelude_in_b_minor_op.28_no.6-lento_assai - Bm....OK
Computing 03-glenn_gould-invention_2_in_c_minor-aaf - Cm....OK
Computing 03-mozart_-_serenade_no._10_in_b-flat_for_winds-twc - Bb....OK
Computing 03-nocturnes_op._9_no._3_in_b_major-ond - B....OK
Computing 03-royal_philharmonic_collection-no_3_in_a_flat_major - Ab....OK
Computing 03_chopin_-_nocturne_in_c_minor_op._48_no._1 - Cm....OK
Computing 03_pasquini_-_canzone_in_e_minor - Em....OK
Computing 04-Bach_BWV529_FMajor - F....OK
Computing 04-Ballade_No._1_Op.23__G__minor - Gm....OK
Computing 04-Brahms_Sonatensatz_in_C_minor - Cm....OK
Computing 04-Concerto_for_violin_and_orchestra_in_D_minor_BWV_1043 - Dm....OK
Computing 04-Concerto_n.5_D_Major_BWV_1050___1_Allegro - D....OK
Computing 04-No._4_F-dur_op._15_No._1___Andante_cantabile - F....OK
Computing 04-Prelude__Fugue_in_F-major_(Hess_30)_-1 - F....OK
Computing 04-Schubert__Op._103_F_minor - Fm....OK
Computing 04-Serge_Rachmaninoff___Piano_Concerto_No_2_in_C_minor_Op_18_-_I._Moderato - Cm....OK
Computing 04-String_Quartet_No.__7_in_E_flat_major_K._160_(159a)_-_I._Allegro - Eb....OK
Computing 04-Summer_-_Concerto_#2_in_G_minor_-_1_-_Allegro_non_molto - Gm....OK
Computing 04-Variation_III_F_Major - F....OK
Computing 04-frederic_chopin--prelude_in_d-flat_major_op.28_no.15-sostenuto - Db....OK
Computing 04-glenn_gould-sinfonia_2_in_c_minor-aaf - Cm....OK
Computing 04-jacques_loussier_trio--sicilienne_in_g_minor - Gm....OK
Computing 04-mozart_-_divertimento_no._11_in_d-twc - D....OK
Computing 04-nocturnes_op._15_no._1_in_f_major-ond - F....OK
Computing 04-royal_philharmonic_collection-no_4_in_f_major - F....OK
Computing 04-royal_philharmonic_collection-sonata_in_b_flat_major_ - Bb....OK
Computing 04-the_swingle_singers--prelude_in_f_major - F....OK
Computing 04_a._marcello_-_andante_in_d_minor - Dm....OK
Computing 04_chopin_-_ballade_no._2_in_f_major_op._38 - F....OK
Computing 05-BWV_542_Fantasia___Fantasia_G_minor - Gm....OK
Computing 05-Brahms_Hungarian_Dances_n1_G_minor - Gm....OK
Computing 05-Concerto_en_Re_Majeur_pour_quatre_violons_sans_basse_continue._Adagio - D....OK
Computing 05-Concierto_n_2__F_Major-_1_allegro - F....OK
Computing 05-G_major_op._18_no._2_-_Allegro - G....OK
Computing 05-I._String_Quartet_No.17_KV_458__The_Hunt__Bb_Major__Allegro_vivace_assai - Bbm....OK
Computing 05-I._String_Quartet_No.21_KV_575_D_Major__Allegretto - D....OK
Computing 05-I._String_Quintet_No.19_KV_465_C_Major__Dissonance____Adagio_-_Allegro - C....OK
Computing 05-I._String_Quintet_No.23_KV_590_F_Major__Allegro_moderato - F....OK
Computing 05-I._String_Quintet_No._6_Allegro_di_molto_Eb_Major__KV_614 - Eb....OK
Computing 05-No._7_cis-moll_op._27_No._1___Larghetto - C#m....OK
Computing 05-Nocturne_No._4_Op._15_No._1_F_Major - F....OK
Computing 05-Prelude_and_Fugue_No._15_G_major_BWV_884_-_1._Praeludium - G....OK
Computing 05-Prelude_and_Fugue_No._3_in_C-sharp_major_BWV_872_-_1._Praeludium - C#....OK
Computing 05-Qartett_in_E_flat_major_(_Harp_)_op._74 - Eb....OK
Computing 05-Quartet_in_C-sharp_major_Op._131__I._Adagio_ma_non_troppo_e_molto_espressivo_-_attacca - C#....OK
Computing 05-Quartett_A-Major_op_18_No._5_in_A_major_en_la_majeur_1 - A....OK
Computing 05-Quartett_A-dur_op_18_No._5_in_A_major_en_la_majeur_1 - A....OK
Computing 05-Sonata_for_Piano__Cello_in_C_Major_Op._102_No._1_1._Andante - C....OK
Computing 05-Sonate_F_-Dur_RV_41 - F....OK
Computing 05-Sonate_a_moll_RV44 - Am....OK
Computing 05-String_Quartet_No.15_in_D_minor_K._421_(417b)_-_I._Allegro_moderato - Dm....OK
Computing 05-String_Quartet_No._11_in_E_flat_major_K._171_-_I._Adagio_-_Allegro_assai_-_Adagio - Eb....OK
Computing 05-String_Quartet_No.__2_in_D_major_K._155_(134a)_-_I._Allegro - D....OK
Computing 05-String_Quintet_No._4_in_C_minor_KV406_516b_-_I._Allegro - Cm....OK
Computing 05-Variation_IV_F_Major - F....OK
Computing 05-brahms-clarinet_quintet_in_b_minor_opus_115_allegretto-aaf - Bm....OK
Computing 05-glenn_gould-invention_5_in_e_flat_major-aaf - Eb....OK
Computing 05-mozart_-_piano_concerto_no._24_in_c_minor-twc - Cm....OK
Computing 05-nocturnes_op._15_no._2_in_f-sharp_major-ond - F#....OK
Computing 05-seiji_ozawa_-_berliner_philharmoniker_-_symphony_no_6_in_e_flat_minor_op__111_-_1__allegro_moderato - Ebm....OK
Computing 05-sibelius-serenade_no.2_in_g_minor_op.69b - Gm....OK
Computing 05_chopin_-_ballade_no._1_in_g_minor_op._23 - Gm....OK
Computing 06-Brahms_Hungarian_Dances_n2_D_minor - Dm....OK
Computing 06-Johann_Pachelbel___Canon_in_D_major - D....OK
Computing 06-No._8_Des-dur_op._27_No._2___Lento_sustenuto - Db....OK
Computing 06-Nocturne_No._5_Op._15_No._2_F_sharp_Major - F#....OK
Computing 06-Op_135_(F_maj)_Allegretto - F....OK
Computing 06-Passacaglia_in_C_minor_BWV_582 - Cm....OK
Computing 06-Prelude__Fugue_in_C-major_(Hess_31)_-_1 - C....OK
Computing 06-Variation_V_F_Major - F....OK
Computing 06-brahms-clarinet_quintet_in_b_minor_opus_115_adagio-aaf - Bm....OK
Computing 06-glenn_gould-sinfonia_5_in_e_flat_major-aaf - Eb....OK
Computing 06-jacques_loussier_trio--passacaglia_in_c_minor - Cm....OK
Computing 06-mozart_-_clarinet_quintet_in_a-twc - A....OK
Computing 06-nocturnes_op._15_no._3_in_g_minor-ond - Gm....OK
Computing 06-royal_philharmonic_collection-no_7_in_c_minor - Cm....OK
Computing 06-the_swingle_singers--fugue_in_c_minor - Cm....OK
Computing 06_chopin_-_waltz_no._3_in_a_minor_op._34_no._2 - Am....OK
Computing 07-Autumn_-_Concerto_#3_in_F_major__-_1_-_Allegro - F....OK
Computing 07-BWV_564_Toccata___Toccata_C_Major - C....OK
Computing 07-Bach_BWV526_EMinor - Em....OK
Computing 07-Brahms_Hungarian_Dances_n7_A_Major - A....OK
Computing 07-Concerto_for_violin_and_orchestra_in_E_major_BWV_1042 - E....OK
Computing 07-Concerto_n.6_Bb_Major_BWV_1051___1_Allegro - Bb....OK
Computing 07-No._9_H-dur_op._32_No._1___Andante_sustenuto - B....OK
Computing 07-Pastorale_in_F_major_BWV_590 - F....OK
Computing 07-Polonaise_No._6_Op._53_A_flat_major - Ab....OK
Computing 07-Prelude_and_Fugue_No._16_G_minor_BWV_885_-_1._Praeludium - Gm....OK
Computing 07-Prelude_and_Fugue_No._4_in_C-sharp_minor_BWV_873_-_1._Praeludium - C#m....OK
Computing 07-String_Quartet_No.__8_in_F_major_K._168_-_I._Allegro - F....OK
Computing 07-Variation_VI_F_Major - F....OK
Computing 07-frederic_chopin--mazurka_in_e_minor_op._41_no.2-andantino - Em....OK
Computing 07-glenn_gould-invention_14_in_b_flat_major-aaf - Bb....OK
Computing 07-mozart_-_violin_sonata_in_e-flat-twc - Eb....OK
Computing 07-nocturnes_op._27_no._1_in_c-sharp_minor-ond - C#m....OK
Computing 07-royal_philharmonic_collection-fantasie_in_c_minor_kv4 - Cm....OK
Computing 07-royal_philharmonic_collection-no_8_in_g_minor - Gm....OK
Computing 07-the_swingle_singers--fugue_in_d_major - D....OK
Computing 07_chopin_-_prelude_no._4_in_e_minor_op._28_no._4 - Em....OK
Computing 07_handel_-_allegro_in_d_minor - Dm....OK
Computing 08-Brahms_Hungarian_Dances_n9_E_minor - Em....OK
Computing 08-No._10_As-dur_op._32_No._2___Lento - Ab....OK
Computing 08-Preludium_-_Suite_Nr._4_E_flat_Major-dur_BWV.1010 - Eb....OK
Computing 08-Quartet_in_F-major_(Hess_32)_-_Op._18_No._1_first_version_-_1 - F....OK
Computing 08-String_Quartet_No.__3_in_G_major_K._156_(134b)_-_I._Presto - G....OK
Computing 08-Suite_#2_in_B_Minor-_I__Ouverture - Bm....OK
Computing 08-Telemann_TWV41d4_Dminor - Dm....OK
Computing 08-Variation_VII_F_Major - F....OK
Computing 08-glenn_gould-sinfonia_14_in_b_flat_major-aaf - Bb....OK
Computing 08-mozart_-_piano_concerto_no._17_in_g-twc - G....OK
Computing 08-nocturnes_op._27_no._2_in_d-flat_major-ond - Db....OK
Computing 08-royal_philharmonic_collection-allegretto_in_c_minor - Cm....OK
Computing 08-royal_philharmonic_collection-no_1_in_b_major - B....OK
Computing 08-royal_philharmonic_collection-sonata_in_g_major_kv283 - G....OK
Computing 09-Concerto_en_La_Majeur_pour_flute_à_bec_viole_de_gambe_cordes_et_basse_continue._Tempo_non_precise - A....OK
Computing 09-Concierto_n_3_G_Major-_2_adagio_(George_Malcolm) - G....OK
Computing 09-D_major_op._18_no._3_-_Allegro - D....OK
Computing 09-No._12_G-dur_op._37_No._2___Andantino - G....OK
Computing 09-Prelude_and_Fugue_No._17_A-flat_major_BWV_886_-_1._Praeludium - Ab....OK
Computing 09-Prelude_and_Fugue_No._5_in_D_major_BWV_874_-_1._Praeludium - D....OK
Computing 09-Quartet_in_F_minor_op._95 - Fm....OK
Computing 09-Quartett_B-dur_op_18_No._6_in_B_flat_major_en_si_bemol_majeur_1 - Bb....OK
Computing 09-Sonata_for_Piano__Cello_in_D_Major_Op._102_No._2_1._Allegro_con_brio - D....OK
Computing 09-Sonate_Es_Dur_RV39 - Eb....OK
Computing 09-Sonate_a_-moll_RV_43 - Am....OK
Computing 09-String_Quartet_No._12_in_B_flat_major_K._172_-_I._Allegro_spiritoso - Bb....OK
Computing 09-Variation_VIII_F_Major - F....OK
Computing 09-glenn_gould-invention_11_in_g_minor-aaf - Gm....OK
Computing 09-nocturnes_op._32_no._1_in_b_major-ond - B....OK
Computing 09-royal_philharmonic_collection-no_2_in_e_minor - Em....OK
Computing 09_bach_-_english_ste_3_in_g_minor_-_courante - Gm....OK
Computing 10-BWV 525 Trio sonata No. 1 - (Allegro) _ (Allegro)_E_flat_Major - Eb....OK
Computing 10-BWV_525_Trio_sonata_No._1_-_(Allegro)___(Allegro)_E_flat_Major - Eb....OK
Computing 10-Bach_BWV997_DMinor - Dm....OK
Computing 10-Haydn_-_Concert_for_trumpet_and_orchestra_in_Eb_major - Eb....OK
Computing 10-No. 13 c-moll op. 48 No. 1 _ Lento - C,....OK
Computing 10-Variation_IX_F_Major - F....OK
Computing 10-Winter_-_Concerto_#4_in_F_minor__-_1_-_Allegro_non_molto - Fm....OK
Computing 10-glenn_gould-sinfonia_11_in_g_minor-aaf - Gm....OK
Computing 10-nocturnes_op._32_no._2_in_a-flat_major-ond - Ab....OK
Computing 10-royal_philharmonic_collection-no_5_in_b_flat_major - Bb....OK
Computing 10-the_swingle_singers--prelude_in_c_major - C....OK
Computing 101-dmitri_shostakovich--no.1_in_c_major - C....OK
Computing 101-wa_mozart-symphony-no-25-in-g-minor-rare - Gm....OK
Computing 102-dmitri_shostakovich--no.2_in_a_minor - Am....OK
Computing 102-fatansie_impromptu_in_c_shrap_minor_-_op_66_no_4-bfhmp3 - C#m....OK
Computing 103-dmitri_shostakovich--no.3_in_g_major - G....OK
Computing 104-concerto_in_e_minor_op_4_no_2_-_1_allegro-kir - Em....OK
Computing 104-dmitri_shostakovich--no.4_in_e_minor - E....OK
Computing 104-mazurka_-_op_7_in_b_major-bfhmp3 - B....OK
Computing 105-dmitri_shostakovich--no.5_in_d_major - D....OK
Computing 105-etude_-_op_10_no_12_in_c_minor_(revolution)-bfhmp3 - Cm....OK
Computing 106-dmitri_shostakovich--no.6_in_b_minor - Bm....OK
Computing 106-polonaise_-_op_53_in_a_flat_major_(heroic)-bfhmp3 - Ab....OK
Computing 107-waltz_-_op_64_no_2_in_c_sharp_minor-bfhmp3 - C#m....OK
Computing 108-dmitri_shostakovich--no.8_in_f_sharp_minor - F#m....OK
Computing 108-nocturne_-_op_9_no_2_in_e_flat_major-bfhmp3 - Eb....OK
Computing 108-va-bach-air_of_a_g_string_from_suite_no.3_in_g_major-wcr - G....OK
Computing 108-wa_mozart-mass-in-c-minor-rare - Cm....OK
Computing 109-dmitri_shostakovich--no.9_in_e_major - E....OK
Computing 10_bach_-_english_ste_3_in_g_minor_-_allemande - Gm....OK
Computing 11-Benedetto_Marcello___Concerto_per_Oboe_Re_minor_-_Andante_e_spiccato - Dm....OK
Computing 11-Elgar_-_March_of_pomp_and_circumstance_in_D_MajorComparison - D....OK
Computing 11-No._15_f-moll_op._55_No._1___Andante - Fm....OK
Computing 11-Prelude_and_Fugue_No._18_G-sharp_minor_BWV_887_-_1._Praeludium - G#m....OK
Computing 11-Prelude_and_Fugue_No._6_in_D_minor_BWV_875_-_1._Praeludium - Dm....OK
Computing 11-Variation_X_Adagio_F_Major - F....OK
Computing 11-glenn_gould-invention_10_in_g_major-aaf - G....OK
Computing 11-nocturnes_op._55_no._1_in_f_minor-ond - Fm....OK
Computing 11-royal_philharmonic_collection-no_6_in_b_flat_major - Bb....OK
Computing 110-concerto_in_a_minor_op_4_no_4_-_1_allegro-kir - Am....OK
Computing 110-dmitri_shostakovich--no.10_in_c_sharp_minor - C#m....OK
Computing 110-prelude_-_op_28_no_15_raindrop_in_d_sharp_major-bfhmp3 - D#....OK
Computing 111-dmitri_shostakovich--no.11_in_b_major - B....OK
Computing 111-wa_mozart-adagio-in-c-minor-for-glass-armonica-rare - Cm....OK
Computing 111-waltz_-_op_64_no_1_minute_waltz_in_d_flat_minor-bfhmp3 - Dbm....OK
Computing 112-ballad_-_op_23_no_1_in_g_minor-bfhmp3 - Gm....OK
Computing 112-dmitri_shostakovich--no12_in_g_sharp_minor-fixed - G#m....OK
Computing 113-etude_-_op_10_no_3_in_e_major-bfhmp3 - E....OK
Computing 116-concerto_in_g_minor_op_4_no_6_-_1_allegro-kir - Gm....OK
Computing 11_bach_-_english_ste_3_in_g_minor_-_allegro - Gm....OK
Computing 11_chopin_-_mazurka_in_a_minor_op._17_no._4 - Am....OK
Computing 12-Allegro__Canta_in_prato_in_G_Major - G....OK
Computing 12-Minuett_in_A-flat_major_(Hess_33) - Ab....OK
Computing 12-No._18_E-dur_op._62_No._2___Lento - E....OK
Computing 12-String_Quartet_No.__4_in_C_major_K._157_-_I._Allegro - C....OK
Computing 12-Telemann_TWV41f1_FMinor - Fm....OK
Computing 12-Twelve_Variations_in_g_Major_on_a_Theme_from__Judas_Maccabaeus__Thema._Allegretto - G....OK
Computing 12-Variation_XI_Poco_Adagio_quasi_Andante_F_minor-_attacca_subito - Fm....OK
Computing 12-ballad_in_d_minor_op.15-i-fuf - Dm....OK
Computing 12-glenn_gould-sinfonia_10_in_g_major-aaf - G....OK
Computing 12-nocturnes_op._55_no._2_in_e-flat_major-ond - Eb....OK
Computing 12-royal_philharmonic_collection-no_7_in_c_major - C....OK
Computing 12-the_swingle_singers--invention_in_c_major - C....OK
Computing 13-Concero_en_Sol_Mineur_pour_flute_a_bec_violons_et_basse_continue._Allegro - Gm....OK
Computing 13-La_Tempesta_Di_Mare_in_Eb_Major-_1_-_Presto - Eb....OK
Computing 13-No._19_e-moll_op._post._72_No._1___Andante - Em....OK
Computing 13-Prelude_and_Fugue_No._19_A_major_BWV_888_-_1._Praeludium - A....OK
Computing 13-Prelude_and_Fugue_No._7_in_E-flat_major_BWV_876_-_1._Praeludium - Eb....OK
Computing 13-Sonate_B-Dur_RV_45 - Bb....OK
Computing 13-Sonate_g_moll_RV42 - Gm....OK
Computing 13-String_Quartet_No._13_in_D_minor_K._173_-_I._Allegro_ma_molto_moderato - Dm....OK
Computing 13-Variation_XII_Allegro_F_Major - F....OK
Computing 13-glenn_gould-invention_15_in_b_minor-aaf - Bm....OK
Computing 13-mazurek_in_e_minor_op.49-fuf - Em....OK
Computing 13-nocturnes_op._post._72_no.1_in_e_minor-ond - Em....OK
Computing 13-royal_philharmonic_collection-no_8_in_a_flat_major - Ab....OK
Computing 13-the_swingle_singers--fugue_in_d_major - D....OK
Computing 14-Sonata_for_Piano__Cello_in_F_Major_Op._5_No._1_1._Adagio_sostenuto - F....OK
Computing 14-glenn_gould-sinfonia_15_in_b_minor-aaf - Bm....OK
Computing 14-nocturnes_in_c-sharp_minor_(1830)-ond - C#m....OK
Computing 14-original_broadway_cast-invention_in_c_minor - Cm....OK
Computing 15-Allegro__Dixit_Dominus_in_D_Major - D....OK
Computing 15-Prelude_-_Suite_Nr._5_c_Minor-moll_BWV.1011 - Cm....OK
Computing 15-Prelude_and_Fugue_No._20_A_minor_BWV_889_-_1._Praeludium - Am....OK
Computing 15-Prelude_and_Fugue_No._8_in_D-sharp_minor_BWV_877_-_1._Praeludium - D#m....OK
Computing 15-String_Quartet_No.__5_in_F_major_K._158_-_I._Allegro - F....OK
Computing 15-Suite_#3_in_D_Major-_I__Ouverture - D....OK
Computing 15-glenn_gould-invention_7_in_e_minor-aaf - Em....OK
Computing 15-nocturne_in_c_minor_(1837)-ond - Cm....OK
Computing 15-the_swingle_singers--prelude_and_fugue_in_e_minor_n - Em....OK
Computing 16-Il_Piacere_in_C_Major-_1_-_Allegro - C....OK
Computing 16-Telemann_TWV42B4_BflatMajor - Bb....OK
Computing 16-glenn_gould-sinfonia_7_in_e_minor-aaf - Em....OK
Computing 16_a._soler_-_sonata_in_d_minor - Dm....OK
Computing 17-Concerto_en_Do_Majeur_pour_quatre_violons_sans_basse._Grave - C....OK
Computing 17-Prelude_and_Fugue_No._21_B-flat_major_BWV_890_-_1._Praeludium - Bb....OK
Computing 17-Prelude_and_Fugue_No._9_in_E_major_BWV_878_-_1._Praeludium - E....OK
Computing 17-Sonate_A_Dur_Vivaldi - A....OK
Computing 17-Sonate_e_-moll_RV_40 - Em....OK
Computing 17-arturo_sandoval--concerto_in_d_major_(first_movement) - D....OK
Computing 17-glenn_gould-invention_6_in_e_major-aaf - E....OK
Computing 17_galles_-_sonata_in_b_minor - Bm....OK
Computing 18-Sonata_for_Piano_and_Cello_in_G_Minor_Op._5_No._2_1._Adagio_sostenuto_e_espressivo_-_attacca - Gm....OK
Computing 18-glenn_gould-sinfonia_6_in_e_major-aaf - E....OK
Computing 18-the_swingle_singers--prelude_and_fugue_in_c_major - C....OK
Computing 19-Prelude_and_Fugue_No._10_in_E_minor_BWV_879_-_1._Praeludium - Em....OK
Computing 19-Prelude_and_Fugue_No._22_B-flat_minor_BWV_891_-_1._Praeludium - Bbm....OK
Computing 19-glenn_gould-invention_13_in_a_minor-aaf - Am....OK
Computing 19-the_swingle_singers--fugue_in_g_major - G....OK
Computing 20-Telemann_TWV41C2_CMajor - C....OK
Computing 20-glenn_gould-sinfonia_13_in_a_minor-aaf - Am....OK
Computing 201-dmitri_shostakovich--no.13_in_f_sharp_major - F#....OK
Computing 202-dmitri_shostakovich--no.14_in_e_flat_minor - Ebm....OK
Computing 202-va-chopin-nocturne_no2_in_e_flat-wcr - Eb....OK
Computing 202_bizet-syphony_in_c_adagio-sns - C....OK
Computing 203-dmitri_shostakovich--no.15_in_d_flat_major - Db....OK
Computing 204-concerto_for_piano_and_orchestra_in_a_minor_op__16-bfhmp3 - Am....OK
Computing 204-dmitri_shostakovich--no.16_in_b_flat_minor - Bbm....OK
Computing 204-va-liszt-liebestraum_no3_in_a_flat-wcr - Ab....OK
Computing 205-concerto_in_d_minor_op_4_no_8_-_1_allegro-kir - Dm....OK
Computing 205-dmitri_shostakovich--no.17_in_a_flat_major - Ab....OK
Computing 206-dmitri_shostakovich--no.18_in_f_minor - Fm....OK
Computing 207-dmitri_shostakovich--no.19_in_e_flat_major - Eb....OK
Computing 207-va-elgar-cello_concerto_in_e_adagio-wcr - E....OK
Computing 208-dmitri_shostakovich--no.20_in_c_minor - Cm....OK
Computing 209-dmitri_shostakovich--no.21_in_b_flat_major - Bb....OK
Computing 209-wa_mozart-piano-concerto-in-d-minor-1st-movement-rare - Dm....OK
Computing 21-Concerto_en_Mi_Mineur_pour_flute_à_bec_flute_traversiere_cordes_et_basse_continue._Largo - Em....OK
Computing 21-Prelude_and_Fugue_No._11_in_F_major_BWV_880_-_1._Praeludium - F....OK
Computing 21-Prelude_and_Fugue_No._23_B_major_BWV_892_-_1._Praeludium - B....OK
Computing 21-Seven_Variations_in_E_flat_Major_from__The_Magic_Flute__Thema._Andante - Eb....OK
Computing 210-dmitri_shostakovich--no.22_in_g_minor - Gm....OK
Computing 211-concerto_in_c_minor_op_4_no_10_-_1-kir - Cm....OK
Computing 211-dmitri_shostakovich--no.23_in_f_major - F....OK
Computing 212-dmitri_shostakovich--no.24_in_d_minor - Dm....OK
Computing 23-Prelude_and_Fugue_No._12_in_F_minor_BWV_881_-_1._Praeludium - F....OK
Computing 23-Prelude_and_Fugue_No._24_B_minor_BWV_893_-_1._Praeludium - Bm....OK
Computing 23-glenn_gould-invention_3_in_d_major-aaf - D....OK
Computing 24-glenn_gould-sinfonia_3_in_d_major-aaf - D....OK
Computing 25-glenn_gould-invention_4_in_d_minor-aaf - Dm....OK
Computing 26-glenn_gould-sinfonia_4_in_d_minor-aaf - Dm....OK
Computing 27-glenn_gould-invention_8_in_f_major-aaf - F....OK
Computing 28-glenn_gould-sinfonia_8_in_f_major-aaf - F....OK
Computing 29-glenn_gould-invention_9_in_f_minor-aaf - Fm....OK
Computing 30-glenn_gould-sinfonia_9_in_f_minor-aaf - Fm....OK
Computing 311-mozart-adagio_in_b_flat_kv_411-484a-mil - Bb....OK
Computing 311-va-boccherini-munuet_from_string_quarter_in_e_major-wcr - E....OK
Computing 312-mozart-adagio_in_f_kv_410-484d-mil - F....OK
Computing 313-mozart-adagio_in_c_kv_app._94-580a-mil - C....OK
Computing A_flat_major_02_Valse_Brillante_Op34_No1 - Ab....OK
Computing A_flat_major_05_Grande_Valse_Op42 - Ab....OK
Computing A_flat_major_07_Polonaise_Op53_No1 - Ab....OK
Computing A_flat_major_08-Mazurka_Op_No_8 - Ab....OK
Computing A_flat_major_08_Polonaise_Fantasie_Op61 - Ab....OK
Computing A_flat_major_08_Valse_Op64_No_3 - Ab....OK
Computing A_flat_major_09_Valse_Op69_No1 - Ab....OK
Computing A_flat_major_10_No_10_Vivacce_Assai - Ab....OK
Computing A_flat_major_12_Valse_Brown_Index_21 - Ab....OK
Computing A_flat_major_13_No_1_Allegro_sostenuto_13 - Ab....OK
Computing A_flat_major_16-Mazurka_Op_No_6 - Ab....OK
Computing A_flat_major_27_No_3_Allegretto - Ab....OK
Computing A_flat_major_29-Mazurka_Op_No_9 - Ab....OK
Computing A_flat_major_31-Mazurka_Op_No_1 - Ab....OK
Computing A_flat_major_37-Mazurka_Op_No_7 - Ab....OK
Computing A_flat_major_Ballade_III_Op_47_Chopin_Complete_Piano_Music - Ab....OK
Computing A_flat_major_Gallop_Marquis_Chopin_Complete_Piano_Music - Ab....OK
Computing A_flat_major_No_3_Dvorak_Slavonics_Dances - Ab....OK
Computing A_flat_major_No_8_Dvorak_Slavonics_Dances - Ab....OK
Computing A_flat_major_Nocturne_Op_32_No_2_Chopin_Vol_1 - Ab....OK
Computing A_flat_major_Nouvelle_Etude_No_2_Chopin_Complete_Piano_Music - Ab....OK
Computing A_flat_major_Prelude_No_17 - Ab....OK
Computing A_flat_major_Schubert_Four_Impromptus_Op_142_D_935_2_Jandó - Ab....OK
Computing A_flat_major_Schubert_Four_Impromptus_Op_90_D_899_4_Jandó - Ab....OK
Computing A_flat_major_Tarantella_08_ - Ab....OK
Computing A_flat_minor_Leos_Janacek_Adagio_Tzygane - Abm....OK
Computing A_flat_minor_Leos_Janacek_Allegretto_Tzygane - Abm....OK
Computing A_flat_minor_Leos_Janacek_Ballada_con_moto_Tzygane - Abm....OK
Computing A_major_02_Fantasia_in_Polish_Airs_Op13 - A....OK
Computing A_major_5_Allegro_CD04_Full_Symphonie_Mozart - A....OK
Computing A_major_5_Allegro_moderato_CD06_Full_Symphonie_Mozart - A....OK
Computing A_major_6_Andante_CD04_Full_Symphonie_Mozart - A....OK
Computing A_major_6_Andante_CD06_Full_Symphonie_Mozart - A....OK
Computing A_major_7_Menuetto_trio_CD04_Full_Symphonie_Mozart - A....OK
Computing A_major_7_Menuetto_trio_CD06_Full_Symphonie_Mozart - A....OK
Computing A_major_8_Allegro_CD04_Full_Symphonie_Mozart - A....OK
Computing A_major_8_Allegro_con_spirito_CD06_Full_Symphonie_Mozart - A....OK
Computing A_major_Comarosa_arr_Bream_SOnata_Julian_Bream_Baroque_Guitar - A....OK
Computing A_major_Concerto_BWV_1055_III_Allegro_ma_non_tanto_Bach_Complet_Orchestral - A....OK
Computing A_major_Concerto_BWV_1055_II_Larghetto_Bach_Complet_Orchestral - A....OK
Computing A_major_Concerto_BWV_1055_I_Allegro_Bach_Complet_Orchestral - A....OK
Computing A_major_Concerto_No_23_AK488_Adagio_Mozart_Piano_concerto_23_26 - A....OK
Computing A_major_Concerto_No_23_AK488_Allegro_assai_Mozart_Piano_concerto_23_26 - A....OK
Computing A_major_L_391_Domenico_Scarlatti_Piano_Sonatas - A....OK
Computing A_major_No3_A_flat_Major - A....OK
Computing A_major_No_5_Dvorak_Slavonics_Dances - A....OK
Computing A_major_Prelude_No_07 - A....OK
Computing A_minor_02_No_2_Allegro - Am....OK
Computing A_minor_03_Valse_Op34 - Am....OK
Computing A_minor_06-Mazurka_Op_No_6 - Am....OK
Computing A_minor_13-Mazurka_Op_No_3 - Am....OK
Computing A_minor_16_No_4_Agitato - Am....OK
Computing A_minor_16_valse_Brown_index_150 - Am....OK
Computing A_minor_23_No_11_Allegro_con_Brio - A....OK
Computing A_minor_30-Mazurka_Op_No_0 - Am....OK
Computing A_minor_Georges_Enescu_Allegro_Tzygane - Am....OK
Computing A_minor_Georges_Enescu_Andante_Tzygane - Am....OK
Computing A_minor_Georges_Enescu_moderato_Tzygane - Am....OK
Computing A_minor_Prelude_No_02_ - Am....OK
Computing A_minor_Quartet_No_7_Op_16_2_Andante_cantabile_Dvorak_String_Quartet_No7_Dvorak_String_Quartet_No7 - Am....OK
Computing A_minor_Quartet_No_7_in_Op_16_3_Allegro_scherzando_Dvorak_String_Quartet_No7 - Am....OK
Computing A_minor_Ravel_Trio_in_4th_Mov_Finale_Anime_Rubinstein_Heifetz_Piatigorsky_ - Am....OK
Computing B_flat_Major_Introduction_Variatons_on_Je_vends_des_scapulaires_01_ - Bb....OK
Computing B_flat_major_01_Variations_on_Mozart_La_ci_darem_la_mano_Op42 - Bb....OK
Computing B_flat_major_05-Mazurka_Op_No_5 - Bb....OK
Computing B_flat_major_10-Mazurka_Op_No_0 - Bb....OK
Computing B_flat_major_10_Allegro_CD1_Full_Symphonie_Mozart - Bb....OK
Computing B_flat_major_11_Allegro_spiritoso_CD05_Full_Symphonie_Mozart - Bb....OK
Computing B_flat_major_11_Andante_CD1_Full_Symphonie_Mozart - Bb....OK
Computing B_flat_major_12_Allegro_molto_CD1_Full_Symphonie_Mozart - Bb....OK
Computing B_flat_major_12_Andantino_grazioso_CD05_Full_Symphonie_Mozart - Bb....OK
Computing B_flat_major_13_Allegro_CD05_Full_Symphonie_Mozart - Bb....OK
Computing B_flat_major_1_Allegro_assai_CD10_Full_Symphonie_Mozart - Bb....OK
Computing B_flat_major_2_Andante_moderato_CD10_Full_Symphonie_Mozart - Bb....OK
Computing B_flat_major_3_Menuetto_trio_CD10_Full_Symphonie_Mozart - Bb....OK
Computing B_flat_major_4_Allegro_assai_CD10_Full_Symphonie_Mozart - Bb....OK
Computing B_flat_major_Cantabile_Chopin_Complete_Piano_Music - Bb....OK
Computing B_flat_major_No_6_Dvorak_Slavonics_Dances - Bb....OK
Computing B_flat_major_Piano_Trio_No_1_3_Allegretto_scherzando_Dvorak_Piano_Trios_Vol2 - Bb....OK
Computing B_flat_major_Piano_Trio_No_1_4_Finale_Allegro_vivace_Dvorak_Piano_Trios_Vol2 - Bb....OK
Computing B_flat_major_Prelude_No_21_ - Bb....OK
Computing B_flat_major_Schubert_Four_Impromptus_Op_142_D_935_3_Jandó - bb....OK
Computing B_flat_major_Sonata_for_clarinet_in_and_piano_Allegro_con_fuoco_Tres_anime_F_Poulenc_Complete_Chamber_Music_Vol_2 - Bb....OK
Computing B_flat_major_Sonata_for_clarinet_in_and_piano_Allegro_tristemente_Allegretto_tres_calme_tempo_allegretto_F_Poulenc_Complete_Chamber_Music_Vol_2 - Bb....OK
Computing B_flat_major_Sonata_for_clarinet_in_and_piano_Romanza_Tres_calme_F_Poulenc_Complete_Chamber_Music_Vol_2 - Bb....OK
Computing B_flat_minor_02_Scherzo_No2_Op_31 - Bbm....OK
Computing B_flat_minor_17-Mazurka_Op_No_7 - Bbm....OK
Computing B_flat_minor_No_5_Dvorak_Slavonics_Dances - Bbm....OK
Computing B_flat_minor_Nocturne_Op_9_No_1_Chopin_Vol_1 - Bbm....OK
Computing B_flat_minor_Prelude_No_16_ - Bbm....OK
Computing B_major_28-Mazurka_Op_No_8 - B....OK
Computing B_major_33-Mazurka_Op_No_3 - B....OK
Computing B_major_39-Mazurka_Op_No_9 - B....OK
Computing B_major_No_1_Dvorak_Slavonics_Dances - B....OK
Computing B_major_Nocturne_Op_32_No_1_Chopin_Vol_1 - B....OK
Computing B_major_Nocturne_Op_62_No_1_Chopin_Vol_2 - B....OK
Computing B_major_Nocturne_Op_9_No_3_Chopin_Vol_1 - B....OK
Computing B_major_Prelude_No_11_ - B....OK
Computing B_minor_01_Scherzo_No1 - Bm....OK
Computing B_minor_10_Valse_Op69_No2 - Bm....OK
Computing B_minor_19-Mazurka_Op_No_9 - Bm....OK
Computing B_minor_25-Mazurka_Op_No_5 - Bm....OK
Computing B_minor_Allegro_22_No_10_con_fuoco - Bm....OK
Computing B_minor_Badinerie_Bach_Suites_ouvertures - Bm....OK
Computing B_minor_Bourr_e_I_II_Bach_Suites_ouvertures - Bm....OK
Computing B_minor_Menuet_Bach_Suites_ouvertures - Bm....OK
Computing B_minor_Polonaise_Bach_Suites_ouvertures - Bm....OK
Computing B_minor_Pr_lude_in_arranjed_from_the_Well_Tempered_Clavier_Book_I_n_24_Bach_Suites_ouvertures - Bm....OK
Computing B_minor_Prelude_No_06_ - Bm....OK
Computing B_minor_Rondeau_Bach_Suites_ouvertures - Bm....OK
Computing B_minor_Sarabande_Bach_Suites_ouvertures - Bm....OK
Computing B_minor_Siciliano_arranged_from_the_Violin_Sonata_n_4_in_C_minor_BWV_1017_Bach_Suites_ouvertures - Bm....OK
Computing B_minor_Suite_n_2_in_BWV_1067_Overture_Bach_Suites_ouvertures - B....OK
Computing B_minor_Wachet_auf_ruft_uns_die_Stimme_arranged_from_Chorale_Variation_BWV_140_Bach_Suites_ouvertures - Bm....OK
Computing C_major_01_No_1_allegro - C....OK
Computing C_major_04_Allegro_vivace_CD14_Full_Symphonie_Mozart - C....OK
Computing C_major_05_Andante_cantabile_CD14_Full_Symphonie_Mozart - C....OK
Computing C_major_06_Menuetto_trio_Allegretto_CD14_Full_Symphonie_Mozart - C....OK
Computing C_major_07_Molto_allegro_CD14_Full_Symphonie_Mozart - C....OK
Computing C_major_07_No_7_Vivace - C....OK
Computing C_major_09-Mazurka_Op_No_9 - C....OK
Computing C_major_10_Allegro_vivace_CD10_Full_Symphonie_Mozart - C....OK
Computing C_major_15-Mazurka_Op_No_5 - C....OK
Computing C_major_1_Allegro_spiritoso_CD08_Full_Symphonie_Mozart - C....OK
Computing C_major_1_Molto_allegro_CD09_Full_Symphonie_Mozart - C....OK
Computing C_major_24-Mazurka_Op_No_4 - C....OK
Computing C_major_2_Andante_CD08_Full_Symphonie_Mozart - C....OK
Computing C_major_2_Andante_Cantabile_Con_Moto_LVBeethoven_Symphonies_No1No2 - C....OK
Computing C_major_2_Andantino_CD09_Full_Symphonie_Mozart - C....OK
Computing C_major_3_Menuetto_Allegretto_trio_CD08_Full_Symphonie_Mozart - C....OK
Computing C_major_3_Menuetto_Allegro_Molto_E_Vivace_LVBeethoven_Symphonies_No1No2 - C....OK
Computing C_major_3_Presto_assai_CD09_Full_Symphonie_Mozart - C....OK
Computing C_major_40-Mazurka_Op_No_0 - C....OK
Computing C_major_4_Presto_CD08_Full_Symphonie_Mozart - C....OK
Computing C_major_5_Allegro_assai_CD05_Full_Symphonie_Mozart - C....OK
Computing C_major_6_Adagio_Allegro_spiritoso_CD11_Full_Symphonie_Mozart - C....OK
Computing C_major_6_Andantino_grazioso_CD05_Full_Symphonie_Mozart - C....OK
Computing C_major_7_Andante_CD11_Full_Symphonie_Mozart - C....OK
Computing C_major_7_Presto_assai_CD05_Full_Symphonie_Mozart - C....OK
Computing C_major_8_Allegro_vivace_CD10_Full_Symphonie_Mozart - C....OK
Computing C_major_8_Menuetto_trio_CD11_Full_Symphonie_Mozart - C....OK
Computing C_major_9_Andante_di_molto_piu_tosto_allegretto_CD10_Full_Symphonie_Mozart - C....OK
Computing C_major_9_Presto_CD11_Full_Symphonie_Mozart - C....OK
Computing C_major_Bolero_06_ - C....OK
Computing C_major_Bourr_e_I_II_Bach_Suites_ouvertures - C....OK
Computing C_major_Courante_Bach_Suites_ouvertures - C....OK
Computing C_major_Forlane_Bach_Suites_ouvertures - C....OK
Computing C_major_Gavotte_I_II_Bach_Suites_ouvertures - C....OK
Computing C_major_Menuet_I_II_Bach_Suites_ouvertures - C....OK
Computing C_major_No_1_Dvorak_Slavonics_Dances - C....OK
Computing C_major_No_7_Dvorak_Slavonics_Dances - C....OK
Computing C_major_Passepied_I_II_Bach_Suites_ouvertures - C....OK
Computing C_major_Prelude_No_01_ - C....OK
Computing C_major_Suite_n_1_BWV_1066_Overture_Bach_Suites_ouvertures - C....OK
Computing C_major_Sym_No_1_in_1_Adagio_Molto_Allegro_Con_Brio_LVBeethoven_Symphonies_No1No2 - C....OK
Computing C_minor_05_Polonaise_Op.40_n2 - Cm....OK
Computing C_minor_08_Suite_No_5_BWV_1011_Allemande_JSBach_Suite_Pour_Violoncelle - Cm....OK
Computing C_minor_09_Suite_No_5_BWV_1011_Courante_JSBach_Suite_Pour_Violoncelle - Cm....OK
Computing C_minor_10_Suite_No_5_BWV_1011_Sarabande_JSBach_Suite_Pour_Violoncelle - Cm....OK
Computing C_minor_11_Suite_No_5_BWV_1011_Gavotte_1_2_JSBach_Suite_Pour_Violoncelle - Cm....OK
Computing C_minor_11_Suite_No_5_BWV_1011_Gigue_JSBach_Suite_Pour_Violoncelle - Cm....OK
Computing C_minor_12_No_12_Allegro_con_Fuoco - Cm....OK
Computing C_minor_18-Mazurka_Op_No_8 - Cm....OK
Computing C_minor_24_No_12_Allegro_molto - Cm....OK
Computing C_minor_41-Mazurka_Op_No_1 - Cm....OK
Computing C_minor_7_Suite_No_5_BWV_1011_Prelude_JSBach_Suite_Pour_Violoncelle - Cm....OK
Computing C_minor_Introduction_Rondo_07_ - Cm....OK
Computing C_minor_No_7_Dvorak_Slavonics_Dances - Cm....OK
Computing C_minor_Nocturne_B_I_108_Chopin_Vol_1 - Cm....OK
Computing C_minor_Nocturne_Op_48_No_1_Chopin_Vol_2 - Cm....OK
Computing C_minor_Pno_trio_no_1_op_8_Shostakovitch_Piano_Trios_1_2 - Cm....OK
Computing C_minor_Prelude_No_20_ - Cm....OK
Computing C_minor_Rondo_02_ - Cm....OK
Computing C_minor_Schubert_Four_Impromptus_Op_90_D_899_1_Jandó - Cm....OK
Computing C_minor_Vladimir_Horowitz_Etude_In_Op_25_No_12_Chopin - Cm....OK
Computing C_sharp_minor_02_Polonaise_Op.26_n1 - C#m....OK
Computing C_sharp_minor_03_Scherzo_No3_Op_39 - C#m....OK
Computing C_sharp_minor_04_No_4_Presto - C#m....OK
Computing C_sharp_minor_07_Valse_Op64_No2 - C#m....OK
Computing C_sharp_minor_19_No_7_Lento - C#m....OK
Computing C_sharp_minor_21-Mazurka_Op_No_1 - C#m....OK
Computing C_sharp_minor_26-Mazurka_Op_No_6 - C#m....OK
Computing C_sharp_minor_35-Mazurka_Op_No_5 - C#m....OK
Computing C_sharp_minor_38-Mazurka_Op_No_8 - C#m....OK
Computing C_sharp_minor_Nocturne_B_I_49_Chopin_Vol_1 - C#....OK
Computing C_sharp_minor_Nocturne_Op_27_No_1_Chopin_Vol_1 - C#m....OK
Computing C_sharp_minor_Prelude_No_10_ - C#m....OK
Computing C_sharp_minor_Prelude_No_25_ - C#m....OK
Computing D_flat_major_06_Berceuse_Op_57 - Db....OK
Computing D_flat_major_06_Valse_p64_No1_Minute_Waltz - Db....OK
Computing D_flat_major_13_Valse_Op70_No3 - Db....OK
Computing D_flat_major_20-Mazurka_Op_No_0 - Db....OK
Computing D_flat_major_20_No_8_Vivace_assai - Db....OK
Computing D_flat_major_26_No_2_Allegretto - Db....OK
Computing D_flat_major_Berceuse_Op_57_Chopin_Complete_Piano_Music - Db....OK
Computing D_flat_major_No_4_Dvorak_Slavonics_Dances - Db....OK
Computing D_flat_major_Nocturne_Op_27_No_2_Chopin_Vol_1 - Db....OK
Computing D_flat_major_Nouvelle_Etude_No_3_Chopin_Complete_Piano_Music - Db....OK
Computing D_flat_major_Prelude_No_15_ - Db....OK
Computing D_major_01_Allegro_assai_CD14_Full_Symphonie_Mozart - D....OK
Computing D_major_01_Allegro_vivace_CD12_Full_Symphonie_Mozart - D....OK
Computing D_major_02_Andante_CD12_Full_Symphonie_Mozart - D....OK
Computing D_major_02_Andante_CD14_Full_Symphonie_Mozart - D....OK
Computing D_major_02_Ludwig_van_Beethoven_Violin_Concerto_Op_61_Larghetto - D....OK
Computing D_major_03_Allegro_CD12_Full_Symphonie_Mozart - D....OK
Computing D_major_03_Allegro_CD14_Full_Symphonie_Mozart - D....OK
Computing D_major_03_Ludwig_van_Beethoven_Violin_Concerto_Op_61_Rondo_Allegro - D....OK
Computing D_major_04_Allegro_con_spirito_CD12_Full_Symphonie_Mozart - D....OK
Computing D_major_05_Andante_CD12_Full_Symphonie_Mozart - D....OK
Computing D_major_06_Menuetto_trio_CD12_Full_Symphonie_Mozart - D....OK
Computing D_major_07_Finale_Presto_CD12_Full_Symphonie_Mozart - D....OK
Computing D_major_08_Adagio_Allegro_CD12_Full_Symphonie_Mozart - D....OK
Computing D_major_09_Andante_CD12_Full_Symphonie_Mozart - D....OK
Computing D_major_10_Andante_CD04_Full_Symphonie_Mozart - D....OK
Computing D_major_10_Menuetto_trio_CD08_Full_Symphonie_Mozart - D....OK
Computing D_major_10_Presto_CD12_Full_Symphonie_Mozart - D....OK
Computing D_major_10_Presto_assai_CD05_Full_Symphonie_Mozart - D....OK
Computing D_major_11_Andante_grazioso_CD02_Full_Symphonie_Mozart - D....OK
Computing D_major_11_Andantino_grazioso_Allegro_CD08_Full_Symphonie_Mozart - D....OK
Computing D_major_11_Molto_allegro_CD04_Full_Symphonie_Mozart - D....OK
Computing D_major_12_Allegro_moderato_CD04_Full_Symphonie_Mozart - D....OK
Computing D_major_12_Presto_CD02_Full_Symphonie_Mozart - D....OK
Computing D_major_13_Allegro_assai_CD02_Full_Symphonie_Mozart - D....OK
Computing D_major_13_Andante_CD04_Full_Symphonie_Mozart - D....OK
Computing D_major_13_Molto Allegro_CD1_Full_Symphonie_Mozart - D....OK
Computing D_major_13_Suite_No_6_BWV_1012_Prelude_JSBach_Suite_Pour_Violoncelle - D....OK
Computing D_major_14_Andante_CD1_Full_Symphonie_Mozart - D....OK
Computing D_major_14_Andante_grazioso_CD02_Full_Symphonie_Mozart - D....OK
Computing D_major_14_Presto_CD04_Full_Symphonie_Mozart - D....OK
Computing D_major_15_Menuetto_Trio_CD1_Full_Symphonie_Mozart - D....OK
Computing D_major_15_Presto_CD02_Full_Symphonie_Mozart - D....OK
Computing D_major_15_Suite_No_6_BWV_1012_Courante_JSBach_Suite_Pour_Violoncelle - D....OK
Computing D_major_16_Finale_CD1_Full_Symphonie_Mozart - D....OK
Computing D_major_16_Suite_No_6_BWV_1012_Sarabande_JSBach_Suite_Pour_Violoncelle - D....OK
Computing D_major_17_Allegro_CD1_Full_Symphonie_Mozart - D....OK
Computing D_major_17_Suite_No_6_BWV_1012_Gavotte_12_JSBach_Suite_Pour_Violoncelle - D....OK
Computing D_major_18_Andante_CD1_Full_Symphonie_Mozart - D....OK
Computing D_major_18_Suite_No_6_BWV_1012_Gigue_JSBach_Suite_Pour_Violoncelle - D....OK
Computing D_major_19_Allegro_molto_CD1_Full_Symphonie_Mozart2 - D....OK
Computing D_major_1_Allegro_CD02_Full_Symphonie_Mozart - D....OK
Computing D_major_1_Allegro_CD04_Full_Symphonie_Mozart - D....OK
Computing D_major_1_March_K408_N_2_K385a_CD11_Full_Symphonie_Mozart - D....OK
Computing D_major_1_Molto_allegro_CD07_Full_Symphonie_Mozart - D....OK
Computing D_major_20_Allegro_CD1_Full_Symphonie_Mozart - D....OK
Computing D_major_21_Andante_CD1_Full_Symphonie_Mozart - D....OK
Computing D_major_22_Menuetto_Trio_CD1_Full_Symphonie_Mozart - D....OK
Computing D_major_23-Mazurka_Op_No_3 - D....OK
Computing D_major_23_Presto_CD1_Full_Symphonie_Mozart - D....OK
Computing D_major_2_Allegro_con_spirito_CD11_Full_Symphonie_Mozart - D....OK
Computing D_major_2_Andante_CD02_Full_Symphonie_Mozart - D....OK
Computing D_major_2_Andante_CD04_Full_Symphonie_Mozart - D....OK
Computing D_major_2_Andantino_con_moto_CD07_Full_Symphonie_Mozart - D....OK
Computing D_major_3_Andante_CD11_Full_Symphonie_Mozart - D....OK
Computing D_major_3_Menuetto_Trio_CD02_Full_Symphonie_Mozart - D....OK
Computing D_major_3_Menuetto_trio_CD04_Full_Symphonie_Mozart - D....OK
Computing D_major_3_Menuetto_trio_CD07_Full_Symphonie_Mozart - D....OK
Computing D_major_4_Allegro_CD04_Full_Symphonie_Mozart - D....OK
Computing D_major_4_Allegro_CD1_Full_Symphonie_Mozart - D....OK
Computing D_major_4_Allegro_maestosos_Allegro_molto_CD09_Full_Symphonie_Mozart - D....OK
Computing D_major_4_Menuetto_trio_CD11_Full_Symphonie_Mozart - D....OK
Computing D_major_4_Presto_CD02_Full_Symphonie_Mozart - D....OK
Computing D_major_4_Presto_CD07_Full_Symphonie_Mozart - D....OK
Computing D_major_5_Adagio_maestoso_Allegro_con_spirito_CD10_Full_Symphonie_Mozart - D....OK
Computing D_major_5_Allegro_CD02_Full_Symphonie_Mozart - D....OK
Computing D_major_5_Allegro_molto_CD08_Full_Symphonie_Mozart - D....OK
Computing D_major_5_Andante_CD1_Full_Symphonie_Mozart - D....OK
Computing D_major_5_Andante_maestoso_Allegro_assai_CD07_Full_Symphonie_Mozart - D....OK
Computing D_major_5_Menuetto_galante_trio_CD09_Full_Symphonie_Mozart - D....OK
Computing D_major_5_Presto_CD11_Full_Symphonie_Mozart - D....OK
Computing D_major_6_Andante_CD02_Full_Symphonie_Mozart - D....OK
Computing D_major_6_Andante_CD07_Full_Symphonie_Mozart - D....OK
Computing D_major_6_Andante_CD09_Full_Symphonie_Mozart - D....OK
Computing D_major_6_Andante_grazioso_CD08_Full_Symphonie_Mozart - D....OK
Computing D_major_6_Andantino_CD10_Full_Symphonie_Mozart - D....OK
Computing D_major_6_Presto_CD1_Full_Symphonie_Mozart - D....OK
Computing D_major_7_Allegro_CD02_Full_Symphonie_Mozart - D....OK
Computing D_major_7_Allegro_CD08_Full_Symphonie_Mozart - D....OK
Computing D_major_7_Menuetto_2_trios_CD09_Full_Symphonie_Mozart - D....OK
Computing D_major_7_Menuetto_trio_CD07_Full_Symphonie_Mozart - D....OK
Computing D_major_7_Presto_CD10_Full_Symphonie_Mozart - D....OK
Computing D_major_8_Adagio_Allegro_assai_CD09_Full_Symphonie_Mozart - D....OK
Computing D_major_8_Allegro_assai_CD08_Full_Symphonie_Mozart - D....OK
Computing D_major_8_Allegro_spiritoso_CD05_Full_Symphonie_Mozart - D....OK
Computing D_major_8_Andante_CD1_Full_Symphonie_Mozart - D....OK
Computing D_major_8_Prestissimo_CD07_Full_Symphonie_Mozart - D....OK
Computing D_major_9_Andante_CD08_Full_Symphonie_Mozart - D....OK
Computing D_major_9_Andantino_grazioso_CD05_Full_Symphonie_Mozart - D....OK
Computing D_major_9_Molto_allegro_CD04_Full_Symphonie_Mozart - D....OK
Computing D_major_9_Presto_CD1_Full_Symphonie_Mozart - D....OK
Computing D_major_Classical_Mozart_Pachabel_Cannon_in_D_Piano_ - D....OK
Computing D_major_Concerto_No_26_DK537_Allegro_Mozart_Piano_concerto_23_26 - D....OK
Computing D_major_Concerto_No_26_DK537_Larghetto_Mozart_Piano_concerto_23_26 - D....OK
Computing D_major_No_6_Dvorak_Slavonics_Dances - D....OK
Computing D_major_Prelude_No_05_ - D....OK
Computing D_minor_Adagio_Concerto_pour_piano_No5_Sonate_pour_piano_LVBeethoven_Tempest - Dm....OK
Computing D_minor_Allegretto_Concerto_pour_piano_No5_Sonate_pour_piano_LVBeethoven_Tempest - Dm....OK
Computing D_minor_Bagatelle_in_for_Violin_and_piano_F_Poulenc_Complete_Chamber_Music_Vol_2 - Dm....OK
Computing D_minor_Concerto_BWV_1052_III_Allegro_Bach_Complet_Orchestral - Dm....OK
Computing D_minor_Concerto_BWV_1052_II_Adagio_Bach_Complet_Orchestral - Dm....OK
Computing D_minor_Concerto_BWV_1052_I_Allegro_Bach_Complet_Orchestral - Dm....OK
Computing D_minor_Fantasia_in_Beethoven_Mozart_Schubert_Brahms_Schumann - Dm....OK
Computing D_minor_Largo_Allegro_Concerto_pour_piano_No5_Sonate_pour_piano_LVBeethoven_Tempest - Dm....OK
Computing D_minor_Prelude_No_24_ - Dm....OK
Computing D_minor_Toccata_Et_Fugue_BWV_565_Bach_Oeuvres_Pour_Orgue - Dm....OK
Computing D_minor_Visee_Suite_Allemande_Julian_Bream_Baroque_Guitar - Dm....OK
Computing D_minor_Visee_Suite_Bourree_Julian_Bream_Baroque_Guitar - Dm....OK
Computing D_minor_Visee_Suite_Courante_Julian_Bream_Baroque_Guitar - Dm....OK
Computing D_minor_Visee_Suite_Gavotte_Julian_Bream_Baroque_Guitar - Dm....OK
Computing D_minor_Visee_Suite_Gigue_Julian_Bream_Baroque_Guitar - Dm....OK
Computing D_minor_Visee_Suite_Menuets_I_and_II_Julian_Bream_Baroque_Guitar - Dm....OK
Computing D_minor_Visee_Suite_Sarabande_Julian_Bream_Baroque_Guitar - Dm....OK
Computing D_minor_Visee_Suite_in_Prelude_Julian_Bream_Baroque_Guitar - Dm....OK
Computing D_sharp_major_14_Suite_No_6_BWV_1012_Allemande_JSBach_Suite_Pour_Violoncelle - D#....OK
Computing E_flat_major_01_Adagio_Allegro_CD13_Full_Symphonie_Mozart - Eb....OK
Computing E_flat_major_01_Andante_Spianato_et_Grande_Polonaise_Brillante_Op.62 - Eb....OK
Computing E_flat_major_01_Grande_Valse_Brillante_Op18 - Eb....OK
Computing E_flat_major_01_Suite_No_4_BWV_1010_I_Prelude_JSBach_Suite_Pour_Violoncelle - Eb....OK
Computing E_flat_major_02_Andante_con_moto_CD13_Full_Symphonie_Mozart - Eb....OK
Computing E_flat_major_02_Suite_No_4_BWV_1010_Allemande_JSBach_Suite_Pour_Violoncelle - Eb....OK
Computing E_flat_major_03_Menuetto_trio_Allegretto_CD13_Full_Symphonie_Mozart - Eb....OK
Computing E_flat_major_03_Suite_No_4_BWV_1010_Courante_JSBach_Suite_Pour_Violoncelle - Eb....OK
Computing E_flat_major_04_Finale_Allegro_CD13_Full_Symphonie_Mozart - Eb....OK
Computing E_flat_major_04_Suite_No_4_BWV_1010_Sarabande_JSBach_Suite_Pour_Violoncelle - Eb....OK
Computing E_flat_major_05_Suite_No_4_BWV_1010_Bourree_1_2_JSBach_Suite_Pour_Violoncelle - Eb....OK
Computing E_flat_major_06_Suite_No_4_BWV_1010_Gigue_JSBach_Suite_Pour_Violoncelle - Eb....OK
Computing E_flat_major_11_No_11_Allegretto - Eb....OK
Computing E_flat_major_17_valse_Brown_Index_133 - Eb....OK
Computing E_flat_major_18_valse_Opposth - Eb....OK
Computing E_flat_major_1_Allegro_molto_CD1_Full_Symphonie_Mozart - Eb....OK
Computing E_flat_major_1_Molto_presto_Andante_Allegro_CD05_Full_Symphonie_Mozart - Eb....OK
Computing E_flat_major_2_Andante_CD1_Full_Symphonie_Mozart - Eb....OK
Computing E_flat_major_3_Presto_CD1_Full_Symphonie_Mozart - Eb....OK
Computing E_flat_major_5_Adagio_un_poco_mosso_Beethoven_Piano_Concerto_5_Emperor - Eb....OK
Computing E_flat_major_5_Allegro_Beethoven_Piano_Concerto_5_Emperor - Eb....OK
Computing E_flat_major_5_Allegro_CD03_Full_Symphonie_Mozart - Eb....OK
Computing E_flat_major_5_Rondo_Allegro_Beethoven_Piano_Concerto_5_Emperor - Eb....OK
Computing E_flat_major_6_Andante_CD03_Full_Symphonie_Mozart - Eb....OK
Computing E_flat_major_7_Menuetto_trio_CD03_Full_Symphonie_Mozart - Eb....OK
Computing E_flat_major_8_Allegro_CD03_Full_Symphonie_Mozart - Eb....OK
Computing E_flat_major_9_Andantino_grazioso_Anhang_CD03_Full_Symphonie_Mozart - Eb....OK
Computing E_flat_major_Beethoven_Piano_Concerto_No_5_in_Op_73_Emporor_Pachelbel_Bach_Beethoven_Schubert_Mozart_Dvorak_C - Eb....OK
Computing E_flat_major_Largo_Chopin_Complete_Piano_Music - Eb....OK
Computing E_flat_major_Nocturne_Op_55_No_2_Chopin_Vol_2 - Eb....OK
Computing E_flat_major_Nocturne_Op_9_No_2_Chopin_Vol_1 - Eb....OK
Computing E_flat_major_Prelude_No_19_ - Eb....OK
Computing E_flat_major_Schubert_Four_Impromptus_Op_90_D_899_2_Jandó - Eb....OK
Computing E_flat_major_Symphony_Finale_Vivace_Haydn_Symphonie_No99_101_Die_Uhr - Eb....OK
Computing E_flat_major_Symphony_No_99_Adagio_Haydn_Symphonie_No99_101_Die_Uhr - Eb....OK
Computing E_flat_major_Symphony_No_99_in_Adagio_Vivace_assai_Haydn_Symphonie_No99_101_Die_Uhr - Eb....OK
Computing E_flat_minor_03_Polonaise_Op.26_n2 - Ebm....OK
Computing E_flat_minor_04-Mazurka_Op_No_4 - Ebm....OK
Computing E_flat_minor_06_No_6_Andante_con_molto_ - Eb....OK
Computing E_flat_minor_Prelude_No_14_ - Ebm....OK
Computing E_major_03-Mazurka_Op_No_3 - E....OK
Computing E_major_03_No_3_Lento_ma_non_Troppo - E....OK
Computing E_major_04_Polonaise_Op40_No1 - E....OK
Computing E_major_04_Scherzo_No4_Op_54 - E....OK
Computing E_major_15_Valse_Brown_index_44 - E....OK
Computing E_major_Concerto_BWV_1053_III_Allegro_Bach_Complet_Orchestral - E....OK
Computing E_major_Concerto_BWV_1053_II_Siciliano_Bach_Complet_Orchestral - E....OK
Computing E_major_Concerto_BWV_1053_I_Allegro_Bach_Complet_Orchestral - E....OK
Computing E_major_L_210_Domenico_Scarlatti_Piano_Sonatas - E....OK
Computing E_major_Nocturne_Op_62_No_2_Chopin_Vol_2 - E....OK
Computing E_major_Prelude_No_09_ - E....OK
Computing E_major_Spring_Allegro_Antonio_Vivaldi_The_Four_Seasons - E....OK
Computing E_major_Spring_Danza_pastrorale_Allegro_Antonio_Vivaldi_The_Four_Seasons - E....OK
Computing E_major_Spring_Largo_e_pianissimo_sempre_Antonio_Vivaldi_The_Four_Seasons - E....OK
Computing E_major_Variations_sur_in_air_national_allemand_03_ - E....OK
Computing E_minor_11_Mazurka_Op_No_1 - Em....OK
Computing E_minor_12_Mazurka_Op_No_2 - Em....OK
Computing E_minor_14_Valse_OpPosth - Em....OK
Computing E_minor_17_No_5_Vivace17 - Em....OK
Computing E_minor_27-Mazurka_Op_No_7 - Em....OK
Computing E_minor_JSBach_Les_Grandes_Toccatas - Em....OK
Computing E_minor_No_2_Dvorak_Slavonics_Dances - Em....OK
Computing E_minor_Nocturne_Op_72_No_1_Chopin_Vol_2 - Em....OK
Computing E_minor_Pno_trio_no_2_op_67_Allegretto_Shostakovitch_Piano_Trios_1_2 - Em....OK
Computing E_minor_Pno_trio_no_2_op_67_Allegro_non_troppo_Shostakovitch_Piano_Trios_1_2 - Em....OK
Computing E_minor_Pno_trio_no_2_op_67_Andante_Shostakovitch_Piano_Trios_1_2 - Em....OK
Computing E_minor_Pno_trio_no_2_op_67_Largo_Shostakovitch_Piano_Trios_1_2 - Em....OK
Computing E_minor_Prelude_No_04_ - Em....OK
Computing F_Sharp_minor_Prelude_No_08_ - F....OK
Computing F_major_03_Krakowiak_Op14 - F....OK
Computing F_major_04_Valse_Brillante_OP34_No3 - F....OK
Computing F_major_05_Ludwig_van_Beethoven_Romance_No_2_Op_50_Beethoven_RomanceBeethoven_Romance - F....OK
Computing F_major_08_No_8_allegro - F....OK
Computing F_major_15_No_3_allegro - F....OK
Computing F_major_1_Allegro_CD03_Full_Symphonie_Mozart - F....OK
Computing F_major_20_Allegro_CD02_Full_Symphonie_Mozart - F....OK
Computing F_major_21_Andante_CD02_Full_Symphonie_Mozart - F....OK
Computing F_major_22_Menuetto_Trio_CD02_Full_Symphonie_Mozart - F....OK
Computing F_major_23_Allegro_molto_CD02_Full_Symphonie_Mozart - F....OK
Computing F_major_2_Andantino_grazioso_CD03_Full_Symphonie_Mozart - F....OK
Computing F_major_3_Menuetto_trio_CD03_Full_Symphonie_Mozart - F....OK
Computing F_major_4_Molto_allegro_CD03_Full_Symphonie_Mozart - F....OK
Computing F_major_7_Allegro_assai_CD1_Full_Symphonie_Mozart - F....OK
Computing F_major_Allegro_moderato_Tres_doux_Ravel_String_Quartet_No_1 - G....OK
Computing F_major_Assez_vif_Tres_rythme_Ravel_String_Quartet_No_1 - F....OK
Computing F_major_Autumn_Adagio_Antonio_Vivaldi_The_Four_Seasons - F....OK
Computing F_major_Autumn_Allegro_2_Antonio_Vivaldi_The_Four_Seasons - F....OK
Computing F_major_Autumn_Allegro_Antonio_Vivaldi_The_Four_Seasons - F....OK
Computing F_major_Ballade_II_Op_38_Chopin_Complete_Piano_Music - F....OK
Computing F_major_Classical_Piano_Bach_Fantasia_in_D_minor - Dm....OK
Computing F_major_Concerto_No_1_1_Allegro_JSBach_Concertos_Brandebourgeois_1_2_3_5 - F....OK
Computing F_major_Concerto_No_1_2_Adagio_JSBach_Concertos_Brandebourgeois_1_2_3_5 - F....OK
Computing F_major_Concerto_No_1_3_Allegro_JSBach_Concertos_Brandebourgeois_1_2_3_5 - F....OK
Computing F_major_Concerto_No_1_4_Menueto_JSBach_Concertos_Brandebourgeois_1_2_3_5 - F....OK
Computing F_major_Concerto_No_2_1_Allegro_JSBach_Concertos_Brandebourgeois_1_2_3_5 - F....OK
Computing F_major_Concerto_No_2_2_Andante_JSBach_Concertos_Brandebourgeois_1_2_3_5 - F....OK
Computing F_major_Concerto_No_2_3_Allegro_assai_JSBach_Concertos_Brandebourgeois_1_2_3_5 - F....OK
Computing F_major_Introduction_and_Allegro_for_Harp_FLute_Clarinet_Ravel_String_Quartet - F....OK
Computing F_major_No_3_Dvorak_Slavonics_Dances - F....OK
Computing F_major_No_4_Dvorak_Slavonics_Dances - F....OK
Computing F_major_Nocturne_Op_15_No_1_Chopin_Vol_1 - F....OK
Computing F_major_Prelude_No_23_ - F....OK
Computing F_major_Rondo_a_la_Mazur_04_ - F....OK
Computing F_major_Tres_lent_Ravel_String_Quartet_No_1 - F....OK
Computing F_major_Vif_et_agite_Ravel_String_Quartet_No_1 - F....OK
Computing F_minor_04_Maestoso - Fm....OK
Computing F_minor_05_Fantasie_Op_49 - Fm....OK
Computing F_minor_05_Larguetto - Fm....OK
Computing F_minor_06_Allegro_vivace - Fm....OK
Computing F_minor_07-Mazurka_Op_No_7 - Fm....OK
Computing F_minor_09_No_9_Allegro_molto_agitato - Fm....OK
Computing F_minor_14_No_2_Presto - Fm....OK
Computing F_minor_25_No_1_Andantino - Fm....OK
Computing F_minor_34-Mazurka_Op_No_4 - Fm....OK
Computing F_minor_Ballade_IV_Op_52_Chopin_Complete_Piano_Music - Fm....OK
Computing F_minor_Concerto_BWV_1056_III_Presto_Bach_Complet_Orchestral - Fm....OK
Computing F_minor_Concerto_BWV_1056_II_Largo_Bach_Complet_Orchestral - Fm....OK
Computing F_minor_Concerto_BWV_1056_I_Allegro_Bach_Complet_Orchestral - Fm....OK
Computing F_minor_Fantasie_Op_49_Chopin_Complete_Piano_Music - Fm....OK
Computing F_minor_L_118_Domenico_Scarlatti_Piano_Sonatas - Fm....OK
Computing F_minor_L_383_Domenico_Scarlatti_Piano_Sonatas - Fm....OK
Computing F_minor_No_5_inOp_9_3_Tempo_di_valse_Dvorak_String_Quartet_No5_ - Fm....OK
Computing F_minor_No_5_in_Op_9_2_Andante_con_moto_quasi_allegretto_Dvorak_String_Quartet_No5_ - Fm....OK
Computing F_minor_Nocturne_Op_55_No_1_Chopin_Vol_2 - Fm....OK
Computing F_minor_Nouvelle_Etude_No_1_Chopin_Complete_Piano_Music - Fm....OK
Computing F_minor_Piano_Trio_Op_65_B_130_III_Poco_adagio_Dvorak_Piano_Trios_Vol1 - Fm....OK
Computing F_minor_Piano_Trio_Op_65_B_130_II_Allegro_grazioso_Dvorak_Piano_Trios_Vol1 - Fm....OK
Computing F_minor_Piano_Trio_Op_65_B_130_IV_Finale_Allegro_con_brio_Dvorak_Piano_Trios_Vol1 - Fm....OK
Computing F_minor_Piano_Trio_Op_65_B_130_I_Allegro_ma_non_troppo_Dvorak_Piano_Trios_Vol1 - Fm....OK
Computing F_minor_Prelude_No_18_ - Fm....OK
Computing F_minor_Schubert_Four_Impromptus_Op_142_D_935_1_Jandó - Fm....OK
Computing F_minor_Schubert_Four_Impromptus_Op_142_D_935_4_Jandó - Fm....OK
Computing F_minor_Winter_Allegro_Antonio_Vivaldi_The_Four_Seasons - Fm....OK
Computing F_minor_Winter_Allegro_non_molto_Antonio_Vivaldi_The_Four_Seasons - Fm....OK
Computing F_minor_Winter_Largo_Antonio_Vivaldi_The_Four_Seasons - Fm....OK
Computing F_sharp_major_07_Barcarolle_Op_60 - F#....OK
Computing F_sharp_major_Nocturne_Op_15_No_2_Chopin_Vol_1 - F#....OK
Computing F_sharp_major_Prelude_No_13_ - F#....OK
Computing F_sharp_minor_01-Mazurka_Op_No_1 - F#m....OK
Computing F_sharp_minor_06_Polonaise_Op.44_n1 - F#m....OK
Computing F_sharp_minor_32-Mazurka_Op_No_2 - F#m....OK
Computing F_sharp_minor_JSBach_Les_Grandes_Toccatas - F#m....OK
Computing F_sharp_minor_Nocturne_Op_48_No_2_Chopin_Vol_2 - F#m....OK
Computing G_flat_major_05_No_5_Vivace - Gb....OK
Computing G_flat_major_11_Valse_Op70_No1 - Gb....OK
Computing G_flat_major_21_No_9_Allegro_assai - Gb....OK
Computing G_flat_major_Schubert_Four_Impromptus_Op_90_D_899_3_Jandó - Gb....OK
Computing G_major_04_Ludwig_van_Beethoven_Romance_No_1_Op_40_Beethoven_RomanceBeethoven_Romance - G....OK
Computing G_major_10_Andante_CD09_Full_Symphonie_Mozart - G....OK
Computing G_major_11_Tempo_primo_CD09_Full_Symphonie_Mozart - G....OK
Computing G_major_2_Allegro_CD05_Full_Symphonie_Mozart - G....OK
Computing G_major_36-Mazurka_Op_No_6 - G....OK
Computing G_major_3_Andantino_grazioso_CD05_Full_Symphonie_Mozart - G....OK
Computing G_major_4_Presto_CD05_Full_Symphonie_Mozart - G....OK
Computing G_major_8_Allegro_Andante_CD02_Full_Symphonie_Mozart - G....OK
Computing G_major_9_Allegro_spiritoso_CD09_Full_Symphonie_Mozart - G....OK
Computing G_major_9_Rondo_Allegro_CD02_Full_Symphonie_Mozart - G....OK
Computing G_major_Concerto_No_3_1_Allegro_moderato_JSBach_Concertos_Brandebourgeois_1_2_3_5 - G....OK
Computing G_major_Concerto_No_3_3_Allegro_JSBach_Concertos_Brandebourgeois_1_2_3_5 - G....OK
Computing G_major_Debussy_Trio_No_1_Andant_o_con_moto_allegro_Debussy_French_Trios_Piano - G....OK
Computing G_major_Debussy_Trio_No_1_Andante_espressivo_Debussy_French_Trios_Piano - Gm....OK
Computing G_major_Debussy_Trio_No_1_F_ale_Appassionato_Debussy_French_Trios_Piano - G....OK
Computing G_major_Debussy_Trio_No_1_Scherzo_termezzo_Moderato_con_allegro_Debussy_French_Trios_Piano - G....OK
Computing G_major_Flute_Concerto_No_1_K313_III_Rondo_tempo_di_menuetto_Alberto_Lizzio_Mozart_Festival_Orchestra_Mozart - G....OK
Computing G_major_Flute_Concerto_No_1_K313_II_Andante_non_troppo_Alberto_Lizzio_Mozart_Festival_Orchestra_Mozart - G....OK
Computing G_major_Flute_Concerto_No_1_in_K313_I_Allegro_maestoso_Alberto_Lizzio_Mozart_Festival_Orchestra_Mozart - G....OK
Computing G_major_L_103_Domenico_Scarlatti_Piano_Sonatas - G....OK
Computing G_major_L_387_Domenico_Scarlatti_Piano_Sonatas - G....OK
Computing G_major_L_487_Domenico_Scarlatti_Piano_Sonatas - G....OK
Computing G_major_Nocturne_Op_37_No_2_Chopin_Vol_2 - G....OK
Computing G_major_Prelude_No_03_ - G....OK
Computing G_minor_05_Janusz_Olejniczak_Chopin_Ballade_No_1_in_Op_23 - Gm....OK
Computing G_minor_05_Molto_Allegro_CD13_Full_Symphonie_Mozart - Gm....OK
Computing G_minor_07_Menuetto_trio_Allegretto_CD13_Full_Symphonie_Mozart - Gm....OK
Computing G_minor_08_Allegro_assai_CD13_Full_Symphonie_Mozart - Gm....OK
Computing G_minor_14_Mazurka_Op_No_4 - Gm....OK
Computing G_minor_1_Allegro_con_brio_CD06_Full_Symphonie_Mozart - Gm....OK
Computing G_minor_2_Andante_CD06_Full_Symphonie_Mozart - Gm....OK
Computing G_minor_3_Menuetto_trio_CD06_Full_Symphonie_Mozart - Gm....OK
Computing G_minor_4_Allegro_CD06_Full_Symphonie_Mozart - Gm....OK
Computing G_minor_Allemande_Bach_Suites_ouvertures - Gm....OK
Computing G_minor_Anime_et_tres_decide_debussy_Debussy_String_Quartet_No1 - Gm....OK
Computing G_minor_Assez_vif_et_bien_rythme_Debussy_String_Quartet_No1 - Gm....OK
Computing G_minor_Ballade_I_Op_23_Chopin_Complete_Piano_Music - Gm....OK
Computing G_minor_Courante_Bach_Suites_ouvertures - Gm....OK
Computing G_minor_Dvorak_Piano_Concerto_in_1_Allegro_agitato_Carlos_Kleiber_Sviatoslav_Richter - Gm....OK
Computing G_minor_Gavotte_Bach_Suites_ouvertures - Gm....OK
Computing G_minor_JSBach_Les_Grandes_Toccatas - Gm....OK
Computing G_minor_No_8_Dvorak_Slavonics_Dances - Gm....OK
Computing G_minor_Nocturne_Op_15_No_3_Chopin_Vol_1 - Gm....OK
Computing G_minor_Nocturne_Op_37_No_1_Chopin_Vol_2 - Gm....OK
Computing G_minor_Piano_Trio_No2_Allegro_moderato_Dvorak_Piano_Trios_Vol2 - Gm....OK
Computing G_minor_Piano_Trio_No2_Largo_Dvorak_Piano_Trios_Vol2 - Gm....OK
Computing G_minor_Prelude_No_22_ - Gm....OK
Computing G_minor_Sarabande_Bach_Suites_ouvertures - Gm....OK
Computing G_minor_Suite_in_arranged_from_English_Suite_n_3_BWV_808_Pr_lude_Bach_Suites_ouvertures - Gm....OK
Computing G_minor_Summer_Adagio_Presto_Antonio_Vivaldi_The_Four_Seasons - Gm....OK
Computing G_minor_Summer_Allegro_non_molto_Antonio_Vivaldi_The_Four_Seasons - Gm....OK
Computing G_minor_Summer_Presto_Antonio_Vivaldi_The_Four_Seasons - Gm....OK
Computing G_minor_The_Pianist_06_Frederic_Chopin_Waltz_No_3_in_Op_32_No_2 - G....OK
Computing G_minor_Tres_modere_debussy_Debussy_String_Quartet_No1 - Gm....OK
Computing G_sharp_minor_02_Mazurka_Op_No_2 - G#m....OK
Computing G_sharp_minor_18_No_6_allegro - G#m....OK
Computing G_sharp_minor_22_Mazurka_Op_No_2 - G#m....OK
Computing G_sharp_minor_Prelude_No_12 - G#m....OK
Computing No_7_A_major - A....OK
Computing Vanessa_Mae_-_The_Violin_Player_-_01_-_Toccata_And_Fugue_In_D_Minor - Dm....OK
Computing Vivaldi_Sonate_A_Major - A....OK
Computing Vladimir_Horowitz_-_Mozart_-_01_-_Piano_Sonata_In_B_Flat_Major_K.281_-_Allegro - Bb....OK
Computing Vladimir_Horowitz_-_Mozart_-_04_-_Piano_Sonata_In_C_Major_K.330_-_Allegro_Moderato - C....OK
Computing Vladimir_Horowitz_-_Mozart_-_07_-_Piano_Sonata_In_B_Flat_Major_K.333_-_Allegro - Bb....OK
Computing Vladimir_Horowitz_-_Mozart_-_10_-_Adagio_In_B_Minor_K.540 - Bm....OK
Computing Vladimir_Horowitz_-_Mozart_-_11_-_Rondo_In_D_Major_K.485 - D....OK
Computing mozart-04-no.24_in_c_minor-osc - Cm....OK
Computing mozart-11-no.4_in_d_major_k.218-osc - D....OK
|
example/.ipynb_checkpoints/test-lstm-checkpoint.ipynb | ###Markdown
demoThis is a demo for temporal model testing; it plots the result map and time series. Before this we trained a model using [train-lstm.py](train-lstm.py). By default the model will be saved [here](output/CONUSv4f1/). - Load packages and define test options
###Code
import os
from hydroDL.data import dbCsv
from hydroDL.post import plot, stat
from hydroDL import master
# change to your path
cDir = r'/home/kxf227/work/GitHUB/hydroDL-dev/example/'
out = os.path.join(cDir, 'output', 'CONUSv4f1')
rootDB = os.path.join(cDir, 'data')
nEpoch = 100
tRange = [20160401, 20170401]
###Output
_____no_output_____
###Markdown
- Test the model in another year
###Code
df, yp, yt = master.test(
out, tRange=[20160401, 20170401], subset='CONUSv4f1', epoch=100, reTest=True)
yp = yp.squeeze()
yt = yt.squeeze()
###Output
read master file /home/kxf227/work/GitHUB/hydroDL-dev/example/output/CONUSv4f1/master.json
read master file /home/kxf227/work/GitHUB/hydroDL-dev/example/output/CONUSv4f1/master.json
output files: ['/home/kxf227/work/GitHUB/hydroDL-dev/example/output/CONUSv4f1/CONUSv4f1_20160401_20170401_ep100_SMAP_AM.csv']
Runing new results
/home/kxf227/work/GitHUB/hydroDL-dev/example/data/Subset/CONUSv4f1.csv
read /home/kxf227/work/GitHUB/hydroDL-dev/example/data/CONUSv4f1/2016/SMAP_AM.csv 0.06371784210205078
read /home/kxf227/work/GitHUB/hydroDL-dev/example/data/CONUSv4f1/2016/APCP_FORA.csv 0.052048444747924805
read /home/kxf227/work/GitHUB/hydroDL-dev/example/data/CONUSv4f1/2016/DLWRF_FORA.csv 0.053551435470581055
read /home/kxf227/work/GitHUB/hydroDL-dev/example/data/CONUSv4f1/2016/DSWRF_FORA.csv 0.05208420753479004
read /home/kxf227/work/GitHUB/hydroDL-dev/example/data/CONUSv4f1/2016/TMP_2_FORA.csv 0.05216193199157715
read /home/kxf227/work/GitHUB/hydroDL-dev/example/data/CONUSv4f1/2016/SPFH_2_FORA.csv 0.05547213554382324
read /home/kxf227/work/GitHUB/hydroDL-dev/example/data/CONUSv4f1/2016/VGRD_10_FORA.csv 0.053055524826049805
read /home/kxf227/work/GitHUB/hydroDL-dev/example/data/CONUSv4f1/2016/UGRD_10_FORA.csv 0.05231595039367676
batch 0
read master file /home/kxf227/work/GitHUB/hydroDL-dev/example/output/CONUSv4f1/master.json
###Markdown
- Calculate statistical metrics and plot the results. An interactive map will be generated, where users can click on the map to show time series of observations and model predictions.
###Code
# calculate stat
statErr = stat.statError(yp, yt)
dataGrid = [statErr['RMSE'], statErr['Corr']]
dataTs = [yp, yt]
t = df.getT()
crd = df.getGeo()
mapNameLst = ['RMSE', 'Correlation']
tsNameLst = ['LSTM', 'SMAP']
colorMap = None
colorTs = None
# plot map and time series
%matplotlib notebook
plot.plotTsMap(
dataGrid,
dataTs,
lat=crd[0],
lon=crd[1],
t=t,
mapNameLst=mapNameLst,
tsNameLst=tsNameLst,
isGrid=True)
###Output
_____no_output_____ |
03-visualisations_in_the_browser.ipynb | ###Markdown
03 - Test some Matplotlib visualisationsThis notebook tests some Matplotlib data visualisation capabilities from the [GitHub.dev](https://github.dev) console, using Python in the browser directly 😍
###Code
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
import numpy as np
###Output
_____no_output_____
###Markdown
Confidence bands examplesThis example comes from [matplotlib gallery](https://matplotlib.org/stable/gallery/lines_bars_and_markers/fill_between_demo.htmlsphx-glr-gallery-lines-bars-and-markers-fill-between-demo-py), showing an example of an chart using filling the area between lines.
###Code
N = 21
x = np.linspace(0, 10, 11)
y = [3.9, 4.4, 10.8, 10.3, 11.2, 13.1, 14.1, 9.9, 13.9, 15.1, 12.5]
# fit a linear curve and estimate its y-values and their error.
a, b = np.polyfit(x, y, deg=1)
y_est = a * x + b
y_err = x.std() * np.sqrt(1/len(x) +
(x - x.mean())**2 / np.sum((x - x.mean())**2))
fig, ax = plt.subplots()
ax.plot(x, y_est, '-')
ax.fill_between(x, y_est - y_err, y_est + y_err, alpha=0.2)
ax.plot(x, y, 'o', color='tab:brown')
plt.show()
###Output
_____no_output_____
###Markdown
Box plot vs. violin plot comparisonThis example comes from [matplotlib gallery](https://matplotlib.org/stable/gallery/statistics/boxplot_vs_violin.htmlsphx-glr-gallery-statistics-boxplot-vs-violin-py), comparing two useful types of charts used for distribution visualisation: Box plot vs. violin plot.
###Code
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(9, 4))
# Fixing random state for reproducibility
np.random.seed(19680801)
# generate some random test data
all_data = [np.random.normal(0, std, 100) for std in range(6, 10)]
# plot violin plot
axs[0].violinplot(all_data,
showmeans=False,
showmedians=True)
axs[0].set_title('Violin plot')
# plot box plot
axs[1].boxplot(all_data)
axs[1].set_title('Box plot')
# adding horizontal grid lines
for ax in axs:
ax.yaxis.grid(True)
ax.set_xticks([y + 1 for y in range(len(all_data))])
ax.set_xlabel('Four separate samples')
ax.set_ylabel('Observed values')
# add x-tick labels
plt.setp(axs, xticks=[y + 1 for y in range(len(all_data))],
xticklabels=['x1', 'x2', 'x3', 'x4'])
plt.show()
###Output
_____no_output_____
###Markdown
Bar chart on polar axisAnother example from the [matplotlib gallery](https://matplotlib.org/stable/gallery/pie_and_polar_charts/polar_bar.htmlsphx-glr-gallery-pie-and-polar-charts-polar-bar-py), showing a bar chart on a polar axis, which can also be used for a radar chart type visualisation.
###Code
# Fixing random state for reproducibility
np.random.seed(19680801)
# Compute pie slices
N = 20
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
radii = 10 * np.random.rand(N)
width = np.pi / 4 * np.random.rand(N)
colors = plt.cm.viridis(radii / 10.)
ax = plt.subplot(projection='polar')
ax.bar(theta, radii, width=width, bottom=0.0, color=colors, alpha=0.5)
plt.show()
###Output
_____no_output_____
###Markdown
GitHub contribution-like visualisation in pure matplotlibHere is an example of a more advanced visualisation, which is in fact a time-series visualisation. I am taking the following approach:* Create a simple visualisation which resembles GitHub's contribution-like visualisation.* Generate the time series data first, without using Pandas (I'm going to test it in another example), using two arrays - one labelling the weeks and weekdays of the last 12 months, the other one with small random integer values to simulate GitHub contributions.* Generate a visualisation for the dataset above.
###Code
week_numbers = [f'Week {str(x)}' for x in np.arange(1, 53)]
week_days = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
contributions = np.random.randint(15, size=(7,52))
matplotlib.rc('figure', figsize=(12, 2))
fig, ax = plt.subplots()
im = ax.imshow(contributions)
# We want to show all ticks...
ax.set_xticks(np.arange(len(week_numbers)))
ax.set_yticks(np.arange(len(week_days)))
# ... and label them with the respective list entries
ax.set_xticks([])
ax.set_yticklabels(week_days)
ax.set_title("Contributions in the last 12 months")
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Not bad. Now let's try to give it some love and styling! The chart is based on an [annotated heatmap example](https://matplotlib.org/stable/gallery/images_contours_and_fields/image_annotated_heatmap.html).
###Code
def heatmap(data, row_labels, col_labels, ax=None,
cbar_kw={}, cbarlabel="", **kwargs):
if not ax:
ax = plt.gca()
im = ax.imshow(data, **kwargs)
cbar = ax.figure.colorbar(im, ax=ax, **cbar_kw)
cbar.ax.set_ylabel(cbarlabel, rotation=-90, va="bottom")
ax.set_xticks(np.arange(data.shape[1]))
ax.set_yticks(np.arange(data.shape[0]))
ax.set_xticklabels([])
ax.set_yticklabels(row_labels)
ax.tick_params(top=False, bottom=False,
labeltop=False, labelbottom=False)
[ax.spines[x].set_visible(False) for x in ['top', 'right', 'bottom', 'left']]
ax.set_xticks(np.arange(data.shape[1]+1)-.5, minor=True)
ax.set_yticks(np.arange(data.shape[0]+1)-.5, minor=True)
ax.grid(which="minor", color="w", linestyle='-', linewidth=3)
ax.tick_params(which="minor", bottom=False, left=False)
return im, cbar
fig, ax = plt.subplots()
im, cbar = heatmap(contributions, week_days, [], ax=ax, cmap="summer", cbarlabel="Contributions [daily]")
fig.tight_layout()
plt.show()
###Output
_____no_output_____ |
content/lessons/03/Now-You-Code/NYC1-Vote-Or-Retire.ipynb | ###Markdown
Now You Code 1: Vote or Retire? Part 1Write a program to ask for your age as input, then output 1) whether or not you can vote and 2) whether or not you can retire. Let's assume the voting age is 18 or higher, and the retirement age is 65 or higher.**NOTE:** This program is making two separate decisions, and thus should have two separate if else statements.Example Run:```Enter your age: 45You can vote.You cannot retire.``` Step 1: Problem AnalysisInputs: ageOutputs: whether you can vote or retireAlgorithm (Steps in Program): enter the age; if age less than 18, can't vote, can't retire; if age greater than 18 but less than 65, can vote, can't retire; if age greater than 18 and 65, can vote, can retire
###Code
#Step 2: write code here
age = int(input("Input age here: "))
if age >=18:
print("You can vote")
if age>=65:
print("You can retire!")
else:
print("You are not old enough")
###Output
Input age here: 7
You are not old enough
###Markdown
Part 2Now that you have it working, re-write your code to handle bad input using Python's `try... except` statement:Example run:```Enter your age: threveThat's not an age!```**Note:** Exception handling is not part of our algorithm. It's a programming concern, not a problem-solving concern!
###Code
## Step 2 (again): write code again but handle errors with try...except
#Step 2: write code here
try:
age = int(input("Input age here: "))
if age >=18:
print("You can vote")
if age>=65:
print("You can retire!")
else:
if age <18:
print("You can't vote.")
if age<65:
print("You can't retire")
if age<0:
print("That's not an age.")
except:
print("Invalid input.")
###Output
Input age here: h
Invalid input.
###Markdown
Now You Code 1: Vote or Retire? Part 1Write a program to ask for your age as input, then output 1) whether or not you can vote and 2) whether or not you can retire. Let's assume the voting age is 18 or higher, and the retirement age is 65 or higher.**NOTE:** This program is making two separate decisions, and thus should have two separate if else statements.Example Run:```Enter your age: 45You can vote.You cannot retire.``` Step 1: Problem AnalysisInputs:Outputs:Algorithm (Steps in Program):
###Code
#Step 2: write code here
age= int(input("enter your age: "))
if age >=18:
print("you can vote.")
else:
print("You cant vote.")
if age >= 64:
print("you can retire.")
else:
print("you can't retire.")
###Output
enter your age: five
###Markdown
Part 2Now that you have it working, re-write your code to handle bad input using Python's `try... except` statement:Example run:```Enter your age: threveThat's not an age!```**Note:** Exception handling is not part of our algorithm. It's a programming concern, not a problem-solving concern!
###Code
## Step 2 (again): write code again but handle errors with try...except
try:
age= int(input("enter your age: "))
if age >=18:
print("you can vote.")
else:
print("You cant vote.")
if age >= 64:
print("you can retire.")
else:
print("you can't retire.")
except:
print("try again")
###Output
enter your age: five
try again
###Markdown
Now You Code 1: Vote or Retire? Part 1Write a program to ask for your age as input, then output 1) whether or not you can vote and 2) whether or not you can retire. Let's assume the voting age is 18 or higher, and the retirement age is 65 or higher.**NOTE:** This program is making two separate decisions, and thus should have two separate if else statements.Example Run:```Enter your age: 45You can vote.You cannot retire.``` Step 1: Problem AnalysisInputs: age Outputs: can vote and can retire Algorithm (Steps in Program):
###Code
age = int(input("how old are you?: "))
if age <18:
print("you cannot vote or retire")
else:
print("you can vote")
if age >65:
print("you can retire")
###Output
how old are you?: 16.5
###Markdown
Part 2Now that you have it working, re-write your code to handle bad input using Python's `try... except` statement:Example run:```Enter your age: threveThat's not an age!```**Note:** Exception handling is not part of our algorithm. It's a programming concern, not a problem-solving concern!
###Code
try:
age = int(input("how old are you?: "))
if age <0:
print("that is not an age")
if age <18:
print("you cannot vote or retire")
elif age >=18:
print("you can vote")
if age >=65:
print("you can retire")
except:
print("you didn't put in an age")
###Output
how old are you?: 19
you can vote
###Markdown
Now You Code 1: Vote or Retire? Part 1Write a program to ask for your age as input, then output 1) whether or not you can vote and 2) whether or not you can retire. Let's assume the voting age is 18 or higher, and the retirement age is 65 or higher.**NOTE:** This program is making two separate decisions, and thus should have two separate if else statements.Example Run:```Enter your age: 45You can vote.You cannot retire.``` Step 1: Problem AnalysisInputs:ageOutputs:can/can not vote, can/can not retireAlgorithm (Steps in Program):ask for age, see if age is >= 18, give corresponding answer, see if age is >= 65, give corresponding answer
###Code
age=int(input("enter your age: "))
if age>=65:
print("you can vote and retire")
elif age>=18:
print("you can vote but not retire")
else:
print("you can not vote or retire")
###Output
enter your age: 70
you can vote and retire
###Markdown
Part 2Now that you have it working, re-write your code to handle bad input using Python's `try... except` statement:Example run:```Enter your age: threveThat's not an age!```**Note:** Exception handling is not part of our algorithm. It's a programming concern, not a problem-solving concern!
###Code
age=input("enter your age: ")
try:
if int(age)>=65:
print("you can vote and retire")
elif int(age)>=18:
print("you can vote but not retire")
elif int(age)<0:
print("please enter a valid age")
else:
print("you can not vote or retire")
except:
print ("thats not an age")
###Output
enter your age: -9
please enter a valid age
###Markdown
Now You Code 1: Vote or Retire? Part 1Write a program to ask for your age as input, then output 1) whether or not you can vote and 2) whether or not you can retire. Let's assume the voting age is 18 or higher, and the retirement age is 65 or higher.**NOTE:** This program is making two separate decisions, and thus should have two separate if else statements.Example Run:```Enter your age: 45You can vote.You cannot retire.``` Step 1: Problem AnalysisInputs:Outputs:Algorithm (Steps in Program):
###Code
#Step 2: write code here
###Output
_____no_output_____
###Markdown
Part 2Now that you have it working, re-write your code to handle bad input using Python's `try... except` statement:Example run:```Enter your age: threveThat's not an age!```**Note:** Exception handling is not part of our algorithm. It's a programming concern, not a problem-solving concern!
###Code
## Step 2 (again): write code again but handle errors with try...except
###Output
_____no_output_____
###Markdown
Now You Code 1: Vote or Retire? Part 1Write a program to ask for your age as input, then output 1) whether or not you can vote and 2) whether or not you can retire. Let's assume the voting age is 18 or higher, and the retirement age is 65 or higher.**NOTE:** This program is making two separate decisions, and thus should have two separate if else statements.Example Run:```Enter your age: 45You can vote.You cannot retire.``` Step 1: Problem AnalysisInputs:Outputs:Algorithm (Steps in Program):
###Code
#Step 2: write code here
age=int(input("Enter your age: "))
if age>=18 :
print("You can vote")
else:
print("You can't vote, the voting age is 18 or older.")
if age>=65 :
print("You can retire")
else:
print("You can't retire. The retirement age is 65 or older.")
###Output
Enter your age: 12
You can't vote, the voting age is 18 or older.
You can't retire. The retirement age is 65 or older.
###Markdown
Part 2Now that you have it working, re-write your code to handle bad input using Python's `try... except` statement:Example run:```Enter your age: threveThat's not an age!```**Note:** Exception handling is not part of our algorithm. It's a programming concern, not a problem-solving concern!
###Code
## Step 2 (again): write code again but handle errors with try...except
try:
age=int(input("Enter your age: "))
if age>0:
print(age)
if age>=18 :
print("You can vote")
else:
print("You can't vote, the voting age is 18 or older.")
if age>=65 :
print("You can retire")
else:
print("You can't retire. The retirement age is 65 or older.")
else:
print("That's not an age!")
except:
print("That's not an age!")
###Output
Enter your age: two
That's not an age!
###Markdown
Now You Code 1: Vote or Retire? Part 1Write a program to ask for your age as input, then output 1) whether or not you can vote and 2) whether or not you can retire. Let's assume the voting age is 18 or higher, and the retirement age is 65 or higher.**NOTE:** This program is making two separate decisions, and thus should have two separate if else statements.Example Run:```Enter your age: 45You can vote.You cannot retire.``` Step 1: Problem AnalysisInputs: Age Outputs: Can you retire and can you voteAlgorithm (Steps in Program):Input age; if age greater than or equal to 18, print "you can vote", else print "You can not vote"; if age greater than or equal to 65, print "You can retire", else print "You can not retire"
###Code
#Step 2: write code here
age = int(input("Enter your age: "))
if age >= 18:
print("You can vote")
else:
print("You can not vote")
if age >= 65:
print("You can retire")
else:
print("You can not retire")
###Output
Enter your age: 22
You can vote
You can not retire
###Markdown
Part 2Now that you have it working, re-write your code to handle bad input using Python's `try... except` statement:Example run:```Enter your age: threveThat's not an age!```**Note:** Exception handling is not part of our algorithm. It's a programming concern, not a problem-solving concern!
###Code
## Step 2 (again): write code again but handle errors with try...except
try:
age = int(input("Enter your age: "))
if age >= 18:
print("You can vote")
else:
print("You can not vote")
if age >= 65:
print("You can retire")
else:
print("You can not retire")
except:
print("That is not an age!")
###Output
_____no_output_____
###Markdown
Now You Code 1: Vote or Retire? Part 1Write a program to ask for your age as input, then output 1) whether or not you can vote and 2) whether or not you can retire. Let's assume the voting age is 18 or higher, and the retirement age is 65 or higher.**NOTE:** This program is making two separate decisions, and thus should have two separate if else statements.Example Run:```Enter your age: 45You can vote.You cannot retire.``` Step 1: Problem AnalysisInputs: the user inputs their age.Outputs: the computer tells them if they can vote and if they can retire.Algorithm (Steps in Program): input age; with if/else checks on whether age is at least 18 or 65, report whether they can or cannot vote and/or retire.
###Code
#Step 2: write code here
age=int(input("how old are you?"))
if(age>=18):
print("you are %d you can vote!" %(age))
if(age>=65):
print(" and you can retire!")
else:
print("sorry! you aren't of age!")
###Output
how old are you?47.5
###Markdown
Part 2Now that you have it working, re-write your code to handle bad input using Python's `try... except` statement:Example run:```Enter your age: threveThat's not an age!```**Note:** Exception handling is not part of our algorithm. It's a programming concern, not a problem-solving concern!
###Code
## Step 2 (again): write code again but handle errors with try...except
try:
age=int(input("how old are you?"))
if(age>=18):
print("you are %d you can vote!" %(age))
if(age>=65):
print(" and you can retire!")
else:
print("sorry! you aren't of age!")
if(age<=0):
print("thats not an age!")
except:
print("Invalid input")
###Output
how old are you?-50
sorry! you aren't of age!
thats not an age!
###Markdown
Now You Code 1: Vote or Retire? Part 1Write a program to ask for your age as input, then output 1) whether or not you can vote and 2) whether or not you can retire. Let's assume the voting age is 18 or higher, and the retirement age is 65 or higher.**NOTE:** This program is making two separate decisions, and thus should have two separate if else statements.Example Run:```Enter your age: 45You can vote.You cannot retire.``` Step 1: Problem AnalysisInputs:Outputs:Algorithm (Steps in Program):
###Code
#Step 2: write code here
age = int(input("Enter your age "))
if age >= 65:
print ("You can vote.")
print("You can retire.")
elif age < 18:
print ("You cannot vote.")
print ("Youu cannot retire.")
else:
print ("You can vote.")
print ("You cannot retire.")
###Output
Enter your age 5
You cannot vote.
Youu cannot retire.
###Markdown
Part 2Now that you have it working, re-write your code to handle bad input using Python's `try... except` statement:Example run:```Enter your age: threveThat's not an age!```**Note:** Exception handling is not part of our algorithm. It's a programming concern, not a problem-solving concern!
###Code
## Step 2 (again): write code again but handle errors with try...except
try:
age = int(input("Enter your age "))
if age >= 65:
print ("You can vote.")
print("You can retire.")
elif age<0:
print("That is not an age!")
elif age < 18:
print ("You cannot vote.")
print ("Youu cannot retire.")
else:
print ("You can vote.")
print ("You cannot retire.")
except:
print("That's not a number!")
###Output
Enter your age -5
That is not an age!
|
jupyter/chap18.ipynb | ###Markdown
Chapter 18 *Modeling and Simulation in Python*Copyright 2021 Allen DowneyLicense: [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/)
###Code
# check if the libraries we need are installed
try:
import pint
except ImportError:
!pip install pint
try:
import modsim
except ImportError:
!pip install modsimpy
###Output
_____no_output_____
###Markdown
Code from the previous chapterRead the data.
###Code
import os
filename = 'glucose_insulin.csv'
if not os.path.exists(filename):
!wget https://raw.githubusercontent.com/AllenDowney/ModSimPy/master/data/glucose_insulin.csv
from pandas import read_csv
data = read_csv(filename, index_col='time');
###Output
_____no_output_____
###Markdown
Interpolate the insulin data.
###Code
from modsim import interpolate
I = interpolate(data.insulin)
###Output
_____no_output_____
###Markdown
In this chapter, we implement the glucose minimal model described in the previous chapter. We'll start with `run_simulation`, which solves differential equations using discrete time steps. This method works well enough for many applications, but it is not very accurate. In this chapter we explore a better option: using an **ODE solver**. ImplementationTo get started, let's assume that the parameters of the model are known. We'll implement the model and use it to generate time series for `G` and `X`. Then we'll see how to find the parameters that generate the series that best fits the data. We can pass `params` and `data` to `make_system`:
###Code
from modsim import State, System
def make_system(params, data):
G0, k1, k2, k3 = params
Gb = data.glucose[0]
Ib = data.insulin[0]
I = interpolate(data.insulin)
t_0 = data.index[0]
t_end = data.index[-1]
init = State(G=G0, X=0)
return System(params=params, init=init,
Gb=Gb, Ib=Ib, I=I,
t_0=t_0, t_end=t_end, dt=2)
###Output
_____no_output_____
###Markdown
`make_system` uses the measurements at `t=0` as the basal levels, `Gb` and `Ib`. It gets `t_0` and `t_end` from the data. And it uses the parameter `G0` as the initial value for `G`. Then it packs everything into a `System` object. Taking advantage of estimates from prior work, we'll start with these values:
###Code
# G0, k1, k2, k3
params = 290, 0.03, 0.02, 1e-05
system = make_system(params, data)
###Output
_____no_output_____
###Markdown
Here's the update function:
###Code
def update_func(state, t, system):
G, X = state
G0, k1, k2, k3 = system.params
I, Ib, Gb = system.I, system.Ib, system.Gb
dt = system.dt
dGdt = -k1 * (G - Gb) - X*G
dXdt = k3 * (I(t) - Ib) - k2 * X
G += dGdt * dt
X += dXdt * dt
return State(G=G, X=X)
###Output
_____no_output_____
###Markdown
As usual, the update function takes a `State` object, a time, and a `System` object as parameters. The first line uses multiple assignment to extract the current values of `G` and `X`. The following lines unpack the parameters we need from the `System` object. Computing the derivatives `dGdt` and `dXdt` is straightforward; we just translate the equations from math notation to Python. Then, to perform the update, we multiply each derivative by the discrete time step `dt`, which is 2 min in this example. The return value is a `State` object with the new values of `G` and `X`. Before running the simulation, it is a good idea to run the update function with the initial conditions:
###Code
update_func(system.init, system.t_0, system)
###Output
_____no_output_____
###Markdown
If it runs without errors and there is nothing obviously wrong with the results, we are ready to run the simulation. We'll use this version of `run_simulation`, which is very similar to previous versions:
###Code
from modsim import linrange, TimeFrame
def run_simulation(system, update_func):
init = system.init
t_0, t_end, dt = system.t_0, system.t_end, system.dt
t_array = linrange(system.t_0, system.t_end, system.dt)
n = len(t_array)
frame = TimeFrame(index=t_array, columns=init.index)
frame.iloc[0] = system.init
for i in range(n-1):
t = t_array[i]
state = frame.iloc[i]
frame.iloc[i+1] = update_func(state, t, system)
return frame
###Output
_____no_output_____
###Markdown
We can run it like this:
###Code
results = run_simulation(system, update_func)
results.head()
from modsim import decorate
results.G.plot(style='-', label='simulation')
data.glucose.plot(style='o', color='C0', label='glucose data')
decorate(ylabel='Concentration (mg/dL)')
results.X.plot(color='C1', label='remote insulin')
decorate(xlabel='Time (min)',
ylabel='Concentration (arbitrary units)')
###Output
_____no_output_____
###Markdown
The figure shows the results. The top plot shows simulated glucose levels from the model along with the measured data. The bottom plot shows simulated insulin levels in tissue fluid, which is in unspecified units, and not to be confused with measured insulin levels in the blood. With the parameters I chose, the model fits the data well, except for the first few data points, where we don't expect the model to be accurate. Solving differential equationsSo far we have solved differential equations by rewriting them as difference equations. In the current example, the differential equations are: $$\frac{dG}{dt} = -k_1 \left[ G(t) - G_b \right] - X(t) G(t)$$$$\frac{dX}{dt} = k_3 \left[I(t) - I_b \right] - k_2 X(t)$$ If we multiply both sides by $dt$, we have:$$dG = \left[ -k_1 \left[ G(t) - G_b \right] - X(t) G(t) \right] dt$$$$dX = \left[ k_3 \left[I(t) - I_b \right] - k_2 X(t) \right] dt$$ When $dt$ is very small, or more precisely **infinitesimal**, these equations are exact. But in our simulations, $dt$ is 2 min, which is not very small. In effect, the simulations assume that the derivatives $dG/dt$ and $dX/dt$ are constant during each 2 min time step. This method, evaluating derivatives at discrete time steps and assuming that they are constant in between, is called **Euler's method**. Euler's method is good enough for some simple problems, but it is not very accurate. Other methods are more accurate, but many of them are substantially more complicated. One of the best simple methods is called **Ralston's method**. The ModSim library provides a function called `run_ode_solver` that implements it. The "ODE" in `run_ode_solver` stands for "ordinary differential equation". The equations we are solving are "ordinary" because all the derivatives are with respect to the same variable; in other words, there are no partial derivatives. To use `run_ode_solver`, we have to provide a "slope function", like this:
###Code
def slope_func(t, state, system):
G, X = state
G0, k1, k2, k3 = system.params
I, Ib, Gb = system.I, system.Ib, system.Gb
dGdt = -k1 * (G - Gb) - X*G
dXdt = k3 * (I(t) - Ib) - k2 * X
return dGdt, dXdt
###Output
_____no_output_____
###Markdown
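Before moving on, here is a small illustrative sketch (not from the book) of what a single Euler step versus a single Ralston step computes for a generic ODE $dy/dt = f(t, y)$; `f`, `y`, and `dt` below are placeholders, not objects from this chapter.

```python
def euler_step(f, t, y, dt):
    # one slope evaluation, assumed constant across the whole step
    return y + f(t, y) * dt

def ralston_step(f, t, y, dt):
    # two slope evaluations; their weighted average makes the step second-order accurate
    k1 = f(t, y)
    k2 = f(t + 2 * dt / 3, y + 2 * dt / 3 * k1)
    return y + dt * (k1 / 4 + 3 * k2 / 4)
```
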
`slope_func` is similar to `update_func`; in fact, it takes the same parameters in the same order. But `slope_func` is simpler, because all we have to do is compute the derivatives, that is, the slopes. We don't have to do the updates; `run_ode_solver` does them for us. Now we can call `run_ode_solver` (provided as `run_solve_ivp` in the version of the library used here) like this:
###Code
from modsim import run_solve_ivp
results2, details = run_solve_ivp(system, slope_func)
details
###Output
_____no_output_____
###Markdown
`run_ode_solver` is similar to `run_simulation`: it takes a `System` object and a slope function as parameters. It returns two values: a `TimeFrame` with the solution and a `ModSimSeries` with additional information. A `ModSimSeries` is like a `System` or `State` object; it contains a set of variables and their values. The `ModSimSeries` from `run_ode_solver`, which we assign to `details`, contains information about how the solver ran, including a success code and diagnostic message. The `TimeFrame`, which we assign to `results`, has one row for each time step and one column for each state variable. In this example, the rows are time from 0 to 182 minutes; the columns are the state variables, `G` and `X`.
###Code
from modsim import decorate
results2.G.plot(style='-', label='simulation')
data.glucose.plot(style='o', color='C0', label='glucose data')
decorate(ylabel='Concentration (mg/dL)')
results2.X.plot(color='C1', label='remote insulin')
decorate(xlabel='Time (min)',
ylabel='Concentration (arbitrary units)')
###Output
_____no_output_____
###Markdown
The figure shows the results from `run_simulation` and `run_ode_solver`. The difference between them is barely visible. We can compute the percentage differences like this:
###Code
diff = results.G - results2.G
percent_diff = diff / results2.G * 100
percent_diff.abs().max()
###Output
_____no_output_____
###Markdown
The largest percentage difference is less than 2%, which is small enough that it probably doesn't matter in practice. SummaryYou might be interested in this article about [people making a DIY artificial pancreas](https://www.bloomberg.com/news/features/2018-08-08/the-250-biohack-that-s-revolutionizing-life-with-diabetes). Exercises **Exercise:** Our solution to the differential equations is only approximate because we used a finite step size, `dt=2` minutes. If we make the step size smaller, we expect the solution to be more accurate. Run the simulation with `dt=1` and compare the results. What is the largest relative error between the two solutions?
###Code
# Solution
system.dt = 1
results3, details = run_solve_ivp(system, slope_func)
details
# Solution
results2.G.plot(style='C2--', label='run_ode_solver (dt=2)')
results3.G.plot(style='C3:', label='run_ode_solver (dt=1)')
decorate(xlabel='Time (m)', ylabel='mg/dL')
# Solution
diff = (results2.G - results3.G).dropna()
percent_diff = diff / results2.G * 100
# Solution
max(abs(percent_diff))
###Output
_____no_output_____ |
02_Modeling_MFCC.ipynb | ###Markdown
**Capstone Project: Audio Classification Of Emotions** Modeling on MFCC
###Code
#Mount Drive
from google.colab import drive
drive.mount('/content/My_Drive/')
###Output
Drive already mounted at /content/My_Drive/; to attempt to forcibly remount, call drive.mount("/content/My_Drive/", force_remount=True).
###Markdown
Imports
###Code
#Import basic, plotting libraries
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
%matplotlib inline
#Import SK-Learn Libraries
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier as R_Forest
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier as KNNClassifier
from sklearn import svm
from sklearn.preprocessing import LabelEncoder, StandardScaler, RobustScaler
#Import Keras Layers
import keras
from keras import layers
from keras import models
from keras.layers import GlobalAveragePooling1D, Dense, MaxPooling1D, Conv1D, Flatten, Dropout
from keras.models import Sequential
from keras.layers.core import Activation, Dense
from keras.layers import BatchNormalization
from tensorflow.keras.optimizers import RMSprop
from keras.layers import GaussianNoise
###Output
_____no_output_____
###Markdown
Functions
###Code
def cf_matrix(y_preds, y_test):
'''This function creates a confusion matrix on our data'''
cf_matrix = confusion_matrix(y_preds, y_test)
cmn = cf_matrix.astype('float') / cf_matrix.sum(axis=1)[:, np.newaxis]
fig, ax = plt.subplots(figsize=(10,10))
labels = ['Angry', 'Calm', 'Disgust', 'Fearful', 'Happy', 'Neutral', 'Sad', 'Suprised']
sns.heatmap(cmn, linewidths=1, annot=True, ax=ax, fmt='.2f', cmap="OrRd")
ax.set_xlabel('Predicted labels');ax.set_ylabel('True labels')
ax.set_title('Confusion Matrix')
ax.xaxis.set_ticklabels(labels); ax.yaxis.set_ticklabels(labels);
#Set a random seed for reproducibility.
np.random.seed(42)
#Read_CSV
df = pd.read_csv('Data/DataFrames/MFCC_Data.csv')
#Encoded Labels
levels = ['Angry', 'Calm', 'Disgust', 'Fearful', 'Happy', 'Neutral', 'Sad', 'Suprised']
#Show dataframe
df.head(2)
le = LabelEncoder()
df['encoded_label']= le.fit_transform(df['label'])
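# Note (added comment): LabelEncoder assigns integer codes in sorted (alphabetical) order of the
# class strings, so the alphabetical `levels` list above lines up with the encoded values 0..7
# used by the classification reports and confusion matrices below.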
#Select X and y features
X = df.iloc[:, 10:].values
y = df['encoded_label'].values
print(X.shape)
print(y.shape)
#Add as array
X = np.asarray(X)
y = np.asarray(y)
X.shape, y.shape
#Create train-test split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, shuffle = True)
X_train.shape, y_train.shape, X_test.shape, y_test.shape
#Scale X_training Data
sc = StandardScaler()
X_train_sc = sc.fit_transform(X_train)
X_test_sc = sc.transform(X_test)
#Expand dimensions (add a channel axis) for the neural net's Conv1D layers, which expect (samples, steps, channels)
X_train_sc_1 = np.expand_dims(X_train_sc, axis=2)
X_test_sc_1 = np.expand_dims(X_test_sc, axis=2)
X_train_sc_1.shape, X_test_sc_1.shape
###Output
_____no_output_____
###Markdown
Models
###Code
SVM = svm.SVC()
SVM.fit(X_train_sc, y_train)
y_predSVM = SVM.predict(X_test_sc)
print(classification_report(y_test,y_predSVM,target_names=levels))
cf_matrix(y_predSVM,y_test)
###Output
_____no_output_____
###Markdown
K-Nearest Neighbors
###Code
clf = KNNClassifier()
clf.fit(X_train_sc, y_train)
y_predKNN = clf.predict(X_test_sc)
print(classification_report(y_test,y_predKNN,target_names=levels))
cf_matrix(y_predKNN,y_test)
###Output
_____no_output_____
###Markdown
Random Forest Classifier
###Code
rforest = R_Forest(criterion="gini",
max_depth=5,
max_features="log2",
min_samples_leaf = 3,
min_samples_split = 20,
n_estimators= 22000,
)
#Fit Random forest
rforest.fit(X_train_sc, y_train)
#Get predictions
y_pred = rforest.predict(X_test_sc)
#Print classification report
print(classification_report(y_test,y_pred,target_names=levels))
#Plot confusion matrix
cf_matrix(y_pred,y_test)
###Output
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:5: RuntimeWarning: invalid value encountered in true_divide
"""
###Markdown
Decision Tree Classifier
###Code
#Instantiate decision tree
dtree = DecisionTreeClassifier()
#Fit decision tree
dtree.fit(X_train_sc, y_train)
#Predict data
y_preds = dtree.predict(X_test_sc)
#Print classification report
print(classification_report(y_test,y_preds,target_names=levels))
#Plot confusion matrix
cf_matrix(y_preds,y_test)
###Output
_____no_output_____
###Markdown
Convolutional Neural Net (CNN)
###Code
#Develop model
model = models.Sequential()
model.add(Conv1D(264, kernel_size=5, strides=1, activation='relu', input_shape=(X_train.shape[1], 1)))
model.add(Dropout(0.2))
model.add(MaxPooling1D(pool_size=5, strides = 2))
model.add(GaussianNoise(0.1))
model.add(Conv1D(128, kernel_size=5, strides=1, activation='relu'))
model.add(Dropout(0.2))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(8, activation='softmax'))
# Select loss function and optimizer
model.compile(loss='sparse_categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.summary()
#fit and train model
history = model.fit(
X_train_sc_1,
y_train,
epochs=60,
batch_size=15,
validation_data=(X_test_sc_1, y_test)
)
#Show nueral net performance
print("Train Accuracy : " , model.evaluate(X_train_sc_1,y_train)[1]*100 , "%")
print("Test Accuracy : " , model.evaluate(X_test_sc_1,y_test)[1]*100 , "%")
epochs = [i for i in range(60)]
fig , ax = plt.subplots(1,2)
train_acc = history.history['accuracy']
train_loss = history.history['loss']
test_acc = history.history['val_accuracy']
test_loss = history.history['val_loss']
fig.set_size_inches(20,6)
ax[0].plot(epochs , train_loss , label = 'Training Loss')
ax[0].plot(epochs , test_loss , label = 'Testing Loss')
ax[0].set_title('Training & Testing Loss')
ax[0].legend()
ax[0].set_xlabel("Epochs")
ax[1].plot(epochs , train_acc , label = 'Training Accuracy')
ax[1].plot(epochs , test_acc , label = 'Testing Accuracy')
ax[1].set_title('Training & Testing Accuracy')
ax[1].legend()
ax[1].set_xlabel("Epochs")
plt.show()
#Predict on test data
y_predictions = model.predict(X_test_sc_1).argmax(axis=1)
#Print classification report
print(classification_report(y_test,y_predictions, target_names=levels))
#Plot confusion matrix
cf_matrix(y_predictions,y_test)
#Save Model
model.save("Chosen_Model.h5", save_format='h5')
###Output
_____no_output_____ |
categorical-tabular-pytorch-classifier-ii.ipynb | ###Markdown
Categorical Tabular Pytorch Classifier_By Nick Brooks, 2020-01-10_
###Code
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
import sys
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# Any results you write to the current directory are saved as output.
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import torch.optim as optim
from sklearn import metrics
import matplotlib.pyplot as plt
from torch.optim.lr_scheduler import ReduceLROnPlateau
# from torch.utils.data import Dataset, DataLoader
print("\nPytorch Version: {}".format(torch.__version__))
print("Python Version: {}\n".format(sys.version))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Pytorch Compute Device: {}".format(device))
from contextlib import contextmanager
import time
import gc
notebookstart = time.time()
@contextmanager
def timer(name):
"""
Time Each Process
"""
t0 = time.time()
yield
print('\n[{}] done in {} Minutes'.format(name, round((time.time() - t0)/60,2)))
seed = 50
debug = None
if debug:
nrow = 20000
else:
nrow = None
with timer("Load"):
PATH = "/kaggle/input/cat-in-the-dat-ii/"
train = pd.read_csv(PATH + "train.csv", index_col = 'id', nrows = nrow)
test = pd.read_csv(PATH + "test.csv", index_col = 'id', nrows = nrow)
submission_df = pd.read_csv(PATH + "sample_submission.csv")
[print(x.shape) for x in [train, test, submission_df]]
traindex = train.index
testdex = test.index
y_var = train.target.copy()
print("Target Distribution:\n",y_var.value_counts(normalize = True).to_dict())
df = pd.concat([train.drop('target',axis = 1), test], axis = 0)
del train, test, submission_df
with timer("FE 1"):
drop_cols=["bin_0"]
# Split 2 Letters; This is the only part which is not generic and would actually require data inspection
df["ord_5a"]=df["ord_5"].str[0]
df["ord_5b"]=df["ord_5"].str[1]
drop_cols.append("ord_5")
xor_cols = []
nan_cols = []
for col in df.columns:
# NUll Values
tmp_null = df.loc[:,col].isnull().sum()
if tmp_null > 0:
print("{} has {} missing values.. Filling".format(col, tmp_null))
nan_cols.append(col)
if df.loc[:,col].dtype == "O":
df.loc[:,col].fillna("NAN", inplace=True)
else:
df.loc[:,col].fillna(-1, inplace=True)
# Categories that do not overlap
train_vals = set(df.loc[traindex, col].unique())
test_vals = set(df.loc[testdex, col].unique())
xor_cat_vals=train_vals ^ test_vals
if xor_cat_vals:
df.loc[df[col].isin(xor_cat_vals), col]="xor"
print("{} has {} xor factors, {} rows".format(col, len(xor_cat_vals),df.loc[df[col] == 'xor',col].shape[0]))
xor_cols.append(col)
# One Hot Encode None-Ordered Categories
ordinal_cols=['ord_1', 'ord_2', 'ord_3', 'ord_4', 'ord_5a', 'day', 'month']
X_oh=df[df.columns.difference(ordinal_cols)]
oh1=pd.get_dummies(X_oh, columns=X_oh.columns, drop_first=True, sparse=True)
ohc1=oh1.sparse.to_coo()
from sklearn.base import TransformerMixin
from itertools import repeat
import scipy
class ThermometerEncoder(TransformerMixin):
"""
Assumes all values are known at fit
"""
def __init__(self, sort_key=None):
self.sort_key = sort_key
self.value_map_ = None
def fit(self, X, y=None):
self.value_map_ = {val: i for i, val in enumerate(sorted(X.unique(), key=self.sort_key))}
return self
def transform(self, X, y=None):
values = X.map(self.value_map_)
possible_values = sorted(self.value_map_.values())
idx1 = []
idx2 = []
all_indices = np.arange(len(X))
for idx, val in enumerate(possible_values[:-1]):
new_idxs = all_indices[values > val]
idx1.extend(new_idxs)
idx2.extend(repeat(idx, len(new_idxs)))
result = scipy.sparse.coo_matrix(([1] * len(idx1), (idx1, idx2)), shape=(len(X), len(possible_values)), dtype="int8")
return result
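# Illustration (added comment): for an ordinal column with three levels, say ['Freezing', 'Cold', 'Warm'],
# the encoder above maps each value to cumulative indicator columns, e.g.
# 'Freezing' -> [0, 0, 0], 'Cold' -> [1, 0, 0], 'Warm' -> [1, 1, 0],
# so a higher level switches on every column below it (the last column stays zero by construction).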
other_classes = ["NAN", 'xor']
with timer("Thermometer Encoder"):
thermos=[]
for col in ordinal_cols:
if col=="ord_1":
sort_key=(other_classes + ['Novice', 'Contributor', 'Expert', 'Master', 'Grandmaster']).index
elif col=="ord_2":
sort_key= (other_classes + ['Freezing', 'Cold', 'Warm', 'Hot', 'Boiling Hot', 'Lava Hot']).index
elif col in ["ord_3", "ord_4", "ord_5a"]:
sort_key=str
elif col in ["day", "month"]:
sort_key=int
else:
raise ValueError(col)
enc=ThermometerEncoder(sort_key=sort_key)
thermos.append(enc.fit_transform(df[col]))
ohc=scipy.sparse.hstack([ohc1] + thermos).tocsr()
display(ohc)
X_sparse = ohc[:len(traindex)]
test_sparse = ohc[len(traindex):]
print(X_sparse.shape)
print(test_sparse.shape)
del ohc; gc.collect()
# Train Test Split
X_train, X_valid, y_train, y_valid = train_test_split(X_sparse, y_var, test_size=0.2, shuffle=True)
[print(table.shape) for table in [X_train, y_train, X_valid, y_valid]];
class TabularDataset(torch.utils.data.Dataset):
def __init__(self, data, y=None):
self.n = data.shape[0]
self.y = y
self.X = data
def __len__(self):
return self.n
def __getitem__(self, idx):
if self.y is not None:
return [self.X[idx].toarray(), self.y.astype(float).values[idx]]
else:
return [self.X[idx].toarray()]
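# Note (added comment): __getitem__ densifies one sparse row at a time with .toarray(), so the full
# one-hot matrix never has to be materialised; the DataLoader then stacks these small dense rows
# into a batch (hence the .squeeze(1) applied to the batch tensors in the training loop below).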
train_dataset = TabularDataset(data = X_train, y = y_train)
valid_dataset = TabularDataset(data = X_valid, y = y_valid)
submission_dataset = TabularDataset(data = test_sparse, y = None)
batch_size = 16384
train_loader = torch.utils.data.DataLoader(train_dataset,
batch_size=batch_size,
shuffle=True)
val_loader = torch.utils.data.DataLoader(valid_dataset,
batch_size=batch_size,
shuffle=False)
submission_loader = torch.utils.data.DataLoader(submission_dataset,
batch_size=batch_size,
shuffle=False)
next(iter(train_loader))
class Net(nn.Module):
def __init__(self, dropout = .60):
super().__init__()
self.dropout = dropout
self.fc1 = nn.Linear(X_sparse.shape[1], 4096)
self.d1 = nn.Dropout(p=self.dropout)
self.bn1 = nn.BatchNorm1d(num_features=4096)
self.fc2 = nn.Linear(4096, 2048)
self.d2 = nn.Dropout(p=self.dropout)
self.bn2 = nn.BatchNorm1d(num_features=2048)
self.fc3 = nn.Linear(2048, 64)
self.d3 = nn.Dropout(p=self.dropout)
self.bn3 = nn.BatchNorm1d(num_features=64)
self.fc4 = nn.Linear(64, 1)
self.out_act = nn.Sigmoid()
def forward(self, x):
x = F.relu(self.fc1(x))
x = self.d1(x)
x = self.bn1(x)
x = F.relu(self.fc2(x))
x = self.d2(x)
x = self.bn2(x)
x = F.relu(self.fc3(x))
x = self.d3(x)
x = self.bn3(x)
x = self.fc4(x)
x = self.out_act(x)
return x
net = Net()
net.to(device)
###Output
_____no_output_____
###Markdown
Let's follow a recipe: 1. Fix the random seed. 2. Do not trust learning rate decay defaults: check them, and remove Nesterov and momentum.
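As a hedged sketch of the first recipe item, fixing the random seed usually means seeding every RNG in play; the exact calls depend on the libraries and backends used, and the helper name below is illustrative, not from the original notebook.

```python
import random
import numpy as np
import torch

def set_seed(seed=50):
    # seed Python, NumPy, and PyTorch (CPU and all GPUs)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # trade some speed for deterministic cuDNN kernels
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```
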
###Code
learning_rate = 0.01
# https://github.com/ncullen93/torchsample/blob/master/README.md
# optimizer = torch.optim.SGD(net.parameters(), lr=learning_rate, momentum=0, nesterov=0)
# scheduler = ReduceLROnPlateau(optimizer, min_lr = 0.00001, mode='min', factor=0.5, patience=3, verbose=True)
EPOCHS = 50
criterion = nn.BCELoss()
optimizer = optim.Adam(net.parameters(), lr=learning_rate)
nn_output = []
patience = 0
min_val_loss = np.Inf
full_train_loss = []
full_val_loss = []
for epoch in range(EPOCHS): # one full pass over the data per epoch
train_loss = []
train_metric_pred = []
train_metric_label = []
net.train()
for data in train_loader: # `data` is a batch of data
X, y = Variable(data[0].to(device).squeeze(1).float()), Variable(data[1].to(device)) # X is the batch of features, y is the batch of targets.
optimizer.zero_grad() # sets gradients to 0 before loss calc. You will do this likely every step.
output = net(X).squeeze() # pass in the reshaped batch
tloss = criterion(output, y) # calc and grab the loss value
tloss.backward() # apply this loss backwards thru the network's parameters
optimizer.step() # attempt to optimize weights to account for loss/gradients
train_loss.append(tloss.item())
train_metric_pred.append(output.detach().cpu().numpy())
train_metric_label.append(y.cpu().numpy())
# Evaluation with the validation set
train_metric_score = metrics.roc_auc_score(np.concatenate(train_metric_label), np.concatenate(train_metric_pred))
full_train_loss.append(train_loss)
net.eval() # eval mode
val_loss = []
val_metric_pred = []
val_metric_label = []
val_metric_score = 0
with torch.no_grad():
for data in val_loader:
X, y = Variable(data[0].to(device).squeeze(1).float()), Variable(data[1].to(device))
preds = net(X).squeeze() # get predictions
vloss = criterion(preds, y) # calculate the loss
val_loss.append(vloss.item())
val_metric_pred.append(preds.detach().cpu().numpy())
val_metric_label.append(y.cpu().numpy())
val_metric_score = metrics.roc_auc_score(np.concatenate(val_metric_label), np.concatenate(val_metric_pred))
full_val_loss.append(val_loss)
mean_val_loss = np.mean(val_loss)
tmp_nn_output = [epoch + 1,EPOCHS,
np.mean(train_loss),
train_metric_score,
mean_val_loss,
val_metric_score
]
nn_output.append(tmp_nn_output)
# ReduceLossOnPlateau
# scheduler.step(final_val_loss)
# Print the loss and accuracy for the validation set
print('Epoch [{}/{}] train loss: {:.4f} train metric: {:.4f} valid loss: {:.4f} val metric: {:.4f}'
.format(*tmp_nn_output))
# Early Stopping
if min_val_loss > round(mean_val_loss,4) :
min_val_loss = round(mean_val_loss,4)
patience = 0
# Checkpoint Best Model so far
checkpoint = {'model': Net(),
'state_dict': net.state_dict().copy(),
'optimizer' : optimizer.state_dict().copy()}
else:
patience += 1
if patience > 6:
print("Early Stopping..")
break
# Plot loss by batch.. √
# Checkpoint √
# Try batch norm.. √
# Examine Gradients
# Try Adam 3e-4
# Fix seed
# Fix decay/ momentum
pd_results = pd.DataFrame(nn_output,
columns = ['epoch','total_epochs','train_loss','train_metric','valid_loss','valid_metric']
)
display(pd_results)
train_batch_loss = np.concatenate(full_train_loss)
val_batch_loss = np.concatenate(full_val_loss)
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(12, 4))
axes[0].plot(train_batch_loss, label='validation_loss')
axes[0].plot(val_batch_loss, label='train_loss')
axes[0].set_title("Loss")
axes[0].legend()
axes[1].plot(pd_results['epoch'],pd_results['valid_metric'], label='Val')
axes[1].plot(pd_results['epoch'],pd_results['train_metric'], label='Train')
# axes[1].plot(pd_results['epoch'],pd_results['test_acc'], label='test_acc')
axes[1].set_title("Roc_AUC Score")
axes[1].legend()
plt.show()
# Load Best Model
net = checkpoint['model'].to(device)
net.load_state_dict(checkpoint['state_dict'])
net.eval() # Safety first
predictions = torch.Tensor().to(device) # Tensor for all predictions
# Go through the test set, saving the predictions in... 'predictions'
for data in submission_loader:
X = data[0].squeeze(1).float()
preds = net(X.to(device)).squeeze()
predictions = torch.cat((predictions, preds))
submission = pd.DataFrame({'id': testdex, 'target': predictions.cpu().detach().numpy()})
submission.to_csv('submission.csv', index=False)
submission.head()
###Output
_____no_output_____ |
Seminar 16/.ipynb_checkpoints/Tasks 345-checkpoint.ipynb | ###Markdown
Task 1
###Code
Image(filename='Task1.jpg')
###Output
_____no_output_____
###Markdown
Task 2 With n = 5: if you draw a tetrahedron from points of class 1 and place a point of class 2 inside it, there is no plane separating the classes from each other. At the same time, if n = 4 (and the points do not lie in a single plane), a separating plane can always be found.  Task 3
###Code
df = pd.read_csv("../Bonus19/BRCA_pam50.tsv", sep="\t", index_col=0)
df3 = df.loc[df["Subtype"].isin(["Luminal A","Luminal B"])]
X3 = df3.iloc[:, :-1].to_numpy()
y3 = df3["Subtype"].to_numpy()
X3_train, X3_test, y3_train, y3_test = train_test_split(
X3, y3, stratify=y3, test_size=0.2, random_state=17
)
svm = SVC(kernel="linear", C=3.5)
svm.fit(X3_train, y3_train); pass
y3_pred = svm.predict(X3_test)
print("Balanced accuracy score:", balanced_accuracy_score(y3_pred, y3_test))
M = confusion_matrix(y3_test, y3_pred)
print(M)
TPR = M[0, 0] / (M[0, 0] + M[0, 1])
TNR = M[1, 1] / (M[1, 0] + M[1, 1])
print("TPR:", round(TPR, 3), "TNR:", round(TNR, 3))
plot_roc_curve(svm, X3_test, y3_test)
plt.plot(1 - TPR, TNR, "x", c="red")
plt.show()
coef = np.argsort(np.abs(svm.coef_[0]))[-5:]
df3 = df3.iloc[:, coef]
X3 = df3.iloc[:, :-1].to_numpy()
X3_train, X3_test, y3_train, y3_test = train_test_split(
X3, y3, stratify=y3, test_size=0.2, random_state=17
)
svm = SVC(kernel="linear", C=3.5)
svm.fit(X3_train, y3_train); pass
y3_pred = svm.predict(X3_test)
print("Balanced accuracy score:", balanced_accuracy_score(y3_pred, y3_test))
M = confusion_matrix(y3_test, y3_pred)
print(M)
TPR = M[0, 0] / (M[0, 0] + M[0, 1])
TNR = M[1, 1] / (M[1, 0] + M[1, 1])
print("TPR:", round(TPR, 3), "TNR:", round(TNR, 3))
plot_roc_curve(svm, X3_test, y3_test)
plt.plot(1 - TPR, TNR, "x", c="red")
plt.show()
###Output
Balanced accuracy score: 0.7395786642761093
[[76 7]
[21 16]]
TPR: 0.916 TNR: 0.432
###Markdown
Task 4
###Code
X4_pca = PCA(n_components=2).fit_transform(df.iloc[:, :-1].to_numpy())
X4 = df.iloc[:, :-1].to_numpy()
y4 = df["Subtype"].to_numpy()
X4_train, X4_test, y4_train, y4_test = train_test_split(
X4, y4, stratify=y4, test_size=0.2, random_state=17
)
X4_pca_train, X4_pca_test, y4_train, y4_test = train_test_split(
X4_pca, y4, stratify=y4, test_size=0.2, random_state=17
)
svm = SVC(kernel="linear", C=3.5)
svm.fit(X4_train, y4_train); pass
y4_pred = svm.predict(X4_test)
print("Balanced accuracy score:", balanced_accuracy_score(y4_pred, y4_test))
M = confusion_matrix(y4_test, y4_pred)
print(M)
svm = SVC(kernel="linear", C=3.5)
svm.fit(X4_pca_train, y4_train); pass
y4_pred = svm.predict(X4_pca_test)
print("Balanced accuracy score:", balanced_accuracy_score(y4_pred, y4_test))
M = confusion_matrix(y4_test, y4_pred)
print(M)
###Output
Balanced accuracy score: 0.8545343545343546
[[ 7 0 0 6 0 0]
[ 0 17 3 0 0 0]
[ 1 0 75 6 0 0]
[ 2 0 11 24 0 0]
[ 1 0 2 0 1 0]
[ 0 0 0 0 0 27]]
###Markdown
Task 5
###Code
X1_1 = np.random.multivariate_normal([0,0], [[1, 1], [1, 1]], 10000)
X1_2 = np.random.multivariate_normal([10,10], [[1, 1], [1, 1]], 10000)
X1 = np.concatenate([X1_1, X1_2])
y1 = np.concatenate([["0"]*10000, ["1"]*10000])
X2_1 = np.random.multivariate_normal([0,0], [[1, 1], [1, 1]], 10000)
X2_2 = np.random.multivariate_normal([0,0], [[1, 1], [1, 1]], 10000)
X2 = np.concatenate([X2_1, X2_2])
y2 = np.concatenate([["0"]*10000, ["1"]*10000])
X1_train, X1_test, y1_train, y1_test = train_test_split(
X1, y1, stratify=y1, test_size=0.2, random_state=17
)
X2_train, X2_test, y2_train, y2_test = train_test_split(
X2, y2, stratify=y2, test_size=0.2, random_state=17
)
svm = SVC(kernel="linear", C=3.5)
time_start = time()
svm.fit(X1_train, y1_train); pass
time_stop = time()
print("Time:", time_stop - time_start)
y1_pred = svm.predict(X1_test)
print("Balanced accuracy score:", balanced_accuracy_score(y1_pred, y1_test))
M = confusion_matrix(y1_test, y1_pred)
print(M)
time_start = time()
svm.fit(X2_train, y2_train); pass
time_stop = time()
print("Time:", time_stop - time_start)
y2_pred = svm.predict(X2_test)
print("Balanced accuracy score:", balanced_accuracy_score(y2_pred, y2_test))
M = confusion_matrix(y2_test, y2_pred)
print(M)
###Output
Time: 4.307581901550293
Balanced accuracy score: 0.4879797940337708
[[1017 983]
[1065 935]]
|
workflow/preprocessing/01_rgi7_reg_files.ipynb | ###Markdown
Modify the RGI6 regions files for RGI7 List of changes:- Region 12 (Caucasus and Middle East): there is a cluster of glaciers south of the current extent of the region and subregion polygons. There are no regions below them, so there shouldn't be much of an issue in simply updating the geometry a little bit: we shift the southern boundary by 2° (from 32°N to 30°N).- the data type of the RGI_CODE attribute in the region file of version 6 is int. For consistency with the RGI files, it should be str (format with leading zero, for example `02`). We change this as well.
###Code
# go down from rgi7_scripts/workflow/preprocessing
data_dir = '../../../rgi7_data/'
import os
import numpy as np
import shapely.geometry as shpg
import geopandas as gpd
from utils import mkdir
###Output
_____no_output_____
###Markdown
Regions
###Code
out_dir = os.path.abspath(os.path.join(data_dir, '00_rgi70_regions'))
mkdir(out_dir)
# Read the RGI region files
rgi_dir = os.path.join(data_dir, 'l0_RGIv6')
rgi_reg = gpd.read_file('zip://' + os.path.join(data_dir, 'l0_RGIv6', '00_rgi60_regions.zip', '00_rgi60_O1Regions.shp'))
# Select the RGI 12 polygon
poly = rgi_reg.loc[rgi_reg.RGI_CODE == 12].iloc[0].geometry
poly.bounds
###Output
_____no_output_____
###Markdown
Let's move the southern boundary down to 30°N instead:
###Code
x, y = poly.exterior.xy
ny = np.where(np.isclose(y, 31), 30, y)
new_poly = shpg.Polygon(np.array((x, ny)).T)
rgi_reg.loc[rgi_reg.RGI_CODE == 12, 'geometry'] = new_poly
# Change type and format
rgi_reg['RGI_CODE'] = ['{:02d}'.format(int(s)) for s in rgi_reg.RGI_CODE]
rgi_reg
rgi_reg.to_file(os.path.join(out_dir, '00_rgi70_O1Regions.shp'))
# Check
rgi_reg = gpd.read_file(os.path.join(out_dir, '00_rgi70_O1Regions.shp'))
assert rgi_reg.RGI_CODE.dtype == 'O'
###Output
_____no_output_____
###Markdown
Subregions
###Code
rgi_reg = gpd.read_file('zip://' + os.path.join(data_dir, 'l0_RGIv6', '00_rgi60_regions.zip', '00_rgi60_O2Regions.shp'))
poly = rgi_reg.loc[rgi_reg.RGI_CODE == '12-02'].iloc[0].geometry
poly.bounds
x, y = poly.exterior.xy
ny = np.where(np.isclose(y, 32), 30, y)
new_poly = shpg.Polygon(np.array((x, ny)).T)
rgi_reg.loc[rgi_reg.RGI_CODE == '12-02', 'geometry'] = new_poly
rgi_reg.to_file(os.path.join(out_dir, '00_rgi70_O2Regions.shp'))
rgi_reg
###Output
_____no_output_____ |
examples/talk_2020-0302-lale.ipynb | ###Markdown
Lale: Type-Driven Auto-ML with Scikit-Learn https://github.com/ibm/lale Example Dataset
###Code
!pip install 'liac-arff>=2.4.0'
import lale.datasets.openml
import pandas as pd
(train_X, train_y), (test_X, test_y) = lale.datasets.openml.fetch(
'credit-g', 'classification', preprocess=False)
print(f'train_X.shape {train_X.shape}')
pd.concat([train_y.tail(), train_X.tail()], axis=1)
###Output
train_X.shape (670, 20)
###Markdown
Algorithm Selection and Hyperparameter Tuning
###Code
from sklearn.preprocessing import Normalizer as Norm
from sklearn.preprocessing import OneHotEncoder as OneHot
from sklearn.linear_model import LogisticRegression as LR
from xgboost import XGBClassifier as XGBoost
from sklearn.svm import LinearSVC
from lale.operators import make_pipeline, make_union
from lale.lib.lale import Project, ConcatFeatures, NoOp
lale.wrap_imported_operators()
project_nums = Project(columns={'type': 'number'})
project_cats = Project(columns={'type': 'string'})
planned_pipeline = (
(project_nums >> (Norm | NoOp) & project_cats >> OneHot)
>> ConcatFeatures
>> (LR | LinearSVC(dual=False)| XGBoost))
planned_pipeline.visualize()
import sklearn.metrics
from lale.lib.lale import Hyperopt
auto_optimizer = Hyperopt(estimator=planned_pipeline, cv=3, max_evals=5)
auto_trained = auto_optimizer.fit(train_X, train_y)
auto_y = auto_trained.predict(test_X)
print(f'accuracy {sklearn.metrics.accuracy_score(test_y, auto_y):.1%}')
###Output
100%|█████████| 5/5 [01:08<00:00, 13.74s/trial, best loss: -0.7507273649370062]
accuracy 72.1%
###Markdown
Displaying Automation Results
###Code
best_pipeline = auto_trained.get_pipeline()
best_pipeline.visualize()
from lale.pretty_print import ipython_display
ipython_display(best_pipeline, show_imports=False)
###Output
_____no_output_____
###Markdown
JSON Schemas https://json-schema.org/
###Code
ipython_display(XGBoost.hyperparam_schema('n_estimators'))
ipython_display(XGBoost.hyperparam_schema('booster'))
import jsonschema
import sys
try:
XGBoost(n_estimators=0.5, booster='gbtree')
except jsonschema.ValidationError as e:
print(e.message, file=sys.stderr)
###Output
Invalid configuration for XGBoost(n_estimators=0.5, booster='gbtree') due to invalid value n_estimators=0.5.
Schema of argument n_estimators: {
"description": "Number of trees to fit.",
"type": "integer",
"default": 1000,
"minimumForOptimizer": 500,
"maximumForOptimizer": 1500,
}
Value: 0.5
###Markdown
Customizing Schemas
###Code
import lale.schemas as schemas
Grove = XGBoost.customize_schema(
n_estimators=schemas.Int(minimum=2, maximum=10),
booster=schemas.Enum(['gbtree'], default='gbtree'))
grove_planned = ( Project(columns={'type': 'number'}) >> Norm
& Project(columns={'type': 'string'}) >> OneHot
) >> ConcatFeatures >> Grove
grove_optimizer = Hyperopt(estimator=grove_planned, cv=3, max_evals=10)
grove_trained = grove_optimizer.fit(train_X, train_y)
grove_y = grove_trained.predict(test_X)
print(f'accuracy {sklearn.metrics.accuracy_score(test_y, grove_y):.1%}')
grove_best = grove_trained.get_pipeline()
ipython_display(grove_best, show_imports=False)
###Output
_____no_output_____
###Markdown
Lale: Type-Driven Auto-ML with Scikit-Learn https://github.com/ibm/lale Example Dataset
###Code
!pip install 'liac-arff>=2.4.0'
import lale.datasets.openml
import pandas as pd
(train_X, train_y), (test_X, test_y) = lale.datasets.openml.fetch(
'credit-g', 'classification', preprocess=False)
pd.concat([pd.DataFrame({'y': train_y}, index=train_X.index).tail(),
train_X.tail()], axis=1)
###Output
_____no_output_____
###Markdown
Algorithm Selection and Hyperparameter Tuning
###Code
from sklearn.preprocessing import Normalizer as Norm
from sklearn.preprocessing import OneHotEncoder as OneHot
from sklearn.linear_model import LogisticRegression as LR
from xgboost import XGBClassifier as XGBoost
from sklearn.svm import LinearSVC
from lale.operators import make_pipeline, make_union
from lale.lib.lale import Project, ConcatFeatures, NoOp
lale.wrap_imported_operators()
project_nums = Project(columns={'type': 'number'})
project_cats = Project(columns={'type': 'string'})
planned_pipeline = (
(project_nums >> (Norm | NoOp) & project_cats >> OneHot)
>> ConcatFeatures
>> (LR | LinearSVC(dual=False)| XGBoost))
planned_pipeline.visualize()
import sklearn.metrics
from lale.lib.lale import Hyperopt
auto_optimizer = Hyperopt(estimator=planned_pipeline, cv=3, max_evals=5)
auto_trained = auto_optimizer.fit(train_X, train_y)
auto_y = auto_trained.predict(test_X)
print(f'accuracy {sklearn.metrics.accuracy_score(test_y, auto_y):.1%}')
###Output
100%|████████████| 5/5 [00:47<00:00, 9.36s/it, best loss: -0.7507273649370062]
accuracy 72.1%
###Markdown
Displaying Automation Results
###Code
best_pipeline = auto_trained.get_pipeline()
best_pipeline.visualize()
from lale.pretty_print import ipython_display
ipython_display(best_pipeline, show_imports=False)
###Output
_____no_output_____
###Markdown
JSON Schemas https://json-schema.org/
###Code
ipython_display(XGBoost.hyperparam_schema('n_estimators'))
ipython_display(XGBoost.hyperparam_schema('booster'))
import jsonschema
import sys
try:
XGBoost(n_estimators=0.5, booster='gbtree')
except jsonschema.ValidationError as e:
print(e.message, file=sys.stderr)
###Output
Invalid configuration for XGBoost(n_estimators=0.5, booster='gbtree') due to invalid value n_estimators=0.5.
Schema of argument n_estimators: {
'description': 'Number of trees to fit.',
'type': 'integer',
'default': 100,
'minimumForOptimizer': 10,
'maximumForOptimizer': 1500}
Value: 0.5
###Markdown
Customizing Schemas
###Code
import lale.schemas as schemas
Grove = XGBoost.customize_schema(
n_estimators=schemas.Int(min=2, max=10),
booster=schemas.Enum(['gbtree']))
grove_planned = ( Project(columns={'type': 'number'}) >> Norm
& Project(columns={'type': 'string'}) >> OneHot
) >> ConcatFeatures >> Grove
grove_optimizer = Hyperopt(estimator=grove_planned, cv=3, max_evals=10)
grove_trained = grove_optimizer.fit(train_X, train_y)
grove_y = grove_trained.predict(test_X)
print(f'accuracy {sklearn.metrics.accuracy_score(test_y, grove_y):.1%}')
grove_best = grove_trained.get_pipeline()
ipython_display(grove_best, show_imports=False)
###Output
_____no_output_____
###Markdown
Lale: Type-Driven Auto-ML with Scikit-Learn https://github.com/ibm/lale Example Dataset
###Code
!pip install 'liac-arff>=2.4.0'
import lale.datasets.openml
import pandas as pd
(train_X, train_y), (test_X, test_y) = lale.datasets.openml.fetch(
'credit-g', 'classification', preprocess=False)
print(f'train_X.shape {train_X.shape}')
pd.concat([train_y.tail(), train_X.tail()], axis=1)
###Output
train_X.shape (670, 20)
###Markdown
Algorithm Selection and Hyperparameter Tuning
###Code
from sklearn.preprocessing import Normalizer as Norm
from sklearn.preprocessing import OneHotEncoder as OneHot
from sklearn.linear_model import LogisticRegression as LR
from xgboost import XGBClassifier as XGBoost
from sklearn.svm import LinearSVC
from lale.operators import make_pipeline, make_union
from lale.lib.lale import Project, ConcatFeatures, NoOp
lale.wrap_imported_operators()
project_nums = Project(columns={'type': 'number'})
project_cats = Project(columns={'type': 'string'})
planned_pipeline = (
(project_nums >> (Norm | NoOp) & project_cats >> OneHot)
>> ConcatFeatures
>> (LR | LinearSVC(dual=False)| XGBoost))
planned_pipeline.visualize()
import sklearn.metrics
from lale.lib.lale import Hyperopt
auto_optimizer = Hyperopt(estimator=planned_pipeline, cv=3, max_evals=5)
auto_trained = auto_optimizer.fit(train_X, train_y)
auto_y = auto_trained.predict(test_X)
print(f'accuracy {sklearn.metrics.accuracy_score(test_y, auto_y):.1%}')
###Output
100%|█████████| 5/5 [01:08<00:00, 13.74s/trial, best loss: -0.7507273649370062]
accuracy 72.1%
###Markdown
Displaying Automation Results
###Code
best_pipeline = auto_trained.get_pipeline()
best_pipeline.visualize()
from lale.pretty_print import ipython_display
ipython_display(best_pipeline, show_imports=False)
###Output
_____no_output_____
###Markdown
JSON Schemas https://json-schema.org/
###Code
ipython_display(XGBoost.hyperparam_schema('n_estimators'))
ipython_display(XGBoost.hyperparam_schema('booster'))
import jsonschema
import sys
try:
XGBoost(n_estimators=0.5, booster='gbtree')
except jsonschema.ValidationError as e:
print(e.message, file=sys.stderr)
###Output
Invalid configuration for XGBoost(n_estimators=0.5, booster='gbtree') due to invalid value n_estimators=0.5.
Schema of argument n_estimators: {
"description": "Number of trees to fit.",
"type": "integer",
"default": 1000,
"minimumForOptimizer": 500,
"maximumForOptimizer": 1500,
}
Value: 0.5
###Markdown
Customizing Schemas
###Code
import lale.schemas as schemas
Grove = XGBoost.customize_schema(
n_estimators=schemas.Int(min=2, max=10),
booster=schemas.Enum(['gbtree'], default='gbtree'))
grove_planned = ( Project(columns={'type': 'number'}) >> Norm
& Project(columns={'type': 'string'}) >> OneHot
) >> ConcatFeatures >> Grove
grove_optimizer = Hyperopt(estimator=grove_planned, cv=3, max_evals=10)
grove_trained = grove_optimizer.fit(train_X, train_y)
grove_y = grove_trained.predict(test_X)
print(f'accuracy {sklearn.metrics.accuracy_score(test_y, grove_y):.1%}')
grove_best = grove_trained.get_pipeline()
ipython_display(grove_best, show_imports=False)
###Output
_____no_output_____ |
samples/chihuahua/train-chihuaha.ipynb | ###Markdown
Chihuahua detection training
###Code
import warnings
warnings.filterwarnings('ignore')
!pip install tensorflow-gpu==2.0.0
# %tensorflow_version 2.0.0
!pip install keras==2.3.1
import os
import sys
import json
import datetime
import numpy as np
import skimage.draw
from google.colab import drive
drive.mount('/content/drive')
# Root directory dari project
ROOT_DIR = os.path.abspath("/content/drive/MyDrive/Deep Learning/Tugas Kelompok DL/Tugas Implementasi Mask RCNN/Colab/Mask_RCNN-master")
print(ROOT_DIR)
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
from mrcnn.config import Config
from mrcnn import model as modellib, utils
# Path to trained weights file
COCO_WEIGHTS_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
# Directory to save logs and model checkpoints, if not provided
# through the command line argument --logs
DEFAULT_LOGS_DIR = os.path.join(ROOT_DIR, "logs")
class ChihuahuaConfig(Config):
"""Configuration for training on the chihuahua dataset.
Derives from the base Config class and overrides some values.
"""
# Give the configuration a recognizable name
NAME = "chihuahua"
# We use a GPU with 12GB memory, which can fit two images.
# Adjust down if you use a smaller GPU.
IMAGES_PER_GPU = 2
# Number of classes (including background)
NUM_CLASSES = 1 + 1 # Background + chihuahua
# Number of training steps per epoch
STEPS_PER_EPOCH = 100
# Skip detections with < 90% confidence
DETECTION_MIN_CONFIDENCE = 0.9
class ChihuahuaDataset(utils.Dataset):
def load_chihuahua(self, dataset_dir, subset):
# Add classes. We have only one class to add.
self.add_class("chihuahua", 1, "chihuahua")
# Train or validation dataset?
assert subset in ["train", "val"]
dataset_dir = os.path.join(dataset_dir, subset)
# Load annotations
# VGG Image Annotator (up to version 1.6) saves each image in the form:
# { 'filename': '28503151_5b5b7ec140_b.jpg',
# 'regions': {
# '0': {
# 'region_attributes': {},
# 'shape_attributes': {
# 'all_points_x': [...],
# 'all_points_y': [...],
# 'name': 'polygon'}},
# ... more regions ...
# },
# 'size': 100202
# }
# We mostly care about the x and y coordinates of each region
# Note: In VIA 2.0, regions was changed from a dict to a list.
annotations = json.load(open(os.path.join(dataset_dir, "via_region_data.json")))
annotations = list(annotations.values()) # don't need the dict keys
# The VIA tool saves images in the JSON even if they don't have any
# annotations. Skip unannotated images.
annotations = [a for a in annotations if a['regions']]
# Add images
for a in annotations:
# Get the x, y coordinates of points of the polygons that make up
# the outline of each object instance. These are stored in the
# shape_attributes (see json format above)
# The if condition is needed to support VIA versions 1.x and 2.x.
if type(a['regions']) is dict:
polygons = [r['shape_attributes'] for r in a['regions'].values()]
else:
polygons = [r['shape_attributes'] for r in a['regions']]
# load_mask() needs the image size to convert polygons to masks.
# Unfortunately, VIA doesn't include it in JSON, so we must read
# the image. This is only manageable since the dataset is tiny.
image_path = os.path.join(dataset_dir, a['filename'])
image = skimage.io.imread(image_path)
height, width = image.shape[:2]
self.add_image(
"chihuahua",
image_id=a['filename'], # use file name as a unique image id
path=image_path,
width=width, height=height,
polygons=polygons)
def load_mask(self, image_id):
"""Generate instance masks for an image.
Returns:
masks: A bool array of shape [height, width, instance count] with
one mask per instance.
class_ids: a 1D array of class IDs of the instance masks.
"""
# If not a chihuahua dataset image, delegate to parent class.
image_info = self.image_info[image_id]
if image_info["source"] != "chihuahua":
return super(self.__class__, self).load_mask(image_id)
# Convert polygons to a bitmap mask of shape
# [height, width, instance_count]
info = self.image_info[image_id]
mask = np.zeros([info["height"], info["width"], len(info["polygons"])],
dtype=np.uint8)
for i, p in enumerate(info["polygons"]):
# Get indexes of pixels inside the polygon and set them to 1
rr, cc = skimage.draw.polygon(p['all_points_y'], p['all_points_x'])
mask[rr, cc, i] = 1
# Return mask, and array of class IDs of each instance. Since we have
# one class ID only, we return an array of 1s
return mask.astype(np.bool), np.ones([mask.shape[-1]], dtype=np.int32)
def image_reference(self, image_id):
"""Return the path of the image."""
info = self.image_info[image_id]
if info["source"] == "chihuahua":
return info["path"]
else:
super(self.__class__, self).image_reference(image_id)
def train(model):
"""Train the model."""
# Training dataset.
dataset_train = ChihuahuaDataset()
dataset_train.load_chihuahua(dataset, "train")
dataset_train.prepare()
# Validation dataset
dataset_val = ChihuahuaDataset()
dataset_val.load_chihuahua(dataset, "val")
dataset_val.prepare()
# *** This training schedule is an example. Update to your needs ***
# Since we're using a very small dataset, and starting from
# COCO trained weights, we don't need to train too long. Also,
# no need to train all layers, just the heads should do it.
print("Training network heads")
model.train(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE,
epochs=30,
layers='heads')
dataset = ROOT_DIR + '/datasets/chihuahua/'
# parameter for selecting the weights
# 'coco': load pretrained weights from COCO
# 'imagenet': load pretrained weights from ImageNet
# 'last': load the weights from the most recent (unfinished) training run
weights = 'last'
logs = DEFAULT_LOGS_DIR
print("Weights: ", weights)
print("Dataset: ", dataset)
print("Logs: ", logs)
# Configurations
config = ChihuahuaConfig()
# Create model
model = modellib.MaskRCNN(mode="training", config=config,
model_dir=logs)
# Select weights file to load
if weights.lower() == "coco":
weights_path = COCO_WEIGHTS_PATH
if not os.path.exists(weights_path):
utils.download_trained_weights(weights_path)
elif weights.lower() == "last":
# Find last trained weights
weights_path = model.find_last()
elif weights.lower() == "imagenet":
# Start from ImageNet trained weights
weights_path = model.get_imagenet_weights()
else:
weights_path = weights
# Load weights
print("Loading weights ", weights_path)
if weights.lower() == "coco":
# Exclude the last layers because they require a matching
# number of classes
model.load_weights(weights_path, by_name=True, exclude=[
"mrcnn_class_logits", "mrcnn_bbox_fc",
"mrcnn_bbox", "mrcnn_mask"])
else:
model.load_weights(weights_path, by_name=True)
# Train or evaluate
train(model)
###Output
_____no_output_____ |
.ipynb_checkpoints/Ward49_plotly-checkpoint.ipynb | ###Markdown
Create basic map
###Code
# Create a map
px.set_mapbox_access_token(open(".mapbox_token").read())
fig = px.scatter_mapbox(ward49_df,
lat="latitude",
lon="longitude",
hover_data=["name","address"],
zoom=12)
#fig.update_layout(mapbox_style="open-street-map")
#fig.update_layout(mapbox_style="basic")
fig.update_layout(mapbox_style="streets")
fig.show()
# Add markers to the map
for lat, lon, name in zip(ward49_df['latitude'],
ward49_df['longitude'],
ward49_df['name']):
label = folium.Popup(str(name), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
fill=True,
fill_opacity=0.7).add_to(chicago_map)
# Display the map
chicago_map
###Output
_____no_output_____
###Markdown
Test interactive map
###Code
# Create a map
chicago_map = folium.Map(location=[latitude, longitude],
zoom_start=14)
#ward49_df[ward49_df['location_type'] == 'Library']
ward49_df[ward49_df['location_type'] == 'Library'][['latitude','longitude']]
points1 = [(42.006708, -87.673408)]
library_group = folium.FeatureGroup(name='Library').add_to(chicago_map)
for tuple_ in points1:
label = folium.Popup('Rogers Park Library', parse_html=True)
library_group.add_child(folium.CircleMarker(
tuple_,
radius=5,
popup=label,
fill=True,
fill_opacity=0.7)
)
ward49_df[ward49_df['location_type'] == 'High School'][['latitude','longitude']]
points2 = [(42.002688, -87.669192),
(42.013031, -87.674818)]
hs_group = folium.FeatureGroup(name='High School').add_to(chicago_map)
for tuple_ in points2:
label = folium.Popup('High School', parse_html=True)
hs_group.add_child(folium.CircleMarker(
tuple_,
radius=5,
popup=label,
fill=True,
fill_opacity=0.7)
)
folium.LayerControl().add_to(chicago_map)
chicago_map
###Output
_____no_output_____
###Markdown
Create Interactive Map
###Code
# Create a map
chicago_map = folium.Map(location=[latitude, longitude],
zoom_start=14)
fgroup = []
for i in range(len(location_type_list)):
#print(i, location_type_list[i])
fgroup.append(folium.FeatureGroup(name=location_type_list[i]).add_to(chicago_map))
for row in ward49_df[ward49_df['location_type'] == location_type_list[i]].itertuples():
#print(row.name, row.latitude, row.longitude)
label = folium.Popup(row.name, parse_html=True)
fgroup[i].add_child(folium.CircleMarker(
[row.latitude, row.longitude],
radius=5,
popup=label,
fill=True,
fill_opacity=0.7).add_to(chicago_map)
)
fgroup
folium.LayerControl().add_to(chicago_map)
chicago_map
###Output
_____no_output_____
###Markdown
Try different marker types
###Code
# Create a map
chicago_map = folium.Map(location=[latitude, longitude],
zoom_start=14)
# Add markers to the map
for lat, lon, name in zip(ward49_df['latitude'],
ward49_df['longitude'],
ward49_df['name']):
label = folium.Popup(str(name), parse_html=True)
icon = folium.Icon(color='red', icon='bell', prefix='fa')
folium.Marker(
location=[lat, lon],
icon=icon,
popup=label).add_to(chicago_map)
chicago_map
###Output
_____no_output_____
###Markdown
Customize interactive map
###Code
# Create a map
chicago_map = folium.Map(location=[latitude, longitude],
zoom_start=13)
fgroup = []
for i in range(len(location_type_list)):
#print(i, location_type_list[i])
fgroup.append(folium.FeatureGroup(name=location_type_list[i]).add_to(chicago_map))
for row in ward49_df[ward49_df['location_type'] == location_type_list[i]].itertuples():
#print(row.name, row.latitude, row.longitude)
label = folium.Popup(row.name, parse_html=True)
if row.location_type == 'Elementary School':
icon = folium.Icon(color='red', icon='bell', prefix='fa')
elif row.location_type == 'High School':
icon = folium.Icon(color='blue', icon='graduation-cap', prefix='fa')
elif row.location_type == 'Library':
icon = folium.Icon(color='green', icon='book', prefix='fa')
else:
icon = folium.Icon(color='gray', icon='question-circle', prefix='fa')
fgroup[i].add_child(folium.Marker(
location=[row.latitude, row.longitude],
icon=icon,
popup=label).add_to(chicago_map)
)
folium.LayerControl().add_to(chicago_map)
chicago_map
###Output
_____no_output_____ |
reference_parsing/model_dev/M2_TEST.ipynb | ###Markdown
UNA TANTUM ingest parsed references.
###Code
# Export references
# Dump json
from support_functions import json_outputter
_, refs, _ = json_outputter(data2,40)
issues_dict = list()
# update processing collection
# get all bids and issues just dumped
for r in refs:
issues_dict.append((r["bid"], r["issue"]))
valid_issues = list()
for bid,issue in list(set(issues_dict)):
doc = db.processing.find_one({"bid":bid,"number":issue})
if not doc:
print(bid+" "+issue)
continue
if not doc["is_parsed"]:
valid_issues.append((bid,issue))
valid_issues = list(set(valid_issues))
print(valid_issues)
# clean refs
refs_keep = list()
for r in refs:
if (r["bid"],r["issue"]) in valid_issues:
refs_keep.append(r)
from pymongo import MongoClient
# dump in Mongo
con = MongoClient("128.178.60.49")
con.linkedbooks.authenticate('scripty', 'L1nk3dB00ks', source='admin')
db = con.linkedbooks_sandbox
db.references.insert_many(refs_keep)
for bid,issue in valid_issues:
try:
db.processing.find_one_and_update({'bid': bid,'number':issue}, {'$set': {'is_parsed': True}})
except:
print("Missing item in Processing: %s, %s"%(bid,issue))
continue
[mongo_prod]
db-host = 128.178.245.10
db-name = linkedbooks_refactored
db-port = 27017
username = scripty
password = L1nk3dB00ks
auth-db = admin
[mongo_dev]
db-host = 128.178.60.49
db-name = linkedbooks_refactored
db-port = 27017
username = scripty
password = L1nk3dB00ks
auth-db = admin
[mongo_sand]
db-host = 128.178.60.49
db-name = linkedbooks_sandbox
db-port = 27017
username = scripty
password = L1nk3dB00ks
auth-db = admin
[mongo_source]
db-host = 128.178.60.49
db-name = linkedbooks_dev
db-port = 27017
username = scripty
password = L1nk3dB00ks
auth-db = admin
###Output
_____no_output_____ |
notebooks/triclustering.ipynb | ###Markdown
Tri-clustering Introduction This notebook illustrates how to use [Clustering Geo-data Cubes (CGC)](https://cgc.readthedocs.io) to perform a tri-clustering analysis of geospatial data. This notebook builds on an analogous tutorial that illustrates how to run co-clustering analyses using CGC (see the [web page](https://cgc-tutorial.readthedocs.io/en/latest/notebooks/coclustering.html) or the [notebook on GitHub](https://github.com/esciencecenter-digital-skills/tutorial-cgc/blob/main/notebooks/coclustering.ipynb)); we recommend having a look at that first. Tri-clustering is the natural generalization of co-clustering to three dimensions. As in co-clustering, one looks for similarity patterns in a data array (in 3D, a data cube) by simultaneously clustering all its dimensions. For geospatial data, the use of tri-clustering enables the analysis of datasets that include an additional dimension on top of space and time. This extra dimension is often referred to as the 'band' dimension by analogy with color images, which include three data layers (red, green and blue) for each pixel. Tri-clusters in such arrays can be identified as spatial regions for which data across a subset of bands behave similarly in a subset of the times considered. In this notebook we illustrate how to perform a tri-clustering analysis with the CGC package using a phenological dataset that includes three products: the day of the year of first leaf appearance, first bloom, and last freeze in the conterminous United States. For more information about this dataset please check the [co-clustering tutorial](https://github.com/esciencecenter-digital-skills/tutorial-cgc/blob/main/notebooks/coclustering.ipynb) or have a look at the [original publication](https://doi.org/10.1016/j.agrformet.2018.06.028). Note that in addition to CGC, whose installation instructions can be found [here](https://github.com/phenology/cgc), a few other packages are required in order to run this notebook. Please have a look at this tutorial's [installation instructions](https://github.com/escience-academy/tutorial-cgc). Imports and general configuration
###Code
import cgc
import logging
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
from dask.distributed import Client, LocalCluster
from cgc.triclustering import Triclustering
from cgc.kmeans import Kmeans
from cgc.utils import calculate_tricluster_averages
print(f'CGC version: {cgc.__version__}')
###Output
CGC version: 0.5.0
###Markdown
CGC makes use of `logging`, set the desired verbosity via:
###Code
logging.basicConfig(level=logging.INFO)
###Output
_____no_output_____
###Markdown
Reading the data We use [the Xarray package](http://xarray.pydata.org) to read the data. We open the Zarr archive containing the data set and convert it from a collection of three data variables (each having dimensions: time, x and y) to a single array (a `DataArray`):
###Code
spring_indices = xr.open_zarr('../data/spring-indices.zarr', chunks=None)
print(spring_indices)
###Output
<xarray.Dataset>
Dimensions: (time: 40, y: 155, x: 312)
Coordinates:
* time (time) int64 1980 1981 1982 1983 1984 ... 2016 2017 2018 2019
* x (x) float64 -126.2 -126.0 -125.7 -125.5 ... -56.8 -56.57 -56.35
* y (y) float64 49.14 48.92 48.69 48.47 ... 15.23 15.01 14.78 14.56
Data variables:
first-bloom (time, y, x) float64 ...
first-leaf (time, y, x) float64 ...
last-freeze (time, y, x) float64 ...
Attributes:
crs: +init=epsg:4326
res: [0.22457882102988036, -0.22457882102988042]
transform: [0.22457882102988036, 0.0, -126.30312894720473, 0.0, -0.22457...
###Markdown
(NOTE: if the dataset is not available locally, replace the path above with the following URL: https://raw.githubusercontent.com/esciencecenter-digital-skills/tutorial-cgc/main/data/spring-indices.zarr . The `aiohttp` and `requests` packages need to be installed to open remote data, which can be done via: `pip install aiohttp requests`)
###Code
spring_indices = spring_indices.to_array(dim='spring_index')
print(spring_indices)
###Output
<xarray.DataArray (spring_index: 3, time: 40, y: 155, x: 312)>
array([[[[117., 120., 126., ..., 184., 183., 179.],
[ nan, nan, 118., ..., 181., 178., 176.],
[ nan, nan, nan, ..., 176., 176., 176.],
...,
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan]],
[[ 92., 96., 104., ..., 173., 173., 170.],
[ nan, nan, 92., ..., 171., 167., 164.],
[ nan, nan, nan, ..., 165., 164., 163.],
...,
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan]],
[[131., 134., 139., ..., 187., 187., 183.],
[ nan, nan, 133., ..., 183., 180., 178.],
[ nan, nan, nan, ..., 176., 176., 176.],
...,
...
...,
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan]],
[[ 53., 55., 64., ..., 156., 155., 153.],
[ nan, nan, 54., ..., 154., 151., 148.],
[ nan, nan, nan, ..., 147., 147., 147.],
...,
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan]],
[[ 68., 69., 75., ..., 142., 135., 129.],
[ nan, nan, 68., ..., 137., 134., 132.],
[ nan, nan, nan, ..., 132., 133., 133.],
...,
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan]]]])
Coordinates:
* time (time) int64 1980 1981 1982 1983 1984 ... 2016 2017 2018 2019
* x (x) float64 -126.2 -126.0 -125.7 ... -56.8 -56.57 -56.35
* y (y) float64 49.14 48.92 48.69 48.47 ... 15.01 14.78 14.56
* spring_index (spring_index) <U11 'first-bloom' 'first-leaf' 'last-freeze'
Attributes:
crs: +init=epsg:4326
res: [0.22457882102988036, -0.22457882102988042]
transform: [0.22457882102988036, 0.0, -126.30312894720473, 0.0, -0.22457...
###Markdown
The spring index is now loaded as a 4D-array whose dimensions (spring index, time, y, x) are labeled with coordinates (spring-index label, year, latitude, longitude). We can inspect the data set by plotting a slice along the time dimension:
###Code
# select years from 1990 to 1992
spring_indices.sel(time=slice(1990, 1992)).plot.imshow(row='spring_index', col='time')
###Output
_____no_output_____
###Markdown
We manipulate the array's spatial dimensions, creating a combined (x, y) dimension. We also drop the grid cells that have null values for any spring-index and year:
###Code
spring_indices = spring_indices.stack(space=['x', 'y'])
location = np.arange(spring_indices.space.size) # create a combined (x,y) index
spring_indices = spring_indices.assign_coords(location=('space', location))
# drop pixels that are null-valued for any year/spring-index
spring_indices = spring_indices.dropna('space', how='any')
print(spring_indices)
# size of the array
print("{} MB".format(spring_indices.nbytes/2**20))
###Output
21.591796875 MB
###Markdown
The tri-clustering analysis Overview Once we have loaded the data set as a 3D array, we can run the tri-clustering analysis. As for co-clustering, the algorithm implemented in CGC starts from a random cluster assignment and iteratively updates the tri-clusters. When the loss function does not change by more than a given threshold in two consecutive iterations, the cluster assignment is considered converged. Also for tri-clustering, multiple differently-initialized runs need to be performed in order to sample the cluster space and to avoid local minima as much as possible. For more information about the algorithm, have a look at CGC's [co-clustering](https://cgc.readthedocs.io/en/latest/coclustering.html#co-clustering) and [tri-clustering](https://cgc.readthedocs.io/en/latest/triclustering.html#tri-clustering) documentation. To run the analysis for the data set that we have loaded in the previous section, we first choose an initial number of clusters for the band, space, and time dimensions, and set the values of a few other parameters:
###Code
num_band_clusters = 3
num_time_clusters = 5
num_space_clusters = 20
max_iterations = 50 # maximum number of iterations
conv_threshold = 0.1 # convergence threshold
nruns = 3 # number of differently-initialized runs
###Output
_____no_output_____
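###Markdown
To make the stopping rule described in the overview concrete, here is a toy sketch (not the CGC implementation; the loss values are invented): a run is declared converged once the loss changes by less than `conv_threshold` between two consecutive iterations.
###Code
# Toy illustration of the convergence criterion described above (NOT the CGC internals).
toy_losses = [1000.0, 600.0, 420.0, 419.98, 419.97]  # invented loss values
loss_old = float('inf')
for iteration, loss in enumerate(toy_losses):
    if abs(loss_old - loss) < conv_threshold:  # conv_threshold = 0.1, set above
        print(f'Converged at iteration {iteration}')
        break
    loss_old = loss
else:
    print(f'Not converged within {len(toy_losses)} iterations')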
###Markdown
**NOTE**: the numbers of clusters have been selected in order to keep the memory requirement and the execution time suitable for running this tutorial on [mybinder.org](https://mybinder.org). If the infrastructure where you are running this notebook has more memory and computing power available, feel free to increase these values. We then instantiate a `Triclustering` object:
###Code
tc = Triclustering(
spring_indices.data, # data array with shape: (bands, rows, columns)
num_time_clusters,
num_space_clusters,
num_band_clusters,
max_iterations=max_iterations,
conv_threshold=conv_threshold,
nruns=nruns
)
###Output
_____no_output_____
###Markdown
As for co-clustering, one can now run the analysis on a local system, using a [Numpy](https://numpy.org)-based implementation, or on a distributed system, using a [Dask](https://dask.org)-based implementation. Numpy-based implementation (local) Also for tri-clustering, the `nthreads` argument sets the number of threads spawn (i.e. the number of runs that are simultaneously executed):
###Code
results = tc.run_with_threads(nthreads=1)
###Output
INFO:cgc.triclustering:Waiting for run 0
INFO:cgc.triclustering:Error = -896077199.4136138
WARNING:cgc.triclustering:Run not converged in 50 iterations
INFO:cgc.triclustering:Waiting for run 1
INFO:cgc.triclustering:Error = -895762548.9296267
WARNING:cgc.triclustering:Run not converged in 50 iterations
INFO:cgc.triclustering:Waiting for run 2
INFO:cgc.triclustering:Error = -895838866.2336675
WARNING:cgc.triclustering:Run not converged in 50 iterations
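###Markdown
None of the runs above converged within the 50 iterations used here. A minimal sketch of re-running with a larger iteration budget (the value 500 follows the suggestion in the next paragraph; all other settings are unchanged):
###Code
# Re-instantiate with a larger max_iterations and run again (sketch, same API as above)
tc = Triclustering(
    spring_indices.data,
    num_time_clusters,
    num_space_clusters,
    num_band_clusters,
    max_iterations=500,  # larger budget than the 50 iterations used above
    conv_threshold=conv_threshold,
    nruns=nruns
)
results = tc.run_with_threads(nthreads=1)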
###Markdown
The output might indicate that for some of the runs convergence is not achieved within the specified number of iterations - increasing this value to ~500 should lead to converged solutions within the threshold provided. **NOTE**: The low-memory implementation that is available for the [co-clustering algorithm](https://cgc.readthedocs.io/en/latest/coclustering.html#co-clustering) is currently not available for tri-clustering. Dask-based implementation (distributed systems) As for the [co-clustering algorithm](https://cgc.readthedocs.io/en/latest/coclustering.html#co-clustering), Dask arrays are employed in a dedicated implementation to process the data in chunks. If a compute cluster is used, data are distributed across the nodes of the cluster. In order to load the data set as a `DataArray` using Dask arrays as the underlying structure, we specify the `chunks` argument in `xr.open_zarr()`:
###Code
# set a chunk size of 10 along the time dimension, no chunking in x and y
chunks = {'time': 10, 'x': -1, 'y': -1 }
spring_indices_dask = xr.open_zarr('../data/spring-indices.zarr', chunks=chunks)
spring_indices_dask = spring_indices_dask.to_array(dim='spring-index')
print(spring_indices_dask)
###Output
<xarray.DataArray 'stack-c18f111b8b48c3553ca6f85e4bf3b06f' (spring-index: 3, time: 40, y: 155, x: 312)>
dask.array<stack, shape=(3, 40, 155, 312), dtype=float64, chunksize=(1, 10, 155, 312), chunktype=numpy.ndarray>
Coordinates:
* time (time) int64 1980 1981 1982 1983 1984 ... 2016 2017 2018 2019
* x (x) float64 -126.2 -126.0 -125.7 ... -56.8 -56.57 -56.35
* y (y) float64 49.14 48.92 48.69 48.47 ... 15.01 14.78 14.56
* spring-index (spring-index) <U11 'first-bloom' 'first-leaf' 'last-freeze'
Attributes:
crs: +init=epsg:4326
res: [0.22457882102988036, -0.22457882102988042]
transform: [0.22457882102988036, 0.0, -126.30312894720473, 0.0, -0.22457...
###Markdown
We perform the same data manipulation as carried out in the previous section. Note that all operations involving Dask arrays (including the data loading) are computed ["lazily"](https://tutorial.dask.org/01x_lazy.html) (i.e. they are not carried out until the very end).
###Code
spring_indices_dask = spring_indices_dask.stack(space=['x', 'y'])
spring_indices_dask = spring_indices_dask.dropna('space', how='any')
tc_dask = Triclustering(
spring_indices_dask.data,
num_time_clusters,
num_space_clusters,
num_band_clusters,
max_iterations=max_iterations,
conv_threshold=conv_threshold,
nruns=nruns
)
###Output
_____no_output_____
###Markdown
For testing, we make use of a local Dask cluster, i.e. a cluster of processes and threads running on the same machine where the cluster is created:
###Code
cluster = LocalCluster()
print(cluster)
###Output
LocalCluster(a71fe34b, 'tcp://127.0.0.1:60738', workers=4, threads=8, memory=16.00 GiB)
###Markdown
Connection to the cluster takes place via the `Client` object:
###Code
client = Client(cluster)
print(client)
###Output
<Client: 'tcp://127.0.0.1:60738' processes=4 threads=8, memory=16.00 GiB>
###Markdown
To start the tri-clustering runs, we now pass the instance of the `Client` to the `run_with_dask` method (same as for co-clustering):
###Code
results = tc_dask.run_with_dask(client=client)
###Output
INFO:cgc.triclustering:Run 0
INFO:cgc.triclustering:Error = -896082522.82864
WARNING:cgc.triclustering:Run not converged in 50 iterations
INFO:cgc.triclustering:Run 1
INFO:cgc.triclustering:Error = -896073513.6485721
WARNING:cgc.triclustering:Run not converged in 50 iterations
INFO:cgc.triclustering:Run 2
INFO:cgc.triclustering:Error = -895915302.3762344
WARNING:cgc.triclustering:Run not converged in 50 iterations
###Markdown
When the runs are finished, we can close the connection to the cluster:
###Code
client.close()
###Output
_____no_output_____
###Markdown
Inspecting the results The tri-clustering results object includes the band-cluster assignment (`results.bnd_clusters`) in addition to cluster assignments in the two other dimensions, which are referred to as rows and columns by analogy with co-clustering:
###Code
print(f"Row (time) clusters: {results.row_clusters}")
print(f"Column (space) clusters: {results.col_clusters}")
print(f"Band clusters: {results.bnd_clusters}")
###Output
Row (time) clusters: [2 4 2 4 2 3 4 3 3 2 1 4 4 4 4 4 2 4 4 4 1 4 2 4 4 4 4 3 2 4 3 3 1 0 0 1 1
1 0 0]
Column (space) clusters: [17 17 17 ... 13 13 13]
Band clusters: [1 2 0]
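###Markdown
As a quick sanity check (a small addition, not part of the original tutorial), the occupancy of each cluster can be counted directly from the label arrays printed above:
###Code
# Count how many elements were assigned to each cluster along every dimension
print('years per time cluster:          ', np.bincount(results.row_clusters, minlength=num_time_clusters))
print('grid cells per space cluster:    ', np.bincount(results.col_clusters, minlength=num_space_clusters))
print('spring indices per band cluster: ', np.bincount(results.bnd_clusters, minlength=num_band_clusters))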
###Markdown
We first create `DataArray`'s for the spatial, temporal and band clusters:
###Code
time_clusters = xr.DataArray(results.row_clusters, dims='time',
coords=spring_indices.time.coords,
name='time cluster')
space_clusters = xr.DataArray(results.col_clusters, dims='space',
coords=spring_indices.space.coords,
name='space cluster')
band_clusters = xr.DataArray(results.bnd_clusters, dims='spring_index',
coords=spring_indices.spring_index.coords,
name='band cluster')
###Output
_____no_output_____
###Markdown
We can now visualize the temporal clusters to which each year belongs, and make a histogram of the number of years in each cluster:
###Code
fig, ax = plt.subplots(1, 2)
# line plot
time_clusters.plot(ax=ax[0], x='time', marker='o')
ax[0].set_yticks(range(num_time_clusters))
# temporal cluster histogram
time_clusters.plot.hist(ax=ax[1], bins=num_time_clusters)
###Output
_____no_output_____
###Markdown
Similarly, we can visualize the assignment of the spring indices to the band clusters:
###Code
fig, ax = plt.subplots(1, 2)
# line plot
band_clusters.plot(ax=ax[0], x='spring_index', marker='o')
ax[0].set_yticks(range(num_band_clusters))
# band cluster histogram
band_clusters.plot.hist(ax=ax[1], bins=num_band_clusters)
ax[1].set_xticks(range(num_band_clusters))
###Output
_____no_output_____
###Markdown
Spatial clusters can also be visualized after 'unstacking' the location index that we have initially created, thus reverting to the original (x, y) coordinates:
###Code
space_clusters_xy = space_clusters.unstack('space')
space_clusters_xy.isel().plot.imshow(
x='x', y='y', levels=range(num_space_clusters+1)
)
###Output
_____no_output_____
###Markdown
The average spring index value of each tri-cluster can be computed via a dedicated utility function in CGC, which returns the cluster means in a 3D array with dimensions `(n_bnd_clusters, n_row_clusters, n_col_clusters)`. We calculate the cluster averages and create a `DataArray` for further manipulation and plotting:
###Code
# calculate the tri-cluster averages
means = calculate_tricluster_averages(
spring_indices.data,
time_clusters,
space_clusters,
band_clusters,
num_time_clusters,
num_space_clusters,
num_band_clusters
)
means = xr.DataArray(
means,
coords=(
('band_clusters', range(num_band_clusters)),
('time_clusters', range(num_time_clusters)),
('space_clusters', range(num_space_clusters))
)
)
###Output
_____no_output_____
###Markdown
The computed cluster means and the spatial clusters can be employed to plot the average spring-index value for each of the band and temporal clusters. It is important to realize that multiple spring indices might get assigned to the same band cluster, in which case the corresponding cluster-based means are computed over more than one spring index.
###Code
space_means = means.sel(space_clusters=space_clusters, drop=True)
space_means = space_means.unstack('space')
space_means.plot.imshow(
x='x', y='y',
row='band_clusters',
col='time_clusters',
vmin=50, vmax=120
)
###Output
_____no_output_____
###Markdown
K-means refinement Overview Like co-clustering, tri-clustering leads to a 'blocked' structure of the original data. Here too, an additional cluster-refinement analysis using [k-means](https://en.wikipedia.org/wiki/K-means_clustering) can help to identify patterns that are common across blocks (actually cubes for tri-clustering) and to merge the blocks with the highest degree of similarity into the same cluster. See the [co-clustering](https://cgc-tutorial.readthedocs.io/en/latest/notebooks/coclustering.html) tutorial for more details on the selection of the optimal k value.
###Code
clusters = (results.bnd_clusters, results.row_clusters, results.col_clusters)
nclusters = (num_band_clusters, num_time_clusters, num_space_clusters)
km = Kmeans(
spring_indices.data,
clusters=clusters,
nclusters=nclusters,
k_range=range(2, 10)
)
###Output
_____no_output_____
###Markdown
The refinement analysis is then run as:
###Code
results_kmeans = km.compute()
###Output
_____no_output_____
###Markdown
Results The object returned by `Kmeans.compute` contains all results, most importantly the optimal `k` value:
###Code
print(f"Optimal k value: {results_kmeans.k_value}")
###Output
Optimal k value: 2
###Markdown
The centroids of the tri-cluster means can be employed to plot the refined cluster averages of the spring index dataset. Note that the refined clusters merge tri-clusters across all dimensions, including the band axis. Thus, the means of these refined clusters correspond to averages over more than one spring index.
###Code
means_kmeans = xr.DataArray(
results_kmeans.cl_mean_centroids,
coords=(
('band_clusters', range(num_band_clusters)),
('time_clusters', range(num_time_clusters)),
('space_clusters', range(num_space_clusters))
)
)
# drop tri-clusters that are not populated
means_kmeans = means_kmeans.dropna('band_clusters', how='all')
means_kmeans = means_kmeans.dropna('time_clusters', how='all')
means_kmeans = means_kmeans.dropna('space_clusters', how='all')
space_means = means_kmeans.sel(space_clusters=space_clusters, drop=True)
space_means = space_means.unstack('space')
space_means.squeeze().plot.imshow(
x='x', y='y',
row='band_clusters',
col='time_clusters',
vmin=50, vmax=120
)
###Output
_____no_output_____
###Markdown
Tri-clustering Introduction This notebook illustrates how to use [Clustering Geo-data Cubes (CGC)](https://cgc.readthedocs.io) to perform a tri-clustering analysis of geospatial data. This notebook builds on an analogous tutorial that illustrates how to run co-clustering analyses using CGC (see the [web page](https://cgc-tutorial.readthedocs.io/en/latest/notebooks/coclustering.html) or the [notebook on GitHub](https://github.com/esciencecenter-digital-skills/tutorial-cgc/blob/main/notebooks/coclustering.ipynb)); we recommend having a look at that first. Tri-clustering is the natural generalization of co-clustering to three dimensions. As in co-clustering, one looks for similarity patterns in a data array (in 3D, a data cube) by simultaneously clustering all its dimensions. For geospatial data, the use of tri-clustering enables the analysis of datasets that include an additional dimension on top of space and time. This extra dimension is often referred to as the 'band' dimension by analogy with color images, which include three data layers (red, green and blue) for each pixel. Tri-clusters in such arrays can be identified as spatial regions for which data across a subset of bands behave similarly in a subset of the times considered. In this notebook we illustrate how to perform a tri-clustering analysis with the CGC package using a phenological dataset that includes three products: the day of the year of first leaf appearance, first bloom, and last freeze in the conterminous United States. For more information about this dataset please check the [co-clustering tutorial](https://github.com/esciencecenter-digital-skills/tutorial-cgc/blob/main/notebooks/coclustering.ipynb) or have a look at the [original publication](https://doi.org/10.1016/j.agrformet.2018.06.028). Note that in addition to CGC, whose installation instructions can be found [here](https://github.com/phenology/cgc), a few other packages are required in order to run this notebook. Please have a look at this tutorial's [installation instructions](https://github.com/escience-academy/tutorial-cgc). Imports and general configuration
###Code
import cgc
import logging
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
from dask.distributed import Client, LocalCluster
from cgc.triclustering import Triclustering
from cgc.kmeans import Kmeans
from cgc.utils import calculate_tricluster_averages
print(f'CGC version: {cgc.__version__}')
###Output
CGC version: 0.6.1
###Markdown
CGC makes use of `logging`; set the desired verbosity via:
###Code
logging.basicConfig(level=logging.INFO)
###Output
_____no_output_____
###Markdown
Reading the data We use [the Xarray package](http://xarray.pydata.org) to read the data. We open the Zarr archive containing the data set and convert it from a collection of three data variables (each having dimensions: time, x and y) to a single array (a `DataArray`):
###Code
spring_indices = xr.open_zarr('../data/spring-indices.zarr', chunks=None)
print(spring_indices)
###Output
<xarray.Dataset>
Dimensions: (time: 40, y: 155, x: 312)
Coordinates:
* time (time) int64 1980 1981 1982 1983 1984 ... 2016 2017 2018 2019
* x (x) float64 -126.2 -126.0 -125.7 -125.5 ... -56.8 -56.57 -56.35
* y (y) float64 49.14 48.92 48.69 48.47 ... 15.23 15.01 14.78 14.56
Data variables:
first-bloom (time, y, x) float64 ...
first-leaf (time, y, x) float64 ...
last-freeze (time, y, x) float64 ...
Attributes:
crs: +init=epsg:4326
res: [0.22457882102988036, -0.22457882102988042]
transform: [0.22457882102988036, 0.0, -126.30312894720473, 0.0, -0.22457...
###Markdown
(NOTE: if the dataset is not available locally, replace the path above with the following URL: https://raw.githubusercontent.com/esciencecenter-digital-skills/tutorial-cgc/main/data/spring-indices.zarr . The `aiohttp` and `requests` packages need to be installed to open remote data, which can be done via: `pip install aiohttp requests`)
###Code
spring_indices = spring_indices.to_array(dim='spring_index')
print(spring_indices)
###Output
<xarray.DataArray (spring_index: 3, time: 40, y: 155, x: 312)>
array([[[[117., 120., 126., ..., 184., 183., 179.],
[ nan, nan, 118., ..., 181., 178., 176.],
[ nan, nan, nan, ..., 176., 176., 176.],
...,
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan]],
[[ 92., 96., 104., ..., 173., 173., 170.],
[ nan, nan, 92., ..., 171., 167., 164.],
[ nan, nan, nan, ..., 165., 164., 163.],
...,
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan]],
[[131., 134., 139., ..., 187., 187., 183.],
[ nan, nan, 133., ..., 183., 180., 178.],
[ nan, nan, nan, ..., 176., 176., 176.],
...,
...
...,
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan]],
[[ 53., 55., 64., ..., 156., 155., 153.],
[ nan, nan, 54., ..., 154., 151., 148.],
[ nan, nan, nan, ..., 147., 147., 147.],
...,
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan]],
[[ 68., 69., 75., ..., 142., 135., 129.],
[ nan, nan, 68., ..., 137., 134., 132.],
[ nan, nan, nan, ..., 132., 133., 133.],
...,
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan]]]])
Coordinates:
* time (time) int64 1980 1981 1982 1983 1984 ... 2016 2017 2018 2019
* x (x) float64 -126.2 -126.0 -125.7 ... -56.8 -56.57 -56.35
* y (y) float64 49.14 48.92 48.69 48.47 ... 15.01 14.78 14.56
* spring_index (spring_index) <U11 'first-bloom' 'first-leaf' 'last-freeze'
Attributes:
crs: +init=epsg:4326
res: [0.22457882102988036, -0.22457882102988042]
transform: [0.22457882102988036, 0.0, -126.30312894720473, 0.0, -0.22457...
###Markdown
The spring index is now loaded as a 4D-array whose dimensions (spring index, time, y, x) are labeled with coordinates (spring-index label, year, latitude, longitude). We can inspect the data set by plotting a slice along the time dimension:
###Code
# select years from 1990 to 1992
spring_indices.sel(time=slice(1990, 1992)).plot.imshow(row='spring_index', col='time')
###Output
_____no_output_____
###Markdown
We manipulate the array's spatial dimensions, creating a combined (x, y) dimension. We also drop the grid cells that have null values for any spring-index and year:
###Code
spring_indices = spring_indices.stack(space=['x', 'y'])
location = np.arange(spring_indices.space.size) # create a combined (x,y) index
spring_indices = spring_indices.assign_coords(location=('space', location))
# drop pixels that are null-valued for any year/spring-index
spring_indices = spring_indices.dropna('space', how='any')
print(spring_indices)
# size of the array
print("{} MB".format(spring_indices.nbytes/2**20))
###Output
21.591796875 MB
###Markdown
The tri-clustering analysis Overview Once we have loaded the data set as a 3D array, we can run the tri-clustering analysis. As for co-clustering, the algorithm implemented in CGC starts from a random cluster assignment and iteratively updates the tri-clusters. When the loss function does not change by more than a given threshold in two consecutive iterations, the cluster assignment is considered converged. Also for tri-clustering, multiple differently-initialized runs need to be performed in order to sample the cluster space and to avoid local minima as much as possible. For more information about the algorithm, have a look at CGC's [co-clustering](https://cgc.readthedocs.io/en/latest/coclustering.html#co-clustering) and [tri-clustering](https://cgc.readthedocs.io/en/latest/triclustering.html#tri-clustering) documentation. To run the analysis for the data set that we have loaded in the previous section, we first choose an initial number of clusters for the band, space, and time dimensions, and set the values of a few other parameters:
###Code
num_band_clusters = 3
num_time_clusters = 5
num_space_clusters = 20
max_iterations = 50 # maximum number of iterations
conv_threshold = 0.1 # convergence threshold
nruns = 3 # number of differently-initialized runs
###Output
_____no_output_____
###Markdown
**NOTE**: the numbers of clusters have been selected in order to keep the memory requirements and execution time suitable for running this tutorial on [mybinder.org](https://mybinder.org). If the infrastructure where you are running this notebook has more memory and computing power available, feel free to increase these values. We then instantiate a `Triclustering` object:
###Code
tc = Triclustering(
spring_indices.data, # data array with shape: (bands, rows, columns)
num_time_clusters,
num_space_clusters,
num_band_clusters,
max_iterations=max_iterations,
conv_threshold=conv_threshold,
nruns=nruns
)
###Output
_____no_output_____
###Markdown
As for co-clustering, one can now run the analysis on a local system, using a [Numpy](https://numpy.org)-based implementation, or on a distributed system, using a [Dask](https://dask.org)-based implementation. Numpy-based implementation (local) Also for tri-clustering, the `nthreads` argument sets the number of threads spawned (i.e. the number of runs that are executed simultaneously):
###Code
results = tc.run_with_threads(nthreads=1)
###Output
INFO:cgc.triclustering:Waiting for run 0
INFO:cgc.triclustering:Error = -895673372.3176122
WARNING:cgc.triclustering:Run not converged in 50 iterations
INFO:cgc.triclustering:Waiting for run 1
INFO:cgc.triclustering:Error = -895965783.8523508
WARNING:cgc.triclustering:Run not converged in 50 iterations
INFO:cgc.triclustering:Waiting for run 2
INFO:cgc.triclustering:Error = -895841515.538431
WARNING:cgc.triclustering:Run not converged in 50 iterations
###Markdown
The output might indicate that for some of the runs convergence is not achieved within the specified number of iterations - increasing this value to ~500 should lead to converged solutions within the threshold provided. **NOTE**: The low-memory implementation that is available for the [co-clustering algorithm](https://cgc.readthedocs.io/en/latest/coclustering.html#co-clustering) is currently not available for tri-clustering. Dask-based implementation (distributed systems) As for the [co-clustering algorithm](https://cgc.readthedocs.io/en/latest/coclustering.html#co-clustering), Dask arrays are employed in a dedicated implementation to process the data in chunks. If a compute cluster is used, data are distributed across the nodes of the cluster. In order to load the data set as a `DataArray` using Dask arrays as the underlying structure, we specify the `chunks` argument in `xr.open_zarr()`:
###Code
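# (Hedged sketch, commented out) To reach convergence as discussed above, the threaded run
# could be repeated with a larger iteration budget, e.g.:
# tc = Triclustering(spring_indices.data, num_time_clusters, num_space_clusters,
#                    num_band_clusters, max_iterations=500,
#                    conv_threshold=conv_threshold, nruns=nruns)
# results = tc.run_with_threads(nthreads=1)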
# set a chunk size of 10 along the time dimension, no chunking in x and y
chunks = {'time': 10, 'x': -1, 'y': -1 }
spring_indices_dask = xr.open_zarr('../data/spring-indices.zarr', chunks=chunks)
spring_indices_dask = spring_indices_dask.to_array(dim='spring-index')
print(spring_indices_dask)
###Output
<xarray.DataArray 'stack-c18f111b8b48c3553ca6f85e4bf3b06f' (spring-index: 3, time: 40, y: 155, x: 312)>
dask.array<stack, shape=(3, 40, 155, 312), dtype=float64, chunksize=(1, 10, 155, 312), chunktype=numpy.ndarray>
Coordinates:
* time (time) int64 1980 1981 1982 1983 1984 ... 2016 2017 2018 2019
* x (x) float64 -126.2 -126.0 -125.7 ... -56.8 -56.57 -56.35
* y (y) float64 49.14 48.92 48.69 48.47 ... 15.01 14.78 14.56
* spring-index (spring-index) <U11 'first-bloom' 'first-leaf' 'last-freeze'
Attributes:
crs: +init=epsg:4326
res: [0.22457882102988036, -0.22457882102988042]
transform: [0.22457882102988036, 0.0, -126.30312894720473, 0.0, -0.22457...
###Markdown
We perform the same data manipulation as carried out in the previous section. Note that all operations involving Dask arrays (including the data loading) are computed ["lazily"](https://tutorial.dask.org/01x_lazy.html) (i.e. they are not carried out until the very end).
###Code
spring_indices_dask = spring_indices_dask.stack(space=['x', 'y'])
spring_indices_dask = spring_indices_dask.dropna('space', how='any')
tc_dask = Triclustering(
spring_indices_dask.data,
num_time_clusters,
num_space_clusters,
num_band_clusters,
max_iterations=max_iterations,
conv_threshold=conv_threshold,
nruns=nruns
)
###Output
_____no_output_____
###Markdown
For testing, we make use of a local Dask cluster, i.e. a cluster of processes and threads running on the same machine where the cluster is created:
###Code
cluster = LocalCluster()
print(cluster)
###Output
LocalCluster(a275c145, 'tcp://127.0.0.1:54694', workers=4, threads=8, memory=16.00 GiB)
###Markdown
Connection to the cluster takes place via the `Client` object:
###Code
client = Client(cluster)
print(client)
###Output
<Client: 'tcp://127.0.0.1:54694' processes=4 threads=8, memory=16.00 GiB>
###Markdown
To start the tri-clustering runs, we now pass the instance of the `Client` to the `run_with_dask` method (same as for co-clustering):
###Code
results = tc_dask.run_with_dask(client=client)
###Output
INFO:cgc.triclustering:Run 0
INFO:cgc.triclustering:Error = -895886041.7753079
WARNING:cgc.triclustering:Run not converged in 50 iterations
INFO:cgc.triclustering:Run 1
INFO:cgc.triclustering:Error = -896127501.3032976
WARNING:cgc.triclustering:Run not converged in 50 iterations
INFO:cgc.triclustering:Run 2
INFO:cgc.triclustering:Error = -895944321.1238331
WARNING:cgc.triclustering:Run not converged in 50 iterations
###Markdown
When the runs are finished, we can close the connection to the cluster:
###Code
client.close()
###Output
_____no_output_____
###Markdown
Inspecting the results The tri-clustering results object includes the band-cluster assignment (`results.bnd_clusters`) in addition to cluster assignments in the two other dimensions, which are referred to as rows and columns by analogy with co-clustering:
###Code
print(f"Row (time) clusters: {results.row_clusters}")
print(f"Column (space) clusters: {results.col_clusters}")
print(f"Band clusters: {results.bnd_clusters}")
###Output
Row (time) clusters: [0 2 2 2 2 0 4 0 0 0 4 2 2 2 2 2 0 2 2 4 4 2 0 2 2 4 4 0 2 2 0 0 4 3 3 1 1
4 3 3]
Column (space) clusters: [ 9 9 9 ... 14 14 14]
Band clusters: [2 1 0]
###Markdown
We first create `DataArray`'s for the spatial, temporal and band clusters:
###Code
time_clusters = xr.DataArray(results.row_clusters, dims='time',
coords=spring_indices.time.coords,
name='time cluster')
space_clusters = xr.DataArray(results.col_clusters, dims='space',
coords=spring_indices.space.coords,
name='space cluster')
band_clusters = xr.DataArray(results.bnd_clusters, dims='spring_index',
coords=spring_indices.spring_index.coords,
name='band cluster')
###Output
_____no_output_____
###Markdown
We can now visualize the temporal clusters to which each year belongs, and make a histogram of the number of years in each cluster:
###Code
fig, ax = plt.subplots(1, 2)
# line plot
time_clusters.plot(ax=ax[0], x='time', marker='o')
ax[0].set_yticks(range(num_time_clusters))
# temporal cluster histogram
time_clusters.plot.hist(ax=ax[1], bins=num_time_clusters)
###Output
_____no_output_____
###Markdown
Similarly, we can visualize the assignment of the spring indices to the band clusters:
###Code
fig, ax = plt.subplots(1, 2)
# line plot
band_clusters.plot(ax=ax[0], x='spring_index', marker='o')
ax[0].set_yticks(range(num_band_clusters))
# band cluster histogram
band_clusters.plot.hist(ax=ax[1], bins=num_band_clusters)
ax[1].set_xticks(range(num_band_clusters))
###Output
_____no_output_____
###Markdown
Spatial clusters can also be visualized after 'unstacking' the location index that we have initially created, thus reverting to the original (x, y) coordinates:
###Code
space_clusters_xy = space_clusters.unstack('space')
space_clusters_xy.isel().plot.imshow(
x='x', y='y', levels=range(num_space_clusters+1)
)
###Output
_____no_output_____
###Markdown
The average spring index value of each tri-cluster can be computed via a dedicated utility function in CGC, which returns the cluster means in a 3D-array with dimensions `(n_bnd_clusters, n_row_clusters, n_col_clusters)`. We calculate the cluster averages and create a `DataArray` for further manipulation and plotting:
###Code
# calculate the tri-cluster averages
means = calculate_tricluster_averages(
spring_indices.data,
time_clusters,
space_clusters,
band_clusters,
num_time_clusters,
num_space_clusters,
num_band_clusters
)
means = xr.DataArray(
means,
coords=(
('band_clusters', range(num_band_clusters)),
('time_clusters', range(num_time_clusters)),
('space_clusters', range(num_space_clusters))
)
)
###Output
_____no_output_____
###Markdown
The computed cluster means and the spatial clusters can be employed to plot the average spring-index value for each of the band and temporal clusters. It is important to realize that multiple spring indices might get assigned to the same band cluster, in which case the corresponding cluster-based means are computed over more than one spring index.
###Code
space_means = means.sel(space_clusters=space_clusters, drop=True)
space_means = space_means.unstack('space')
space_means.plot.imshow(
x='x', y='y',
row='band_clusters',
col='time_clusters',
vmin=50, vmax=120
)
###Output
_____no_output_____
###Markdown
K-means refinement Overview As with co-clustering, tri-clustering also leads to a 'blocked' structure of the original data. Here too, an additional cluster-refinement analysis using [k-means](https://en.wikipedia.org/wiki/K-means_clustering) can help to identify patterns that are common across blocks (cubes, in the tri-clustering case) and to merge the blocks with the highest degree of similarity into the same cluster. See the [co-clustering](https://cgc-tutorial.readthedocs.io/en/latest/notebooks/coclustering.html) tutorial for more details on the selection of the optimal k value.
###Code
clusters = (results.bnd_clusters, results.row_clusters, results.col_clusters)
nclusters = (num_band_clusters, num_time_clusters, num_space_clusters)
km = Kmeans(
spring_indices.data,
clusters=clusters,
nclusters=nclusters,
k_range=range(2, 10)
)
###Output
_____no_output_____
###Markdown
The refinement analysis is then run as:
###Code
results_kmeans = km.compute()
###Output
_____no_output_____
###Markdown
Results The object returned by `Kmeans.compute` contains all results, most importantly the optimal `k` value:
###Code
print(f"Optimal k value: {results_kmeans.k_value}")
###Output
Optimal k value: 2
###Markdown
The refined tri-cluster averages of the spring index dataset are available as `results_kmeans.cluster_averages`. Note that the refined clusters merge tri-clusters across all dimensions, including the band axis. Thus, the means of these refined clusters correspond to averages over more than one spring index. In the following, we will plot instead the refined cluster labels (`results_kmeans.labels`):
###Code
labels = xr.DataArray(
results_kmeans.labels,
coords=(
('band_clusters', range(num_band_clusters)),
('time_clusters', range(num_time_clusters)),
('space_clusters', range(num_space_clusters))
)
)
# drop tri-clusters that are not populated
labels = labels.dropna('band_clusters', how='all')
labels = labels.dropna('time_clusters', how='all')
labels = labels.dropna('space_clusters', how='all')
labels = labels.sel(space_clusters=space_clusters, drop=True)
labels = labels.unstack('space')
labels.squeeze().plot.imshow(
x='x', y='y',
row='band_clusters',
col='time_clusters',
levels=range(results_kmeans.k_value + 1)
)
###Output
_____no_output_____ |
test/perceptron.ipynb | ###Markdown
感知机朴素模式
###Code
from algo import NaivePerceptron, Perceptron
import numpy as np
import matplotlib.pyplot as plt
n_perceptron = NaivePerceptron()
perceptron = Perceptron()
X = np.array([[3, 3], [4, 3], [1, 1], [1, 2.5]])
y = np.array([1, 1, -1, -1])
X, y
%timeit n_perceptron.fit(X, y)
%timeit perceptron.fit(X, y)
###Output
25 µs ± 1 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
24.7 µs ± 2.14 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
###Markdown
$y_i(\sum_{j=1}^{N}\alpha_j y_j x_j \cdot x_i + b)$
###Code
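# (Hedged sketch) The dual-form condition above, y_i * (sum_j alpha_j y_j x_j . x_i + b) <= 0,
# flags misclassified samples; with a Gram matrix it can be vectorized as follows
# (alpha and b are hypothetical names for the dual coefficients and bias, not attributes
# of the classes used in this notebook):
# G = X @ X.T
# misclassified = y * (G @ (alpha * y) + b) <= 0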
from mpl_toolkits.mplot3d import Axes3D
X = np.array([[3, 3], [4, 3], [1, 1], [1, 2.5]])
y = np.array([1, 1, -1, -1])
fig = plt.figure(figsize=(12, 8))
ax1 = plt.axes(projection='3d')
for i, j, k in zip(X[:, 0], X[:, 1], y):
if k == 1:
ax1.scatter3D(i, j, k, c='red')
else:
ax1.scatter3D(i, j, k, c='blue')
a = np.linspace(-1,5,50)
b = np.linspace(-1,2,50)
X_a,Y_a = np.meshgrid(a,b)
Z = X_a * perceptron.weight[0] + Y_a * perceptron.weight[1] + perceptron.bias
ax1.plot_surface(X_a, Y_a, Z, alpha=0.3)
plt.show()
from mpl_toolkits.mplot3d import Axes3D
X = np.array([[3, 3], [4, 3], [1, 1], [1, 2.5]])
y = np.array([1, 1, -1, -1])
fig = plt.figure(figsize=(12, 8))
ax1 = plt.axes(projection='3d')
for i, j, k in zip(X[:, 0], X[:, 1], y):
if k == 1:
ax1.scatter3D(i, j, k, c='red')
else:
ax1.scatter3D(i, j, k, c='blue')
a = np.linspace(-1,5,500)
b = np.linspace(-1,4,50)
X_a,Y_a = np.meshgrid(a,b)
Z = X_a * n_perceptron.weight[0] + Y_a * n_perceptron.weight[1] + n_perceptron.bias
ax1.plot_surface(X_a, Y_a, Z, alpha=0.3)
plt.show()
###Output
_____no_output_____ |
0.15/_downloads/plot_linear_model_patterns.ipynb | ###Markdown
Linear classifier on sensor data with plot patterns and filters Decoding, a.k.a. MVPA or supervised machine learning applied to MEG and EEG data in sensor space. Fit a linear classifier with the LinearModel object providing topographical patterns which are more neurophysiologically interpretable [1]_ than the classifier filters (weight vectors). The patterns explain how the MEG and EEG data were generated from the discriminant neural sources which are extracted by the filters. Note that patterns/filters in MEG data are more similar than in EEG data because the noise is less spatially correlated in MEG than in EEG. References ---------- .. [1] Haufe, S., Meinecke, F., Görgen, K., Dähne, S., Haynes, J.-D., Blankertz, B., & Bießmann, F. (2014). On the interpretation of weight vectors of linear models in multivariate neuroimaging. NeuroImage, 87, 96–110. doi:10.1016/j.neuroimage.2013.10.067
###Code
# Authors: Alexandre Gramfort <[email protected]>
# Romain Trachel <[email protected]>
# Jean-Remi King <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne import io, EvokedArray
from mne.datasets import sample
from mne.decoding import Vectorizer, get_coef
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
# import a linear classifier from mne.decoding
from mne.decoding import LinearModel
print(__doc__)
data_path = sample.data_path()
###Output
_____no_output_____
###Markdown
Set parameters
###Code
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.1, 0.4
event_id = dict(aud_l=1, vis_l=3)
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(.5, 25, fir_design='firwin')
events = mne.read_events(event_fname)
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
decim=4, baseline=None, preload=True)
labels = epochs.events[:, -1]
# get MEG and EEG data
meg_epochs = epochs.copy().pick_types(meg=True, eeg=False)
meg_data = meg_epochs.get_data().reshape(len(labels), -1)
###Output
_____no_output_____
###Markdown
Decoding in sensor space using a LogisticRegression classifier
###Code
clf = LogisticRegression()
scaler = StandardScaler()
# create a linear model with LogisticRegression
model = LinearModel(clf)
# fit the classifier on MEG data
X = scaler.fit_transform(meg_data)
model.fit(X, labels)
# Extract and plot spatial filters and spatial patterns
for name, coef in (('patterns', model.patterns_), ('filters', model.filters_)):
# We fitted the linear model onto Z-scored data. To make the filters
# interpretable, we must reverse this normalization step
coef = scaler.inverse_transform([coef])[0]
# The data was vectorized to fit a single model across all time points and
# all channels. We thus reshape it:
coef = coef.reshape(len(meg_epochs.ch_names), -1)
# Plot
evoked = EvokedArray(coef, meg_epochs.info, tmin=epochs.tmin)
evoked.plot_topomap(title='MEG %s' % name)
###Output
_____no_output_____
###Markdown
Let's do the same on EEG data using a scikit-learn pipeline
###Code
X = epochs.pick_types(meg=False, eeg=True)
y = epochs.events[:, 2]
# Define a unique pipeline to sequentially:
clf = make_pipeline(
Vectorizer(), # 1) vectorize across time and channels
StandardScaler(), # 2) normalize features across trials
LinearModel(LogisticRegression())) # 3) fits a logistic regression
clf.fit(X, y)
# Extract and plot patterns and filters
for name in ('patterns_', 'filters_'):
# The `inverse_transform` parameter will call this method on any estimator
# contained in the pipeline, in reverse order.
coef = get_coef(clf, name, inverse_transform=True)
evoked = EvokedArray(coef, epochs.info, tmin=epochs.tmin)
evoked.plot_topomap(title='EEG %s' % name[:-1])
###Output
_____no_output_____ |
_posts/scikit/permutations-the-significance-of-a-classification-score/Test with permutations the significance of a classification score.ipynb | ###Markdown
In order to test whether a classification score is significant, one technique is to repeat the classification procedure after randomizing (permuting) the labels. The p-value is then given by the percentage of runs for which the score obtained is greater than the classification score obtained in the first place. New to Plotly? Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/). You can set up Plotly to work in [online](https://plot.ly/python/getting-started/initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/start-plotting-online). We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started! Version
###Code
import sklearn
sklearn.__version__
###Output
_____no_output_____
###Markdown
Imports This tutorial imports [SVC](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC), [StratifiedKFold](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedKFold.html#sklearn.model_selection.StratifiedKFold) and [permutation_test_score](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.permutation_test_score.html#sklearn.model_selection.permutation_test_score).
###Code
import plotly.plotly as py
import plotly.graph_objs as go
print(__doc__)
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import permutation_test_score
from sklearn import datasets
###Output
Automatically created module for IPython interactive environment
###Markdown
Calculations Loading a dataset
###Code
iris = datasets.load_iris()
X = iris.data
y = iris.target
n_classes = np.unique(y).size
# Some noisy data not correlated
random = np.random.RandomState(seed=0)
E = random.normal(size=(len(X), 2200))
# Add noisy data to the informative features for make the task harder
X = np.c_[X, E]
svm = SVC(kernel='linear')
cv = StratifiedKFold(2)
score, permutation_scores, pvalue = permutation_test_score(
svm, X, y, scoring="accuracy", cv=cv, n_permutations=100, n_jobs=1)
print("Classification score %s (pvalue : %s)" % (score, pvalue))
###Output
Classification score 0.513333333333 (pvalue : 0.00990099009901)
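###Markdown
As a quick check, the p-value printed above can be reproduced directly from the permutation scores: scikit-learn computes it as (C + 1) / (n_permutations + 1), where C is the number of permutations scoring at least as well as the original labels. A minimal sketch using the arrays already computed:
###Code
# Empirical p-value from the permutation scores (should match the value printed above)
pvalue_check = (np.sum(permutation_scores >= score) + 1.0) / (len(permutation_scores) + 1.0)
print(pvalue_check)
###Output
_____no_output_____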
###Markdown
Plot Results View histogram of permutation scores
###Code
trace = go.Histogram(x=permutation_scores,
nbinsx=20,
marker=dict(color='blue',
line=dict(color='black', width=1)),
name='Permutation scores')
trace1 = go.Scatter(x=2 * [score],
y=[0, 20],
mode='lines',
line=dict(color='green', dash='dash'),
name='Classification Score'
' (pvalue %s)' % pvalue
)
trace2 = go.Scatter(x=2 * [1. / n_classes],
y=[1, 20],
mode='lines',
line=dict(color='black', dash='dash'),
name='Luck'
)
data = [trace, trace1, trace2]
layout = go.Layout(xaxis=dict(title='Score'))
fig = go.Figure(data=data, layout=layout)
py.iplot(fig)
###Output
_____no_output_____
###Markdown
License Author: Alexandre Gramfort License: BSD 3 clause
###Code
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'Test with permutations the significance of a classification score.ipynb', 'scikit-learn/plot-permutation-test-for-classification/', 'Test with Permutations the Significance of a Classification Score | plotly',
' ',
title = 'Test with Permutations the Significance of a Classification Score | plotly',
name = 'Test with Permutations the Significance of a Classification Score',
has_thumbnail='true', thumbnail='thumbnail/j-l-bound.jpg',
language='scikit-learn', page_type='example_index',
display_as='feature_selection', order=7,
ipynb= '~Diksha_Gabha/3094')
###Output
_____no_output_____ |
Transformer/image_classification_with_vision_transformer.ipynb | ###Markdown
***Source_Link:https://keras.io/examples/vision/image_classification_with_vision_transformer/***
###Code
pip install -U tensorflow-addons
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_addons as tfa
#Prepare the data
num_classes = 100
input_shape = (32, 32, 3)
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data()
print(f"x_train shape: {x_train.shape} - y_train shape: {y_train.shape}")
print(f"x_test shape: {x_test.shape} - y_test shape: {y_test.shape}")
#Configure the hyperparameters
learning_rate = 0.001
weight_decay = 0.0001
batch_size = 256
num_epochs = 100
image_size = 72 # We'll resize input images to this size
patch_size = 6 # Size of the patches to be extract from the input images
num_patches = (image_size // patch_size) ** 2
projection_dim = 64
num_heads = 4
transformer_units = [
projection_dim * 2,
projection_dim,
] # Size of the transformer layers
transformer_layers = 8
mlp_head_units = [2048, 1024] # Size of the dense layers of the final classifier
#Use data augmentation
data_augmentation = keras.Sequential(
[
layers.Normalization(),
layers.Resizing(image_size, image_size),
layers.RandomFlip("horizontal"),
layers.RandomRotation(factor=0.02),
layers.RandomZoom(
height_factor=0.2, width_factor=0.2
),
],
name="data_augmentation",
)
# Compute the mean and the variance of the training data for normalization.
data_augmentation.layers[0].adapt(x_train)
#Implement multilayer perceptron (MLP)
def mlp(x, hidden_units, dropout_rate):
for units in hidden_units:
x = layers.Dense(units, activation=tf.nn.gelu)(x)
x = layers.Dropout(dropout_rate)(x)
return x
#Implement patch creation as a layer
class Patches(layers.Layer):
def __init__(self, patch_size):
super(Patches, self).__init__()
self.patch_size = patch_size
def call(self, images):
batch_size = tf.shape(images)[0]
patches = tf.image.extract_patches(
images=images,
sizes=[1, self.patch_size, self.patch_size, 1],
strides=[1, self.patch_size, self.patch_size, 1],
rates=[1, 1, 1, 1],
padding="VALID",
)
patch_dims = patches.shape[-1]
patches = tf.reshape(patches, [batch_size, -1, patch_dims])
return patches
import matplotlib.pyplot as plt
plt.figure(figsize=(4, 4))
image = x_train[np.random.choice(range(x_train.shape[0]))]
plt.imshow(image.astype("uint8"))
plt.axis("off")
resized_image = tf.image.resize(
tf.convert_to_tensor([image]), size=(image_size, image_size)
)
patches = Patches(patch_size)(resized_image)
print(f"Image size: {image_size} X {image_size}")
print(f"Patch size: {patch_size} X {patch_size}")
print(f"Patches per image: {patches.shape[1]}")
print(f"Elements per patch: {patches.shape[-1]}")
n = int(np.sqrt(patches.shape[1]))
plt.figure(figsize=(4, 4))
for i, patch in enumerate(patches[0]):
ax = plt.subplot(n, n, i + 1)
patch_img = tf.reshape(patch, (patch_size, patch_size, 3))
plt.imshow(patch_img.numpy().astype("uint8"))
plt.axis("off")
#Implement the patch encoding layer
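# The encoder below linearly projects each flattened patch to `projection_dim` and adds a
# learnable position embedding, so the transformer retains information about patch order.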
class PatchEncoder(layers.Layer):
def __init__(self, num_patches, projection_dim):
super(PatchEncoder, self).__init__()
self.num_patches = num_patches
self.projection = layers.Dense(units=projection_dim)
self.position_embedding = layers.Embedding(
input_dim=num_patches, output_dim=projection_dim
)
def call(self, patch):
positions = tf.range(start=0, limit=self.num_patches, delta=1)
encoded = self.projection(patch) + self.position_embedding(positions)
return encoded
#Build the ViT model
def create_vit_classifier():
inputs = layers.Input(shape=input_shape)
# Augment data.
augmented = data_augmentation(inputs)
# Create patches.
patches = Patches(patch_size)(augmented)
# Encode patches.
encoded_patches = PatchEncoder(num_patches, projection_dim)(patches)
# Create multiple layers of the Transformer block.
for _ in range(transformer_layers):
# Layer normalization 1.
x1 = layers.LayerNormalization(epsilon=1e-6)(encoded_patches)
# Create a multi-head attention layer.
attention_output = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=projection_dim, dropout=0.1
)(x1, x1)
# Skip connection 1.
x2 = layers.Add()([attention_output, encoded_patches])
# Layer normalization 2.
x3 = layers.LayerNormalization(epsilon=1e-6)(x2)
# MLP.
x3 = mlp(x3, hidden_units=transformer_units, dropout_rate=0.1)
# Skip connection 2.
encoded_patches = layers.Add()([x3, x2])
# Create a [batch_size, projection_dim] tensor.
representation = layers.LayerNormalization(epsilon=1e-6)(encoded_patches)
representation = layers.Flatten()(representation)
representation = layers.Dropout(0.5)(representation)
# Add MLP.
features = mlp(representation, hidden_units=mlp_head_units, dropout_rate=0.5)
# Classify outputs.
logits = layers.Dense(num_classes)(features)
# Create the Keras model.
model = keras.Model(inputs=inputs, outputs=logits)
return model
#Compile, train, and evaluate the mode
def run_experiment(model):
optimizer = tfa.optimizers.AdamW(
learning_rate=learning_rate, weight_decay=weight_decay
)
model.compile(
optimizer=optimizer,
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[
keras.metrics.SparseCategoricalAccuracy(name="accuracy"),
keras.metrics.SparseTopKCategoricalAccuracy(5, name="top-5-accuracy"),
],
)
checkpoint_filepath = "/tmp/checkpoint"
checkpoint_callback = keras.callbacks.ModelCheckpoint(
checkpoint_filepath,
monitor="val_accuracy",
save_best_only=True,
save_weights_only=True,
)
history = model.fit(
x=x_train,
y=y_train,
batch_size=batch_size,
epochs=num_epochs,
validation_split=0.1,
callbacks=[checkpoint_callback],
)
model.load_weights(checkpoint_filepath)
_, accuracy, top_5_accuracy = model.evaluate(x_test, y_test)
print(f"Test accuracy: {round(accuracy * 100, 2)}%")
print(f"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%")
return history
vit_classifier = create_vit_classifier()
history = run_experiment(vit_classifier)
###Output
_____no_output_____ |
content/lessons/11-WebAPIs/Slides.ipynb | ###Markdown
IST256 Lesson 11 Web Services and API's- Assigned Reading https://ist256.github.io/spring2020/readings/Web-APIs-In-Python.html Links- Participation: [https://poll.ist256.com](https://poll.ist256.com)- Zoom Chat! FEQT (Future Exam Questions Training) 1What prints on the last line of this program?
###Code
import requests
w = 'http://httpbin.org/get'
x = { 'a' :'b', 'c':'d'}
z = { 'w' : 'r'}
response = requests.get(w, params = x, headers = z)
print(response.url)
###Output
http://httpbin.org/get?a=b&c=d
###Markdown
A. `http://httpbin.org/get?a=b` B. `http://httpbin.org/get?c=d` C. `http://httpbin.org/get?a=b&c=d` D. `http://httpbin.org/get` Vote Now: [https://poll.ist256.com](https://poll.ist256.com) FEQT (Future Exam Questions Training) 2Which line de-serializes the response?
###Code
import requests # <= load up a bunch of pre-defined functions from the requests module
w = 'http://httpbin.org/ip' # <= string
response = requests.get(w) # <= w is a url. HTTP POST/GET/PUT/DELETE Verbs of HTTP
response.raise_for_status() # <= check the response code: if not 2xx, raise an exception (4xx = client error, 5xx = server error)
d = response.json() # <= de-serialize!
d['origin']
###Output
_____no_output_____
###Markdown
A. `2` B. `3` C. `4` D. `5` Vote Now: [https://poll.ist256.com](https://poll.ist256.com) Agenda- Lesson 10 Homework Solution- A look at web API's - Places to find web API's- How to read API documentation- Examples of using API's Connect Activity**Question:** A common two-step verification process used by API's discussed in the reading is A. `OAUTH2` B. `Multi-Factor` C. `API Key in Header` D. `JSON format` Vote Now: [https://poll.ist256.com](https://poll.ist256.com) The Web Has Evolved…. From **User-Consumption**- Check the news / weather in your browser- Search the web for "George Washington's birthday"- **Internet is for people**.To **Device-Consumption**- Get news/ weather alerts on your Phone- Ask Alexa "When is George Washington's Birthday?"- **Internet of Things**. Device Consumption Requires a Web API- API = Application Program Interface. In essence it is a formal definition of functions exposed by a service.- Web API - API which works over HTTP.- In essence you can call functions and access services using the HTTP protocol.- Basic use starts with an HTTP request and the output is typically a JSON response.- We saw examples of this previously with: - Open Street Maps Geocoding: https://nominatim.openstreetmap.org/search?q=address&format=json - Weather Data Service: https://openweathermap.org- Thanks to API's we can write programs to interact with a variety of services. Finding API's requires research… Start googling…"foreign exchange rate api" Then start reading the documentation on fixer.io … Then start hacking away in Python …
###Code
import requests
url = "http://data.fixer.io/api/latest?access_key=159f1a48ad7a3d6f4dbe5d5a"
response = requests.get(url)
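# (Hedged assumption) The JSON payload is expected to contain a 'rates' mapping keyed by
# currency code, so a single rate could be read with e.g. response.json().get('rates', {}).get('USD')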
response.json()
###Output
_____no_output_____ |
Week3/DQN_Torch.ipynb | ###Markdown
Reinforcement Learning: Deep Q Networks using PyTorch Custom Environment to train our model on
###Code
import gym
from gym import spaces
import numpy as np
import random
from copy import deepcopy
class gridworld_custom(gym.Env):
"""Custom Environment that follows gym interface"""
metadata = {'render.modes': ['human']}
def __init__(self, *args, **kwargs):
super(gridworld_custom, self).__init__()
self.current_step = 0
self.reward_range = (-10, 100)
self.action_space = spaces.Discrete(2)
self.observation_space = spaces.Box(low=np.array(
[0, 0]), high=np.array([4, 4]), dtype=np.int64)
self.target_coord = (4, 4)
self.death_coord = [(3, 1), (4, 2)]
def Reward_Function(self, obs):
if (obs[0] == self.target_coord[0] and obs[1] == self.target_coord[1]):
return 20
elif (obs[0] == self.death_coord[0][0] and obs[1] == self.death_coord[0][1]) or \
(obs[0] == self.death_coord[1][0] and obs[1] == self.death_coord[1][1]):
return -10
else:
return -1
return 0
def reset(self):
self.current_step = 0
self.prev_obs = [random.randint(0, 4), random.randint(0, 4)]
if (self.prev_obs[0] == self.target_coord[0] and self.prev_obs[1] == self.target_coord[1]):
return self.reset()
return self.prev_obs
def step(self, action):
action = int(action)
self.current_step += 1
obs = deepcopy(self.prev_obs)
if(action == 0):
if(self.prev_obs[0] < 4):
obs[0] = obs[0] + 1
else:
obs[0] = obs[0]
if(action == 1):
if(self.prev_obs[0] > 0):
obs[0] = obs[0] - 1
else:
obs[0] = obs[0]
if(action == 2):
if(self.prev_obs[1] < 4):
obs[1] = obs[1] + 1
else:
obs[1] = obs[1]
if(action == 3):
if(self.prev_obs[1] > 0):
obs[1] = obs[1] - 1
else:
obs[1] = obs[1]
reward = self.Reward_Function(obs)
if (obs[0] == self.target_coord[0] and obs[1] == self.target_coord[1]) or (self.current_step >= 250):
done = True
else:
done = False
self.prev_obs = obs
return obs, reward, done, {}
def render(self, mode='human', close=False):
for i in range(0, 5):
for j in range(0, 5):
if i == self.prev_obs[0] and j == self.prev_obs[1]:
print("*", end=" ")
elif i == self.target_coord[0] and j == self.target_coord[1]:
print("w", end=" ")
elif (i == self.death_coord[0][0] and j == self.death_coord[0][1]) or \
(i == self.death_coord[1][0] and j == self.death_coord[1][1]):
print("D", end=" ")
else:
print("_", end=" ")
print()
print()
print()
###Output
_____no_output_____
###Markdown
Import required Packages
###Code
import numpy as np
import matplotlib.pyplot as plt
from copy import deepcopy
from statistics import mean
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
from tqdm.auto import tqdm
#from tqdm import tqdm
###Output
_____no_output_____
###Markdown
Build the neural net
###Code
class NeuralNetwork(nn.Module):
def __init__(self):
super(NeuralNetwork, self).__init__()
self.layer1 = nn.Linear(2, 8)
self.layer2 = nn.Linear(8, 8)
self.layer3 = nn.Linear(8, 4)
def forward(self, x):
l1 = self.layer1(x)
l1 = F.relu(l1)
l2 = self.layer2(l1)
l2 = F.relu(l2)
l3 = self.layer3(l2)
output = l3
return output
###Output
_____no_output_____
###Markdown
Check to see if there is a GPU which can be used to accelerate the workflows
###Code
device = 'cuda' if torch.cuda.is_available() else 'cpu'
## Force Use a Device
#device = 'cuda' #for GPU
#device = 'cpu' #for CPU
print(f'Using {device} device')
###Output
_____no_output_____
###Markdown
Initialize the neural net, loss function, and optimizer
###Code
q_network = NeuralNetwork().to(device)
loss_function = nn.MSELoss()
optimizer = torch.optim.Adam(q_network.parameters(), lr = 1e-3)
###Output
_____no_output_____
###Markdown
Initialise the environment
###Code
env = gridworld_custom()
###Output
_____no_output_____
###Markdown
Check the epsilon-greedy decay schedule. Just for reference.
###Code
epsilon = 1
epsilon_decay = 0.999
episodes = 5000
epsilon_copy = deepcopy(epsilon)
eps = []
for i in range(episodes):
epsilon_copy = epsilon_copy * epsilon_decay
eps.append(epsilon_copy)
plt.plot(eps)
plt.show()
###Output
_____no_output_____
###Markdown
Run everything
###Code
gamma = 0.99
batch_size = 32
pbar = tqdm(range(episodes))
last_loss = 0.0
losses_array = []
rewards_array = []
for episode in pbar:
prev_obs = env.reset()
done = False
mem_size = 0
curr_state_mem = np.array([[0,0]] * batch_size)
prev_state_mem = np.array([[0,0]] * batch_size)
action_mem = np.array([0] * batch_size)
reward_mem = np.array([0] * batch_size)
rewards = []
epsilon = epsilon * epsilon_decay
while not(done) :
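# Epsilon-greedy action selection: with probability (1 - epsilon) exploit the
# network's highest-valued action, otherwise explore with a random action.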
if(random.uniform(0, 1) > epsilon):
with torch.no_grad():
prev_q = q_network(torch.tensor(prev_obs, device=device).float())
prev_q = prev_q.cpu().detach().numpy()
action = np.argmax(prev_q)
else:
action = random.randint(0,3)
obs, reward, done, _ = env.step(action)
rewards.append(reward)
prev_state_mem[mem_size] = prev_obs
curr_state_mem[mem_size] = obs
action_mem[mem_size] = action
reward_mem[mem_size] = reward
mem_size = mem_size + 1
prev_obs = obs
if(mem_size == batch_size):
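# Once a full batch of transitions has been collected, build the TD target
# r + gamma * max_a' Q(s', a') and regress Q(s, a) towards it with an MSE loss.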
with torch.no_grad():
target_q = q_network(torch.tensor(curr_state_mem, device=device).float()).max(1)[0].detach()
expected_q_mem = torch.tensor(reward_mem, device=device).float() + ( gamma * target_q )
network_q_mem = q_network(torch.tensor(prev_state_mem, device=device).float()).gather(1, torch.tensor(action_mem, device=device).type(torch.int64).unsqueeze(1)).squeeze(1)
loss = loss_function(network_q_mem, expected_q_mem)
last_loss = "{:.3f}".format(loss.item())
mem_size = 0
optimizer.zero_grad()
loss.backward()
optimizer.step()
pbar.set_description("loss = %s" % last_loss)
losses_array.append(last_loss)
rewards_array.append(mean(rewards))
###Output
_____no_output_____
###Markdown
Plot Losses
###Code
plt.plot(losses_array, label="loss")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Plot Loss Trend
###Code
resolution = 50
cumsum_losses = np.array(pd.Series(np.array(losses_array)).rolling(window=resolution).mean() )
plt.plot(cumsum_losses, label="loss")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Plot Rewards
###Code
plt.plot(rewards_array, label="rewards")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Plot reward trend
###Code
resolution = 50
cumsum_rewards = np.array(pd.Series(np.array(rewards_array)).rolling(window=resolution).mean() )
plt.plot(cumsum_rewards, label="rewards")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Test the trained model
###Code
prev_obs = env.reset()
done = False
env.render()
while not(done):
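# Act greedily (no exploration): always pick the action with the highest predicted Q-value.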
with torch.no_grad():
prev_q = q_network(torch.tensor(prev_obs, device=device).float())
prev_q = prev_q.cpu().detach().numpy()
action = np.argmax(prev_q)
obs, reward, done, _ = env.step(action)
prev_obs = obs
env.render()
###Output
_____no_output_____ |
Data_Jobs_Listings_Glassdoor.ipynb | ###Markdown
Data Jobs Listings - Glassdoor The purpose of this notebook is to select data related to data jobs in Brazil. https://www.kaggle.com/andresionek/data-jobs-listings-glassdoor
###Code
# Libraries
import pandas as pd
import json
###Output
_____no_output_____
###Markdown
Kaggle API key
###Code
# Kaggle API key
!pip install kaggle
!mkdir ~/.kaggle
!touch '/root/.kaggle/kaggle.json'
######################################################################################
# Copy USER NAME and API KEY from Kaggle
# api_token = {"username":"username","key":"TOKEN_HERE"}
api_token = {"username":"","key":""}
######################################################################################
with open('/root/.kaggle/kaggle.json', 'w') as file:
json.dump(api_token, file)
!chmod 600 /root/.kaggle/kaggle.json
# Download dataset
!kaggle datasets download -d andresionek/data-jobs-listings-glassdoor
###Output
data-jobs-listings-glassdoor.zip: Skipping, found more recently modified local copy (use --force to force download)
###Markdown
If you get the "Unauthorized" error as a result of this cell:1. Delete the file kaggle.json with the following command: !rm /root/.kaggle/kaggle.json 2. Check or regenerate the Kaggle token.
###Code
# Unzip the compressed file
!unzip data-jobs-listings-glassdoor.zip
# List available files
!ls
###Output
country_names_2_digit_codes.csv glassdoor_reviews.csv
currency_exchange.csv glassdoor_reviews_val_reviewResponses.csv
data-jobs-listings-glassdoor.zip glassdoor_salary_salaries.csv
glassdoor_benefits_comments.csv glassdoor_wwfu.csv
glassdoor_benefits_highlights.csv glassdoor_wwfu_val_captions.csv
glassdoor_breadCrumbs.csv glassdoor_wwfu_val_photos.csv
glassdoor.csv glassdoor_wwfu_val_videos.csv
glassdoor_overview_competitors.csv sample_data
glassdoor_photos.csv
###Markdown
Display configuration
###Code
# display all columns
pd.set_option('max_columns', None)
# display all rows
#pd.set_option('max_rows', None)
# display the entire column width
#pd.set_option('max_colwidth', None)
# display all values of an item
#pd.set_option('max_seq_item', None)
# Load data with labels
glassdoor = pd.read_csv('glassdoor.csv')
print("Dataset dimension (rows, columns): ", glassdoor.shape)
glassdoor.sample(5)
###Output
Dataset dimension (rows, columns): (165290, 163)
###Markdown
Explore columns
###Code
# List all columns
glassdoor.columns.to_list()
# Unique values for country names
glassdoor['map.country'].unique()
###Output
_____no_output_____
###Markdown
Brazil
###Code
# Select lines related to variations of "Brazil"
glassdoor_br = glassdoor.loc[glassdoor['map.country'].isin(['brasil', 'brazil', 'BRAZIL', 'BRASIL', 'Brasil', 'Brazil', 'Br', 'BR', 'br', 'bR', 'BRA', 'Bra', 'bra'])]
# Number of rows and columns
glassdoor_br.shape
glassdoor_br.sample(5)
###Output
_____no_output_____
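###Markdown
An alternative (a hedged sketch, not required for the steps below) is a case-insensitive filter, which avoids enumerating every capitalization by hand:
###Code
# Normalize country names before matching; missing values simply evaluate to False
mask = glassdoor['map.country'].str.strip().str.lower().isin(['brazil', 'brasil', 'br', 'bra'])
glassdoor_br_alt = glassdoor.loc[mask]
glassdoor_br_alt.shape
###Output
_____no_output_____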
###Markdown
CSV
###Code
# Export data from Google Colaboratory to local machine
from google.colab import files
glassdoor_br.to_csv('glassdoor_br.csv')
files.download('glassdoor_br.csv')
###Output
_____no_output_____ |
C4/W3/assignment/C4_W3_Assignment_Solution.ipynb | ###Markdown
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
**Note:** This notebook can run using TensorFlow 2.5.0
###Code
#!pip install tensorflow==2.5.0
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(False)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.1,
np.cos(season_time * 6 * np.pi),
2 / np.exp(9 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
time = np.arange(10 * 365 + 1, dtype="float32")
baseline = 10
series = trend(time, 0.1)
baseline = 10
amplitude = 40
slope = 0.005
noise_level = 3
# Create the series
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
# Update with noise
series += noise(time, noise_level, seed=51)
split_time = 3000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
window_size = 20
batch_size = 32
shuffle_buffer_size = 1000
plot_series(time, series)
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
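# Slide a window of (window_size + 1) values over the series, split each window into
# features (the first window_size values) and a label (the last value), then shuffle
# and batch the resulting (features, label) pairs.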
dataset = tf.data.Dataset.from_tensor_slices(series)
dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))
dataset = dataset.shuffle(shuffle_buffer).map(lambda window: (window[:-1], window[-1]))
dataset = dataset.batch(batch_size).prefetch(1)
return dataset
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
tf.keras.backend.clear_session()
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
input_shape=[None]),
### START CODE HERE
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Dense(1),
### END CODE HERE
tf.keras.layers.Lambda(lambda x: x * 10.0)
])
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
lambda epoch: 1e-8 * 10**(epoch / 20))
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-8, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(dataset, epochs=100, callbacks=[lr_schedule])
plt.semilogx(history.history["lr"], history.history["loss"])
plt.axis([1e-8, 1e-4, 0, 30])
# FROM THIS PICK A LEARNING RATE
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
tf.keras.backend.clear_session()
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
input_shape=[None]),
### START CODE HERE
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Dense(1),
### END CODE HERE
tf.keras.layers.Lambda(lambda x: x * 100.0)
])
model.compile(loss="mse", optimizer=tf.keras.optimizers.SGD(learning_rate=1e-5, momentum=0.9),metrics=["mae"])# PUT YOUR LEARNING RATE HERE#, momentum=0.9),metrics=["mae"])
history = model.fit(dataset,epochs=500,verbose=1)
# FIND A MODEL AND A LR THAT TRAINS TO AN MAE < 3
forecast = []
results = []
for time in range(len(series) - window_size):
forecast.append(model.predict(series[time:time + window_size][np.newaxis]))
forecast = forecast[split_time-window_size:]
results = np.array(forecast)[:, 0, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, results)
tf.keras.metrics.mean_absolute_error(x_valid, results).numpy()
# YOUR RESULT HERE SHOULD BE LESS THAN 4
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
mae=history.history['mae']
loss=history.history['loss']
epochs=range(len(loss)) # Get number of epochs
#------------------------------------------------
# Plot MAE and Loss
#------------------------------------------------
plt.plot(epochs, mae, 'r')
plt.plot(epochs, loss, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["MAE", "Loss"])
plt.figure()
epochs_zoom = epochs[200:]
mae_zoom = mae[200:]
loss_zoom = loss[200:]
#------------------------------------------------
# Plot Zoomed MAE and Loss
#------------------------------------------------
plt.plot(epochs_zoom, mae_zoom, 'r')
plt.plot(epochs_zoom, loss_zoom, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["MAE", "Loss"])
plt.figure()
###Output
_____no_output_____
###Markdown
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
**Note:** This notebook can run using TensorFlow 2.5.0
###Code
#!pip install tensorflow==2.5.0
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(False)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.1,
np.cos(season_time * 6 * np.pi),
2 / np.exp(9 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
time = np.arange(10 * 365 + 1, dtype="float32")
baseline = 10
series = trend(time, 0.1)
baseline = 10
amplitude = 40
slope = 0.005
noise_level = 3
# Create the series
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
# Update with noise
series += noise(time, noise_level, seed=51)
split_time = 3000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
window_size = 20
batch_size = 32
shuffle_buffer_size = 1000
plot_series(time, series)
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
dataset = tf.data.Dataset.from_tensor_slices(series)
dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))
dataset = dataset.shuffle(shuffle_buffer).map(lambda window: (window[:-1], window[-1]))
dataset = dataset.batch(batch_size).prefetch(1)
return dataset
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
tf.keras.backend.clear_session()
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),# YOUR CODE HERE),
input_shape=[None]),
### START CODE HERE
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Dense(1),
### END CODE HERE
tf.keras.layers.Lambda(lambda x: x * 10.0)# YOUR CODE HERE)
])
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
lambda epoch: 1e-8 * 10**(epoch / 20))
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-8, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(dataset, epochs=100, callbacks=[lr_schedule])
plt.semilogx(history.history["lr"], history.history["loss"])
plt.axis([1e-8, 1e-4, 0, 30])
# FROM THIS PICK A LEARNING RATE
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
tf.keras.backend.clear_session()
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),# YOUR CODE HERE),
input_shape=[None]),
### START CODE HERE
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Dense(1),
### END CODE HERE
tf.keras.layers.Lambda(lambda x: x * 100.0)# YOUR CODE HERE)
])
model.compile(loss="mse", optimizer=tf.keras.optimizers.SGD(learning_rate=1e-5, momentum=0.9),metrics=["mae"])# PUT YOUR LEARNING RATE HERE#, momentum=0.9),metrics=["mae"])
history = model.fit(dataset,epochs=500,verbose=1)
# FIND A MODEL AND A LR THAT TRAINS TO AN MAE < 3
forecast = []
results = []
for time in range(len(series) - window_size):
forecast.append(model.predict(series[time:time + window_size][np.newaxis]))
forecast = forecast[split_time-window_size:]
results = np.array(forecast)[:, 0, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, results)
tf.keras.metrics.mean_absolute_error(x_valid, results).numpy()
# YOUR RESULT HERE SHOULD BE LESS THAN 4
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
mae=history.history['mae']
loss=history.history['loss']
epochs=range(len(loss)) # Get number of epochs
#------------------------------------------------
# Plot MAE and Loss
#------------------------------------------------
plt.plot(epochs, mae, 'r')
plt.plot(epochs, loss, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["MAE", "Loss"])
plt.figure()
epochs_zoom = epochs[200:]
mae_zoom = mae[200:]
loss_zoom = loss[200:]
#------------------------------------------------
# Plot Zoomed MAE and Loss
#------------------------------------------------
plt.plot(epochs_zoom, mae_zoom, 'r')
plt.plot(epochs_zoom, loss_zoom, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["MAE", "Loss"])
plt.figure()
###Output
_____no_output_____
###Markdown
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
**Note:** This notebook can run using TensorFlow 2.5.0
###Code
#!pip install tensorflow==2.5.0
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(False)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.1,
np.cos(season_time * 6 * np.pi),
2 / np.exp(9 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
time = np.arange(10 * 365 + 1, dtype="float32")
baseline = 10
series = trend(time, 0.1)
baseline = 10
amplitude = 40
slope = 0.005
noise_level = 3
# Create the series
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
# Update with noise
series += noise(time, noise_level, seed=51)
split_time = 3000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
window_size = 20
batch_size = 32
shuffle_buffer_size = 1000
plot_series(time, series)
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
dataset = tf.data.Dataset.from_tensor_slices(series)
dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))
dataset = dataset.shuffle(shuffle_buffer).map(lambda window: (window[:-1], window[-1]))
dataset = dataset.batch(batch_size).prefetch(1)
return dataset
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
tf.keras.backend.clear_session()
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
    tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
input_shape=[None]),
### START CODE HERE
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Dense(1),
### END CODE HERE
    tf.keras.layers.Lambda(lambda x: x * 10.0)
])
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
lambda epoch: 1e-8 * 10**(epoch / 20))
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-8, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(dataset, epochs=100, callbacks=[lr_schedule])
plt.semilogx(history.history["lr"], history.history["loss"])
plt.axis([1e-8, 1e-4, 0, 30])
# FROM THIS PICK A LEARNING RATE
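# (optional, hypothetical helper — not part of the original exercise) instead of eyeballing
# the plot, the learning rate at the smallest recorded loss can be read off directly:
# best_lr = history.history["lr"][int(np.argmin(history.history["loss"]))]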
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
tf.keras.backend.clear_session()
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
    tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
input_shape=[None]),
### START CODE HERE
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Dense(1),
### END CODE HERE
    tf.keras.layers.Lambda(lambda x: x * 100.0)
])
model.compile(loss="mse", optimizer=tf.keras.optimizers.SGD(learning_rate=1e-5, momentum=0.9), metrics=["mae"])
history = model.fit(dataset,epochs=500,verbose=1)
# FIND A MODEL AND A LR THAT TRAINS TO AN MAE < 3
forecast = []
results = []
for time in range(len(series) - window_size):
forecast.append(model.predict(series[time:time + window_size][np.newaxis]))
forecast = forecast[split_time-window_size:]
results = np.array(forecast)[:, 0, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, results)
tf.keras.metrics.mean_absolute_error(x_valid, results).numpy()
# YOUR RESULT HERE SHOULD BE LESS THAN 4
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
mae=history.history['mae']
loss=history.history['loss']
epochs=range(len(loss)) # Get number of epochs
#------------------------------------------------
# Plot MAE and Loss
#------------------------------------------------
plt.plot(epochs, mae, 'r')
plt.plot(epochs, loss, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("MAE / Loss")
plt.legend(["MAE", "Loss"])
plt.figure()
epochs_zoom = epochs[200:]
mae_zoom = mae[200:]
loss_zoom = loss[200:]
#------------------------------------------------
# Plot Zoomed MAE and Loss
#------------------------------------------------
plt.plot(epochs_zoom, mae_zoom, 'r')
plt.plot(epochs_zoom, loss_zoom, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("MAE / Loss")
plt.legend(["MAE", "Loss"])
plt.figure()
###Output
_____no_output_____ |
UQ_v2.ipynb | ###Markdown
The following are helper functions -- generally these should be used as is
###Code
# imports required by the helpers and examples below
import time
import numpy as np
from numpy import random as rand
import pandas as pd
import matplotlib.ticker
import matplotlib.pyplot as plt

def data_describe(x):
[min_value, p_1, p_10, p_50, p_90, p_99, max_value] = np.percentile(x, [0, 1, 10, 50, 90, 99, 100])
print('========================')
print('Number of Monte Carlo Samples: {:d}'.format(len(x)))
print('========================')
print('min : {:.7e}'.format(min_value))
print('max : {:.7e}'.format(max_value))
print('mean : {:.7e}'.format(x.mean()))
print('median : {:.7e}'.format(np.median(x)))
print('std : {:.7e}'.format(x.std()))
print('========================')
print('Data percentiles')
print('P1 : {:.7e}'.format(p_1))
print('P10 : {:.7e}'.format(p_10))
print('P50 : {:.7e}'.format(p_50))
print('P90 : {:.7e}'.format(p_90))
print('P99 : {:.7e}'.format(p_99))
print('========================')
def sensitivity_analysis(df, figsize=None, xlabel='Data'):
min_value = np.inf
max_value = -np.inf
n_subfigures = len(df.columns)
if figsize is None:
figsize=(8,5*n_subfigures)
fig, axs = plt.subplots(n_subfigures, 1, sharex=False, figsize=figsize)
sensitivity_df = df.copy()
for idx_ , col_ in enumerate(df.columns):
temp_df = df.copy()
for temp_col_ in temp_df.columns:
if temp_col_ == col_:
continue
temp_df[temp_col_] = temp_df[temp_col_].mean()
# perform the calculations on temp_df
temp_STOIIP = calculate_STOIIP(temp_df)
sensitivity_df[col_] = temp_STOIIP
min_value_ = temp_STOIIP.min()
max_value_ = temp_STOIIP.max()
if min_value_ < min_value:
min_value = min_value_
if max_value_ > max_value:
max_value = max_value_
plot_distribution(temp_STOIIP, axs=axs[idx_], xlabel='{}--{}'.format(xlabel, col_))
print('Sensitivity_analysis for {}--{}'.format(xlabel, col_))
data_describe(temp_STOIIP)
#axs[idx_].axvline(x=temp_STOIIP.mean(), color='r')
#axs[idx_].axvline(x=temp_STOIIP.min(), color='b')
#axs[idx_].axvline(x=temp_STOIIP.max(), color='b')
slack_value = (max_value-min_value)/20
for idx_ , col_ in enumerate(df.columns):
axs[idx_].set_xlim([min_value - slack_value, max_value + slack_value])
plt.show()
return sensitivity_df
def ordered_boxplot(sensitivity_df, xlabel='Data'):
variations_np = []
min_value = np.inf
max_value = -np.inf
for column_ in sensitivity_df.columns:
[p_0, p_10, p_90, p_100] = np.percentile(sensitivity_df[column_].values, [0, 10, 90, 100])
data_variation = p_90-p_10
variations_np.append(data_variation)
if p_0 < min_value:
min_value = p_0
if p_100 > max_value:
max_value = p_100
variations_np = np.array(variations_np)
sort_index = np.argsort(variations_np)
fig, axs = plt.subplots(1, 1, tight_layout=True, figsize=(12, 6))
new_median = sensitivity_df.mean(axis=0).values
sensitivity_df2 = sensitivity_df.iloc[:, sort_index]
# axs.boxplot(sensitivity_df2.values, labels=sensitivity_df2.columns, usermedians=new_median, whis=[0, 100], vert=False)
axs.boxplot(sensitivity_df2.values, labels=sensitivity_df2.columns, usermedians=new_median, whis=[10, 90], vert=False)
# axs.boxplot(sensitivity_df2.values, labels=sensitivity_df2.columns, usermedians=new_median, vert=False)
axs.axvline(x=sensitivity_df2.iloc[:,0].mean(), color='r', linewidth=2)
axs.grid(alpha=0.75)
# boxplot = sensitivity_df.boxplot()
# parts = axs.violinplot(
# sensitivity_df, showmeans=False, showmedians=False,
# showextrema=False)
slack_value = (max_value-min_value)/20
axs.set_xlim([min_value - slack_value, max_value + slack_value])
axs.set_xlabel(xlabel)
plt.show()
def plot_distribution(x, axs=None, xlim=None, ylim=None, figsize_=(15,8), bins_=100, kde_=True, cumulative_=False, xlabel='Distribution'):
# plot the data
if axs == None:
fig, axs = plt.subplots(figsize=figsize_)
axs.hist(x, density=False, bins=bins_, facecolor='b', alpha=0.5)
axs.grid(alpha=0.75)
axs.set_xlabel(xlabel)
axs.set_ylabel('Count')
# axs.set_title('Histogram plot of a random variable')
# axs.set_xlim([x.min(), x.max()])
if xlim:
x_min, x_max = xlim
else:
x_min, x_max = axs.get_xlim()
if ylim:
y_min, y_max = ylim
else:
y_min, y_max = axs.get_ylim()
axs.set_xlim([x_min, x_max])
axs.set_ylim([y_min, y_max])
# axs.yaxis.set_major_formatter(matplotlib.ticker.FormatStrFormatter('%.3e'))
def plot_ecdf(x, axs=None, xlim=None, ylim=None, figsize_=(15,8), bins_=100, xlabel='Data'):
if axs == None:
fig, axs = plt.subplots(figsize=figsize_)
axs.hist(x,cumulative=True, density=True, bins=bins_, alpha=0.5)
# plt.text(60, .025, '$P_{10}={:e},\\ P_{50}={:e},\\ P_90={:e}$'.format(p_10, p_50, p_90))
axs.grid(alpha=0.75)
axs.set_xlabel(xlabel)
axs.set_ylabel('ECDF')
axs.set_title('Empirical distribution function plot')
if xlim:
x_min, x_max = xlim
else:
x_min, x_max = axs.get_xlim()
if ylim:
y_min, y_max = ylim
else:
y_min, y_max = axs.get_ylim()
axs.set_xlim([x_min, x_max])
axs.set_ylim([y_min, y_max])
str_shift = 0.05
[p_10, p_50, p_90] = np.percentile(x, [10, 50, 90])
axs.hlines(y=0.1, xmin=x_min, xmax=p_10)
axs.vlines(x=p_10, ymin=0, ymax=0.1)
text_str = "$P_{10}=" + "{:.3e}".format(p_10) + "$"
plt.text(p_10, 0.1+str_shift, text_str,
{'color': 'black', 'fontsize': 18, 'ha': 'center', 'va': 'center'})
# 'bbox': dict(boxstyle="round", fc="white", ec="black", pad=0.5)})
axs.hlines(y=0.5, xmin=x_min, xmax=p_50)
axs.vlines(x=p_50, ymin=0, ymax=0.5)
text_str = "$P_{50}=" + "{:.3e}".format(p_50) + "$"
plt.text(p_50, 0.5+str_shift, text_str,
{'color': 'black', 'fontsize': 18, 'ha': 'center', 'va': 'center'})
axs.hlines(y=0.9, xmin=x_min, xmax=p_90)
axs.vlines(x=p_90, ymin=0, ymax=0.9)
text_str = "$P_{90}=" + "{:.3e}".format(p_90) + "$"
plt.text(p_90, 0.9+str_shift, text_str,
{'color': 'black', 'fontsize': 18, 'ha': 'center', 'va': 'center'})
axs.yaxis.set_major_formatter(matplotlib.ticker.FormatStrFormatter('%.3f'))
# number of Monte Carlo samples
# use the same for all Random Variables
n_samples = 1e6
n_samples = np.int64(n_samples)
# random numbers from uniform distribution
lower_limit = 10
upper_limit = 20
x_uniform = rand.uniform(low=lower_limit, high=upper_limit, size=n_samples)
plot_distribution(x_uniform,figsize_=(8,5), xlim=[lower_limit, upper_limit])
plot_ecdf(x_uniform, figsize_=(8,5), xlim=[lower_limit, upper_limit])
# generate random numbers from N(0,1)
x_mean = 0
x_std = 1
x_normal = rand.normal(loc=x_mean, scale=x_std, size=n_samples)
plot_distribution(x_normal,figsize_=(8,5))
plot_ecdf(x_normal, figsize_=(8,5))
# generate log-normal random variable --> always positive because it is exponential of Normal random variable
# this distribution is defined in
log_x_mean = 2
log_x_std = 1
x_lognormal = rand.lognormal(mean=log_x_mean, sigma=log_x_std, size=n_samples)
plot_distribution(x_lognormal,figsize_=(8,5), xlim=[0, 100])
plot_ecdf(x_lognormal, figsize_=(8,5), xlim=[0, 100])
###Output
_____no_output_____
###Markdown
logarithm of lognormal should be a normal distribution
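Concretely, if $X \sim \mathrm{LogNormal}(\mu, \sigma)$ then $\ln X \sim N(\mu, \sigma)$, so for the parameters used above the histogram of $\ln X$ should be roughly bell-shaped around $\mu = 2$ with standard deviation $\sigma = 1$.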
###Code
print('Plot the logarithm of the distribution')
plot_distribution(np.log(x_lognormal),figsize_=(8,5))
# generate random numbers from a triangular distribution
x_low = 0
x_mode = 7
x_high = 10
x_tri = rand.triangular(left=x_low, mode=x_mode, right=x_high, size=n_samples)
plot_distribution(x_tri,figsize_=(8,5), xlim=[x_low, x_high])
plot_ecdf(x_tri, figsize_=(8,5), xlim=[x_low, x_high])
###Output
_____no_output_____
###Markdown
The following is a specific example for STOIIP calculation
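The code below applies the volumetric relation $STOIIP = C \cdot GRV \cdot \phi \cdot (1 - S_w) / B_o$, where $\phi$ is porosity, $S_w$ the water saturation, $B_o$ the oil formation volume factor, and $C = 7758$ the customary unit-conversion constant from the oil-field form of the formula.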
###Code
# 1- define data distributions
Bo = rand.uniform(low=1.19, high=1.21, size=n_samples) # units Bbl/STB
Sw = rand.uniform(low=0.19, high=0.45, size=n_samples)
porosity = rand.triangular(left=0.17, mode=0.213, right=0.24, size=n_samples)
GRV = rand.triangular(left=0.55, mode=0.64, right=0.72, size=n_samples) # units 10^9 m3
# 2- put your data into named tuples in a pandas.DataFrame
data_df = pd.DataFrame()
data_df['GRV'] = GRV
data_df['porosity'] = porosity
data_df['Sw'] = Sw
data_df['Bo'] = Bo
# 3- define your calculation inside a function with data
# http://www.oilfieldwiki.com/wiki/Oil_in_place
# perform Monte Carlo simulation on the random variables
# define the calculations needed
def calculate_STOIIP(data_df):
STOIIP = 7758 * (data_df['GRV'] * 1e9) * data_df['porosity'] * (1 - data_df['Sw']) / data_df['Bo'] # units barrels
return STOIIP
# perform analysis
t0 = time.time()
STOIIP = calculate_STOIIP(data_df)
print('finished MC calculation of STOIIP in {} sec'.format(time.time()-t0))
data_describe(STOIIP)
plot_distribution(STOIIP, figsize_=(10,5), xlabel='STOIIP')
plot_ecdf(STOIIP, figsize_=(10,5), xlabel='STOIIP')
sensitivity_df = sensitivity_analysis(data_df, figsize=(12,6*5), xlabel='STOIIP')
ordered_boxplot(sensitivity_df, xlabel='STOIIP')
###Output
_____no_output_____ |
Chap 4: Metaclass & Property/Chap 34 Register class exists by metaclass.ipynb | ###Markdown
Register Class Existence with a Metaclass Another common use case for metaclasses is automatically registering the types that exist in your program. Registration is useful for doing reverse lookups, where a simple identifier is mapped to its corresponding class. For example, suppose we want to implement our own serialized representation of a Python object using JSON.
###Code
import json
class Serializable(object):
def __init__(self, *args):
self.args = args
def serialize(self):
return json.dumps({'args': self.args})
###Output
_____no_output_____
###Markdown
Simple immutable data structures can easily be serialized to a string.
###Code
class Point2D(Serializable):
def __init__(self, x, y):
super().__init__(x, y)
self.x = x
self.y = y
def __repr__(self):
return 'Point2D(%d, %d)' % (self.x, self.y)
point = Point2D(5, 3)
print('Object: ', point)
print('Serialized:', point.serialize())
###Output
Object: Point2D(5, 3)
Serialized: {"args": [5, 3]}
###Markdown
Another class that performs deserialization
###Code
class Deserializable(Serializable):
@classmethod
def deserialize(cls, json_data):
params = json.loads(json_data)
return cls(*params['args'])
###Output
_____no_output_____
###Markdown
Using Deserializable, simple immutable objects can be serialized and deserialized in a generic way.
###Code
class BetterPoint2D(Deserializable):
# ...
def __init__(self, x, y):
super().__init__(x, y)
self.x = x
self.y = y
def __repr__(self):
return 'Point2D(%d, %d)' % (self.x, self.y)
@classmethod
def deserialize(cls, json_data):
params = json.loads(json_data)
return cls(*params['args'])
point = BetterPoint2D(5, 3)
print('Before: ', point)
data = point.serialize()
print('Serialized: ', data)
after = BetterPoint2D.deserialize(data)
print('After: ', after)
###Output
Before: Point2D(5, 3)
Serialized: {"args": [5, 3]}
After: Point2D(5, 3)
###Markdown
The problem is that this only works if you already know the type that corresponds to the serialized data. Ideally, you would have a large number of classes that serialize to JSON and a single common function that can deserialize any of them back into the corresponding Python object. To make this possible, include the serialized object's class name in the JSON data.
###Code
class BetterSerializable(object):
def __init__(self, *args):
self.args = args
def serialize(self):
return json.dumps({
'class': self.__class__.__name__,
'args': self.args,
})
def __repr__(self):
return 'Point2D(%d, %d)' % (self.x, self.y)
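    # note: this __repr__ is hard-coded for two attributes named x and y, and the subclasses
    # below inherit it unchanged — which is why Vector3D prints as 'Point2D(...)' in the output further down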
###Output
_____no_output_____
###Markdown
Next, map each class name to that class's constructor and maintain this mapping. The generic deserialize function then works for any class that has been passed to register_class.
###Code
registry = {}
def register_class(target_class):
registry[target_class.__name__] = target_class
def deserialize(data):
params = json.loads(data)
name = params['class']
target_class = registry[name]
return target_class(*params['args'])
###Output
_____no_output_____
###Markdown
To guarantee that deserialize always works correctly, register_class must be called for every class you may want to deserialize later.
###Code
class EvenBetterPoint2D(BetterSerializable):
def __init__(self, x, y):
super().__init__(x, y)
self.x = x
self.y = y
register_class(EvenBetterPoint2D)
###Output
_____no_output_____
###Markdown
Now an arbitrary JSON string can be deserialized without knowing which class it contains.
###Code
point = EvenBetterPoint2D(5, 3)
print('Before: ', point)
data = point.serialize()
print('Serialized: ', data)
after = deserialize(data)
print('After: ', after)
class Point3D(BetterSerializable):
def __init__(self, x, y, z):
super().__init__(x, y, z)
self.x = x
self.y = y
self.z = z
point = Point3D(5, 9, -4)
data = point.serialize()
deserialize(data)
###Output
_____no_output_____
###Markdown
Even when you subclass BetterSerializable, you don't actually get all of its functionality unless you remember to call register_class after the class statement body. To make sure BetterSerializable is used as the programmer intends and `register_class` is called in every case, a metaclass can intercept the class statement when subclasses are defined: the metaclass registers the new type as soon as the class body finishes.
###Code
class Meta(type):
def __new__(meta, name, bases, class_dict):
cls = type.__new__(meta, name, bases, class_dict)
register_class(cls)
return cls
class RegisteredSerializable(BetterSerializable,
metaclass=Meta):
pass
###Output
_____no_output_____
###Markdown
Whenever a subclass of RegisteredSerializable is defined, register_class is called, so you can be confident that deserialize will always behave as expected.
###Code
class Vector3D(RegisteredSerializable):
def __init__(self, x, y, z):
super().__init__(x, y, z)
self.x, self.y, self.z = x, y, z
v3 = Vector3D(10, -7, 3)
print('Before: ', v3)
data = v3.serialize()
print('Serialized: ', data)
after = deserialize(data)
print('After: ', deserialize(data))
###Output
Before: Point2D(10, -7)
Serialized: {"class": "Vector3D", "args": [10, -7, 3]}
After: Point2D(10, -7)
|
Calorias_respectoTiempo.ipynb | ###Markdown
Estimating the Effect of Caloric Intake Over Time * Andrea Catalina Fernández Mena A01197705 * Instructor: Ing. David Rivera Rangel We will create a new code cell for each line of code and write and run them one by one, so we can catch possible errors and see the results as we go:
###Code
import pandas as pd # imports the pandas library and assigns it to the variable pd
###Output
_____no_output_____
###Markdown
We create the variable datos_consumo to load the file using the read_excel function from the Pandas library:
###Code
datos_consumo = pd.read_excel('DatosSemana1a4.xlsx') # we indicate the name of the file to be read
###Output
_____no_output_____
###Markdown
We use the head() function to check that the data was loaded correctly into the dataframe by viewing the first 5 records:
###Code
datos_consumo.head()
###Output
_____no_output_____
###Markdown
We create a variable datos and assign it the DataFrame that will contain only the data we need:
###Code
datos = datos_consumo[["Fecha (dd/mm/aa)","Calorías (kcal)"]] # we select the two columns we will need
###Output
_____no_output_____
###Markdown
We use the head() function to check that the data was loaded correctly into the dataframe by viewing the first 5 records:
###Code
datos.head() # printing the selected data
###Output
_____no_output_____
###Markdown
We will use the sum() function to compute the total calories consumed:
###Code
suma_calorías = datos["Calorías (kcal)"].sum()
suma_calorías # displays the total calories
###Output
_____no_output_____
###Markdown
Now we will count the total number of distinct days of calorie consumption with the nunique() function:
###Code
días = datos["Fecha (dd/mm/aa)"].nunique()
días # displays the total number of unique days
###Output
_____no_output_____
###Markdown
We compute the average calories:
###Code
calorías_promedio = suma_calorías/días # total calories consumed divided by the number of days it took to consume them
print("Tu promedio de calorías consumidas en", días,"días es:", calorías_promedio)
###Output
Tu promedio de calorías consumidas en 25 días es: 1510.5439999999999
###Markdown
We define the variables required for the equation, using the input() function so the user can enter the data and int() to indicate that the variables are integers:
###Code
peso = int(input("Ingresa tu peso en kilogramos: "))
altura = int(input("Ingresa tu altura en centimetros: "))
edad = int(input("Ingresa tu edad en años: "))
genero = input("Ingresa tu género, Mujer/Hombre: ")
###Output
Ingresa tu peso en kilogramos: 52
Ingresa tu altura en centimetros: 159
Ingresa tu edad en años: 19
Ingresa tu género, Mujer/Hombre: Mujer
###Markdown
With this, we proceed to estimate the daily calories required according to the data, using the Harris-Benedict equation:
###Code
if(genero == "Mujer"):
    calorías_requeridas = 655+(9.56*peso)+(1.85*altura)-(4.68*edad) # formula to estimate required calories for a woman
elif(genero == "Hombre"):
    calorías_requeridas = 66.5+(13.75*peso)+(5*altura)-(6.8*edad) # formula to estimate required calories for a man
print("Con base en tus datos, tu consumo de calorías al día debe ser de:", calorías_requeridas)
###Output
Con base en tus datos, tu consumo de calorías al día debe ser de: 1357.35
###Markdown
We compute the difference between the calories consumed and the calories required; this will indicate whether your intake is higher, lower, or equal:
###Code
diferencia = calorías_promedio - calorías_requeridas
diferencia
###Output
_____no_output_____
###Markdown
Finally, we will use that difference to approximate its effect over one year, given that 3500 calories are equivalent to roughly 450 grams:
###Code
efecto_anual = diferencia * 450/3500 * 365 /1000 # applies the proportion, multiplies by 365 (days) and divides by 1000 (grams) to obtain kilograms
print("Si continuas con el consumo calórico actual, en un año tu cambio de masa corporal sería aproximadamente de:",efecto_anual,"kg")
###Output
Si continuas con el consumo calórico actual, en un año tu cambio de masa corporal sería aproximadamente de: 7.18917557142857 kg
|
2-horse-vs-humans-classifier/Horses-vs-Humans-using-Inception.ipynb | ###Markdown
Classification of Horses vs HumansData downloaded from https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip containing around 1280 CGI generated images of horses and humans in different poses.
###Code
#Importing the necessary libraries
import os
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import Model
from os import getcwd
#Loading the pretrained inception model
INCEPTION_WEIGHTS = r"C:\Users\ku.kulshrestha\Downloads\inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5"
# Import the inception model
from tensorflow.keras.applications.inception_v3 import InceptionV3
#Instance of the inception model from the local pre-trained weights
local_weights_file = INCEPTION_WEIGHTS
pre_trained_model = InceptionV3(input_shape=(150,150,3),
weights=None,
include_top=False)
pre_trained_model.load_weights(local_weights_file)
# Making all the layers in the pre-trained model non-trainable
for layer in pre_trained_model.layers:
layer.trainable=False
pre_trained_model.summary()
#Using the output of mixed7 layer
last_layer = pre_trained_model.get_layer('mixed7')
print('last layer output shape: ', last_layer.output_shape)
last_output = last_layer.output
#Callback definition for stopping training at 97% accuracy
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('accuracy')>0.97):
print("\nReached 97.0% accuracy so cancelling training!")
self.model.stop_training = True
# Addition of final trainable layers
from tensorflow.keras.optimizers import RMSprop
x = layers.Flatten()(last_output)
x = layers.Dense(1024, activation='relu')(x)
x = layers.Dropout(0.2)(x)
x = layers.Dense (1, activation='sigmoid')(x)
model = Model(pre_trained_model.input, x)
model.compile(optimizer = RMSprop(lr=0.0001),
loss = 'binary_crossentropy',
metrics = ['accuracy'])
model.summary()
#Unzipping the data
path_horse_or_human = r"C:\Users\ku.kulshrestha\Documents\CourseraTensorflow\C2CNN\horse-or-human.zip"
path_validation_horse_or_human = r"C:\Users\ku.kulshrestha\Documents\CourseraTensorflow\C2CNN\validation-horse-or-human.zip"
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import zipfile
import shutil
local_zip = path_horse_or_human
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall(r'C:\Users\ku.kulshrestha\Documents\github repos\mini-projects\data\horse-or-human-training')
zip_ref.close()
local_zip = path_validation_horse_or_human
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall(r'C:\Users\ku.kulshrestha\Documents\github repos\mini-projects\data\horse-or-human-validation')
zip_ref.close()
# Define our example directories and files
train_dir = r'C:\Users\ku.kulshrestha\Documents\github repos\mini-projects\data\horse-or-human-training'
validation_dir = r'C:\Users\ku.kulshrestha\Documents\github repos\mini-projects\data\horse-or-human-validation'
train_horses_dir = os.path.join(train_dir,'horses')
train_humans_dir = os.path.join(train_dir,'humans')
validation_horses_dir = os.path.join(validation_dir,'horses')
validation_humans_dir = os.path.join(validation_dir,'humans')
train_horses_fnames = os.listdir(train_horses_dir)
train_humans_fnames = os.listdir(train_humans_dir)
validation_horses_fnames = os.listdir(validation_horses_dir)
validation_humans_fnames = os.listdir(validation_humans_dir)
print(len(train_horses_fnames))
print(len(train_humans_fnames))
print(len(validation_horses_fnames))
print(len(validation_humans_fnames))
#Setting up training and validation ImageDataGenerators
train_datagen = ImageDataGenerator(rescale=1./255,
horizontal_flip=True,
shear_range=0.2,
height_shift_range=0.2,
width_shift_range=0.2,
rotation_range=40,
zoom_range=0.2,
fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(directory=train_dir,
batch_size=20,
class_mode='binary',
target_size=(150,150))
validation_generator = test_datagen.flow_from_directory(directory=validation_dir,
batch_size=20,
class_mode='binary',
target_size=(150,150))
#Training the model
callbacks = myCallback()
history = model.fit_generator(train_generator,
epochs=10,
validation_data=validation_generator,
steps_per_epoch=50,
validation_steps=50,
verbose=1,
callbacks=[callbacks])
#Plotting training and validation loss and accuracy
%matplotlib inline
import matplotlib.pyplot as plt
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend(loc=0)
plt.figure()
plt.show()
###Output
_____no_output_____ |
examples/Learning Parameters in Discrete Bayesian Networks.ipynb | ###Markdown
Parameter Learning in Discrete Bayesian Networks In this notebook, we show an example for learning the parameters (CPDs) of a Discrete Bayesian Network given the data and the model structure. pgmpy has three main methods for learning the parameters:1. MaximumLikelihood Estimator (pgmpy.estimators.MaximumLikelihoodEstimator)2. Bayesian Estimator (pgmpy.estimators.BayesianEstimator)3. Expectation Maximization (pgmpy.estimators.ExpectationMaximization)In the examples, we will try to generate some data from given models and then try to learn the model parameters back from the generated data. Step 1: Generate some data
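As a quick reference, for a discrete node $X$ with parents $Pa(X)$ the Maximum Likelihood estimate of a CPD entry is just the conditional relative frequency $\hat{P}(X{=}x \mid Pa(X){=}pa) = N(x, pa)/N(pa)$, while the Bayesian estimator adds pseudo counts from the chosen prior to $N(x, pa)$ before normalizing.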
###Code
# Use the alarm model to generate data from it.
from pgmpy.utils import get_example_model
from pgmpy.sampling import BayesianModelSampling
alarm_model = get_example_model("alarm")
samples = BayesianModelSampling(alarm_model).forward_sample(size=int(1e5))
samples.head()
###Output
Generating for node: CVP: 100%|██████████| 37/37 [00:01<00:00, 24.08it/s]
###Markdown
Step 2: Define a model structureIn this case, since we are trying to learn the model parameters back we will use the model structure that we used to generate the data from.
###Code
# Defining the Bayesian Model structure
from pgmpy.models import BayesianNetwork
model_struct = BayesianNetwork(ebunch=alarm_model.edges())
model_struct.nodes()
###Output
_____no_output_____
###Markdown
Step 3: Learning the model parameters
###Code
# Fitting the model using Maximum Likelihood Estimator
from pgmpy.estimators import MaximumLikelihoodEstimator
mle = MaximumLikelihoodEstimator(model=model_struct, data=samples)
# Estimating the CPD for a single node.
print(mle.estimate_cpd(node="FIO2"))
print(mle.estimate_cpd(node="CVP"))
# Estimating CPDs for all the nodes in the model
mle.get_parameters()[:10] # Show just the first 10 CPDs in the output
# Verifying that the learned parameters are almost equal.
import numpy as np

np.allclose(
alarm_model.get_cpds("FIO2").values, mle.estimate_cpd("FIO2").values, atol=0.01
)
# Fitting the using Bayesian Estimator
from pgmpy.estimators import BayesianEstimator
best = BayesianEstimator(model=model_struct, data=samples)
print(best.estimate_cpd(node="FIO2", prior_type="BDeu", equivalent_sample_size=1000))
# Uniform pseudo count for each state. Can also accept an array of the size of CPD.
print(best.estimate_cpd(node="CVP", prior_type="dirichlet", pseudo_counts=100))
# Learning CPDs for all the nodes in the model. For learning all parameters with BDeU prior, a dict of
# pseudo_counts need to be provided
best.get_parameters(prior_type="BDeu", equivalent_sample_size=1000)[:10]
# Shortcut for learning all the parameters and adding the CPDs to the model.
model_struct = BayesianNetwork(ebunch=alarm_model.edges())
model_struct.fit(data=samples, estimator=MaximumLikelihoodEstimator)
print(model_struct.get_cpds("FIO2"))
model_struct = BayesianNetwork(ebunch=alarm_model.edges())
model_struct.fit(
data=samples,
estimator=BayesianEstimator,
prior_type="BDeu",
equivalent_sample_size=1000,
)
print(model_struct.get_cpds("FIO2"))
###Output
+--------------+---------+
| FIO2(LOW) | 0.04859 |
+--------------+---------+
| FIO2(NORMAL) | 0.95141 |
+--------------+---------+
+--------------+-----------+
| FIO2(LOW) | 0.0530594 |
+--------------+-----------+
| FIO2(NORMAL) | 0.946941 |
+--------------+-----------+
###Markdown
The Expectation Maximization (EM) algorithm can also learn the parameters when we have some latent variables in the model.
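EM alternates an E-step, which computes expected counts for the latent variables given the current parameters and the observed data, with an M-step, which re-estimates the CPDs from those expected counts, and repeats until the estimates stop changing (or an iteration limit is hit).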
###Code
from pgmpy.estimators import ExpectationMaximization as EM
# Define a model structure with latent variables
model_latent = BayesianNetwork(
ebunch=alarm_model.edges(), latents=["HYPOVOLEMIA", "LVEDVOLUME", "STROKEVOLUME"]
)
# Dataset for latent model which doesn't have values for the latent variables
samples_latent = samples.drop(model_latent.latents, axis=1)
model_latent.fit(samples_latent, estimator=EM)
###Output
11%|█ | 11/100 [28:03<3:46:14, 152.52s/it]
###Markdown
Parameter Learning in Discrete Bayesian Networks In this notebook, we show an example for learning the parameters (CPDs) of a Discrete Bayesian Network given the data and the model structure. pgmpy has three main methods for learning the parameters:1. MaximumLikelihood Estimator (pgmpy.estimators.MaximumLikelihoodEstimator)2. Bayesian Estimator (pgmpy.estimators.BayesianEstimator)3. Expectation Maximization (pgmpy.estimators.ExpectationMaximization)In the examples, we will try to generate some data from given models and then try to learn the model parameters back from the generated data. Step 1: Generate some data
###Code
# Use the alarm model to generate data from it.
from pgmpy.utils import get_example_model
from pgmpy.sampling import BayesianModelSampling
alarm_model = get_example_model('alarm')
samples = BayesianModelSampling(alarm_model).forward_sample(size=int(1e5))
samples.head()
###Output
Generating for node: CVP: 100%|██████████| 37/37 [00:01<00:00, 24.08it/s]
###Markdown
Step 2: Define a model structureIn this case, since we are trying to learn the model parameters back we will use the model structure that we used to generate the data from.
###Code
# Defining the Bayesian Model structure
from pgmpy.models import BayesianNetwork
model_struct = BayesianNetwork(ebunch=alarm_model.edges())
model_struct.nodes()
###Output
_____no_output_____
###Markdown
Step 3: Learning the model parameters
###Code
# Fitting the model using Maximum Likelihood Estimator
from pgmpy.estimators import MaximumLikelihoodEstimator
mle = MaximumLikelihoodEstimator(model=model_struct, data=samples)
# Estimating the CPD for a single node.
print(mle.estimate_cpd(node='FIO2'))
print(mle.estimate_cpd(node='CVP'))
# Estimating CPDs for all the nodes in the model
mle.get_parameters()[:10] # Show just the first 10 CPDs in the output
# Verifying that the learned parameters are almost equal.
import numpy as np

np.allclose(alarm_model.get_cpds('FIO2').values, mle.estimate_cpd('FIO2').values, atol=0.01)
# Fitting the using Bayesian Estimator
from pgmpy.estimators import BayesianEstimator
best = BayesianEstimator(model=model_struct, data=samples)
print(best.estimate_cpd(node='FIO2', prior_type="BDeu", equivalent_sample_size=1000))
# Uniform pseudo count for each state. Can also accept an array of the size of CPD.
print(best.estimate_cpd(node='CVP', prior_type="dirichlet", pseudo_counts=100))
# Learning CPDs for all the nodes in the model. For learning all parameters with BDeU prior, a dict of
# pseudo_counts need to be provided
best.get_parameters(prior_type="BDeu", equivalent_sample_size=1000)[:10]
# Shortcut for learning all the parameters and adding the CPDs to the model.
model_struct = BayesianNetwork(ebunch=alarm_model.edges())
model_struct.fit(data=samples, estimator=MaximumLikelihoodEstimator)
print(model_struct.get_cpds('FIO2'))
model_struct = BayesianNetwork(ebunch=alarm_model.edges())
model_struct.fit(data=samples, estimator=BayesianEstimator, prior_type='BDeu', equivalent_sample_size=1000)
print(model_struct.get_cpds('FIO2'))
###Output
+--------------+---------+
| FIO2(LOW) | 0.04859 |
+--------------+---------+
| FIO2(NORMAL) | 0.95141 |
+--------------+---------+
+--------------+-----------+
| FIO2(LOW) | 0.0530594 |
+--------------+-----------+
| FIO2(NORMAL) | 0.946941 |
+--------------+-----------+
###Markdown
The Expectation Maximization (EM) algorithm can also learn the parameters when we have some latent variables in the model.
###Code
from pgmpy.estimators import ExpectationMaximization as EM
# Define a model structure with latent variables
model_latent = BayesianNetwork(ebunch=alarm_model.edges(), latents=['HYPOVOLEMIA', 'LVEDVOLUME', 'STROKEVOLUME'])
# Dataset for latent model which doesn't have values for the latent variables
samples_latent = samples.drop(model_latent.latents, axis=1)
model_latent.fit(samples_latent, estimator=EM)
###Output
11%|█ | 11/100 [28:03<3:46:14, 152.52s/it] |
4-regression/4.1 - Linear Regression Ad Sales Revenue.ipynb | ###Markdown
Predicting Sales Amount Using Money Spent on Ads Welcome to the practical section of module 4.1. Here we'll explore how to use Python to implement a simple linear regression model. We'll be working with a small dataset that represents the thousands of units of product sold against the thousands of dollars spent in the 3 media channels: TV, Radio and Newspaper. In this module, we'll be investigating the relation between TV expenditure and the amount of sales. When a company wants to sell more product they contact a TV channel to display ads, to inform consumers. As a company it would be very beneficial to know your expected sales based on how you spend your ad money. This will allow you to better allocate funds and potentially optimize profit! First we'll start by importing the necessary modules for our work:
###Code
import pandas as pd
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (10, 10)
###Output
_____no_output_____
###Markdown
We first import the **pandas** library which we use to read and visualize our data. Then we import the **numpy** library to help with various calculations. Finally we import from *scikit-learn*'s linear models (referenced by **sklearn.linear_model**) the **SGDRegressor**, which implements our gradient descent based linear regression algorithm. The last three lines are configuration for **matplotlib** (which is used internally by **pandas** for visualization and plotting) to have our plots appear inline here in the notebook. The line in the middle (which imports the **StandardScaler** class) is necessary for the feature scaling implemented in the following function. No worries if you don't know what feature scaling is; just treat it for now as a black-box function that puts the data in a cleaner form for the learning process. We'll get to the details of feature scaling in the next module.
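For the curious, the standardization that **StandardScaler** performs is simply $z = (x - \mu)/\sigma$, where $\mu$ and $\sigma$ are the mean and standard deviation of the feature computed on the data passed to **fit**.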
###Code
def scale_features(X, scalar=None):
if(len(X.shape) == 1):
X = X.reshape(-1, 1)
if scalar == None:
scalar = StandardScaler()
scalar.fit(X)
return scalar.transform(X), scalar
###Output
_____no_output_____
###Markdown
Visualizing the Data The next thing to do now is to read our dataset and visualize it. Usually we find our datasets in **csv** files (Comma Separated Values) and these can be easily read using pandas' **read_csv** method, which takes the path of the csv file (be it a local disk path or a web url) and returns a *DataFrame* object that we can query much like Python dictionaries and lists (More info on how to work with *DataFrames* can be found in [this](http://pandas.pydata.org/pandas-docs/version/0.18.1/tutorials.htmlpandas-cookbook) quick pandas cookbook)
###Code
# get the advertising data set
dataset = pd.read_csv('../datasets/Advertising.csv')
dataset = dataset[["TV", "Radio", "Newspaper", "Sales"]] # filtering the Unnamed index column out of the dataset
# here we the first 10 samples of the dataset
dataset[:10]
###Output
_____no_output_____
###Markdown
After reading the data set, it's a good idea to plot the data points to get a visual understanding of the data and how they are distributed. Plotting is made extremely simple with pandas: all we have to do is call the **plot** method on our DataFrame.The **plot** method takes many arguments, but for now we're interested in 3 of them:* *kind*: The type of the plot we wish to generate* *x*: What constitutes the x-axis* *y*: What constitutes the y-axisIn the following, we're creating a scatter plot of the data points with the thousands of dollars spent on TV on the x-axis and the thousands of units sold on the y axis.
###Code
dataset.plot(kind='scatter', x='TV', y='Sales')
###Output
_____no_output_____
###Markdown
Now it's time to prepare our data:1. First, we'll divide our dataset into two parts: one part we're going to use to train our linear regression model, and the second we're going to use to evaluate the trained model and see whether it can generalize well to new unseen data.2. Second, we'll use the **scale_features** function to scale our training and test variables. Training the Model
###Code
dataset_size = len(dataset)
training_size = np.floor(dataset_size * 0.8).astype(int)
# First we split the shuffled dataset into two parts: training and test
X_training = dataset["TV"][:training_size]
y_training = dataset["Sales"][:training_size]
X_test = dataset["TV"][training_size:]
y_test = dataset["Sales"][training_size:]
# Second we apply feature scaling on X_training and X_test
X_training, training_scalar = scale_features(X_training)
X_test,_ = scale_features(X_test, scalar=training_scalar)
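# Note: scikit-learn also ships a ready-made splitter; an equivalent split (shown only as a
# commented-out alternative, not used here) would be:
# from sklearn.model_selection import train_test_split
# X_training, X_test, y_training, y_test = train_test_split(
#     dataset["TV"], dataset["Sales"], test_size=0.2, shuffle=False)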
###Output
_____no_output_____
###Markdown
Now we're ready to use scikit-learn's **SGDRegressor** to build our linear regression model. The procedure is simple: we construct an instance of **SGDRegressor**, then use the **fit** method on that instance to train our model by passing to it our training X and y. Now, there are many arguments we can use to construct an **SGDRegressor** instance, and we'll go through some of them as we progress in the chapter, but for now we'll focus on one argument called *loss*. This argument determines the cost function we're going to use with our model. As we learned in the videos, we'll be using the Mean Squared Error cost function (aka Least Squares Error cost), and we can specify that in **SGDRegressor** by passing 'squared_loss' as the value of the *loss* argument.
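For reference, the cost being minimized is $J(w_0, w_1) = \frac{1}{m}\sum_{i=1}^{m}\big(y^{(i)} - (w_0 + w_1 x^{(i)})\big)^2$ (up to a constant factor), and **SGDRegressor** minimizes it with stochastic gradient descent.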
###Code
model = SGDRegressor(loss='squared_loss')
model.fit(X_training, y_training)
###Output
_____no_output_____
###Markdown
Now we have trained our linear regression model. As we know, our hypothesis takes the form $y = w_0 + w_1x$. We can access the value of $w_0$ with **model.intercept_** and the value of $w_1$ with **model.coef_**.
###Code
w0 = model.intercept_
w1 = model.coef_
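# optional sanity check: predictions are just w0 + w1 * x on the scaled inputs, so
# np.allclose(model.predict(X_test), w0 + w1 * X_test.ravel()) should hold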
print "Trained model: y = %0.2f + %0.2fx" % (w0, w1)
###Output
Trained model: y = 12.19 + 3.65x
###Markdown
To get an idea of how well our model works, we need to try it on some data that it hasn't seen before in training. Those are the X_test and y_test sets we separated earlier from the training data. We can calculate the mean squared error (MSE) on the test data using the **predict** method of the model to get the predicted y values.
###Code
MSE = np.mean((y_test - model.predict(X_test)) ** 2)
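# the same value could also be computed with scikit-learn's built-in helper:
# from sklearn.metrics import mean_squared_error; MSE = mean_squared_error(y_test, model.predict(X_test))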
print "The Test Data MSE is: %0.3f" % (MSE)
###Output
The Test Data MSE is: 13.170
###Markdown
Visualizing the Model Now, it's a good idea to plot our training data point along side with the regression model line to visualize the estimation. To do that we create a new column in our dataset DataFrame that contains the model's predicted values of sales. We get those using the model's method **predict**. Then we plot the line that represents the model predictions.
###Code
# We create the predicted sales column
scaled_tv,_ = scale_features(dataset["TV"], scalar=training_scalar)
dataset["Predicted Sales"] = model.predict(scaled_tv)
# We then scatter plot our data points as before but we save the resulting plot for later reuse
plot_ax = dataset.plot(kind='scatter', x='TV', y='Sales')
# Then we plot a line with the "Predicted Sales" column
# notice that we reused our previous plot in the 'ax' argument to draw the line over the scatter points
# we also specify the xlim argument (the range of the x axis visible in the plot) to prevent the plot from zooming in
dataset.plot(kind='line', x='TV', y='Predicted Sales', color='red', ax=plot_ax, xlim=(-50, 350))
###Output
_____no_output_____ |
examples/vision.ipynb | ###Markdown
Vision example Images can be in labeled folders, or a single folder with a CSV.
###Code
path = untar_data(URLs.MNIST_SAMPLE)
path
###Output
_____no_output_____
###Markdown
Image folder version Create a `DataBunch`, optionally with transforms:
###Code
data = ImageDataBunch.from_folder(path, ds_tfms=(rand_pad(2, 28), []), bs=64)
data.normalize(imagenet_stats)
img,label = data.train_ds[0]
img
###Output
_____no_output_____
###Markdown
Create and fit a `Learner`:
###Code
learn = cnn_learner(data, models.resnet18, metrics=accuracy)
learn.fit_one_cycle(1, 0.01)
accuracy(*learn.get_preds())
###Output
_____no_output_____
###Markdown
CSV version Same as above, using CSV instead of folder name for labels
###Code
data = ImageDataBunch.from_csv(path, ds_tfms=(rand_pad(2, 28), []), bs=64)
data.normalize(imagenet_stats)
img,label = data.train_ds[0]
img
learn = cnn_learner(data, models.resnet18, metrics=accuracy)
learn.fit_one_cycle(1, 0.01)
###Output
_____no_output_____
###Markdown
Vision example Images can be in labeled folders, or a single folder with a CSV.
###Code
untar_data(MNIST_PATH)
MNIST_PATH
###Output
_____no_output_____
###Markdown
Create a `DataBunch`:
###Code
data = image_data_from_folder(MNIST_PATH)
img,label = data.train_ds[0]
img
###Output
_____no_output_____
###Markdown
Create and fit a `Learner`:
###Code
learn = ConvLearner(data, tvm.resnet18, metrics=accuracy)
learn.fit(1)
###Output
_____no_output_____
###Markdown
Vision example Images can be in labeled folders, or a single folder with a CSV.
###Code
untar_mnist()
MNIST_PATH
###Output
_____no_output_____
###Markdown
Create a `DataBunch`:
###Code
data = image_data_from_folder(MNIST_PATH)
img,label = data.train_ds[0]
img
###Output
_____no_output_____
###Markdown
Create and fit a `Learner`:
###Code
learn = ConvLearner(data, tvm.resnet18, metrics=accuracy)
learn.fit(1)
###Output
_____no_output_____
###Markdown
Vision example Images can be in labeled folders, or a single folder with a CSV.
###Code
untar_data(Paths.MNIST)
Paths.MNIST
###Output
_____no_output_____
###Markdown
Create a `DataBunch`, optionally with transforms:
###Code
data = image_data_from_folder(Paths.MNIST, ds_tfms=(rand_pad(2, 28), []))
data.normalize(imagenet_stats)
img,label = data.train_ds[0]
img
###Output
_____no_output_____
###Markdown
Create and fit a `Learner`:
###Code
learn = ConvLearner(data, tvm.resnet18, metrics=accuracy)
learn.fit_one_cycle(1, 0.01)
accuracy(*learn.get_preds())
###Output
_____no_output_____
###Markdown
Vision example Images can be in labeled folders, or a single folder with a CSV.
###Code
path = untar_data(URLs.MNIST_SAMPLE)
path
###Output
_____no_output_____
###Markdown
Image folder version Create a `DataBunch`, optionally with transforms:
###Code
data = ImageDataBunch.from_folder(path, ds_tfms=(rand_pad(2, 28), []), bs=64)
data.normalize(imagenet_stats)
img,label = data.train_ds[0]
img
###Output
_____no_output_____
###Markdown
Create and fit a `Learner`:
###Code
learn = ConvLearner(data, tvm.resnet18, metrics=accuracy)
learn.fit_one_cycle(1, 0.01)
accuracy(*learn.get_preds())
###Output
_____no_output_____
###Markdown
CSV version Same as above, using CSV instead of folder name for labels
###Code
data = ImageDataBunch.from_csv(path, ds_tfms=(rand_pad(2, 28), []), bs=64)
data.normalize(imagenet_stats)
img,label = data.train_ds[0]
img
learn = ConvLearner(data, tvm.resnet18, metrics=accuracy)
learn.fit_one_cycle(1, 0.01)
###Output
_____no_output_____
###Markdown
Vision example Images can be in labeled folders, or a single folder with a CSV.
###Code
path = untar_data(URLs.MNIST_SAMPLE)
path
###Output
_____no_output_____
###Markdown
Create a `DataBunch`, optionally with transforms:
###Code
data = image_data_from_folder(path, ds_tfms=(rand_pad(2, 28), []), bs=64)
data.normalize(imagenet_stats)
img,label = data.train_ds[0]
img
###Output
_____no_output_____
###Markdown
Create and fit a `Learner`:
###Code
learn = ConvLearner(data, tvm.resnet18, metrics=accuracy)
learn.fit_one_cycle(1, 0.01)
accuracy(*learn.get_preds())
###Output
_____no_output_____
###Markdown
Vision example Images can be in labeled folders, or a single folder with a CSV.
###Code
path = untar_data(URLs.MNIST_SAMPLE)
path
###Output
_____no_output_____
###Markdown
Image folder version Create a `DataBunch`, optionally with transforms:
###Code
data = ImageDataBunch.from_folder(path, ds_tfms=(rand_pad(2, 28), []), bs=64)
data.normalize(imagenet_stats)
img,label = data.train_ds[0]
img
###Output
_____no_output_____
###Markdown
Create and fit a `Learner`:
###Code
learn = create_cnn(data, models.resnet18, metrics=accuracy)
learn.fit_one_cycle(1, 0.01)
accuracy(*learn.get_preds())
###Output
_____no_output_____
###Markdown
CSV version Same as above, using CSV instead of folder name for labels
###Code
data = ImageDataBunch.from_csv(path, ds_tfms=(rand_pad(2, 28), []), bs=64)
data.normalize(imagenet_stats)
img,label = data.train_ds[0]
img
learn = create_cnn(data, models.resnet18, metrics=accuracy)
learn.fit_one_cycle(1, 0.01)
###Output
_____no_output_____
###Markdown
Vision example Images can be in labeled folders, or a single folder with a CSV.
###Code
untar_data(MNIST_PATH)
MNIST_PATH
###Output
_____no_output_____
###Markdown
Create a `DataBunch`, optionally with transforms:
###Code
data = image_data_from_folder(MNIST_PATH, ds_tfms=(rand_pad(2, 28), []))
img,label = data.train_ds[0]
img
###Output
_____no_output_____
###Markdown
Create and fit a `Learner`:
###Code
learn = ConvLearner(data, tvm.resnet18, metrics=accuracy)
learn.fit(1)
accuracy(*learn.get_preds())
###Output
_____no_output_____
###Markdown
Vision example Images can be in labeled folders, or a single folder with a CSV.
###Code
path = untar_data(URLs.MNIST_SAMPLE)
path
###Output
_____no_output_____
###Markdown
Create a `DataBunch`, optionally with transforms:
###Code
data = ImageDataBunch.from_folder(path, ds_tfms=(rand_pad(2, 28), []), bs=64)
data.normalize(imagenet_stats)
img,label = data.train_ds[0]
img
###Output
_____no_output_____
###Markdown
Create and fit a `Learner`:
###Code
learn = ConvLearner(data, tvm.resnet18, metrics=accuracy)
learn.fit_one_cycle(1, 0.01)
accuracy(*learn.get_preds())
###Output
_____no_output_____
###Markdown
Create and fit a `Learner`:
###Code
learn = ConvLearner(data, tvm.resnet18, metrics=accuracy)
learn.fit(1)
###Output
_____no_output_____
###Markdown
Vision example Images can be in labeled folders, or a single folder with a CSV.
###Code
untar_data(MNIST_PATH)
MNIST_PATH
###Output
_____no_output_____
###Markdown
Create a `DataBunch`, optionally with transforms:
###Code
data = image_data_from_folder(MNIST_PATH, ds_tfms=(rand_pad(2, 28), []))
img,label = data.train_ds[0]
img
###Output
_____no_output_____
###Markdown
Create and fit a `Learner`:
###Code
learn = ConvLearner(data, tvm.resnet18, metrics=accuracy)
learn.fit(1)
accuracy(*learn.get_preds())
###Output
_____no_output_____
###Markdown
Vision example Images can be in labeled folders, or a single folder with a CSV.
###Code
path = untar_data(URLs.MNIST_SAMPLE)
path
###Output
_____no_output_____
###Markdown
Image folder version Create a `DataBunch`, optionally with transforms:
###Code
data = ImageDataBunch.from_folder(path, ds_tfms=(rand_pad(2, 28), []), bs=64)
data.normalize(imagenet_stats)
img,label = data.train_ds[0]
img
###Output
_____no_output_____
###Markdown
Create and fit a `Learner`:
###Code
learn = create_cnn(data, models.resnet18, metrics=accuracy)
learn.fit_one_cycle(1, 0.01)
accuracy(*learn.get_preds())
###Output
_____no_output_____
###Markdown
CSV version Same as above, using CSV instead of folder name for labels
###Code
data = ImageDataBunch.from_csv(path, ds_tfms=(rand_pad(2, 28), []), bs=64)
data.normalize(imagenet_stats)
img,label = data.train_ds[0]
img
learn = create_cnn(data, models.resnet18, metrics=accuracy)
learn.fit_one_cycle(1, 0.01)
###Output
_____no_output_____
###Markdown
Vision example Images can be in labeled folders, or a single folder with a CSV.
###Code
path = untar_data(URLs.MNIST_SAMPLE)
path
###Output
_____no_output_____
###Markdown
Image folder version Create a `DataBunch`, optionally with transforms:
###Code
data = ImageDataBunch.from_folder(path, ds_tfms=(rand_pad(2, 28), []), bs=64)
data.normalize(imagenet_stats)
img,label = data.train_ds[0]
img
###Output
_____no_output_____
###Markdown
Create and fit a `Learner`:
###Code
learn = create_cnn(data, models.resnet18, metrics=accuracy)
learn.fit_one_cycle(1, 0.01)
accuracy(*learn.get_preds())
###Output
_____no_output_____
###Markdown
CSV version Same as above, using CSV instead of folder name for labels
###Code
data = ImageDataBunch.from_csv(path, ds_tfms=(rand_pad(2, 28), []), bs=64)
data.normalize(imagenet_stats)
img,label = data.train_ds[0]
img
learn = create_cnn(data, models.resnet18, metrics=accuracy)
learn.fit_one_cycle(1, 0.01)
###Output
_____no_output_____
###Markdown
Vision example Images can be in labeled folders, or a single folder with a CSV.
###Code
untar_data(MNIST_PATH)
MNIST_PATH
###Output
_____no_output_____
###Markdown
Create a `DataBunch`, optionally with transforms:
###Code
data = image_data_from_folder(MNIST_PATH, ds_tfms=(rand_pad(2, 28), []))
img,label = data.train_ds[0]
img
###Output
_____no_output_____
###Markdown
Create and fit a `Learner`:
###Code
learn = ConvLearner(data, tvm.resnet18, metrics=accuracy)
learn.fit(1)
accuracy(*learn.get_preds())
###Output
_____no_output_____
###Markdown
Vision example Images can be in labeled folders, or a single folder with a CSV.
###Code
untar_data(MNIST_PATH)
MNIST_PATH
###Output
_____no_output_____
###Markdown
Create a `DataBunch`:
###Code
data = image_data_from_folder(MNIST_PATH)
img,label = data.train_ds[0]
img
###Output
_____no_output_____
###Markdown
Vision example Images can be in labeled folders, or a single folder with a CSV.
###Code
path = untar_data(URLs.MNIST_SAMPLE)
path
###Output
_____no_output_____
###Markdown
Image folder version Create a `DataBunch`, optionally with transforms:
###Code
data = ImageDataBunch.from_folder(path, ds_tfms=(rand_pad(2, 28), []), bs=64)
data.normalize(imagenet_stats)
img,label = data.train_ds[0]
img
###Output
_____no_output_____
###Markdown
Create and fit a `Learner`:
###Code
learn = ConvLearner(data, models.resnet18, metrics=accuracy)
learn.fit_one_cycle(1, 0.01)
accuracy(*learn.get_preds())
###Output
_____no_output_____
###Markdown
CSV version Same as above, using CSV instead of folder name for labels
###Code
data = ImageDataBunch.from_csv(path, ds_tfms=(rand_pad(2, 28), []), bs=64)
data.normalize(imagenet_stats)
img,label = data.train_ds[0]
img
learn = ConvLearner(data, models.resnet18, metrics=accuracy)
learn.fit_one_cycle(1, 0.01)
###Output
_____no_output_____ |
code/final-notebooks/q1-infections-province-time.ipynb | ###Markdown
Importing Libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
Question: How did the disease spread within the provinces over time? (Descriptive / Exploratory) Reading Data - Infections over Time per Province [Dataset Description](https://www.kaggle.com/kimjihoo/ds4c-what-is-this-dataset-detailed-description) Quick Data Exploration
###Code
infections_prov_time = pd.read_csv('../../data/TimeProvince.csv')
infections_prov_time.head()
infections_prov_time.describe()
infections_prov_time[infections_prov_time['province'] == 'Seoul'].sort_values(['province','date']).head(20)
###Output
_____no_output_____
###Markdown
Munging data to compute the number of infections per day since 1st infection day
###Code
infections_prov_time_clean = infections_prov_time.rename(index=str, columns={'confirmed':'accum_confirmed',
'released':'accum_released',
'deceased':'accum_deceased'})
infections_prov_time_clean
#Finding date of 1st infection per province
day_one_per_province = infections_prov_time_clean[infections_prov_time_clean['accum_confirmed'] > 0] \
.sort_values(['province','date']) \
.groupby(['province']) \
.head(1) \
.reset_index() \
.filter(['province','date']) \
.assign(days_since_day1 = 1)
day_one_per_province
# Adding the 1st infection date per province as a column to the infections data frame
infections_since_day1 = pd.merge(infections_prov_time_clean,day_one_per_province, how='left')
# Computing the number of days since 1st infection date for each date per province and
# Keeping only data from 1st infection date on per province
infections_since_day1['after_day1'] = infections_since_day1.groupby(['province']).days_since_day1.transform(lambda x : x.ffill())
infections_since_day1 = infections_since_day1[infections_since_day1['after_day1'] == 1]
infections_since_day1['days_since_day1'] = infections_since_day1.groupby(['province']).after_day1.transform(lambda x : x.cumsum())
infections_since_day1 = infections_since_day1.drop(columns=['after_day1'], axis=1)
infections_since_day1
infections_since_day1.sort_values(['days_since_day1','province'])
###Output
_____no_output_____
###Markdown
Adding population density data to compute proportional rates
###Code
province_data = {
'province': ['Gyeonggi-do', 'Gangwon-do', 'Chungcheongbuk-do', 'Chungcheongnam-do', 'Jeollabuk-do', 'Jeollanam-do',
'Gyeongsangbuk-do', 'Gyeongsangnam-do', 'Busan', 'Daegu', 'Daejeon', 'Gwangju', 'Incheon', 'Ulsan', 'Seoul', 'Jeju-do', 'Sejong'],
'population': [12479061, 1518040, 1589377, 2107802, 1834114, 1799044, 2680294, 3334524, 3448737, 2466052, 1538394, 1502881, 2890451, 1166615, 9904312, 605619, 204088],
'area': [10183.5, 16827.1, 7407.3, 8226.1, 8069.1, 12318.8, 19031.4, 10539.6, 769.6, 883.6, 539.3, 501.2, 1062.6, 1060.8, 605.2, 1849.1, 464.9]
}
provinces = pd.DataFrame(data = province_data, columns = ['province', 'population', 'area'])
provinces['pop_density'] = provinces['population'] / provinces['area']
provinces
# Merging infections data with province data
infections_since_day1_full = pd.merge(infections_since_day1, provinces, how='left')
# Computing population infection metrics
infections_since_day1_full['accum_confirmed_perc_total_pop'] = 100 * (infections_since_day1_full['accum_confirmed'] / infections_since_day1_full['population'])
infections_since_day1_full['accum_confirmed_per_million_people'] = (infections_since_day1_full['accum_confirmed'] / (infections_since_day1_full['population']/10**6))
infections_since_day1_full
###Output
_____no_output_____
###Markdown
Finding the provinces with higher infection rate (actual numbers and proportional numbers)
###Code
total_per_province = infections_since_day1_full.sort_values(['province','date']).groupby('province').tail(1).reset_index()
total_per_province = total_per_province[['date','province','days_since_day1','population','pop_density','accum_confirmed','accum_confirmed_perc_total_pop','accum_confirmed_per_million_people']]
total_per_province
province_inf_rate_real_nos = total_per_province.sort_values('accum_confirmed', ascending=False)
province_inf_rate_real_nos
def plot_infection_curve_per_province(infections_data, provinces, infections_var, title, ylabel,
log_scale=False, figsize=(10,6), filepath=''):
'''
INPUT
infections_data - pandas dataframe, infections data
provinces - list of strings, the subset of provinces to be used
infections_var - string, variable in the dataset to be used in plot y axis
title - string, plot title
ylabel - string, plot y-axis label
log_scale - boolean, default False, whether or not to use log scale on y axis
figsize - int tuple, default (10, 6), plot figure size
filepath - string, default '' (not save), filepath to save plot image to
OUTPUT
A line plot representing the infection curve of the given provinces over time since the 1st day of infection
This function plots the COVID-19 infection curve of a set of provinces using a line plot
'''
# Defines a color palette for each province in the top provinces in the number of cases
provinces_palette = {'Daegu':'#9b59b6', 'Gyeongsangbuk-do':'#3498db', 'Gyeonggi-do':'#95a5a6','Seoul':'#e74c3c',
'Chungcheongnam-do':'#34495e', 'Sejong':'#2ecc71'}
# Plots figure with characteristics based on the input parameters
f = plt.figure(figsize=figsize)
sns.set_style('whitegrid')
p = sns.lineplot(x="days_since_day1",
y=infections_var,
hue="province",
data=infections_data[infections_data.province.isin(provinces)],
palette=provinces_palette)
p.axes.set_title(title, fontsize = 16, weight='bold')
p.set_xlabel('Days since day 1', fontsize = 10)
p.set_ylabel(ylabel, fontsize = 10)
# Uses log scale on the y axis (if requested)
if log_scale: p.set_yscale("log")
# Saves figure to the specified filepath (if passed)
if (filepath != ''): f.savefig(filepath, bbox_inches='tight', dpi=600);
# Computes top 5 provinces in terms of accumulated number of cases
# Plots infection curve for all provinces in the top-5
top_infected_provinces_real_nos = total_per_province.sort_values('accum_confirmed', ascending=False).head(5).province
plot_infection_curve_per_province(infections_data=infections_since_day1_full,
provinces=top_infected_provinces_real_nos,
infections_var='accum_confirmed',
title='Infection Curve per Province since day 1',
ylabel='Number of confirmed cases')
# Uses top 5 provinces in terms of accumulated number of cases
# Plots infection curve for all provinces in the top-5 applying a log-scale to y-axis
plot_infection_curve_per_province(infections_data=infections_since_day1_full,
provinces=top_infected_provinces_real_nos,
infections_var='accum_confirmed',
title='Infection Curve per Province since day 1',
ylabel='Number of confirmed cases - log scale',
log_scale=True)
# Computes top 5 provinces in terms of proportion of accumulated number of cases to the total population
# Plots infection curve for all provinces in the top-5
top_infected_provinces_prop_nos = total_per_province.sort_values('accum_confirmed_perc_total_pop', ascending=False).head(5).province
plot_infection_curve_per_province(infections_data=infections_since_day1_full,
provinces=top_infected_provinces_prop_nos,
infections_var='accum_confirmed_perc_total_pop',
title='Proportional Infection Curve per Province since day 1',
ylabel='Number of confirmed cases / Total Population (%)',
log_scale=False,
figsize=(16,6))
# Top 5 provinces in terms of accumulated infections per million people
total_per_province.sort_values('accum_confirmed_per_million_people', ascending=False).head(5)
# Computes top 5 provinces in terms of accumulated number of cases per million people
# Plots infection curve for all provinces in the top-5
top_infected_provinces_perm_nos = total_per_province.sort_values('accum_confirmed_per_million_people', ascending=False).head(5).province
plot_infection_curve_per_province(infections_data=infections_since_day1_full,
provinces=top_infected_provinces_perm_nos,
infections_var='accum_confirmed_per_million_people',
title='Proportional Infection Curve per Province since day 1 (real numbers)',
ylabel='Number of confirmed cases per million people',
filepath='../../assets/q1-province-infections-over-time-real.png')
# Uses top 5 provinces in terms of accumulated number of cases per million people
# Plots infection curve for all provinces in the top-5 applying a log-scale to y-axis
plot_infection_curve_per_province(infections_data=infections_since_day1_full,
provinces=top_infected_provinces_perm_nos,
infections_var='accum_confirmed_per_million_people',
title='Proportional Infection Curve per Province since day 1 (log scale)',
ylabel='Number of confirmed cases per million people - log scale',
filepath='../../assets/q1-province-infections-over-time-log.png',
log_scale=True)
###Output
_____no_output_____ |
regression_insurance.ipynb | ###Markdown
Regression Task: Medical Insurance Costs
In this project, we develop a data pipeline to handle the well-known [health insurance costs dataset](https://www.kaggle.com/mirichoi0218/insurance), and implement gradient-based and linear algebra solutions to perform linear regression.
*** Step 1: The Problem
###Code
# Import some common packages
import os.path
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import pandas as pd
# Setup matplotlib for graphical display
%matplotlib inline
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# To make this notebook's output stable across runs
SEED = 42
np.random.seed(SEED)
###Output
_____no_output_____
###Markdown
Below is a preview of our dataset for this project. We will be building a regression model to predict insurance costs given certain features.
###Code
data = pd.read_csv('insurance.csv')
data
feature_names = data.columns[:-1]
label_name = data.columns[-1]
print("Features:", feature_names.values)
print("Label:", label_name)
###Output
Features: ['age' 'sex' 'bmi' 'children' 'smoker' 'region']
Label: charges
###Markdown
The dataset consists of:
- 1338 entries
- 7 variables (6 features + 1 label)
- 3 categorical features
- 0 missing values
###Code
data.info()
num_feature_names = data.dtypes[data.dtypes != 'object'].index.drop(label_name)
cat_feature_names = data.dtypes[data.dtypes == 'object'].index
print("Numerical: ", num_feature_names.values)
print("Categorical: ", cat_feature_names.values)
###Output
Numerical: ['age' 'bmi' 'children']
Categorical: ['sex' 'smoker' 'region']
###Markdown
Below is a statistical summary of the numerical variables. Note, however, that two of these (age and children) are discrete data, which may result from phenomena vastly different from those of Gaussian-distributed data (e.g. Poisson processes). Therefore, we should be cautious in how we interpret the standard deviation.
###Code
data.describe()
###Output
_____no_output_____
###Markdown
We can visualize our data to get a better look. Below, we see that of the 4 numerical variables, only BMI has a normal distribution, making its standard deviation of ~6 a useful measure of variation.
###Code
data.hist(figsize=(7,5))
plt.show()
###Output
_____no_output_____
###Markdown
--- Step 2: Data Analysis & Preprocessing
###Code
from sklearn.model_selection import train_test_split
train, test = train_test_split(data, test_size=0.2, random_state=SEED)
print(train.shape)
print(test.shape)
###Output
(1070, 7)
(268, 7)
###Markdown
We can inspect correlation scores with respect to the label to form conjectures about our predictors. It appears that **age may have some useful linear relationship with medical costs**, while number of children has essentially none.
###Code
corr_matrix = train.corr()
corr_scores = pd.DataFrame(corr_matrix[label_name].sort_values(ascending=False))
corr_scores
###Output
_____no_output_____
###Markdown
Our scatter matrix below confirms this, while revealing some interesting patterns. There appear to be three "lines" of medical costs, all trending upwards with age. BMI has two discernible clusters, while the number of children lacks a clear general relationship; the most that can be said is that families with 5 children have less variation in medical costs. Note that these 2D plots necessarily lack the dimensionality of the full dataset. **Ideally, our machine learning model should be able to combine features to pick apart the separate trends** we see in the age plot.
###Code
from pandas.plotting import scatter_matrix
scatter_matrix(train, figsize=(16, 10))
plt.show()
###Output
_____no_output_____
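###Markdown
A quick way to probe the conjecture above is to colour the age-vs-charges scatter by another feature. This is only an exploratory sketch (it assumes the `smoker` column from the raw data), but it often separates the overlapping trends visible in the age plot:
###Code
import seaborn as sns

# Sketch: colour the age/charges relationship by smoker status
sns.scatterplot(x='age', y='charges', hue='smoker', data=train)
plt.show()
###Output
_____no_output_____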
###Markdown
We construct a data preprocessing pipeline to perform imputation and scaling on numerical features, and one-hot encoding on categorical features.
###Code
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
num_pipeline = Pipeline([
('imputer', SimpleImputer(strategy='median')),
('scaler', StandardScaler())
])
cat_pipeline = Pipeline([
('onehot', OneHotEncoder())
])
full_pipeline = ColumnTransformer(transformers=[
('num', num_pipeline, num_feature_names),
('cat', cat_pipeline, cat_feature_names)
], remainder='passthrough')
# Reorder column names to account for post-transformation changes
cat_one_hot_feature_names = []
for cat_feature_name in cat_feature_names:
for val in data[cat_feature_name].unique():
cat_one_hot_feature_names.append(cat_feature_name + ' - ' + val)
columns_reordered = np.concatenate((num_feature_names, cat_one_hot_feature_names, [label_name]))
columns_reordered
train_prepared = pd.DataFrame(full_pipeline.fit_transform(train), columns=columns_reordered)
###Output
_____no_output_____
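###Markdown
One caveat with the manual label construction above: `unique()` returns categories in order of appearance, while `OneHotEncoder` stores them sorted. A sketch of a more robust alternative (assuming the pipeline has just been fitted, as above) reads the names straight from the fitted encoder:
###Code
# Sketch: derive one-hot column labels from the fitted encoder itself
fitted_encoder = full_pipeline.named_transformers_['cat'].named_steps['onehot']
cat_one_hot_from_encoder = [f"{feat} - {val}"
                            for feat, cats in zip(cat_feature_names, fitted_encoder.categories_)
                            for val in cats]
###Output
_____no_output_____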
###Markdown
Below we can inspect an updated histogram graph to see that our desired transformations have taken place.
###Code
train_prepared.hist(figsize=(15,10))
plt.show()
###Output
_____no_output_____
###Markdown
Finally, we form our X and y matrices by respectively dropping and selecting the label column from our data. It is essential that we apply our pipeline to the test set, or else our evaluation will be invalid.
###Code
X_train = train_prepared.drop(label_name, axis=1)
y_train = train_prepared[[label_name]]
test_prepared = pd.DataFrame(full_pipeline.fit_transform(test), columns=columns_reordered)
X_test = test_prepared.drop(label_name, axis=1)
y_test = test_prepared[[label_name]]
###Output
_____no_output_____
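###Markdown
A common refinement (not applied above) is to fit the pipeline on the training split only and then reuse the fitted transformers on the test split, so that scaling statistics and category mappings come exclusively from the training data. A minimal sketch, assuming the pipeline fitted earlier:
###Code
# Sketch: transform (rather than re-fit) the test set with the already-fitted pipeline
test_transformed = pd.DataFrame(full_pipeline.transform(test), columns=columns_reordered)
###Output
_____no_output_____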
###Markdown
- - - Step 3: Gradient Descent
We implement gradient descent for linear regression below, taking care to insert a column of 1s for our $\textbf{x}_0$ intercept on a copy of the user's data.
###Code
class MyLinearRegression:
"""
Define what a linear regressor can do
"""
def __init__ (self):
"""
Initialize the regressor
"""
# parameter vector - initialized to random floats
self.theta = np.random.randn(X_train.shape[1]+1, 1)
# learning rate - initialized to a good default
self.alpha = 0.001;
# cost history - initialized to empty list
self.cost = [];
def gradientDescent(self, X_train, y_train, theta, alpha, iters):
"""
Implementation of gradient descent
INPUT:
alpha: the learning rate
iters: number of iterations
OUTPUT:
theta: updated value for theta
cost: value of the cost function
"""
# Add x0 column
X = X_train.copy()
X.insert(0, 'dummy', 1)
m = X.shape[1]
cost = []
for iter in range(iters):
gradients = 2/m * X.T.dot(X.dot(theta).values - y_train)
theta -= alpha * gradients
diff = (X.dot(theta) - y_train).values
cost_iter = np.linalg.norm(1/m * diff.T * diff)
cost.append(cost_iter)
return theta, cost
def fitUsingGradientDescent(self, X_train, y_train):
"""
Train the regressor using gradient descent
"""
m = X_train.shape[1]+1 # add one for intercept
self.theta = np.random.randn(m, 1)
self.theta, self.cost = self.gradientDescent(X_train, y_train, self.theta, self.alpha, 200)
def fitUsingNormalEquation(self, X_train, y_train):
"""
Training using the Normal (closed-form) equation
"""
# Add x0 column
X = X_train.copy()
X.insert(0, 'dummy', 1)
self.theta = np.linalg.pinv(X.T.dot(X)).dot(X.T).dot(y_train).flatten()
def predict(self, X_test):
"""
Predicting the label
"""
# Add x0 column
X = X_test.copy()
X.insert(0, 'dummy', 1)
y_predict = X.dot(self.theta)
return y_predict
def __str__(self):
"""
Print out the parameter out when call print()
"""
return f"Parameter vector is {self.theta}"
###Output
_____no_output_____
###Markdown
**Learning Rate:** We try out different learning rates for the dataset to find a learning rate that converges quickly.
###Code
# Use the following code to plot out your learning rate
# iters and cost must be supplied to plot out the cost function
# You must plot multiple curves corresponding to different learning rates to justify the best one.
alphas = [0.0001, 0.0003, 0.001, 0.003]
models = {}
for alpha in alphas:
# Train model
model_gd = MyLinearRegression()
model_gd.alpha = alpha
model_gd.fitUsingGradientDescent(X_train, y_train)
iters = len(model_gd.cost)
models[model_gd.cost[-1]] = (model_gd, alpha)
# Plot cost
plt.plot(np.linspace(0, iters, num=iters), model_gd.cost)
plt.xlabel('Iterations')
plt.ylabel('Cost')
plt.title('Error vs. Training Iterations')
plt.legend(alphas)
plt.show()
###Output
_____no_output_____
###Markdown
We see that our gradient descent-based model converges quickest with a learning rate of 0.003 among the rates tested. We can also confirm that it achieves the lowest cost at the end of training. We select this model for evaluation.
###Code
best_cost = np.min(list(models.keys())) # lowest final cost
myGradientDescentModel = models[best_cost][0] # model of lowest final cost
best_alpha = models[best_cost][1] # learning rate for that model
print("Best learning rate: ", best_alpha)
print("Lowest cost: ", best_cost)
###Output
Best learning rate: 0.003
Lowest cost: 3323926618.4300027
###Markdown
- - - Step 4: Normal Equation
Below is the closed-form solution for linear regression, known as the normal equation:
$ \mathbf{\theta} = ({\mathbf{X}^{T}\mathbf{X}})^{-1}\mathbf{X}^{T}\mathbf{y} $
It is implemented in the regressor class above as an alternative method of finding the best-fit line.
###Code
# Implement the normalEquation method of the MyLinearRegression Class before executing the code below:
myNormalEquationModel = MyLinearRegression()
myNormalEquationModel.fitUsingNormalEquation(X_train, y_train)
###Output
_____no_output_____
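###Markdown
As a standalone check, the formula above translates directly into a few lines of NumPy. This is only a sketch using the prepared training matrices; `pinv` is used instead of a plain inverse because the full set of one-hot columns makes $\mathbf{X}^{T}\mathbf{X}$ rank-deficient:
###Code
# Sketch: the normal equation written out directly
Xb = np.c_[np.ones((len(X_train), 1)), X_train.values]   # prepend the x0 = 1 intercept column
theta_closed_form = np.linalg.pinv(Xb.T @ Xb) @ Xb.T @ y_train.values
###Output
_____no_output_____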
###Markdown
- - - Step 5: Model Evaluation
Next, we compare the gradient descent approach to the normal equation, also including Sklearn's Stochastic Gradient Descent model for good measure. We evaluate the models by computing their Root Mean Square Error on the test data.
###Code
from sklearn.metrics import mean_squared_error, mean_absolute_error
from sklearn.exceptions import DataConversionWarning
from sklearn.utils.testing import ignore_warnings
@ignore_warnings(category=DataConversionWarning)
def evaluate(model, fit, name):
fit(X_train, y_train)
y_predict = model.predict(X_test)
mse = mean_squared_error(y_test, y_predict)
rmse = np.sqrt(mse)
mae = mean_absolute_error(y_test, y_predict)
print(name)
print('RMSE: ', str(rmse))
print('MAE: ', str(mae))
print()
###Output
_____no_output_____
###Markdown
We can see that Sklearn's SGD model performs the best as measured by RMSE, but just barely. All models essentially hold the same predictive power, with our custom GD and Normal Equation-based models being near replicas. An RMSE of ~5790 can be interpreted as indicating **a "typical" error of \$5,790 when predicting insurance costs, biased upwards by outliers**. We also include Mean Absolute Error (MAE) for comparison, which confirms that all models are highly similar but Sklearn's SGD regressor is slightly less susceptible to errors when modeling outliers. MAE indicates a true "average" error in either direction of \$4,140. Given that the median medical charge is \$9,382, **our linear models score poorly using either metric. None of them are ready to deploy for real-world use.**
###Code
from sklearn.metrics import mean_squared_error
# Use the built-in SGD Regressor model
from sklearn.linear_model import SGDRegressor
model_sgd = SGDRegressor(random_state=SEED)
evaluate(model_sgd, model_sgd.fit, 'Sklearn SGD Model')
evaluate(myGradientDescentModel, myGradientDescentModel.fitUsingGradientDescent, 'My GD Model (alpha: ' + str(best_alpha) + ')')
evaluate(myNormalEquationModel, myNormalEquationModel.fitUsingNormalEquation, 'My Normal Eq. Model')
###Output
Sklearn SGD Model
RMSE: 5791.587879020164
MAE: 4141.152345577996
My GD Model (alpha: 0.003)
RMSE: 5795.33253302007
MAE: 4167.870547846676
My Normal Eq. Model
RMSE: 5795.332533018759
MAE: 4167.870547845393
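###Markdown
To make the RMSE-vs-MAE comparison above concrete, here is a toy illustration with synthetic error values (not taken from the dataset): a single large error inflates RMSE much more than MAE.
###Code
# Toy illustration: RMSE penalises one large error far more than MAE does
errors_no_outlier = np.array([1000., -1000., 1000., -1000.])
errors_with_outlier = np.array([1000., -1000., 1000., -10000.])
for e in (errors_no_outlier, errors_with_outlier):
    print('RMSE:', np.sqrt(np.mean(e ** 2)), ' MAE:', np.mean(np.abs(e)))
###Output
_____no_output_____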
###Markdown
- - - Step 6: The Solution
In this project, we develop a regressor capable of using either batch gradient descent or linear algebra to model linear relationships among variables. We apply this model to the task of predicting medical insurance costs, which it does poorly as measured by RMSE. However, this project moves us closer to our goal by uncovering important insights. By analyzing the coefficients of our linear model, we can assess the importance of different features on our predictions.
- The high-magnitude coefficients of the smoker (yes/no) features suggest that smoking has a strong predictive value for medical costs, which makes intuitive sense, as smoking has been linked to lung and other diseases.
- Sex and age appear to matter a good deal in our modeling attempt, the latter of which supports our earlier conjecture during the data visualization process.
- Similarly, the low coefficient for number of children reflects our earlier intuition that this feature is not very important for predictions.

It may be worth investigating these findings with a nonlinear model, such as a random forest regressor, as a linear model clearly lacks the power to predict with low error. Based on our work during the visualization steps, it may also be useful to transform number of children into a binary categorical variable (less than 5 or not), or experiment with other feature engineering approaches. In particular, the age feature stands out as a good candidate for feature engineering, as our scatter plot shows multiple overlapping linear trends.
###Code
coef = myGradientDescentModel.theta.sort_values(by='charges', ascending=False)
coef.columns = ['coefficient']
coef
###Output
_____no_output_____ |
object-detection/YOLO v3.ipynb | ###Markdown
YOLO3 example based on https://github.com/experiencor/keras-yolo3
We first need to create a model and load some existing weights (as we don't want to retrain). The model architecture is called a "DarkNet" and was originally loosely based on the VGG-16 model. To help, we copy out some functions from: https://github.com/experiencor/keras-yolo3
###Code
# create a YOLOv3 Keras model and save it to file
# based on https://github.com/experiencor/keras-yolo3
import struct
import numpy as np
from keras.layers import Conv2D
from keras.layers import Input
from keras.layers import BatchNormalization
from keras.layers import LeakyReLU
from keras.layers import ZeroPadding2D
from keras.layers import UpSampling2D
from keras.layers.merge import add, concatenate
from keras.models import Model
def _conv_block(inp, convs, skip=True):
x = inp
count = 0
for conv in convs:
if count == (len(convs) - 2) and skip:
skip_connection = x
count += 1
if conv['stride'] > 1: x = ZeroPadding2D(((1,0),(1,0)))(x) # peculiar padding as darknet prefer left and top
x = Conv2D(conv['filter'],
conv['kernel'],
strides=conv['stride'],
padding='valid' if conv['stride'] > 1 else 'same', # peculiar padding as darknet prefer left and top
name='conv_' + str(conv['layer_idx']),
use_bias=False if conv['bnorm'] else True)(x)
if conv['bnorm']: x = BatchNormalization(epsilon=0.001, name='bnorm_' + str(conv['layer_idx']))(x)
if conv['leaky']: x = LeakyReLU(alpha=0.1, name='leaky_' + str(conv['layer_idx']))(x)
return add([skip_connection, x]) if skip else x
def make_yolov3_model():
input_image = Input(shape=(None, None, 3))
# Layer 0 => 4
x = _conv_block(input_image, [{'filter': 32, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 0},
{'filter': 64, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 1},
{'filter': 32, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 2},
{'filter': 64, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 3}])
# Layer 5 => 8
x = _conv_block(x, [{'filter': 128, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 5},
{'filter': 64, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 6},
{'filter': 128, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 7}])
# Layer 9 => 11
x = _conv_block(x, [{'filter': 64, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 9},
{'filter': 128, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 10}])
# Layer 12 => 15
x = _conv_block(x, [{'filter': 256, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 12},
{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 13},
{'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 14}])
# Layer 16 => 36
for i in range(7):
x = _conv_block(x, [{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 16+i*3},
{'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 17+i*3}])
skip_36 = x
# Layer 37 => 40
x = _conv_block(x, [{'filter': 512, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 37},
{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 38},
{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 39}])
# Layer 41 => 61
for i in range(7):
x = _conv_block(x, [{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 41+i*3},
{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 42+i*3}])
skip_61 = x
# Layer 62 => 65
x = _conv_block(x, [{'filter': 1024, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 62},
{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 63},
{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 64}])
# Layer 66 => 74
for i in range(3):
x = _conv_block(x, [{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 66+i*3},
{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 67+i*3}])
# Layer 75 => 79
x = _conv_block(x, [{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 75},
{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 76},
{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 77},
{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 78},
{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 79}], skip=False)
# Layer 80 => 82
yolo_82 = _conv_block(x, [{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 80},
{'filter': 255, 'kernel': 1, 'stride': 1, 'bnorm': False, 'leaky': False, 'layer_idx': 81}], skip=False)
# Layer 83 => 86
x = _conv_block(x, [{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 84}], skip=False)
x = UpSampling2D(2)(x)
x = concatenate([x, skip_61])
# Layer 87 => 91
x = _conv_block(x, [{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 87},
{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 88},
{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 89},
{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 90},
{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 91}], skip=False)
# Layer 92 => 94
yolo_94 = _conv_block(x, [{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 92},
{'filter': 255, 'kernel': 1, 'stride': 1, 'bnorm': False, 'leaky': False, 'layer_idx': 93}], skip=False)
# Layer 95 => 98
x = _conv_block(x, [{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 96}], skip=False)
x = UpSampling2D(2)(x)
x = concatenate([x, skip_36])
# Layer 99 => 106
yolo_106 = _conv_block(x, [{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 99},
{'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 100},
{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 101},
{'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 102},
{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 103},
{'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 104},
{'filter': 255, 'kernel': 1, 'stride': 1, 'bnorm': False, 'leaky': False, 'layer_idx': 105}], skip=False)
model = Model(input_image, [yolo_82, yolo_94, yolo_106])
return model
class WeightReader:
def __init__(self, weight_file):
with open(weight_file, 'rb') as w_f:
major, = struct.unpack('i', w_f.read(4))
minor, = struct.unpack('i', w_f.read(4))
revision, = struct.unpack('i', w_f.read(4))
if (major*10 + minor) >= 2 and major < 1000 and minor < 1000:
w_f.read(8)
else:
w_f.read(4)
transpose = (major > 1000) or (minor > 1000)
binary = w_f.read()
self.offset = 0
self.all_weights = np.frombuffer(binary, dtype='float32')
def read_bytes(self, size):
self.offset = self.offset + size
return self.all_weights[self.offset-size:self.offset]
def load_weights(self, model):
for i in range(106):
try:
conv_layer = model.get_layer('conv_' + str(i))
print("loading weights of convolution #" + str(i))
if i not in [81, 93, 105]:
norm_layer = model.get_layer('bnorm_' + str(i))
size = np.prod(norm_layer.get_weights()[0].shape)
beta = self.read_bytes(size) # bias
gamma = self.read_bytes(size) # scale
mean = self.read_bytes(size) # mean
var = self.read_bytes(size) # variance
weights = norm_layer.set_weights([gamma, beta, mean, var])
if len(conv_layer.get_weights()) > 1:
bias = self.read_bytes(np.prod(conv_layer.get_weights()[1].shape))
kernel = self.read_bytes(np.prod(conv_layer.get_weights()[0].shape))
kernel = kernel.reshape(list(reversed(conv_layer.get_weights()[0].shape)))
kernel = kernel.transpose([2,3,1,0])
conv_layer.set_weights([kernel, bias])
else:
kernel = self.read_bytes(np.prod(conv_layer.get_weights()[0].shape))
kernel = kernel.reshape(list(reversed(conv_layer.get_weights()[0].shape)))
kernel = kernel.transpose([2,3,1,0])
conv_layer.set_weights([kernel])
except ValueError:
print("no convolution #" + str(i))
def reset(self):
self.offset = 0
# define the model
model = make_yolov3_model()
# load the model weights
weight_reader = WeightReader('yolov3.weights')
# set the model weights into the model
weight_reader.load_weights(model)
# save the model to file
model.save('model.h5')
###Output
Using TensorFlow backend.
WARNING: Logging before flag parsing goes to stderr.
W0702 17:11:54.469815 15024 deprecation_wrapper.py:119] From c:\appl\applications\miniconda3\envs\cv\lib\site-packages\keras\backend\tensorflow_backend.py:74: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
W0702 17:11:54.514363 15024 deprecation_wrapper.py:119] From c:\appl\applications\miniconda3\envs\cv\lib\site-packages\keras\backend\tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
W0702 17:11:54.527688 15024 deprecation_wrapper.py:119] From c:\appl\applications\miniconda3\envs\cv\lib\site-packages\keras\backend\tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.
W0702 17:11:54.568421 15024 deprecation_wrapper.py:119] From c:\appl\applications\miniconda3\envs\cv\lib\site-packages\keras\backend\tensorflow_backend.py:174: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.
W0702 17:11:54.568421 15024 deprecation_wrapper.py:119] From c:\appl\applications\miniconda3\envs\cv\lib\site-packages\keras\backend\tensorflow_backend.py:181: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.
W0702 17:11:54.713616 15024 deprecation_wrapper.py:119] From c:\appl\applications\miniconda3\envs\cv\lib\site-packages\keras\backend\tensorflow_backend.py:1834: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat.v1.nn.fused_batch_norm instead.
W0702 17:12:04.409566 15024 deprecation_wrapper.py:119] From c:\appl\applications\miniconda3\envs\cv\lib\site-packages\keras\backend\tensorflow_backend.py:2018: The name tf.image.resize_nearest_neighbor is deprecated. Please use tf.compat.v1.image.resize_nearest_neighbor instead.
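###Markdown
As a rough sanity check on the construction and weight loading above (a sketch; the exact figure depends on how the Keras version counts batch-norm parameters), the full YOLOv3 network should report on the order of 62 million parameters:
###Code
# Sketch: confirm the size of the loaded network
print(model.count_params())
###Output
_____no_output_____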
###Markdown
Now that we have a saved model, we can simply load this and make predictions
###Code
# load yolov3 model and perform object detection
# based on https://github.com/experiencor/keras-yolo3
import numpy as np
from numpy import expand_dims
from keras.models import load_model
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
from matplotlib import pyplot
from matplotlib.patches import Rectangle
class BoundBox:
def __init__(self, xmin, ymin, xmax, ymax, objness = None, classes = None):
self.xmin = xmin
self.ymin = ymin
self.xmax = xmax
self.ymax = ymax
self.objness = objness
self.classes = classes
self.label = -1
self.score = -1
def get_label(self):
if self.label == -1:
self.label = np.argmax(self.classes)
return self.label
def get_score(self):
if self.score == -1:
self.score = self.classes[self.get_label()]
return self.score
def _sigmoid(x):
return 1. / (1. + np.exp(-x))
def decode_netout(netout, anchors, obj_thresh, net_h, net_w):
grid_h, grid_w = netout.shape[:2]
nb_box = 3
netout = netout.reshape((grid_h, grid_w, nb_box, -1))
nb_class = netout.shape[-1] - 5
boxes = []
netout[..., :2] = _sigmoid(netout[..., :2])
netout[..., 4:] = _sigmoid(netout[..., 4:])
netout[..., 5:] = netout[..., 4][..., np.newaxis] * netout[..., 5:]
netout[..., 5:] *= netout[..., 5:] > obj_thresh
for i in range(grid_h*grid_w):
row = i / grid_w
col = i % grid_w
for b in range(nb_box):
# 4th element is objectness score
objectness = netout[int(row)][int(col)][b][4]
if(objectness.all() <= obj_thresh): continue
# first 4 elements are x, y, w, and h
x, y, w, h = netout[int(row)][int(col)][b][:4]
x = (col + x) / grid_w # center position, unit: image width
y = (row + y) / grid_h # center position, unit: image height
w = anchors[2 * b + 0] * np.exp(w) / net_w # unit: image width
h = anchors[2 * b + 1] * np.exp(h) / net_h # unit: image height
# last elements are class probabilities
classes = netout[int(row)][col][b][5:]
box = BoundBox(x-w/2, y-h/2, x+w/2, y+h/2, objectness, classes)
boxes.append(box)
return boxes
def correct_yolo_boxes(boxes, image_h, image_w, net_h, net_w):
new_w, new_h = net_w, net_h
for i in range(len(boxes)):
x_offset, x_scale = (net_w - new_w)/2./net_w, float(new_w)/net_w
y_offset, y_scale = (net_h - new_h)/2./net_h, float(new_h)/net_h
boxes[i].xmin = int((boxes[i].xmin - x_offset) / x_scale * image_w)
boxes[i].xmax = int((boxes[i].xmax - x_offset) / x_scale * image_w)
boxes[i].ymin = int((boxes[i].ymin - y_offset) / y_scale * image_h)
boxes[i].ymax = int((boxes[i].ymax - y_offset) / y_scale * image_h)
def _interval_overlap(interval_a, interval_b):
x1, x2 = interval_a
x3, x4 = interval_b
if x3 < x1:
if x4 < x1:
return 0
else:
return min(x2,x4) - x1
else:
if x2 < x3:
return 0
else:
return min(x2,x4) - x3
def bbox_iou(box1, box2):
intersect_w = _interval_overlap([box1.xmin, box1.xmax], [box2.xmin, box2.xmax])
intersect_h = _interval_overlap([box1.ymin, box1.ymax], [box2.ymin, box2.ymax])
intersect = intersect_w * intersect_h
w1, h1 = box1.xmax-box1.xmin, box1.ymax-box1.ymin
w2, h2 = box2.xmax-box2.xmin, box2.ymax-box2.ymin
union = w1*h1 + w2*h2 - intersect
return float(intersect) / union
def do_nms(boxes, nms_thresh):
if len(boxes) > 0:
nb_class = len(boxes[0].classes)
else:
return
for c in range(nb_class):
sorted_indices = np.argsort([-box.classes[c] for box in boxes])
for i in range(len(sorted_indices)):
index_i = sorted_indices[i]
if boxes[index_i].classes[c] == 0: continue
for j in range(i+1, len(sorted_indices)):
index_j = sorted_indices[j]
if bbox_iou(boxes[index_i], boxes[index_j]) >= nms_thresh:
boxes[index_j].classes[c] = 0
# load and prepare an image
def load_image_pixels(filename, shape):
# load the image to get its shape
image = load_img(filename)
width, height = image.size
# load the image with the required size
image = load_img(filename, target_size=shape)
# convert to numpy array
image = img_to_array(image)
# scale pixel values to [0, 1]
image = image.astype('float32')
image /= 255.0
# add a dimension so that we have one sample
image = expand_dims(image, 0)
return image, width, height
# get all of the results above a threshold
def get_boxes(boxes, labels, thresh):
v_boxes, v_labels, v_scores = list(), list(), list()
# enumerate all boxes
for box in boxes:
# enumerate all possible labels
for i in range(len(labels)):
# check if the threshold for this label is high enough
if box.classes[i] > thresh:
v_boxes.append(box)
v_labels.append(labels[i])
v_scores.append(box.classes[i]*100)
# don't break, many labels may trigger for one box
return v_boxes, v_labels, v_scores
# draw all results
def draw_boxes(filename, v_boxes, v_labels, v_scores):
# load the image
data = pyplot.imread(filename)
# plot the image
pyplot.imshow(data)
# get the context for drawing boxes
ax = pyplot.gca()
# plot each box
for i in range(len(v_boxes)):
box = v_boxes[i]
# get coordinates
y1, x1, y2, x2 = box.ymin, box.xmin, box.ymax, box.xmax
# calculate width and height of the box
width, height = x2 - x1, y2 - y1
# create the shape
rect = Rectangle((x1, y1), width, height, fill=False, color='white')
# draw the box
ax.add_patch(rect)
# draw text and score in top left corner
label = "%s (%.3f)" % (v_labels[i], v_scores[i])
pyplot.text(x1, y1, label, color='white')
# show the plot
pyplot.show()
# load yolov3 model
model = load_model('model.h5')
# define the expected input shape for the model
input_w, input_h = 416, 416
# define our new photo
photo_filename = 'boat.png'
# load and prepare image
image, image_w, image_h = load_image_pixels(photo_filename, (input_w, input_h))
# make prediction
yhat = model.predict(image)
# summarize the shape of the list of arrays
print([a.shape for a in yhat])
# define the anchors
anchors = [[116,90, 156,198, 373,326], [30,61, 62,45, 59,119], [10,13, 16,30, 33,23]]
# define the probability threshold for detected objects
class_threshold = 0.6
boxes = list()
for i in range(len(yhat)):
# decode the output of the network
boxes += decode_netout(yhat[i][0], anchors[i], class_threshold, input_h, input_w)
# correct the sizes of the bounding boxes for the shape of the image
correct_yolo_boxes(boxes, image_h, image_w, input_h, input_w)
# suppress non-maximal boxes
do_nms(boxes, 0.5)
# define the labels
labels = ["person", "bicycle", "car", "motorbike", "aeroplane", "bus", "train", "truck",
"boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench",
"bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe",
"backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard",
"sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
"tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana",
"apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake",
"chair", "sofa", "pottedplant", "bed", "diningtable", "toilet", "tvmonitor", "laptop", "mouse",
"remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator",
"book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush"]
# get the details of the detected objects
v_boxes, v_labels, v_scores = get_boxes(boxes, labels, class_threshold)
# summarize what we found
for i in range(len(v_boxes)):
print(v_labels[i], v_scores[i])
# draw what we found
draw_boxes(photo_filename, v_boxes, v_labels, v_scores)
###Output
boat 99.89145398139954
|
eda-us-accidents-2016-2020.ipynb | ###Markdown
US Accidents - Exploratory Data Analysis (EDA)
Dataset (Source, What it contains, How it will be useful)
* Dataset from Kaggle
* Information about accidents
* Can be useful to prevent further accidents
* This dataset does not contain data for New York

Select the dataset : US Accidents (2016 - 2020)
###Code
data_filename = 'US_Accidents_Dec20_updated.csv'
###Output
_____no_output_____
###Markdown
Data Preparation and Cleaning
1. Load the file using Pandas
2. Look at the information about the file
3. Fix all the missing and incorrect values
###Code
df = pd.read_csv(data_filename)
df
df.info()
df.describe()
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
numeric_df = df.select_dtypes(include=numerics)
len(numeric_df.columns)
missing_percentages = df.isna().sum().sort_values(ascending=False) / len(df) * 100
missing_percentages
missing_percentages[missing_percentages != 0]
type(missing_percentages)
missing_percentages[missing_percentages != 0].plot(kind='barh')
###Output
_____no_output_____
###Markdown
* Remove the columns that we don't want to use (a minimal sketch of this clean-up follows the list below).

Exploratory Analysis and Visualizations
* Columns to be analyzed:
1. City: To analyze the cities with the most and least number of accidents.
2. Start_Time: To analyze the time at which the accidents are taking place.
3. Start_Lat, Start_Lng: To analyze and track the exact location of accidents and plot them on a map.
4. Temperature
5. Weather Condition
6. State
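
A minimal sketch of that clean-up, keeping the result in a separate frame so the raw data stays available (the 50% threshold is an arbitrary choice):
###Code
# Sketch: drop columns that are mostly empty, based on missing_percentages computed above
cols_to_drop = missing_percentages[missing_percentages > 50].index
df_clean = df.drop(columns=cols_to_drop)
df_clean.shape
###Output
_____no_output_____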
###Code
df.columns
###Output
_____no_output_____
###Markdown
Analyzing column 'City'
###Code
df.City
cities = df.City.unique()
len(cities)
cities_by_accident = df.City.value_counts()
cities_by_accident
cities_by_accident[:10]
cities_by_accident[:20].plot(kind='barh')
import seaborn as sns
sns.set_style('darkgrid')
sns.histplot(cities_by_accident, log_scale=True)
cities_by_accident[cities_by_accident == 1]
high_accident_cities = cities_by_accident[cities_by_accident >= 1000]
low_accident_cities = cities_by_accident[cities_by_accident < 1000]
len(high_accident_cities) / len(cities)
sns.histplot(high_accident_cities, log_scale=True)
sns.histplot(low_accident_cities, log_scale=True)
###Output
_____no_output_____
###Markdown
* Both plots appear to follow an exponential distribution.

Analyzing Column Start Time
###Code
df.Start_Time
df.Start_Time = pd.to_datetime(df.Start_Time)
df.Start_Time[0]
sns.distplot(df.Start_Time.dt.hour, bins=24, kde=False, norm_hist=True)
sns.distplot(df.Start_Time.dt.dayofweek, bins=7, kde=False, norm_hist=True)
###Output
_____no_output_____
###Markdown
- Is the distribution of accidents by hour the same on weekends as on weekdays?
###Code
sundays_start_time = df.Start_Time[df.Start_Time.dt.dayofweek == 6]
sns.distplot(sundays_start_time.dt.hour, bins=24, kde=False, norm_hist=True)
monday_start_time = df.Start_Time[df.Start_Time.dt.dayofweek == 0]
sns.distplot(monday_start_time.dt.hour, bins=24, kde=False, norm_hist=True)
###Output
_____no_output_____
###Markdown
- On Sundays, the peak occurs between 5 P.M. and 12 A.M., unlike weekdays (as shown in Monday's graph above).
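
A quick numeric check of that observation (a sketch using the two Series computed above):
###Code
# Sketch: most common accident hour on Sundays vs. Mondays
print('Sunday peak hour:', sundays_start_time.dt.hour.value_counts().idxmax())
print('Monday peak hour:', monday_start_time.dt.hour.value_counts().idxmax())
###Output
_____no_output_____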
###Code
sns.distplot(df.Start_Time.dt.month, bins=12, kde=False, norm_hist=True)
###Output
_____no_output_____
###Markdown
Analyzing Column Start_Lat and Start_Lng
###Code
df.Start_Lat
df.Start_Lng
sns.scatterplot(x = df.Start_Lng, y = df.Start_Lat)
import folium
from folium.plugins import HeatMap
sample_df = df.sample(int(0.001 * len(df)))
lat_lng_pairs = list(zip(list(sample_df.Start_Lat), list(sample_df.Start_Lng)))
map = folium.Map()
HeatMap(lat_lng_pairs).add_to(map)
map
###Output
_____no_output_____
###Markdown
Analyzing Column 'State'
###Code
df.State
states = df.State.unique()
len(states)
states_by_accident = df.State.value_counts()
states_by_accident[:10].plot(kind='barh')
###Output
_____no_output_____ |
5 - FC layers retraining/3 - FC layers retraining/Retraining_FC_layers.ipynb | ###Markdown
In this notebook, we load what was captured by the SCAMP5 host application when the camera was shown MNIST data. We use this to retrain the fully connected layers, taking noise into account.
* Parse the .txt file to create numpy training/testing data. It is also saved as .pck so the file does not have to be re-parsed each time.
* This data is then used to train the fully connected layers.
###Code
!nvidia-smi
# Partial MNIST by AnalogNet: 100 examples for each digit in each subset (train/test)
#RAW_OUTPUT_FILENAME = 'text_log_20190628_1735_33_reencoded.TXT'
# Whole MNIST capture by AnalogNet
#RAW_OUTPUT_FILENAME = 'text_log_20190702_1750.txt'
#-> 92.6% testing acc with legacy training
#-> 93.8% testing acc with long training
#-> 93.9% testing acc with very long training
# Whole MNIST capture by AnalogNet, with 12 bins instead of 9 (new pooling)
#RAW_OUTPUT_FILENAME = 'text_log_20190714_1101.txt'
# -> 96.4% testing acc with legacy training
# -> 96.8% testing acc with long/very long training
# Whole MNIST capture by AnalogNet, with 12 bins instead of 9, and no
# overlapping between bins
#RAW_OUTPUT_FILENAME = 'text_log_20190718_0030.txt'
# -> 96.3% testing acc with legacy training
# -> 96.7% testing acc with long training
# -> 96.6% testing acc with very long training
# Whole MNIST capture by 3 quantise 3
#RAW_OUTPUT_FILENAME = 'text_log_20190712_2041.txt'
#-> gives 91.9% testing acc with 2 layers, ReLU, very long training...
# Whole MNIST capture by 3 quantise 3, with 150 collected events per feature map
#RAW_OUTPUT_FILENAME = 'text_log_20190714_0030.txt'
#-> gives 92.7% testing acc with 2 layers, long training.
#-> 93.2% testing acc with 2 layers, very long training
# Whole MNIST capture by 3 quantise 3, with 150 collected events per feature map,
# and 12 bins instead of 9 (new pooling)
#RAW_OUTPUT_FILENAME = 'text_log_20190715_0000_38.txt'
# -> 94.6% testing acc with legacy training
# -> 95.7% testing acc with long training
# -> 95.8% testing acc with very long training
# Whole MNIST capture by 4 maxpool 8
#RAW_OUTPUT_FILENAME = 'text_log_20190717_0042.txt'
# -> 91.9% testing acc with legacy training
# -> 92.6% testing acc with long training
# -> 92.9% testing acc with very long training
# Whole MNIST capture by 4 maxpool 8 bis (debugged)
#RAW_OUTPUT_FILENAME = 'text_log_20190731_2346.txt'
# -> % testing acc with legacy training
# -> % testing acc with long training
# -> 92.9% testing acc with very long training
#######################################################################
# Depth separable convolutions, accumulation
# Whole MNIST capture by one layer (similar to AnalogNet, slightly
# different register managmenent)
#RAW_OUTPUT_FILENAME = 'text_log_20190719_0106.txt'
# -> 96.2% testing acc with legacy training
# -> 96.8% testing acc with long training
# -> 96.9% testing acc with very long training
# Whole MNIST capture by two layers of depth separable conv, with leaky ReLU (.25)
#RAW_OUTPUT_FILENAME = 'text_log_20190726_0022.txt'
# -> 93.4% testing acc with legacy training
# -> 94.7% testing acc with long training
# -> 94.4% testing acc with very long training
# Whole MNIST capture by three layers of depth separable conv, with leaky ReLU (.25)
#RAW_OUTPUT_FILENAME = 'text_log_20190727_0013.txt'
# -> 93.3% testing acc with legacy training
# -> 94.1% testing acc with long training
# -> 94.4% testing acc with very long training
# Whole MNIST capture by four layers of depth separable conv, with leaky ReLU (.25)
#RAW_OUTPUT_FILENAME = 'text_log_20190731_0245.txt'
# -> 92.0% testing acc with legacy training
# -> 93.2% testing acc with long training
# -> 93.3% testing acc with very long training
#######################################################################
# One layer network, with increasingly many kernels
# Whole MNIST capture by one layer 1 kernel
#RAW_OUTPUT_FILENAME = 'out13.txt'
# -> % testing acc with legacy training
# -> % testing acc with long training
# -> 89.99% testing acc with very long training
# Whole MNIST capture by one layer 2 kernel
#RAW_OUTPUT_FILENAME = 'out25.txt'
# -> % testing acc with legacy training
# -> % testing acc with long training
# -> 89.73% testing acc with very long training
## -> 93.78% testing acc, when simulating a 2 layers net by truncating a 6 layer one...
# Whole MNIST capture by one layer 3 kernel
#RAW_OUTPUT_FILENAME = 'out37.txt'
# -> % testing acc with legacy training
# -> % testing acc with long training
# -> 95.81% testing acc with very long training
## -> 96.09% testing acc, when simulating a 3 layers net by truncating a 6 layer one...
## -> 96.06% testing acc, when simulating a 3 layers net by truncating a 7 layer one...
# Whole MNIST capture by one layer 4 kernel
#RAW_OUTPUT_FILENAME = 'out49.txt'
# -> % testing acc with legacy training
# -> % testing acc with long training
# -> 96.94% testing acc with very long training
# Whole MNIST capture by one layer 5 kernel
#RAW_OUTPUT_FILENAME = 'out61.txt'
# -> % testing acc with legacy training
# -> % testing acc with long training
# -> 97.04% testing acc with very long training
# Whole MNIST capture by one layer 6 kernel
RAW_OUTPUT_FILENAME = 'out73.txt'
# -> % testing acc with legacy training
# -> % testing acc with long training
# -> 97.7% testing acc with very long training
# Whole MNIST capture by one layer 7 kernel
#RAW_OUTPUT_FILENAME = 'out85.txt'
# -> % testing acc with legacy training
# -> % testing acc with long training
# -> 98.21% testing acc with very long training
# Whole MNIST capture by one layer 8 kernel
#RAW_OUTPUT_FILENAME = 'out97.txt'
# -> % testing acc with legacy training
# -> % testing acc with long training
# -> 97.98% testing acc with very long training
BATCH_SIZE = 50
LR = 0.0001
EPOCHS = 100
###Output
_____no_output_____
###Markdown
0. Imports and utils functions
###Code
import ast
import numpy as np
import matplotlib.pyplot as plt
import pickle
import tensorflow as tf
def find_first(item, vec):
'''return the index of the first occurence of item in vec'''
for i in range(len(vec)):
if item == vec[i]:
return i
return len(vec) # Move to the end if item not found
def test_accuracy(verbose=False):
accs = np.zeros(x_test.shape[0] // BATCH_SIZE)
for i in range(x_test.shape[0] // BATCH_SIZE):
start = i * BATCH_SIZE
stop = start + BATCH_SIZE
xs = x_test[start:stop]
ys = y_test[start:stop, 0]
current_acc = sess.run(acc_op,
feed_dict={in_data_ph: xs,
gt_label_ph: ys})
accs[i] = current_acc
if verbose:
print('Testing Acc.: {}'.format(
accs.mean()))
return accs.mean()
###Output
_____no_output_____
###Markdown
1. Parse the raw .txt output file Structure of the file:* garbage...* garbage...* garbage...* [garbage too]* [garbage starting with 0 or 1]* [10, training 0s]* [garbage starting with 0 or 1]* [10, testing 0s]* [garbage starting with 0 or 1]* [10, training 1s]* [garbage starting with 0 or 1]* [10, testing 1s]...* [10, testing 9s]* [garbage starting with 0 or 1]
###Code
listedContent = []
with open(RAW_OUTPUT_FILENAME, 'r+') as f:
for line in f:
if line[0] == '[':
listedContent.append(ast.literal_eval(line))
raw_output = np.array(listedContent)
plt.plot(raw_output[:,0])
plt.show
# This should show the aforementioned alternating pattern
### Special case, for out61.txt and out73.txt datasets
# DO NOT EXECUTE OTHERWISE !
if RAW_OUTPUT_FILENAME == 'out61.txt' or RAW_OUTPUT_FILENAME == 'out73.txt':
l = raw_output[:,0]
K = -1
for i in range(l.shape[0] - 2):
if l[i-1] < 10 and l[i] == 10 and l[i+1] >= 2 and l[i+2] < 2:
print(i)
K = i
if K >= 0:
raw_output[K,0] = 0
raw_output[K+1,0] = 0
# Remove the starting garbage
moveIndex = find_first(10, raw_output[:,0])
raw_output = raw_output[moveIndex:]
trainingSetX = [0]*10
testingSetX = [0]*10
trainingSetY = [0]*10
testingSetY = [0]*10
for i in range(10):
moveIndex = min(find_first(0, raw_output[:,0]), find_first(1, raw_output[:,0]))
trainingSetX[i] = raw_output[:moveIndex,1:]
trainingSetY[i] = i*np.ones((trainingSetX[i].shape[0],1))
raw_output = raw_output[moveIndex:]
moveIndex = find_first(10, raw_output[:,0])
raw_output = raw_output[moveIndex:]
moveIndex = min(find_first(0, raw_output[:,0]), find_first(1, raw_output[:,0]))
testingSetX[i] = raw_output[:moveIndex,1:]
testingSetY[i] = i*np.ones((testingSetX[i].shape[0],1))
raw_output = raw_output[moveIndex:]
moveIndex = find_first(10, raw_output[:,0])
raw_output = raw_output[moveIndex:]
for label in range(10):
print('Training {0}s: {1}'.format(label, trainingSetX[label].shape[0]))
print('Testing {0}s: {1}'.format(label, testingSetX[label].shape[0]))
x_train = np.concatenate(trainingSetX)
y_train = np.concatenate(trainingSetY)
x_test = np.concatenate(testingSetX)
y_test = np.concatenate(testingSetY)
print('Training set input data: {}'.format(x_train.shape))
print('Training set labels: {}'.format(y_train.shape))
print('Testing set input data: {}'.format(x_test.shape))
print('Testing set labels: {}'.format(y_test.shape))
pickle.dump(((x_train, y_train),(x_test, y_test)),
open(RAW_OUTPUT_FILENAME + '.pck', 'wb'))
###Output
_____no_output_____
###Markdown
2. Train 1 FC layer 2.1 Load data from pickled files
###Code
(x_train, y_train),(x_test, y_test) = pickle.load(
open(RAW_OUTPUT_FILENAME + '.pck', 'rb'))
print('Training set input data: {}'.format(x_train.shape))
print('Training set labels: {}'.format(y_train.shape))
print('Testing set input data: {}'.format(x_test.shape))
print('Testing set labels: {}'.format(y_test.shape))
y_train = y_train.astype(np.uint8)
y_test = y_test.astype(np.uint8)
###Output
_____no_output_____
###Markdown
2.2 Network and graph definition
###Code
def network_1fc(input):
out = tf.layers.dense(input, 10, name='dense1')
return out
tf.reset_default_graph()
in_data_ph = tf.placeholder(tf.float32, [BATCH_SIZE,72])
gt_label_ph = tf.placeholder(tf.uint8)
out_label_op = network_1fc(in_data_ph)
pred_op = tf.dtypes.cast(
tf.keras.backend.argmax(out_label_op),
tf.uint8)
loss_op = tf.reduce_mean(
tf.keras.backend.sparse_categorical_crossentropy(gt_label_ph,
out_label_op,
from_logits=True))
acc_op = tf.contrib.metrics.accuracy(gt_label_ph, pred_op)
lr_ph = tf.placeholder(tf.float32)
opt_op = tf.train.AdamOptimizer(learning_rate=lr_ph).minimize(loss_op)
###Output
_____no_output_____
###Markdown
2.3 Training
###Code
sess = tf.Session()
sess.run(tf.global_variables_initializer())
for epoch in range(EPOCHS):
if epoch < EPOCHS/2:
lr_feed = LR
else:
lr_feed = LR/5
random_perm = np.random.permutation(x_train.shape[0])
losses = np.zeros(x_train.shape[0] // BATCH_SIZE)
for i in range(x_train.shape[0] // BATCH_SIZE):
start = i * BATCH_SIZE
stop = start + BATCH_SIZE
selected = random_perm[start:stop]
xs = x_train[selected]
ys = y_train[selected]
_, current_loss = sess.run([opt_op, loss_op],
feed_dict={in_data_ph: xs,
gt_label_ph: ys,
lr_ph: lr_feed})
losses[i] = current_loss
if epoch % 20 == 0:
print('Epoch {} completed, average training loss is {}'.format(
epoch+1, losses.mean()))
test_accuracy()
test_accuracy()
###Output
Epoch 1 completed, average training loss is 6.933345697085063
###Markdown
3. Train 2 FC layer 3.1 Load data from pickled files
###Code
(x_train, y_train),(x_test, y_test) = pickle.load(
open(RAW_OUTPUT_FILENAME + '.pck', 'rb'))
print('Training set input data: {}'.format(x_train.shape))
print('Training set labels: {}'.format(y_train.shape))
print('Testing set input data: {}'.format(x_test.shape))
print('Testing set labels: {}'.format(y_test.shape))
############### RESTRICT TO FIRST 3 KERNELS
x_train = x_train[:,:36]
x_test = x_test[:,:36]
print('Training set input data: {}'.format(x_train.shape))
print('Training set labels: {}'.format(y_train.shape))
print('Testing set input data: {}'.format(x_test.shape))
print('Testing set labels: {}'.format(y_test.shape))
y_train = y_train.astype(np.uint8)
y_test = y_test.astype(np.uint8)
###Output
_____no_output_____
###Markdown
3.2 Network and graph definition
###Code
def network_2fc(input):
fc1 = tf.layers.dense(input, 50, name='dense1', activation=tf.nn.relu)
out = tf.layers.dense(fc1, 10, name='dense2')
return out
tf.reset_default_graph()
in_data_ph = tf.placeholder(tf.float32, [BATCH_SIZE,36])
gt_label_ph = tf.placeholder(tf.uint8)
out_label_op = network_2fc(in_data_ph)
pred_op = tf.dtypes.cast(
tf.keras.backend.argmax(out_label_op),
tf.uint8)
loss_op = tf.reduce_mean(
tf.keras.backend.sparse_categorical_crossentropy(gt_label_ph,
out_label_op,
from_logits=True))
acc_op = tf.contrib.metrics.accuracy(gt_label_ph, pred_op)
lr_ph = tf.placeholder(tf.float32)
opt_op = tf.train.AdamOptimizer(learning_rate=lr_ph).minimize(loss_op)
###Output
WARNING: Logging before flag parsing goes to stderr.
W0807 14:27:04.759977 140005153736576 deprecation.py:323] From <ipython-input-23-24a71349388f>:2: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.dense instead.
W0807 14:27:04.770270 140005153736576 deprecation.py:506] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
W0807 14:27:08.046772 140005153736576 lazy_loader.py:50]
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
* https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.
###Markdown
3.3 Training
###Code
sess = tf.Session()
saver = tf.train.Saver()
sess.run(tf.global_variables_initializer())
"""
# Legacy training, providing 92.6% testing accuracy with AnalogNet data
for epoch in range(EPOCHS):
if epoch < EPOCHS/2:
lr_feed = LR
else:
lr_feed = LR/5
"""
"""
# Longer training -> 91.9% testing accuracy with 3-quantise-3 data
for epoch in range(EPOCHS*4):
if epoch < EPOCHS:
lr_feed = LR*4
elif epoch < 2*EPOCHS:
lr_feed = LR * 1.5
elif epoch < 3*EPOCHS:
lr_feed = LR / 2
else:
lr_feed = LR/5
"""
"""
# Even Longer training
for epoch in range(EPOCHS*10):
if epoch < EPOCHS:
lr_feed = LR*8
elif epoch < 3*EPOCHS:
lr_feed = LR * 4
elif epoch < 6*EPOCHS:
lr_feed = LR * 2
elif epoch < 8:
lr_feed = LR
elif epoch < 9:
lr_feed = LR / 2.
else:
lr_feed = LR / 5.
"""
max_test_accuracy = 0.
for epoch in range(EPOCHS*10):
if epoch < EPOCHS:
lr_feed = LR*8
elif epoch < 3*EPOCHS:
lr_feed = LR * 4
elif epoch < 6*EPOCHS:
lr_feed = LR * 2
elif epoch < 8:
lr_feed = LR
elif epoch < 9:
lr_feed = LR / 2.
else:
lr_feed = LR / 5.
random_perm = np.random.permutation(x_train.shape[0])
losses = np.zeros(x_train.shape[0] // BATCH_SIZE)
for i in range(x_train.shape[0] // BATCH_SIZE):
start = i * BATCH_SIZE
stop = start + BATCH_SIZE
selected = random_perm[start:stop]
xs = x_train[selected]
ys = y_train[selected, 0]
_, current_loss = sess.run([opt_op, loss_op],
feed_dict={in_data_ph: xs,
gt_label_ph: ys,
lr_ph: lr_feed})
losses[i] = current_loss
current_test_accuracy = test_accuracy()
# Save best model
if current_test_accuracy > max_test_accuracy:
saver.save(sess, '2_fc/model.ckpt')
max_test_accuracy = current_test_accuracy
if epoch % 20 == 0:
print('Epoch {} completed, average training loss is {}'.format(
epoch+1, losses.mean()))
print('Testing Acc.: {}'.format(current_test_accuracy))
# Restore best model
ckpt = tf.train.get_checkpoint_state('2_fc')
saver.restore(sess, ckpt.model_checkpoint_path)
_ = test_accuracy(verbose=True)
max_test_accuracy
###Output
_____no_output_____
###Markdown
3.4 Extract weights, and manually run the FC layers (matrix operations, as on SCAMP5's microcontroller) 3.4.1 Weights extraction
###Code
#[n.name for n in tf.get_default_graph().as_graph_def().node]
with tf.variable_scope('dense1', reuse=True) as scope_conv:
fc1_k = tf.get_variable('kernel')
fc1_b = tf.get_variable('bias')
with tf.variable_scope('dense2', reuse=True) as scope_conv:
fc2_k = tf.get_variable('kernel')
fc2_b = tf.get_variable('bias')
fc1_k, fc1_b, fc2_k, fc2_b = sess.run([fc1_k, fc1_b, fc2_k, fc2_b])
pickle.dump((fc1_k, fc1_b, fc2_k, fc2_b),
open(RAW_OUTPUT_FILENAME + '_trained_weights_2_fc.pck', 'wb'))
fc1_k, fc1_b, fc2_k, fc2_b = pickle.load(
open(RAW_OUTPUT_FILENAME + '_trained_weights_2_fc.pck', 'rb'))
###Output
_____no_output_____
###Markdown
3.4.2 Compute accuracy when manually running the FC layers (matrix mult)
###Code
def forward_pass(inputVec):
res1 = np.dot(inputVec, fc1_k) + fc1_b
np.maximum(res1, 0, res1)
res2 = np.dot(res1, fc2_k) + fc2_b
return res2
l = np.array([9, 0, 0, 9, 0, 0, 9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
print(forward_pass(l))
print(fc1_k)
correctPrediction = []
for (x, y) in zip(x_test, y_test):
pred = np.argmax(forward_pass(x))
correctPrediction.append(pred == y)
np.array(correctPrediction).mean()
###Output
_____no_output_____
###Markdown
3.4.3 Round the weights and compute accuracy
###Code
PRECISION = 10000
# Scale the weights to fixed-point integers. The second-layer bias is scaled by
# PRECISION**2 because the activations feeding the second layer already carry one
# factor of PRECISION (from the scaled first-layer weights and biases).
fc1_k, fc1_b, fc2_k, fc2_b = fc1_k*PRECISION//1, fc1_b*PRECISION//1, fc2_k*PRECISION//1, fc2_b*PRECISION*PRECISION//1
correctPrediction = []
for (x, y) in zip(x_test, y_test):
pred = np.argmax(forward_pass(x))
correctPrediction.append(pred == y)
np.array(correctPrediction).mean()
print(fc1_k.shape)
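# (Illustrative sketch, not part of the original notebook.) The accuracy above depends on the
# fixed-point scale chosen; reloading the float weights from the pickle and repeating the same
# rounded matrix-multiply forward pass for a few PRECISION values shows the trade-off.
def quantised_accuracy(precision):
    k1, b1, k2, b2 = pickle.load(
        open(RAW_OUTPUT_FILENAME + '_trained_weights_2_fc.pck', 'rb'))
    k1, b1, k2 = k1 * precision // 1, b1 * precision // 1, k2 * precision // 1
    b2 = b2 * precision * precision // 1  # second-layer bias carries two factors of PRECISION
    correct = [np.argmax(np.dot(np.maximum(np.dot(x, k1) + b1, 0), k2) + b2) == y
               for (x, y) in zip(x_test, y_test)]
    return np.array(correct).mean()
for p in [10, 100, 1000, 10000]:
    print('PRECISION = {}: accuracy {:.4f}'.format(p, quantised_accuracy(p)))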
###Output
(32, 50)
|
Lecture/Notebooks/Machine Learning/L2/ML_L2_27_Apr_Decision_Trees_ipynb_txt.ipynb | ###Markdown
**Machine Learning with Tree-based Models**
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
###Output
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
###Markdown
Classification and Regression Trees (CART) --- Decision tree for classification---
###Code
cancer_data = pd.read_csv('wbc.csv')
cancer_data.head()
cancer_data.shape
print(cancer_data.isna().sum())
cancer_data.dropna(axis = 1, inplace=True) # Dropping column
cancer_data.head()
cancer_data.info()
cancer_data.dtypes
cancer_data.describe()
cancer_data.hist(figsize=(20,20))
cancer_data.diagnosis.value_counts()
sns.countplot(cancer_data.diagnosis, palette='Set1')
cancer_data.head()
cancer_data['diagnosis'] = cancer_data['diagnosis'].apply(lambda x: 0 if x=="B" else 1)
cancer_data.diagnosis.value_counts()
cancer_data.info()
cancer_data_sub = cancer_data[["diagnosis", "radius_mean", "concave points_mean"]]
cancer_data_sub.head(20)
# X = cancer_data_sub.drop(['diagnosis'], axis = 1)
X = cancer_data_sub.iloc[:,1:3]
y = cancer_data_sub.diagnosis
print(X,y)
# Import DecisionTreeClassifier
from sklearn.tree import DecisionTreeClassifier
# Import train_test_split
from sklearn.model_selection import train_test_split
# Import accuracy_score
from sklearn.metrics import accuracy_score
# Split dataset into 80% train, 20% test
X_train, X_test, y_train, y_test= train_test_split(X, y, test_size=0.2, stratify=y, random_state=1)
# Instantiate a DecisionTreeClassifier 'dt' with a maximum depth of 6
dt = DecisionTreeClassifier(max_depth=6, random_state=1)
# dt = DecisionTreeClassifier(max_depth=6, criterion='gini', random_state=1)
# default=gini
# Fit dt to the training set
dt.fit(X_train, y_train)
# Predict test set labels
y_pred = dt.predict(X_test)
# Compute test set accuracy
acc = accuracy_score(y_test, y_pred)
print("Test set accuracy: {:.2f}".format(acc))
from mlxtend.plotting import plot_decision_regions
# http://rasbt.github.io/mlxtend/user_guide/plotting/plot_decision_regions/
plot_decision_regions(X.values, y.values, clf=dt, legend=2)
# Import LogisticRegression from sklearn.linear_model
from sklearn.linear_model import LogisticRegression
# Instantiate logreg
logreg = LogisticRegression(random_state=1)
# Fit logreg to the training set
logreg.fit(X_train, y_train)
# Predict test set labels
y_pred = logreg.predict(X_test)
# Compute test set accuracy
acc = accuracy_score(y_test, y_pred)
print("Test set accuracy: {:.2f}".format(acc))
from mlxtend.plotting import plot_decision_regions
plot_decision_regions(X.values, y.values, clf=logreg, legend=2)
sns.scatterplot(cancer_data_sub.radius_mean, cancer_data_sub['concave points_mean'],hue = cancer_data_sub.diagnosis)
###Output
_____no_output_____
###Markdown
--- Decision tree for regression---
###Code
auto_data = pd.read_csv('auto.csv')
auto_data.head()
auto_data = pd.get_dummies(auto_data)
auto_data.head()
X = auto_data.drop(['mpg'], axis = 1)
y = auto_data['mpg']
print(X,y)
X_train, X_test, y_train, y_test= train_test_split(X, y, test_size=0.2, random_state=1)
# Import DecisionTreeRegressor from sklearn.tree
from sklearn.tree import DecisionTreeRegressor
# Instantiate dt
dt = DecisionTreeRegressor(max_depth=8,
min_samples_leaf=0.13,
random_state=3)
# Fit dt to the training set
dt.fit(X_train, y_train)
# Import mean_squared_error from sklearn.metrics as MSE
from sklearn.metrics import mean_squared_error as MSE
# Compute y_pred
y_pred = dt.predict(X_test)
# Compute mse_dt
mse_dt = MSE(y_test, y_pred)
# Compute rmse_dt
rmse_dt = mse_dt**0.5
# Print rmse_dt
print("Test set RMSE of dt: {:.2f}".format(rmse_dt))
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(X_train, y_train)
# Predict test set labels
y_pred_lr = lr.predict(X_test)
# Compute mse_lr
mse_lr = MSE(y_test, y_pred_lr)
# Compute rmse_lr
rmse_lr = mse_lr**0.5
# Print rmse_lr
print('Linear Regression test set RMSE: {:.2f}'.format(rmse_lr))
# Print rmse_dt
print('Regression Tree test set RMSE: {:.2f}'.format(rmse_dt))
###Output
Linear Regression test set RMSE: 3.98
Regression Tree test set RMSE: 4.27
###Markdown
--- Advantages of CARTs---* Simple to understand.* Simple to interpret.* Easy to use.* Flexibility: ability to describe non-linear dependencies.* Preprocessing: no need to standardize or normalize features, ... --- Limitations of CARTs---* Sensitive to small variations in the training set.* High variance: unconstrained CARTs may overfit the training set.* Solution: ensemble learning. Ensemble Learning --- Voting Classifier ---
###Code
# Import functions to compute accuracy and split data
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
# Import models, including VotingClassifier meta-model
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier as KNN
from sklearn.ensemble import VotingClassifier
# Set seed for reproducibility
SEED = 1
cancer_data.head()
# Drop the id and diagnosis from the features
X = cancer_data.drop(['diagnosis', 'id'], axis=1)
# Select the diagnosis as a label
y = cancer_data.diagnosis
# Split data into 70% train and 30% test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size= 0.3, random_state= SEED)
# Instantiate individual classifiers
lr = LogisticRegression(random_state=SEED)
knn = KNN()
dt = DecisionTreeClassifier(random_state=SEED)
# Define a list called classifier that contains the tuples (classifier_name, classifier)
classifiers = [('Logistic Regression', lr),
('K Nearest Neighbours', knn),
('Classification Tree', dt)]
import warnings
warnings.filterwarnings("ignore")
# Iterate over the defined list of tuples containing the classifiers
for clf_name, clf in classifiers:
#fit clf to the training set
clf.fit(X_train, y_train)
# Predict the labels of the test set
y_pred = clf.predict(X_test)
# Evaluate the accuracy of clf on the test set
print('{:s} : {:.3f}'.format(clf_name, accuracy_score(y_test, y_pred)))
# Instantiate a VotingClassifier 'vc'
vc = VotingClassifier(estimators=classifiers)
# Fit 'vc' to the training set and predict test set labels
vc.fit(X_train, y_train)
y_pred = vc.predict(X_test)
# Evaluate the test-set accuracy of 'vc'
print("Voting classifier: ", round(accuracy_score(y_test, y_pred),3))
###Output
Voting classifier: 0.953
###Markdown
--- Bagging ---Bagging is an ensemble method involving training the same algorithm many times using different subsets sampled from the training data.
###Code
# Import models and utility functions
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
# Set seed for reproducibility
SEED = 1
cancer_data.head()
# Drop the id and diagnosis from the features
X = cancer_data.drop(['diagnosis', 'id'], axis=1)
# Select the diagnosis as a label
y = cancer_data.diagnosis
# Split data into 70% train and 30% test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=SEED)
# Instantiate a classification-tree 'dt'
dt = DecisionTreeClassifier(max_depth=4, min_samples_leaf=0.16, random_state=SEED)
# Instantiate a BaggingClassifier 'bc'
bc = BaggingClassifier(base_estimator=dt, n_estimators=300, n_jobs=-1) # n_jobs=-1 means that all the CPU cores are used in computation.
# Fit 'bc' to the training set
bc.fit(X_train, y_train)
# Predict test set labels
y_pred = bc.predict(X_test)
# Evaluate and print test-set accuracy
accuracy = accuracy_score(y_test, y_pred)
print('Accuracy of Bagging Classifier: {:.3f}'.format(accuracy))
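# (Illustrative sketch.) The "different subsets" used by bagging are bootstrap samples: rows drawn
# with replacement from the training set, so each base tree sees a slightly different dataset
# (roughly 63% of the unique rows on average).
boot_idx = np.random.choice(len(X_train), size=len(X_train), replace=True)
print('Unique rows in one bootstrap sample: {:.0%}'.format(len(np.unique(boot_idx)) / len(X_train)))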
###Output
_____no_output_____
###Markdown
--- Random Forests---An ensemble method which uses a decision tree as a base estimator.
###Code
# Basic imports
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error as MSE
# Set seed for reproducibility
SEED = 1
auto_data.head()
X = auto_data.drop(['mpg'], axis = 1)
y = auto_data['mpg']
# Split dataset into 70% train and 30% test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,random_state=SEED)
# Instantiate a random forests regressor 'rf' with 400 estimators
rf = RandomForestRegressor(n_estimators=400, min_samples_leaf=0.12, random_state=SEED)
# Fit 'rf' to the training set
rf.fit(X_train, y_train)
# Predict the test set labels 'y_pred'
y_pred = rf.predict(X_test)
# Evaluate the test set RMSE
rmse_test = MSE(y_test, y_pred)**(1/2)
# Print the test set RMSE
print('Test set RMSE of rf: {:.2f}'.format(rmse_test))
# Create a pd.Series of feature importances
importances_rf = pd.Series(rf.feature_importances_, index = X.columns)
# Sort importances_rf
sorted_importances_rf = importances_rf.sort_values()
# Make a horizontal bar plot
sorted_importances_rf.plot(kind='barh');
plt.show()
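# (Illustrative addition.) A cross-validated estimate of the same forest's error complements the
# single train/test split above; sklearn reports negated MSE for the 'neg_mean_squared_error' scorer.
from sklearn.model_selection import cross_val_score
cv_mse = -cross_val_score(rf, X_train, y_train, cv=5, scoring='neg_mean_squared_error')
print('CV RMSE of rf: {:.2f}'.format(cv_mse.mean() ** 0.5))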
###Output
_____no_output_____
###Markdown
--- Boosting ---Boosting refers to an ensemble method in which several models are trained sequentially with each model learning from the errors of its predecessors.
###Code
cancer_data.head()
# Drop the id and diagnosis from the features
X = cancer_data.drop(['diagnosis', 'id'], axis=1)
# Select the diagnosis as a label
y = cancer_data.diagnosis
# Import models and utility functions
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
# Set seed for reproducibility
SEED = 1
# Split data into 70% train and 30% test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=SEED)
# Instantiate a classification-tree 'dt'
dt = DecisionTreeClassifier(max_depth=1, random_state=SEED)
# Instantiate an AdaBoost classifier 'adb_clf'
adb_clf = AdaBoostClassifier(base_estimator=dt, n_estimators=100)
# Fit 'adb_clf' to the training set
adb_clf.fit(X_train, y_train)
# Predict the test set probabilities of positive class
y_pred_proba = adb_clf.predict_proba(X_test)[:,1]
# Evaluate test-set roc_auc_score
adb_clf_roc_auc_score = roc_auc_score(y_test, y_pred_proba)
# Print adb_clf_roc_auc_score
print('ROC AUC score: {:.2f}'.format(adb_clf_roc_auc_score))
from sklearn.metrics import roc_curve
fper, tper, thresholds = roc_curve(y_test, y_pred_proba)
plt.plot(fper, tper)
plt.plot([0,1], [0,1], 'k--')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Adaboost ROC curve')
# show the plot
plt.show()
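# (Illustrative addition, not part of the original lecture code.) AdaBoost re-weights the training
# examples; gradient boosting instead fits each new tree to the residual errors of the current
# ensemble. A quick comparison on the same split:
from sklearn.ensemble import GradientBoostingClassifier
gb_clf = GradientBoostingClassifier(n_estimators=100, random_state=SEED)
gb_clf.fit(X_train, y_train)
gb_roc_auc = roc_auc_score(y_test, gb_clf.predict_proba(X_test)[:, 1])
print('Gradient Boosting ROC AUC score: {:.2f}'.format(gb_roc_auc))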
###Output
_____no_output_____ |
python/Logging_Images.ipynb | ###Markdown
Using WhyLogs to Profile Images--- This notebook provides an example of how you can use whylogs to profile unstructured data like images.
###Code
from PIL import Image
import numpy as np
import os
from matplotlib.pyplot import imshow
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
import seaborn as sns
with open("../../whylogs-python/testdata/images/flower2.jpg","rb") as img_f:
img= Image.open(img_f)
imshow(np.asarray(img))
w,h = img.size
total_num_pixels= w*h
print("withd :\t{}\nheight:\t{}\nnumber of pixels:{}".format(w,h,total_num_pixels))
###Output
width : 300
height: 225
number of pixels:67500
###Markdown
We can create a logger and create a profile sketch of the image data
###Code
from whylogs import get_or_create_session
_session=None
session = get_or_create_session()
logger=session.logger("image_dataset2")
logger.log_image("../../whylogs-python/testdata/images/flower2.jpg")
profile=logger.profile
###Output
_____no_output_____
###Markdown
You can obtain the histogram sketch of image data features, e.g. Saturation below
###Code
imageProfiles = profile.flat_summary()["hist"]
print(imageProfiles["Saturation"])
###Output
{'bin_edges': [0.0, 8.50000085, 17.0000017, 25.500002549999998, 34.0000034, 42.500004249999996, 51.000005099999996, 59.500005949999995, 68.0000068, 76.50000764999999, 85.00000849999999, 93.50000935, 102.00001019999999, 110.50001104999998, 119.00001189999999, 127.50001275, 136.0000136, 144.50001444999998, 153.00001529999997, 161.50001615, 170.00001699999999, 178.50001784999998, 187.0000187, 195.50001955, 204.00002039999998, 212.50002124999997, 221.00002209999997, 229.50002295, 238.00002379999998, 246.50002464999997, 255.0000255], 'counts': [128, 384, 512, 256, 384, 904, 656, 800, 1424, 2718, 4132, 5966, 5330, 5994, 4826, 4738, 3738, 4256, 3076, 3440, 2486, 2330, 1506, 1084, 1108, 788, 648, 576, 768, 2544]}
###Markdown
Along with all the metadata collected from the image
###Code
print(profile.flat_summary()["summary"]["column"].values)
###Output
['Flash' 'ImageWidth' 'X-Resolution' 'Saturation' 'Compression' 'Quality'
'Y-Resolution' 'ResolutionUnit' 'Model' 'Orientation' 'RowsPerStrip'
'BitsPerSample' 'Brightness' 'ExposureTime' 'Software' 'Hue'
'BrightnessValue' 'ImageLength'
'PhotometricInterpretationSamplesPerPixel']
###Markdown
Custom Functions--- One can also create custom functions to profile image-specific features. E.g. the two examples below demonstrate how to get the average of the image pixels, while the second function simply allows you to create a distribution sketch of the blue values. The ComposeTransforms function also allows you to mix and match functions to create new features to monitor.
###Code
class AvgValue:
def __call__(self, x):
return np.mean(np.array(x)).reshape(-1,1)
def __repr__(self,):
return self.__class__.__name__
my_lambda = (lambda x: np.mean(x, axis=1).reshape(-1, 1))  # (unused) example of a transform written as a plain lambda
class MyBlue:
def __call__(self, x):
_,_,b= x.split()
return np.array(b).reshape(-1,1)
def __repr__(self,):
return self.__class__.__name__
from whylogs.features.transforms import ComposeTransforms, Brightness,Saturation
_session=None
session=None
session = get_or_create_session()
logger2=session.logger("image_dataset_custom_functions")
logger2.log_image("../../whylogs-python/testdata/images/flower2.jpg",
                  feature_transforms = [ AvgValue(), MyBlue(), ComposeTransforms([MyBlue(),AvgValue()])])
profile2=logger2.profile
print(profile2.flat_summary()["summary"]["column"].values)
###Output
_____no_output_____
###Markdown
Check histograms We can obtain the individual histograms for the features
###Code
minnpf = np.frompyfunc(lambda x, y: min(x,y), 2, 1)
maxnpf = np.frompyfunc(lambda x, y: max(x,y), 2, 1)
def get_custom_histogram_info(profiles, variable, n_bins):
summaries = [profile.flat_summary()["summary"] for profile in profiles]
min_range= minnpf.accumulate([ summary[summary["column"]==variable]["min"].values[0] for summary in summaries], dtype=np.object).astype(np.int)
max_range= maxnpf.accumulate([ summary[summary["column"]==variable]["max"].values[0] for summary in summaries], dtype=np.object).astype(np.int)
bins = np.linspace(int(min_range), int(max_range), int((max_range-min_range)/n_bins))
counts= [ profile.columns[variable].number_tracker.histogram.get_pmf(bins[:-1]) for profile in profiles]
return bins, counts
def plot_distribution_shift(profiles, variable, n_bins):
"""Visualization for distribution shift"""
bins, counts = get_custom_histogram_info(profiles, variable, n_bins)
fig, ax = plt.subplots(figsize=(10, 3))
for idx, profile in enumerate(profiles):
sns.histplot(x=bins, weights=counts[idx], bins=n_bins,
label=profile.name, alpha=0.7, ax=ax)
ax.legend()
plt.show()
plot_distribution_shift([profile2],"MyBlue",10)
plot_distribution_shift([profile],"Saturation",10)
###Output
_____no_output_____
###Markdown
Using whylogs to Profile Images--- This notebook provides an example of how you can use whylogs to profile unstructured data like images.
###Code
from PIL import Image
import numpy as np
import os
from matplotlib.pyplot import imshow
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
import seaborn as sns
with open("flower2.jpg","rb") as img_f:
img= Image.open(img_f)
imshow(np.asarray(img))
w,h = img.size
total_num_pixels= w*h
print("withd :\t{}\nheight:\t{}\nnumber of pixels:{}".format(w,h,total_num_pixels))
###Output
width : 300
height: 225
number of pixels:67500
###Markdown
We can create a logger and create a profile sketch of the image data
###Code
from whylogs import get_or_create_session
_session=None
session = get_or_create_session()
logger=session.logger("image_dataset2")
logger.log_image("flower2.jpg")
profile=logger.profile
###Output
_____no_output_____
###Markdown
You can obtain the histogram sketch of image data features, e.g. Saturation below
###Code
imageProfiles = profile.flat_summary()["hist"]
print(imageProfiles["Saturation"])
###Output
{'bin_edges': [0.0, 8.50000085, 17.0000017, 25.500002549999998, 34.0000034, 42.500004249999996, 51.000005099999996, 59.500005949999995, 68.0000068, 76.50000764999999, 85.00000849999999, 93.50000935, 102.00001019999999, 110.50001104999998, 119.00001189999999, 127.50001275, 136.0000136, 144.50001444999998, 153.00001529999997, 161.50001615, 170.00001699999999, 178.50001784999998, 187.0000187, 195.50001955, 204.00002039999998, 212.50002124999997, 221.00002209999997, 229.50002295, 238.00002379999998, 246.50002464999997, 255.0000255], 'counts': [64, 512, 576, 128, 512, 664, 1024, 896, 1240, 2550, 4156, 5940, 5594, 6010, 4836, 4636, 3712, 4232, 2972, 3506, 2506, 2484, 1258, 1286, 884, 1002, 416, 624, 704, 2576]}
###Markdown
Along with all the metadata collected from the image
###Code
print(profile.flat_summary()["summary"]["column"].values)
###Output
['Orientation' 'Software' 'Y-Resolution' 'Model' 'BrightnessValue' 'Flash'
'ImageLength' 'PhotometricInterpretationSamplesPerPixel' 'ImageWidth'
'BitsPerSample' 'Saturation' 'Compression' 'ExposureTime' 'RowsPerStrip'
'ResolutionUnit' 'X-Resolution' 'Quality' 'Hue' 'Brightness']
###Markdown
Custom Functions--- One can also create custom functions to profile image-specific features. E.g. the two examples below demonstrate how to get the average of the image pixels, while the second function simply allows you to create a distribution sketch of the blue values. The ComposeTransforms function also allows you to mix and match functions to create new features to monitor.
###Code
class AvgValue:
def __call__(self, x):
return np.mean(np.array(x)).reshape(-1,1)
def __repr__(self,):
return self.__class__.__name__
class MyBlue:
def __call__(self, x):
_,_,b= x.split()
return np.array(b).reshape(-1,1)
def __repr__(self,):
return self.__class__.__name__
from whylogs.features.transforms import ComposeTransforms, Brightness,Saturation
_session=None
session=None
session = get_or_create_session()
logger2=session.logger("image_dataset_custom_functions")
logger2.log_image("flower2.jpg",feature_transforms = [ AvgValue(), MyBlue(),
ComposeTransforms([MyBlue(),AvgValue()])])
profile2=logger2.profile
print(profile2.flat_summary()["summary"]["column"].values)
###Output
_____no_output_____
###Markdown
Check histograms We can obtain the idenvidual histograms for the features
###Code
minnpf = np.frompyfunc(lambda x, y: min(x,y), 2, 1)
maxnpf = np.frompyfunc(lambda x, y: max(x,y), 2, 1)
def get_custom_histogram_info(profiles, variable, n_bins):
summaries = [profile.flat_summary()["summary"] for profile in profiles]
min_range= minnpf.accumulate([ summary[summary["column"]==variable]["min"].values[0] for summary in summaries], dtype=np.object).astype(np.int)
max_range= maxnpf.accumulate([ summary[summary["column"]==variable]["max"].values[0] for summary in summaries], dtype=np.object).astype(np.int)
bins = np.linspace(int(min_range), int(max_range), int((max_range-min_range)/n_bins))
counts= [ profile.columns[variable].number_tracker.histogram.get_pmf(bins[:-1]) for profile in profiles]
return bins, counts
def plot_distribution_shift(profiles, variable, n_bins):
"""Visualization for distribution shift"""
bins, counts = get_custom_histogram_info(profiles, variable, n_bins)
fig, ax = plt.subplots(figsize=(10, 3))
for idx, profile in enumerate(profiles):
sns.histplot(x=bins, weights=counts[idx], bins=n_bins,
label=profile.name, alpha=0.7, ax=ax)
ax.legend()
plt.show()
plot_distribution_shift([profile2],"MyBlue",10)
plot_distribution_shift([profile],"Saturation",10)
###Output
_____no_output_____ |
ML_Model_Building/Clean_Training_Data.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. Licensed under the MIT license. Clean Training Data This notebook will clean the training dataset and load the cleaned data into a spark database for training the models.
###Code
DATA_LAKE_ACCOUNT_NAME = ""
FILE_SYSTEM_NAME = ""
df = spark.read.load(f"abfss://{FILE_SYSTEM_NAME}@{DATA_LAKE_ACCOUNT_NAME}.dfs.core.windows.net/synapse/workspaces/2019-Oct.csv", format="csv", header = True)
df1 = spark.read.load(f"abfss://{FILE_SYSTEM_NAME}@{DATA_LAKE_ACCOUNT_NAME}.dfs.core.windows.net/synapse/workspaces/2019-Nov.csv", format="csv", header = True)
df2 = spark.read.load(f"abfss://{FILE_SYSTEM_NAME}@{DATA_LAKE_ACCOUNT_NAME}.dfs.core.windows.net/synapse/workspaces/2019-Dec.csv", format="csv", header = True)
df3 = spark.read.load(f"abfss://{FILE_SYSTEM_NAME}@{DATA_LAKE_ACCOUNT_NAME}.dfs.core.windows.net/synapse/workspaces/2020-Jan.csv", format="csv", header = True)
df = df.union(df1)
df = df.union(df2)
df = df.union(df3)
df.write.saveAsTable("full_dataset", mode="overwrite", format="delta")
full_dataset = spark.read.format("delta").load(f"abfss://{FILE_SYSTEM_NAME}@{DATA_LAKE_ACCOUNT_NAME}.dfs.core.windows.net/synapse/workspaces/full_dataset/")
# remove all null values from category and brand
filtered_df = full_dataset.filter((full_dataset.category_code != 'null') & (full_dataset.brand != 'null'))
#filter on construction and remove misplaced brands
construction_df = filtered_df.filter((filtered_df.category_code.contains('construction')) & (filtered_df.brand != 'apple') & (filtered_df.brand != 'philips') & (filtered_df.brand != 'oystercosmetics')& (filtered_df.brand != 'tefal') & (filtered_df.brand != 'hyundai') & (filtered_df.brand != 'polaris') & (filtered_df.brand != 'puma') & (filtered_df.brand != 'samsung') & (filtered_df.brand != 'maybellinenewyork') & (filtered_df.brand != 'lg') & (filtered_df.brand != 'sony') & (filtered_df.brand != 'nokia') & (filtered_df.brand != 'nike') & (filtered_df.brand != 'fila') & (filtered_df.brand != 'milanicosmetics') & (filtered_df.brand != 'shoesrepublic') &(filtered_df.brand != 'hp')&(filtered_df.brand != 'jbl'))
#filter on electronics and remove misplaced brands
electronic_df = filtered_df.filter((filtered_df.category_code.contains('electronics'))& (filtered_df.brand != 'houseofseasons') & (filtered_df.brand != 'jaguar') & (filtered_df.brand != 'shoesrepublic') & (filtered_df.brand != 'tefal') & (filtered_df.brand != 'nike') & (filtered_df.brand != 'hyundai') & (filtered_df.brand != 'puma'))
#filter on apparel and remove misplaced brands
apparel_df = filtered_df.filter((filtered_df.category_code.contains('apparel')) & (filtered_df.brand != 'toyota') & (filtered_df.brand != 'canon')& (filtered_df.brand != 'samsung') & (filtered_df.brand != 'hp')& (filtered_df.brand != 'nikon') & (filtered_df.brand != 'jbl') & (filtered_df.brand != 'apple') & (filtered_df.brand != 'x-digital') & (filtered_df.brand != 'tefal') & (filtered_df.brand != 'fujifilm') & (filtered_df.brand != 'toysmax') & (filtered_df.brand != 'houseofseasons') & (filtered_df.brand != 'toshiba') & (filtered_df.brand != 'playdoh') & (filtered_df.brand != 'jaguar') & (filtered_df.brand != 'microsoft') & (filtered_df.brand != 'tv-shop') & (filtered_df.brand != 'xp-pen') & (filtered_df.brand != 'philips') & (filtered_df.brand != 'logitech') & (filtered_df.brand != 'm-audio') & (filtered_df.brand != 'sony') & (filtered_df.brand != 'lg') & (filtered_df.brand != 'hyundai'))
#filtered on computers and removed misplaced brands
computer_df = filtered_df.filter((filtered_df.category_code.contains('computers')) & (filtered_df.brand != 'fila') & (filtered_df.brand != 'moosetoys') & (filtered_df.brand != 'tefal') & (filtered_df.brand != 'hotwheels') & (filtered_df.brand != 'taftoys') & (filtered_df.brand != 'barbi') & (filtered_df.brand != 'fitbit') & (filtered_df.brand != 'nike'))
#filtered on appliances and removed misplaced brands
appliance_df = filtered_df.filter((filtered_df.category_code.contains('appliances')) & (filtered_df.brand != 'fila')& (filtered_df.brand != 'shoesrepublic') & (filtered_df.brand != 'toshiba')& (filtered_df.brand != 'hp')& (filtered_df.brand != 'nokia')&(filtered_df.brand != 'hyundai')& (filtered_df.brand != 'moosetoys') & (filtered_df.brand != 'jaguar') & (filtered_df.brand != 'colorkid') & (filtered_df.brand != 'apple') & (filtered_df.brand != 'jbl') & (filtered_df.brand != 'toyota') & (filtered_df.brand != 'nike') & (filtered_df.brand != 'logitech'))
#filtered on auto and removed misplaced brands
auto_df = filtered_df.filter((filtered_df.category_code.contains('auto')) & (filtered_df.brand != 'philips')& (filtered_df.brand != 'sony') & (filtered_df.brand != 'toshiba') & (filtered_df.brand != 'fujifilm') & (filtered_df.brand != 'nikon') & (filtered_df.brand != 'canon') & (filtered_df.brand != 'samsung') & (filtered_df.brand != 'hp'))
#filtered on furniture and removed misplaced brands
furniture_df = filtered_df.filter((filtered_df.category_code.contains('furniture')) & (filtered_df.brand != 'philips')& (filtered_df.brand != 'lg')& (filtered_df.brand != 'samsung') & (filtered_df.brand != 'hyundai')& (filtered_df.brand != 'sony') & (filtered_df.brand != 'logitech') & (filtered_df.brand != 'microsoft') & (filtered_df.brand != 'toshiba') & (filtered_df.brand != 'fujifilm') & (filtered_df.brand != 'tefal') & (filtered_df.brand != 'apple') & (filtered_df.brand != 'nikon') & (filtered_df.brand != 'dell') & (filtered_df.brand != 'nike') & (filtered_df.brand != 'newsuntoys') & (filtered_df.brand != 'canon') & (filtered_df.brand != 'puma') & (filtered_df.brand != 'hp') )
#filtered on kids and removed misplaced brands
kids_df = filtered_df.filter((filtered_df.category_code.contains('kids')) & (filtered_df.brand != 'tefal')& (filtered_df.brand != 'puma') & (filtered_df.brand != 'hp') & (filtered_df.brand != 'apple') & (filtered_df.brand != 'nike') & (filtered_df.brand != 'canon') & (filtered_df.brand != 'lg') & (filtered_df.brand != 'sony') & (filtered_df.brand != 'samsung'))
#filtered on sports and removed misplaced brands
sports_df = filtered_df.filter((filtered_df.category_code.contains('sport')) & (filtered_df.brand != 'philips')& (filtered_df.brand != 'hp') & (filtered_df.brand != 'canon') & (filtered_df.brand != 'logitech') & (filtered_df.brand != 'microsoft') & (filtered_df.brand != 'apple') & (filtered_df.brand != 'jbl') & (filtered_df.brand != 'nikon') & (filtered_df.brand != 'mersedes-benz') & (filtered_df.brand != 'toyland') & (filtered_df.brand != 'lg') & (filtered_df.brand != 'samsung') & (filtered_df.brand != 'ikea') & (filtered_df.brand != 'logitech') & (filtered_df.brand != 'bmw') & (filtered_df.brand != 'jeep') & (filtered_df.brand != 'sony') & (filtered_df.brand != 'asus') & (filtered_df.brand != 'hyundai'))
#filtered on country_yard and removed misplaced brands
country_df = filtered_df.filter((filtered_df.category_code.contains('country_yard')) & (filtered_df.brand != 'nike')& (filtered_df.brand != 'samsung') & (filtered_df.brand != 'sony') & (filtered_df.brand != 'vans') & (filtered_df.brand != 'hyundai') & (filtered_df.brand != 'puma') & (filtered_df.brand != 'columbia') & (filtered_df.brand != 'adidas')& (filtered_df.brand != 'apple'))
#filtered on stationery and removed misplaced brands
stationery_df = filtered_df.filter((filtered_df.category_code.contains('stationery')) & (filtered_df.brand !='hyundai') & (filtered_df.brand !='puma') & (filtered_df.brand !='nike') & (filtered_df.brand !='jeep') & (filtered_df.brand !='jaguar') & (filtered_df.brand !='toyota') & (filtered_df.brand !='shoesrepublic') & (filtered_df.brand !='tefal') & (filtered_df.brand !='fila'))
#filtered on accessories and removed misplaced brands
accessories_df = filtered_df.filter(((filtered_df.category_code == 'accessories.umbrella') | (filtered_df.category_code == 'accessories.wallet') | (filtered_df.category_code == 'accessories.bag')) & (filtered_df.brand != 'hyundai'))
medicine_df = filtered_df.filter((filtered_df.category_code.contains('medicine')) & (filtered_df.brand != 'ikea'))
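# (Illustrative alternative, shown commented out and not used below.) The long chains of !=
# comparisons above could be written more compactly with Column.isin(); e.g. a hypothetical
# equivalent of the accessories filter:
# accessories_df = filtered_df.filter(
#     filtered_df.category_code.isin('accessories.umbrella', 'accessories.wallet', 'accessories.bag')
#     & (~filtered_df.brand.isin('hyundai')))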
# combine all the separated DataFrames into one to load into a table.
df = medicine_df.union(accessories_df)
df = df.union(stationery_df)
df = df.union(country_df)
df = df.union(sports_df)
df = df.union(kids_df)
df = df.union(furniture_df)
df = df.union(auto_df)
df = df.union(appliance_df)
df = df.union(computer_df)
df = df.union(apparel_df)
df = df.union(electronic_df)
df = df.union(construction_df)
# load the cleaned data to a spark database
try:
spark.sql("CREATE DATABASE retailaidb")
except:
print("Database already exists")
df.write.saveAsTable("retailaidb.cleaned_dataset")
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. Licensed under the MIT license. Clean Training Data This notebook will clean the training dataset and load the cleaned data into a spark database for training the models.
###Code
DATA_LAKE_ACCOUNT_NAME = ""
FILE_SYSTEM_NAME = ""
df = spark.read.load(f"abfss://{FILE_SYSTEM_NAME}@{DATA_LAKE_ACCOUNT_NAME}.dfs.core.windows.net/synapse/workspaces/2019-Oct.csv", format="csv", header = True)
df1 = spark.read.load(f"abfss://{FILE_SYSTEM_NAME}@{DATA_LAKE_ACCOUNT_NAME}.dfs.core.windows.net/synapse/workspaces/2019-Nov.csv", format="csv", header = True)
df2 = spark.read.load(f"abfss://{FILE_SYSTEM_NAME}@{DATA_LAKE_ACCOUNT_NAME}.dfs.core.windows.net/synapse/workspaces/2019-Dec.csv", format="csv", header = True)
df3 = spark.read.load(f"abfss://{FILE_SYSTEM_NAME}@{DATA_LAKE_ACCOUNT_NAME}.dfs.core.windows.net/synapse/workspaces/2020-Jan.csv", format="csv", header = True)
df = df.union(df1)
df = df.union(df2)
df = df.union(df3)
df.write.saveAsTable("full_dataset", mode="overwrite", format="delta")
full_dataset = spark.read.table('full_dataset')
# remove all null values from category and brand
filtered_df = full_dataset.filter((full_dataset.category_code != 'null') & (full_dataset.brand != 'null'))
#filter on construction and remove misplaced brands
construction_df = filtered_df.filter((filtered_df.category_code.contains('construction')) & (filtered_df.brand != 'apple') & (filtered_df.brand != 'philips') & (filtered_df.brand != 'oystercosmetics')& (filtered_df.brand != 'tefal') & (filtered_df.brand != 'hyundai') & (filtered_df.brand != 'polaris') & (filtered_df.brand != 'puma') & (filtered_df.brand != 'samsung') & (filtered_df.brand != 'maybellinenewyork') & (filtered_df.brand != 'lg') & (filtered_df.brand != 'sony') & (filtered_df.brand != 'nokia') & (filtered_df.brand != 'nike') & (filtered_df.brand != 'fila') & (filtered_df.brand != 'milanicosmetics') & (filtered_df.brand != 'shoesrepublic') &(filtered_df.brand != 'hp')&(filtered_df.brand != 'jbl'))
#filter on electronics and remove misplaced brands
electronic_df = filtered_df.filter((filtered_df.category_code.contains('electronics'))& (filtered_df.brand != 'houseofseasons') & (filtered_df.brand != 'jaguar') & (filtered_df.brand != 'shoesrepublic') & (filtered_df.brand != 'tefal') & (filtered_df.brand != 'nike') & (filtered_df.brand != 'hyundai') & (filtered_df.brand != 'puma'))
#filter on apparel and remove misplaced brands
apparel_df = filtered_df.filter((filtered_df.category_code.contains('apparel')) & (filtered_df.brand != 'toyota') & (filtered_df.brand != 'canon')& (filtered_df.brand != 'samsung') & (filtered_df.brand != 'hp')& (filtered_df.brand != 'nikon') & (filtered_df.brand != 'jbl') & (filtered_df.brand != 'apple') & (filtered_df.brand != 'x-digital') & (filtered_df.brand != 'tefal') & (filtered_df.brand != 'fujifilm') & (filtered_df.brand != 'toysmax') & (filtered_df.brand != 'houseofseasons') & (filtered_df.brand != 'toshiba') & (filtered_df.brand != 'playdoh') & (filtered_df.brand != 'jaguar') & (filtered_df.brand != 'microsoft') & (filtered_df.brand != 'tv-shop') & (filtered_df.brand != 'xp-pen') & (filtered_df.brand != 'philips') & (filtered_df.brand != 'logitech') & (filtered_df.brand != 'm-audio') & (filtered_df.brand != 'sony') & (filtered_df.brand != 'lg') & (filtered_df.brand != 'hyundai'))
#filtered on computers and removed misplaced brands
computer_df = filtered_df.filter((filtered_df.category_code.contains('computers')) & (filtered_df.brand != 'fila') & (filtered_df.brand != 'moosetoys') & (filtered_df.brand != 'tefal') & (filtered_df.brand != 'hotwheels') & (filtered_df.brand != 'taftoys') & (filtered_df.brand != 'barbi') & (filtered_df.brand != 'fitbit') & (filtered_df.brand != 'nike'))
#filtered on appliances and removed misplaced brands
appliance_df = filtered_df.filter((filtered_df.category_code.contains('appliances')) & (filtered_df.brand != 'fila')& (filtered_df.brand != 'shoesrepublic') & (filtered_df.brand != 'toshiba')& (filtered_df.brand != 'hp')& (filtered_df.brand != 'nokia')&(filtered_df.brand != 'hyundai')& (filtered_df.brand != 'moosetoys') & (filtered_df.brand != 'jaguar') & (filtered_df.brand != 'colorkid') & (filtered_df.brand != 'apple') & (filtered_df.brand != 'jbl') & (filtered_df.brand != 'toyota') & (filtered_df.brand != 'nike') & (filtered_df.brand != 'logitech'))
#filtered on auto and removed misplaced brands
auto_df = filtered_df.filter((filtered_df.category_code.contains('auto')) & (filtered_df.brand != 'philips')& (filtered_df.brand != 'sony') & (filtered_df.brand != 'toshiba') & (filtered_df.brand != 'fujifilm') & (filtered_df.brand != 'nikon') & (filtered_df.brand != 'canon') & (filtered_df.brand != 'samsung') & (filtered_df.brand != 'hp'))
#filtered on furniture and removed misplaced brands
furniture_df = filtered_df.filter((filtered_df.category_code.contains('furniture')) & (filtered_df.brand != 'philips')& (filtered_df.brand != 'lg')& (filtered_df.brand != 'samsung') & (filtered_df.brand != 'hyundai')& (filtered_df.brand != 'sony') & (filtered_df.brand != 'logitech') & (filtered_df.brand != 'microsoft') & (filtered_df.brand != 'toshiba') & (filtered_df.brand != 'fujifilm') & (filtered_df.brand != 'tefal') & (filtered_df.brand != 'apple') & (filtered_df.brand != 'nikon') & (filtered_df.brand != 'dell') & (filtered_df.brand != 'nike') & (filtered_df.brand != 'newsuntoys') & (filtered_df.brand != 'canon') & (filtered_df.brand != 'puma') & (filtered_df.brand != 'hp') )
#filtered on kids and removed misplaced brands
kids_df = filtered_df.filter((filtered_df.category_code.contains('kids')) & (filtered_df.brand != 'tefal')& (filtered_df.brand != 'puma') & (filtered_df.brand != 'hp') & (filtered_df.brand != 'apple') & (filtered_df.brand != 'nike') & (filtered_df.brand != 'canon') & (filtered_df.brand != 'lg') & (filtered_df.brand != 'sony') & (filtered_df.brand != 'samsung'))
#filtered on sports and removed misplaced brands
sports_df = filtered_df.filter((filtered_df.category_code.contains('sport')) & (filtered_df.brand != 'philips')& (filtered_df.brand != 'hp') & (filtered_df.brand != 'canon') & (filtered_df.brand != 'logitech') & (filtered_df.brand != 'microsoft') & (filtered_df.brand != 'apple') & (filtered_df.brand != 'jbl') & (filtered_df.brand != 'nikon') & (filtered_df.brand != 'mersedes-benz') & (filtered_df.brand != 'toyland') & (filtered_df.brand != 'lg') & (filtered_df.brand != 'samsung') & (filtered_df.brand != 'ikea') & (filtered_df.brand != 'logitech') & (filtered_df.brand != 'bmw') & (filtered_df.brand != 'jeep') & (filtered_df.brand != 'sony') & (filtered_df.brand != 'asus') & (filtered_df.brand != 'hyundai'))
#filtered on country_yard and removed misplaced brands
country_df = filtered_df.filter((filtered_df.category_code.contains('country_yard')) & (filtered_df.brand != 'nike')& (filtered_df.brand != 'samsung') & (filtered_df.brand != 'sony') & (filtered_df.brand != 'vans') & (filtered_df.brand != 'hyundai') & (filtered_df.brand != 'puma') & (filtered_df.brand != 'columbia') & (filtered_df.brand != 'adidas')& (filtered_df.brand != 'apple'))
#filtered on stationery and removed misplaced brands
stationery_df = filtered_df.filter((filtered_df.category_code.contains('stationery')) & (filtered_df.brand !='hyundai') & (filtered_df.brand !='puma') & (filtered_df.brand !='nike') & (filtered_df.brand !='jeep') & (filtered_df.brand !='jaguar') & (filtered_df.brand !='toyota') & (filtered_df.brand !='shoesrepublic') & (filtered_df.brand !='tefal') & (filtered_df.brand !='fila'))
#filtered on accessories and removed misplaced brands
accessories_df = filtered_df.filter(((filtered_df.category_code == 'accessories.umbrella') | (filtered_df.category_code == 'accessories.wallet') | (filtered_df.category_code == 'accessories.bag')) & (filtered_df.brand != 'hyundai'))
medicine_df = filtered_df.filter((filtered_df.category_code.contains('medicine')) & (filtered_df.brand != 'ikea'))
# combine all the separated DataFrames into one to load into a table.
df = medicine_df.union(accessories_df)
df = df.union(stationery_df)
df = df.union(country_df)
df = df.union(sports_df)
df = df.union(kids_df)
df = df.union(furniture_df)
df = df.union(auto_df)
df = df.union(appliance_df)
df = df.union(computer_df)
df = df.union(apparel_df)
df = df.union(electronic_df)
df = df.union(construction_df)
# load the cleaned data to a spark database
try:
spark.sql("CREATE DATABASE retailaidb")
except:
print("Database already exists")
df.write.saveAsTable("retailaidb.cleaned_dataset")
###Output
_____no_output_____ |
sf-crime/sf-crime-vec-sklearn.ipynb | ###Markdown
[San Francisco Crime Classification | Kaggle](https://www.kaggle.com/c/sf-crime) Following along with [SF Crime Prediction with scikit-learn | Kaggle](https://www.kaggle.com/rhoslug/sf-crime-prediction-with-scikit-learn) Data fields* Dates - timestamp of the crime incident* Category - category of the crime incident (only in train.csv). This is the target variable you are going to predict.* Descript - detailed description of the crime incident (only in train.csv)* DayOfWeek - the day of the week* PdDistrict - name of the Police Department District* Resolution - how the crime incident was resolved (only in train.csv)* Address - the approximate street address of the crime incident * X - Longitude * Y - Latitude
###Code
from __future__ import print_function, division
import pandas as pd
import numpy as np
df_train = pd.read_csv('data/train.csv', parse_dates=['Dates'])
df_train.shape
df_train.head()
# Drop 'Descript', 'Dates', 'Resolution'
df_train.drop(['Descript', 'Dates', 'Resolution'], axis=1, inplace=True)
df_train.shape
df_test = pd.read_csv('data/test.csv', parse_dates=['Dates'])
df_test.shape
df_test.head()
df_test.drop(['Dates'], axis=1, inplace=True)
df_test.head()
# Select the training and validation sets
inds = np.arange(df_train.shape[0])
inds
np.random.shuffle(inds)
df_train.shape[0]
# Training set
train_inds = inds[:int(0.2 * df_train.shape[0])]
print(train_inds.shape)
# Validation set
val_inds = inds[int(0.2 * df_train.shape[0]):]
print(val_inds.shape)
# Extract the column names
col_names = np.sort(df_train['Category'].unique())
col_names
# Convert the categorical columns to numeric codes
df_train['Category'] = pd.Categorical(df_train['Category']).codes
df_train['DayOfWeek'] = pd.Categorical(df_train['DayOfWeek']).codes
df_train['PdDistrict'] = pd.Categorical(df_train['PdDistrict']).codes
df_test['DayOfWeek'] = pd.Categorical(df_test['DayOfWeek']).codes
df_test['PdDistrict'] = pd.Categorical(df_test['PdDistrict']).codes
df_train.head()
df_test.head()
from sklearn.feature_extraction.text import CountVectorizer
# Extract text (token) counts
cvec = CountVectorizer()
cvec
bows_train = cvec.fit_transform(df_train['Address'].values)
bows_test = cvec.transform(df_test['Address'].values)  # transform only, so the test data uses the training vocabulary
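# (Illustrative check.) The bag-of-words columns are defined by the vocabulary learned from the
# training addresses, which is why the test data must be transformed with the same fitted
# vectorizer rather than refit.
print('vocabulary size:', len(cvec.vocabulary_))
print('bows_train shape:', bows_train.shape, '/ bows_test shape:', bows_test.shape)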
# Split into training and validation sets
df_val = df_train.iloc[val_inds]
df_val.head()
df_val.shape
df_train = df_train.iloc[train_inds]
df_train.shape
df_train.head()
from patsy import dmatrices, dmatrix
y_train, X_train = dmatrices('Category ~ X + Y + DayOfWeek + PdDistrict', df_train)
y_train.shape
# Vectorized addresses
X_train = np.hstack((X_train, bows_train[train_inds, :].toarray()))
X_train.shape
y_val, X_val = dmatrices('Category ~ X + Y + DayOfWeek + PdDistrict', df_val)
X_val = np.hstack((X_val, bows_train[val_inds, :].toarray()))
X_test = dmatrix('X + Y + DayOfWeek + PdDistrict', df_test)
X_test = np.hstack((X_test, bows_test.toarray()))
# IncrementalPCA
from sklearn.decomposition import IncrementalPCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
ipca = IncrementalPCA(n_components=4, batch_size=5)
ipca
# Execution failed locally due to insufficient memory T_T
X_train = ipca.fit_transform(X_train)
X_val = ipca.transform(X_val)
X_test = ipca.transform(X_test)
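# (Illustrative sketch, shown commented out.) If fit_transform exhausts memory, IncrementalPCA can
# instead be fit batch-by-batch with partial_fit and the data transformed in chunks, e.g.:
# for start in range(0, X_train.shape[0], 10000):
#     ipca.partial_fit(X_train[start:start + 10000])
# X_train = np.vstack([ipca.transform(X_train[s:s + 10000])
#                      for s in range(0, X_train.shape[0], 10000)])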
# Create and fit a logistic regression model
logistic = LogisticRegression()
logistic.fit(X_train, y_train.ravel())
# Check the accuracy
print('Mean accuracy (Logistic): {:.4f}'.format(logistic.score(X_val, y_val.ravel())))
# Fit a random forest and check the accuracy
randforest = RandomForestClassifier()
randforest.fit(X_train, y_train.ravel())
# Check the accuracy
print('Mean accuracy (Random Forest): {:.4f}'.format(randforest.score(X_val, y_val.ravel())))
# Make predictions
predict_probs = logistic.predict_proba(X_test)
df_pred = pd.DataFrame(data=predict_probs, columns=col_names)
df_pred['Id'] = df_test['Id'].astype(int)
df_pred.to_csv('output.csv', index=False)
###Output
_____no_output_____
###Markdown
[San Francisco Crime Classification | Kaggle](https://www.kaggle.com/c/sf-crime) Following along with [SF Crime Prediction with scikit-learn | Kaggle](https://www.kaggle.com/rhoslug/sf-crime-prediction-with-scikit-learn) Data fields* Dates - timestamp of the crime incident* Category - category of the crime incident (only in train.csv). This is the target variable you are going to predict.* Descript - detailed description of the crime incident (only in train.csv)* DayOfWeek - the day of the week* PdDistrict - name of the Police Department District* Resolution - how the crime incident was resolved (only in train.csv)* Address - the approximate street address of the crime incident * X - Longitude * Y - Latitude
###Code
import pandas as pd
import numpy as np
df_train = pd.read_csv('data/train.csv', parse_dates=['Dates'])
df_train.shape
df_train.head()
# Drop 'Descript', 'Dates', 'Resolution'
df_train.drop(['Descript', 'Dates', 'Resolution'], axis=1, inplace=True)
df_train.shape
df_test = pd.read_csv('data/test.csv', parse_dates=['Dates'])
df_test.shape
df_test.head()
df_submit = pd.read_csv("data/sampleSubmission.csv")
df_submit.head()
df_train["Category"].value_counts()
df_test.drop(['Dates'], axis=1, inplace=True)
df_test.head()
# Select the training and validation sets
inds = np.arange(df_train.shape[0])
inds
np.random.shuffle(inds)
df_train.shape[0]
df_train.shape[0] * 0.8
# Training set
train_inds = inds[:int(0.8 * df_train.shape[0])]
print(train_inds.shape)
# Validation set
val_inds = inds[int(0.8 * df_train.shape[0]):]
print(val_inds.shape)
# Extract the column names
col_names = np.sort(df_train['Category'].unique())
col_names
# Convert the categorical columns to numeric codes
df_train['Category'] = pd.Categorical(df_train['Category']).codes
df_train['DayOfWeek'] = pd.Categorical(df_train['DayOfWeek']).codes
df_train['PdDistrict'] = pd.Categorical(df_train['PdDistrict']).codes
df_test['DayOfWeek'] = pd.Categorical(df_test['DayOfWeek']).codes
df_test['PdDistrict'] = pd.Categorical(df_test['PdDistrict']).codes
df_train.head()
df_test.head()
from sklearn.feature_extraction.text import CountVectorizer
# Extract text (token) counts
cvec = CountVectorizer()
cvec
bows_train = cvec.fit_transform(df_train['Address'].values)
bows_test = cvec.transform(df_test['Address'].values)
# Split into training and validation sets
df_val = df_train.iloc[val_inds]
df_val.head()
df_val.shape
df_train = df_train.iloc[train_inds]
df_train.shape
df_train.head()
from patsy import dmatrices, dmatrix
y_train, X_train = dmatrices('Category ~ X + Y + DayOfWeek + PdDistrict', df_train)
y_train.shape
# Vectorized addresses
X_train = np.hstack((X_train, bows_train[train_inds, :].toarray()))
X_train.shape
y_val, X_val = dmatrices('Category ~ X + Y + DayOfWeek + PdDistrict', df_val)
X_val = np.hstack((X_val, bows_train[val_inds, :].toarray()))
X_test = dmatrix('X + Y + DayOfWeek + PdDistrict', df_test)
X_test = np.hstack((X_test, bows_test.toarray()))
# IncrementalPCA
from sklearn.decomposition import IncrementalPCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
ipca = IncrementalPCA(n_components=4, batch_size=5)
ipca
X_train = ipca.fit_transform(X_train)
X_val = ipca.transform(X_val)
X_test = ipca.transform(X_test)
# Create and fit a logistic regression model
logistic = LogisticRegression()
logistic.fit(X_train, y_train.ravel())
# Check the accuracy
print('Mean accuracy (Logistic):', logistic.score(X_val, y_val.ravel()))
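# (Illustrative addition.) Kaggle scores this competition on multiclass log loss rather than
# accuracy, so the quality of the predicted probabilities is what ultimately matters; a quick
# check on the validation split:
from sklearn.metrics import log_loss
print('Validation log loss (Logistic):',
      log_loss(y_val.ravel(), logistic.predict_proba(X_val), labels=logistic.classes_))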
# Fit a random forest and check the accuracy
randforest = RandomForestClassifier()
randforest.fit(X_train, y_train.ravel())
# Check the accuracy
print('Mean accuracy (Random Forest):', randforest.score(X_val, y_val.ravel()))
# Make predictions
predict_probs = logistic.predict_proba(X_test)
df_pred = pd.DataFrame(data=predict_probs, columns=col_names)
df_pred['Id'] = df_test['Id'].astype(int)
df_pred
df_pred.to_csv('output.csv', index=False)
###Output
_____no_output_____ |
examples/text-processing/text_preprocessing_demo.ipynb | ###Markdown
This notebook demos some functionality in ConvoKit to preprocess text, and store the results. In particular, it shows examples of:* A `TextProcessor` base class that maps per-utterance attributes to per-utterance outputs;* A `TextParser` class that does dependency parsing;* Selective and decoupled data storage and loading;* Per-utterance calls to a transformer;* Pipelining transformers. Preliminaries: loading an existing corpus. To start, we load a clean version of a corpus. For speed we will use a 200-utterance subset of the tennis corpus.
###Code
import os
os.chdir('../..')
import convokit
from convokit import download
# OPTION 1: DOWNLOAD CORPUS
# UNCOMMENT THESE LINES TO DOWNLOAD CORPUS
# DATA_DIR = '<YOUR DIRECTORY>'
ROOT_DIR = download('tennis-corpus')
# OPTION 2: READ PREVIOUSLY-DOWNLOADED CORPUS FROM DISK
# UNCOMMENT THIS LINE AND REPLACE WITH THE DIRECTORY WHERE THE TENNIS-CORPUS IS LOCATED
# ROOT_DIR = '<YOUR DIRECTORY>'
corpus = convokit.Corpus(ROOT_DIR, utterance_end_index=199)
corpus.print_summary_stats()
# SET YOUR OWN OUTPUT DIRECTORY HERE.
OUT_DIR = '<YOUR OUTPUT DIRECTORY>'
###Output
_____no_output_____
###Markdown
Here's an example of an utterance from this corpus (questions asked to tennis players after matches, and the answers they give):
###Code
test_utt_id = '1681_14.a'
utt = corpus.get_utterance(test_utt_id)
utt.text
###Output
_____no_output_____
###Markdown
Right now, `utt.meta` contains the following fields:
###Code
utt.meta
###Output
_____no_output_____
###Markdown
The TextProcessor class Many of our transformers are per-utterance mappings of one attribute of an utterance to another. To facilitate these calls, we use a `TextProcessor` class that inherits from `Transformer`. `TextProcessor` is initialized with the following arguments:* `proc_fn`: the mapping function. Supports one of two function signatures: `proc_fn(input)` and `proc_fn(input, auxiliary_info)`. * `input_field`: the attribute of the utterance that `proc_fn` will take as input. If set to `None`, will default to reading `utt.text`, as seems to be presently done.* `output_field`: the name of the attribute that the output of `proc_fn` will be written to. * `aux_input`: any auxiliary input that `proc_fn` needs (e.g., a pre-loaded model); passed in as a dict.* `input_filter`: a boolean function of signature `input_filter(utterance, aux_input)`, where `aux_input` is again passed as a dict. If this returns `False` then the particular utterance will be skipped; by default it will always return `True`.Both `input_field` and `output_field` support multiple items -- that is, `proc_fn` could take in multiple attributes of an utterance and output multiple attributes. I'll show how this works in advanced usage, below."Attribute" is a deliberately generic term. `TextProcessor` could produce "features" as we may conventionally think of them (e.g., wordcount, politeness strategies). It can also be used to pre-process text, i.e., generate alternate representations of the text.
###Code
from convokit.text_processing import TextProcessor
###Output
_____no_output_____
###Markdown
simple example: cleaning the text As a simple example, suppose we want to remove hyphens "`--`" from the text as a preprocessing step. To use `TextProcessor` to do this for us, we'd define the following as a `proc_fn`:
###Code
def preprocess_text(text):
text = text.replace(' -- ', ' ')
return text
###Output
_____no_output_____
###Markdown
Below, we initialize `prep`, a `TextProcessor` object that will run `preprocess_text` on each utterance.When we call `prep.transform()`, the following will occur:* Because we didn't specify an input field, `prep` will pass `utterance.text` into `preprocess_text`* It will write the output -- the text minus the hyphens -- to a field called `clean_text` that will be stored in the utterance meta and that can be accessed as `utt.meta['clean_text']` or `utt.get_info('clean_text')`
###Code
prep = TextProcessor(proc_fn=preprocess_text, output_field='clean_text')
corpus = prep.transform(corpus)
###Output
_____no_output_____
###Markdown
And as desired, we now have a new field attached to `utt`.
###Code
utt.get_info('clean_text')
###Output
_____no_output_____
###Markdown
Parsing text with the TextParser class One common utterance-level thing we want to do is parse the text. In practice, in increasing order of (computational) difficulty, this typically entails:* proper tokenizing of words and sentences;* POS-tagging;* dependency-parsing. As such, we provide a `TextParser` class that inherits from `TextProcessor` to do all of this, taking in the following arguments:* `output_field`: defaults to `'parsed'`* `input_field`* `mode`: whether we want to go through all of the above steps (which may be expensive) or stop mid-way through. Supports the following options: `'tokenize'`, `'tag'`, `'parse'` (the default).Under the surface, `TextParser` actually uses two separate models: a `spacy` object that does word tokenization, tagging and parsing _per sentence_, and `nltk`'s sentence tokenizer. The rationale is:* `spacy` doesn't support sentence tokenization without dependency-parsing, and we often want sentence tokenization without having to go through the effort of parsing.* We want to be consistent (as much as possible, given changes to spacy and nltk) in the tokenizations we produce, between runs where we don't want parsing and runs where we do.If we've pre-loaded these models, we can pass them into the constructor too, as:* `spacy_nlp`* `sent_tokenizer`
###Code
from convokit.text_processing import TextParser
parser = TextParser(input_field='clean_text', verbosity=50)
corpus = parser.transform(corpus)
###Output
050/200 utterances processed
100/200 utterances processed
150/200 utterances processed
200/200 utterances processed
###Markdown
parse outputA parse produced by `TextParser` is serialized in text form. It is a list consisting of sentences, where each sentence is a dict with* `toks`: a list of tokens (i.e., words) in the sentence;* `rt`: the index of the root of the dependency tree (i.e., `sentence['toks'][sentence['rt']` gives the root)Each token, in turn, contains the following:* `tok`: the text of the token;* `tag`: the tag;* `up`: the index of the parent of the token in the dependency tree (no entry for the root);* `down`: the indices of the children of the token;* `dep`: the dependency of the edge between the token and its parent.
###Code
test_parse = utt.get_info('parsed')
test_parse[0]
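# (Illustrative addition.) Since the serialized parse is just nested lists and dicts, it can be
# walked directly; for example, printing (token, dependency, parent) triples for the first
# sentence of the parse above:
sent = test_parse[0]
for tok in sent['toks']:
    parent = sent['toks'][tok['up']]['tok'] if 'up' in tok else 'ROOT'
    print(tok['tok'], tok.get('dep'), '->', parent)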
###Output
_____no_output_____
###Markdown
If we didn't want to go through the trouble of dependency-parsing (which could be expensive) we could initialize `TextParser` with `mode='tag'`, which only POS-tags tokens:
###Code
texttagger = TextParser(output_field='tagged', input_field='clean_text', mode='tag')
corpus = texttagger.transform(corpus)
utt.get_info('tagged')[0]
###Output
_____no_output_____
###Markdown
Storing and loading corpora We've now computed a bunch of utterance-level attributes.
###Code
list(utt.meta.keys())
###Output
_____no_output_____
###Markdown
By default, calling `corpus.dump` will write all of these attributes to disk, within the file that stores utterances; later calling `corpus.load` will load all of these attributes back into a new corpus. For big objects like parses, this incurs a high computational burden (especially if in a later use case you might not even need to look at parses). To avoid this, `corpus.dump` takes an optional argument `fields_to_skip`, which is a dict of object type (`'utterance'`, `'conversation'`, `'user'`, `'corpus'`) to a list of fields that we do not want to write to disk. The following call will write the corpus to disk, without any of the preprocessing output we generated above:
###Code
corpus.dump(os.path.basename(OUT_DIR), base_path=os.path.dirname(OUT_DIR),
fields_to_skip={'utterance': ['parsed','tagged','clean_text']})
###Output
_____no_output_____
###Markdown
For attributes we want to keep around, but that we don't want to read and write to disk in a big batch with all the other corpus data, `corpus.dump_info` will dump fields of a Corpus object into separate files. This takes the following arguments as input:* `obj_type`: which type of Corpus object you're dealing with.* `fields`: a list of the fields to write. * `dir_name`: which directory to write to; by default will write to the directory you read the corpus from.This function will write each field in `fields` to a separate file called `info.<field>.jsonl` where each line of the file is a json-serialized dict: `{"id": <utterance id>, "value": <field value>}`.
###Code
corpus.dump_info('utterance',['parsed','tagged'], dir_name = OUT_DIR)
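# (Illustrative check.) Each dumped field is a standalone JSON-lines file named
# info.<field>.jsonl, one utterance per line; peeking at the first record of the parses file:
import json
with open(os.path.join(OUT_DIR, 'info.parsed.jsonl')) as f:
    first_record = json.loads(f.readline())
print(first_record['id'], type(first_record['value']))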
###Output
_____no_output_____
###Markdown
As expected, we now have the following files in the output directory:
###Code
ls $OUT_DIR
###Output
_____no_output_____
###Markdown
If we now initialize a new corpus by reading from this directory:
###Code
new_corpus = convokit.Corpus(OUT_DIR)
new_utt = new_corpus.get_utterance(test_utt_id)
###Output
_____no_output_____
###Markdown
We see that things that we've omitted in the `corpus.dump` call will not be read.
###Code
new_utt.meta.keys()
###Output
_____no_output_____
###Markdown
As a counterpart to `corpus.dump_info` we can also load auxiliary information on-demand. Here, this call will look for `info.<field>.jsonl` in the directory of `new_corpus` (or an optionally-specified `dir_name`) and attach the value specified in each line of the file to the utterance with the associated id:
###Code
new_corpus.load_info('utterance',['parsed'])
new_utt.get_info('parsed')
###Output
_____no_output_____
###Markdown
Per-utterance calls `TextProcessor` objects also support calls per-utterance via `TextProcessor.transform_utterance()`. These calls take in raw strings as well as utterances, and will return an utterance:
###Code
test_str = "I played -- a tennis match."
prep.transform_utterance(test_str)
from convokit.model import Utterance
adhoc_utt = Utterance(text=test_str)
adhoc_utt = prep.transform_utterance(adhoc_utt)
adhoc_utt.get_info('clean_text')
###Output
_____no_output_____
###Markdown
Pipelines Finally, we can string together multiple transformers, and hence `TextProcessors`, into a pipeline, using a `ConvokitPipeline` object. This is analogous to (and in fact inherits from) scikit-learn's `Pipeline` class.
###Code
from convokit.convokitPipeline import ConvokitPipeline
###Output
_____no_output_____
###Markdown
As an example, suppose we want to both clean the text and parse it. We can chain the required steps to get there by initializing `ConvokitPipeline` with a list of steps, represented as a tuple of `(, initialized transformer-like object)`:* `'prep'`, our de-hyphenator* `'parse'`, our parser
###Code
parse_pipe = ConvokitPipeline([('prep', TextProcessor(preprocess_text, 'clean_text_pipe')),
('parse', TextParser('parsed_pipe', input_field='clean_text_pipe',
verbosity=50))])
corpus = parse_pipe.transform(corpus)
utt.get_info('parsed_pipe')
###Output
_____no_output_____
###Markdown
As promised, the pipeline also works to transform utterances.
###Code
test_utt = parse_pipe.transform_utterance(test_str)
test_utt.get_info('parsed_pipe')
###Output
_____no_output_____
###Markdown
Some advanced usage: playing around with parameters The point of the following is to demonstrate more elaborate calls to `TextProcessor`. As an example, we will count words in an utterance.First, we'll initialize a `TextProcessor` that does wordcounts (i.e., `len(x.split())`) on just the raw text (`utt.text`), writing output to field `wc_raw`.
###Code
wc_raw = TextProcessor(proc_fn=lambda x: len(x.split()), output_field='wc_raw')
corpus = wc_raw.transform(corpus)
utt.get_info('wc_raw')
###Output
_____no_output_____
###Markdown
If we instead wanted to wordcount our preprocessed text, with the hyphens removed, we can specify `input_field='clean_text'` -- as such, the `TextProcessor` will read from `utt.get_info('clean_text')` instead.
###Code
wc = TextProcessor(proc_fn=lambda x: len(x.split()), output_field='wc', input_field='clean_text')
corpus = wc.transform(corpus)
###Output
_____no_output_____
###Markdown
Here we see that we are no longer counting the extra hyphen.
###Code
utt.get_info('wc')
###Output
_____no_output_____
###Markdown
Likewise, we can count characters:
###Code
chars = TextProcessor(proc_fn=lambda x: len(x), output_field='ch', input_field='clean_text')
corpus = chars.transform(corpus)
utt.get_info('ch')
###Output
_____no_output_____
###Markdown
Suppose that for some reason we now wanted to calculate:* characters per word* words per character (the reciprocal)This requires:* a `TextProcessor` that takes in multiple input fields, `'ch'` and `'wc'`;* and that writes to multiple output fields, `'char_per_word'` and `'word_per_char'`.Here's how the resultant object, `char_per_word`, handles this:* in `transform()`, we pass `proc_fn` a dict mapping input field name to value, e.g., `{'wc': 22, 'ch': 120}`* `proc_fn` will be written to return a tuple, where each element of that tuple corresponds to each element of the list we've passed to `output_field`, e.g., ```out0, out1 = proc_fn(input); utt.set_info('char_per_word', out0); utt.set_info('word_per_char', out1)```
###Code
char_per_word = TextProcessor(proc_fn=lambda x: (x['ch']/x['wc'], x['wc']/x['ch']),
output_field=['char_per_word', 'word_per_char'], input_field=['ch','wc'])
corpus = char_per_word.transform(corpus)
utt.get_info('char_per_word')
utt.get_info('word_per_char')
###Output
_____no_output_____
###Markdown
Some advanced usage: input filters Just for the sake of demonstration, suppose we wished to save some computation time and only parse the questions in a corpus. We can do this by specifying `input_filter` (which, recall discussion above, takes as argument an `Utterance` object).
###Code
def is_question(utt, aux={}):
return utt.meta['is_question']
qparser = TextParser(output_field='qparsed', input_field='clean_text', input_filter=is_question, verbosity=50)
corpus = qparser.transform(corpus)
###Output
050/200 utterances processed
100/200 utterances processed
150/200 utterances processed
200/200 utterances processed
###Markdown
Since our test utterance is not a question, `qparser.transform()` will skip over it, and hence the utterance won't have the 'qparsed' attribute (and `get_info` returns `None`):
###Code
utt.get_info('qparsed')
###Output
_____no_output_____
###Markdown
However, if we take an utterance that's a question, we see that it is indeed parsed:
###Code
q_utt_id = '1681_14.q'
q_utt = corpus.get_utterance(q_utt_id)
q_utt.text
q_utt.get_info('qparsed')
###Output
_____no_output_____
###Markdown
This notebook demos some functionality in ConvoKit to preprocess text, and store the results. In particular, it shows examples of:* A `TextProcessor` base class that maps per-utterance attributes to per-utterance outputs;* A `TextParser` class that does dependency parsing;* Selective and decoupled data storage and loading;* Per-utterance calls to a transformer;* Pipelining transformers. Preliminaries: loading an existing corpus. To start, we load a clean version of a corpus. For speed we will use a 200-utterance subset of the tennis corpus.
###Code
import os
import convokit
from convokit import download, Speaker
# OPTION 1: DOWNLOAD CORPUS
# UNCOMMENT THESE LINES TO DOWNLOAD CORPUS
# DATA_DIR = '<YOUR DIRECTORY>'
# ROOT_DIR = download('tennis-corpus')
# OPTION 2: READ PREVIOUSLY-DOWNLOADED CORPUS FROM DISK
# UNCOMMENT THIS LINE AND REPLACE WITH THE DIRECTORY WHERE THE TENNIS-CORPUS IS LOCATED
# ROOT_DIR = '<YOUR DIRECTORY>'
corpus = convokit.Corpus(ROOT_DIR, utterance_end_index=199)
corpus.print_summary_stats()
# SET YOUR OWN OUTPUT DIRECTORY HERE.
# OUT_DIR = '<YOUR DIRECTORY>'
###Output
_____no_output_____
###Markdown
Here's an example of an utterance from this corpus (questions asked to tennis players after matches, and the answers they give):
###Code
test_utt_id = '1681_14.a'
utt = corpus.get_utterance(test_utt_id)
utt.text
###Output
_____no_output_____
###Markdown
Right now, `utt.meta` contains the following fields:
###Code
utt.meta
###Output
_____no_output_____
###Markdown
The TextProcessor class Many of our transformers are per-utterance mappings of one attribute of an utterance to another. To facilitate these calls, we use a `TextProcessor` class that inherits from `Transformer`. `TextProcessor` is initialized with the following arguments:* `proc_fn`: the mapping function. Supports one of two function signatures: `proc_fn(input)` and `proc_fn(input, auxiliary_info)`. * `input_field`: the attribute of the utterance that `proc_fn` will take as input. If set to `None`, will default to reading `utt.text`, as seems to be presently done.* `output_field`: the name of the attribute that the output of `proc_fn` will be written to. * `aux_input`: any auxiliary input that `proc_fn` needs (e.g., a pre-loaded model); passed in as a dict.* `input_filter`: a boolean function of signature `input_filter(utterance, aux_input)`, where `aux_input` is again passed as a dict. If this returns `False` then the particular utterance will be skipped; by default it will always return `True`.Both `input_field` and `output_field` support multiple items -- that is, `proc_fn` could take in multiple attributes of an utterance and output multiple attributes. I'll show how this works in advanced usage, below."Attribute" is a deliberately generic term. `TextProcessor` could produce "features" as we may conventionally think of them (e.g., wordcount, politeness strategies). It can also be used to pre-process text, i.e., generate alternate representations of the text.
###Code
from convokit.text_processing import TextProcessor
###Output
_____no_output_____
###Markdown
simple example: cleaning the text As a simple example, suppose we want to remove hyphens "`--`" from the text as a preprocessing step. To use `TextProcessor` to do this for us, we'd define the following as a `proc_fn`:
###Code
def preprocess_text(text):
text = text.replace(' -- ', ' ')
return text
###Output
_____no_output_____
###Markdown
Below, we initialize `prep`, a `TextProcessor` object that will run `preprocess_text` on each utterance.When we call `prep.transform()`, the following will occur:* Because we didn't specify an input field, `prep` will pass `utterance.text` into `preprocess_text`* It will write the output -- the text minus the hyphens -- to a field called `clean_text` that will be stored in the utterance meta and that can be accessed as `utt.meta['clean_text']` or `utt.get_info('clean_text')`
###Code
prep = TextProcessor(proc_fn=preprocess_text, output_field='clean_text')
corpus = prep.transform(corpus)
###Output
_____no_output_____
###Markdown
And as desired, we now have a new field attached to `utt`.
###Code
utt.retrieve_meta('clean_text')
###Output
_____no_output_____
###Markdown
Parsing text with the TextParser class One common utterance-level thing we want to do is parse the text. In practice, in increasing order of (computational) difficulty, this typically entails:* proper tokenizing of words and sentences;* POS-tagging;* dependency-parsing. As such, we provide a `TextParser` class that inherits from `TextProcessor` to do all of this, taking in the following arguments:* `output_field`: defaults to `'parsed'`* `input_field`* `mode`: whether we want to go through all of the above steps (which may be expensive) or stop mid-way through. Supports the following options: `'tokenize'`, `'tag'`, `'parse'` (the default).Under the surface, `TextParser` actually uses two separate models: a `spacy` object that does word tokenization, tagging and parsing _per sentence_, and `nltk`'s sentence tokenizer. The rationale is:* `spacy` doesn't support sentence tokenization without dependency-parsing, and we often want sentence tokenization without having to go through the effort of parsing.* We want to be consistent (as much as possible, given changes to spacy and nltk) in the tokenizations we produce, between runs where we don't want parsing and runs where we do.If we've pre-loaded these models, we can pass them into the constructor too, as:* `spacy_nlp`* `sent_tokenizer`
###Code
from convokit.text_processing import TextParser
parser = TextParser(input_field='clean_text', verbosity=50)
corpus = parser.transform(corpus)
###Output
050/200 utterances processed
100/200 utterances processed
150/200 utterances processed
200/200 utterances processed
###Markdown
parse output A parse produced by `TextParser` is serialized in text form. It is a list consisting of sentences, where each sentence is a dict with* `toks`: a list of tokens (i.e., words) in the sentence;* `rt`: the index of the root of the dependency tree (i.e., `sentence['toks'][sentence['rt']]` gives the root)Each token, in turn, contains the following:* `tok`: the text of the token;* `tag`: the tag;* `up`: the index of the parent of the token in the dependency tree (no entry for the root);* `down`: the indices of the children of the token;* `dep`: the dependency of the edge between the token and its parent.
###Code
test_parse = utt.retrieve_meta('parsed')
test_parse[0]
###Output
_____no_output_____
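###Markdown
As a quick illustration of navigating this structure (a minimal sketch relying only on the fields listed above), we can pull out the dependency root of each sentence:
###Code
# 'rt' indexes into 'toks'; each token dict carries its text ('tok') and tag ('tag')
def sentence_roots(parse):
    return [(sent['toks'][sent['rt']]['tok'], sent['toks'][sent['rt']]['tag']) for sent in parse]

sentence_roots(test_parse)
###Output
_____no_output_____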
###Markdown
If we didn't want to go through the trouble of dependency-parsing (which could be expensive) we could initialize `TextParser` with `mode='tag'`, which only POS-tags tokens:
###Code
texttagger = TextParser(output_field='tagged', input_field='clean_text', mode='tag')
corpus = texttagger.transform(corpus)
utt.retrieve_meta('tagged')[0]
###Output
_____no_output_____
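###Markdown
Similarly, if all we need are sentence and word boundaries, we could stop even earlier with `mode='tokenize'` (a small sketch; the `tokenized` field name is just for illustration):
###Code
texttokenizer = TextParser(output_field='tokenized', input_field='clean_text', mode='tokenize')
corpus = texttokenizer.transform(corpus)
utt.retrieve_meta('tokenized')[0]
###Output
_____no_output_____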
###Markdown
Storing and loading corpora We've now computed a bunch of utterance-level attributes.
###Code
list(utt.meta.keys())
###Output
_____no_output_____
###Markdown
By default, calling `corpus.dump` will write all of these attributes to disk, within the file that stores utterances; later calling `corpus.load` will load all of these attributes back into a new corpus. For big objects like parses, this incurs a high computational burden (especially if in a later use case you might not even need to look at parses). To avoid this, `corpus.dump` takes an optional argument `fields_to_skip`, which is a dict of object type (`'utterance'`, `'conversation'`, `'speaker'`, `'corpus'`) to a list of fields that we do not want to write to disk. The following call will write the corpus to disk, without any of the preprocessing output we generated above:
###Code
corpus.dump(os.path.basename(OUT_DIR), base_path=os.path.dirname(OUT_DIR),
fields_to_skip={'utterance': ['parsed','tagged','clean_text']})
###Output
_____no_output_____
###Markdown
For attributes we want to keep around, but that we don't want to read and write to disk in a big batch with all the other corpus data, `corpus.dump_info` will dump fields of a Corpus object into separate files. This takes the following arguments as input:* `obj_type`: which type of Corpus object you're dealing with.* `fields`: a list of the fields to write. * `dir_name`: which directory to write to; by default will write to the directory you read the corpus from. This function will write each field in `fields` to a separate file called `info.<field>.jsonl`, where each line of the file is a json-serialized dict: `{"id": <id>, "value": <value>}`.
###Code
corpus.dump_info('utterance',['parsed','tagged'], dir_name = OUT_DIR)
###Output
_____no_output_____
###Markdown
As expected, we now have the following files in the output directory:
###Code
ls $OUT_DIR
###Output
conversations.json index.json info.tagged.jsonl users.json
corpus.json info.parsed.jsonl speakers.json utterances.jsonl
###Markdown
If we now initialize a new corpus by reading from this directory:
###Code
new_corpus = convokit.Corpus(OUT_DIR)
new_utt = new_corpus.get_utterance(test_utt_id)
###Output
_____no_output_____
###Markdown
We see that things that we've omitted in the `corpus.dump` call will not be read.
###Code
new_utt.meta.keys()
###Output
_____no_output_____
###Markdown
As a counterpart to `corpus.dump_info` we can also load auxiliary information on-demand. Here, this call will look for `info.<field>.jsonl` in the directory of `new_corpus` (or an optionally-specified `dir_name`) and attach the value specified in each line of the file to the utterance with the associated id:
###Code
new_corpus.load_info('utterance',['parsed'])
new_utt.retrieve_meta('parsed')
###Output
_____no_output_____
###Markdown
Per-utterance calls `TextProcessor` objects also support calls per-utterance via `TextProcessor.transform_utterance()`. These calls take in raw strings as well as utterances, and will return an utterance:
###Code
test_str = "I played -- a tennis match."
prep.transform_utterance(test_str)
from convokit.model import Utterance
adhoc_utt = Utterance(text=test_str)
adhoc_utt = prep.transform_utterance(adhoc_utt)
adhoc_utt.retrieve_meta('clean_text')
###Output
_____no_output_____
###Markdown
Pipelines Finally, we can string together multiple transformers, and hence `TextProcessors`, into a pipeline, using a `ConvokitPipeline` object. This is analogous to (and in fact inherits from) scikit-learn's `Pipeline` class.
###Code
from convokit.convokitPipeline import ConvokitPipeline
###Output
_____no_output_____
###Markdown
As an example, suppose we want to both clean the text and parse it. We can chain the required steps to get there by initializing `ConvokitPipeline` with a list of steps, represented as a tuple of `(<name>, initialized transformer-like object)`:* `'prep'`, our de-hyphenator* `'parse'`, our parser
###Code
parse_pipe = ConvokitPipeline([('prep', TextProcessor(preprocess_text, 'clean_text_pipe')),
('parse', TextParser('parsed_pipe', input_field='clean_text_pipe',
verbosity=50))])
corpus = parse_pipe.transform(corpus)
utt.retrieve_meta('parsed_pipe')
###Output
_____no_output_____
###Markdown
As promised, the pipeline also works to transform utterances.
###Code
test_utt = parse_pipe.transform_utterance(test_str)
test_utt.retrieve_meta('parsed_pipe')
###Output
_____no_output_____
###Markdown
Some advanced usage: playing around with parameters The point of the following is to demonstrate more elaborate calls to `TextProcessor`. As an example, we will count words in an utterance.First, we'll initialize a `TextProcessor` that does wordcounts (i.e., `len(x.split())`) on just the raw text (`utt.text`), writing output to field `wc_raw`.
###Code
wc_raw = TextProcessor(proc_fn=lambda x: len(x.split()), output_field='wc_raw')
corpus = wc_raw.transform(corpus)
utt.retrieve_meta('wc_raw')
###Output
_____no_output_____
###Markdown
If we instead wanted to wordcount our preprocessed text, with the hyphens removed, we can specify `input_field='clean_text'` -- as such, the `TextProcessor` will read from `utt.get_info('clean_text')` instead.
###Code
wc = TextProcessor(proc_fn=lambda x: len(x.split()), output_field='wc', input_field='clean_text')
corpus = wc.transform(corpus)
###Output
_____no_output_____
###Markdown
Here we see that we are no longer counting the extra hyphen.
###Code
utt.retrieve_meta('wc')
###Output
_____no_output_____
###Markdown
Likewise, we can count characters:
###Code
chars = TextProcessor(proc_fn=lambda x: len(x), output_field='ch', input_field='clean_text')
corpus = chars.transform(corpus)
utt.retrieve_meta('ch')
###Output
_____no_output_____
###Markdown
Suppose that for some reason we now wanted to calculate:* characters per word* words per character (the reciprocal)This requires:* a `TextProcessor` that takes in multiple input fields, `'ch'` and `'wc'`;* and that writes to multiple output fields, `'char_per_word'` and `'word_per_char'`.Here's how the resultant object, `char_per_word`, handles this:* in `transform()`, we pass `proc_fn` a dict mapping input field name to value, e.g., `{'wc': 22, 'ch': 120}`* `proc_fn` will be written to return a tuple, where each element of that tuple corresponds to each element of the list we've passed to `output_field`, e.g., ```out0, out1 = proc_fn(input); utt.set_info('char_per_word', out0); utt.set_info('word_per_char', out1)```
###Code
char_per_word = TextProcessor(proc_fn=lambda x: (x['ch']/x['wc'], x['wc']/x['ch']),
output_field=['char_per_word', 'word_per_char'], input_field=['ch','wc'])
corpus = char_per_word.transform(corpus)
utt.retrieve_meta('char_per_word')
utt.retrieve_meta('word_per_char')
###Output
_____no_output_____
###Markdown
Some advanced usage: input filters Just for the sake of demonstration, suppose we wished to save some computation time and only parse the questions in a corpus. We can do this by specifying `input_filter` (which, recall discussion above, takes as argument an `Utterance` object).
###Code
def is_question(utt, aux={}):
return utt.meta['is_question']
qparser = TextParser(output_field='qparsed', input_field='clean_text', input_filter=is_question, verbosity=50)
corpus = qparser.transform(corpus)
###Output
050/200 utterances processed
100/200 utterances processed
150/200 utterances processed
200/200 utterances processed
###Markdown
Since our test utterance is not a question, `qparser.transform()` will skip over it, and hence the utterance won't have the 'qparsed' attribute (and `retrieve_meta` returns `None`):
###Code
utt.retrieve_meta('qparsed')
###Output
_____no_output_____
###Markdown
However, if we take an utterance that's a question, we see that it is indeed parsed:
###Code
q_utt_id = '1681_14.q'
q_utt = corpus.get_utterance(q_utt_id)
q_utt.text
q_utt.retrieve_meta('qparsed')
###Output
_____no_output_____
###Markdown
This notebook demos some functionality in ConvoKit to preprocess text, and store the results. In particular, it shows examples of:* A `TextProcessor` base class that maps per-utterance attributes to per-utterance outputs;* A `TextParser` class that does dependency parsing;* Selective and decoupled data storage and loading;* Per-utterance calls to a transformer;* Pipelining transformers. Preliminaries: loading an existing corpus. To start, we load a clean version of a corpus. For speed we will use a 200-utterance subset of the tennis corpus.
###Code
import os
import convokit
from convokit import download
# OPTION 1: DOWNLOAD CORPUS
# UNCOMMENT THESE LINES TO DOWNLOAD CORPUS
# DATA_DIR = '<YOUR DIRECTORY>'
# ROOT_DIR = download('tennis-corpus')
# OPTION 2: READ PREVIOUSLY-DOWNLOADED CORPUS FROM DISK
# UNCOMMENT THIS LINE AND REPLACE WITH THE DIRECTORY WHERE THE TENNIS-CORPUS IS LOCATED
# ROOT_DIR = '<YOUR DIRECTORY>'
corpus = convokit.Corpus(ROOT_DIR, utterance_end_index=199)
corpus.print_summary_stats()
# SET YOUR OWN OUTPUT DIRECTORY HERE.
OUT_DIR = '<YOUR DIRECTORY>'
###Output
_____no_output_____
###Markdown
Here's an example of an utterance from this corpus (questions asked to tennis players after matches, and the answers they give):
###Code
test_utt_id = '1681_14.a'
utt = corpus.get_utterance(test_utt_id)
utt.text
###Output
_____no_output_____
###Markdown
Right now, `utt.meta` contains the following fields:
###Code
utt.meta
###Output
_____no_output_____
###Markdown
The TextProcessor class Many of our transformers are per-utterance mappings of one attribute of an utterance to another. To facilitate these calls, we use a `TextProcessor` class that inherits from `Transformer`. `TextProcessor` is initialized with the following arguments:* `proc_fn`: the mapping function. Supports one of two function signatures: `proc_fn(input)` and `proc_fn(input, auxiliary_info)`. * `input_field`: the attribute of the utterance that `proc_fn` will take as input. If set to `None`, will default to reading `utt.text`, as seems to be presently done.* `output_field`: the name of the attribute that the output of `proc_fn` will be written to. * `aux_input`: any auxiliary input that `proc_fn` needs (e.g., a pre-loaded model); passed in as a dict.* `input_filter`: a boolean function of signature `input_filter(utterance, aux_input)`, where `aux_input` is again passed as a dict. If this returns `False` then the particular utterance will be skipped; by default it will always return `True`.Both `input_field` and `output_field` support multiple items -- that is, `proc_fn` could take in multiple attributes of an utterance and output multiple attributes. I'll show how this works in advanced usage, below."Attribute" is a deliberately generic term. `TextProcessor` could produce "features" as we may conventionally think of them (e.g., wordcount, politeness strategies). It can also be used to pre-process text, i.e., generate alternate representations of the text.
###Code
from convokit.text_processing import TextProcessor
###Output
_____no_output_____
###Markdown
simple example: cleaning the text As a simple example, suppose we want to remove hyphens "`--`" from the text as a preprocessing step. To use `TextProcessor` to do this for us, we'd define the following as a `proc_fn`:
###Code
def preprocess_text(text):
text = text.replace(' -- ', ' ')
return text
###Output
_____no_output_____
###Markdown
Below, we initialize `prep`, a `TextProcessor` object that will run `preprocess_text` on each utterance.When we call `prep.transform()`, the following will occur:* Because we didn't specify an input field, `prep` will pass `utterance.text` into `preprocess_text`* It will write the output -- the text minus the hyphens -- to a field called `clean_text` that will be stored in the utterance meta and that can be accessed as `utt.meta['clean_text']` or `utt.get_info('clean_text')`
###Code
prep = TextProcessor(proc_fn=preprocess_text, output_field='clean_text')
corpus = prep.transform(corpus)
###Output
_____no_output_____
###Markdown
And as desired, we now have a new field attached to `utt`.
###Code
utt.get_info('clean_text')
###Output
_____no_output_____
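###Markdown
The two-argument `proc_fn` signature described earlier lets us pass extra resources through `aux_input`. As a minimal sketch (the `defilled_text` field and the filler list are illustrative, not part of the original demo):
###Code
def remove_fillers(text, aux):
    # aux is the aux_input dict supplied when the TextProcessor is constructed
    out = text
    for filler in aux['fillers']:
        out = out.replace(filler, ' ')
    return out

defiller = TextProcessor(proc_fn=remove_fillers, output_field='defilled_text',
                         aux_input={'fillers': [' -- ', ' um ']})
corpus = defiller.transform(corpus)
utt.get_info('defilled_text')
###Output
_____no_output_____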
###Markdown
Parsing text with the TextParser class One common utterance-level thing we want to do is parse the text. In practice, in increasing order of (computational) difficulty, this typically entails:* proper tokenizing of words and sentences;* POS-tagging;* dependency-parsing. As such, we provide a `TextParser` class that inherits from `TextProcessor` to do all of this, taking in the following arguments:* `output_field`: defaults to `'parsed'`* `input_field`* `mode`: whether we want to go through all of the above steps (which may be expensive) or stop mid-way through. Supports the following options: `'tokenize'`, `'tag'`, `'parse'` (the default).Under the surface, `TextParser` actually uses two separate models: a `spacy` object that does word tokenization, tagging and parsing _per sentence_, and `nltk`'s sentence tokenizer. The rationale is:* `spacy` doesn't support sentence tokenization without dependency-parsing, and we often want sentence tokenization without having to go through the effort of parsing.* We want to be consistent (as much as possible, given changes to spacy and nltk) in the tokenizations we produce, between runs where we don't want parsing and runs where we do.If we've pre-loaded these models, we can pass them into the constructor too, as:* `spacy_nlp`* `sent_tokenizer`
###Code
from convokit.text_processing import TextParser
parser = TextParser(input_field='clean_text', verbosity=50)
corpus = parser.transform(corpus)
###Output
050/200 utterances processed
100/200 utterances processed
150/200 utterances processed
200/200 utterances processed
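###Markdown
Since parsing is the slowest step here, the `spacy_nlp` and `sent_tokenizer` arguments mentioned above let us load the underlying models once and share them across parsers. A minimal sketch (assuming spacy's `en_core_web_sm` model and nltk's punkt tokenizer are installed; the variable names are just for illustration):
###Code
import spacy
import nltk

# pre-load the models once, then hand them to any parser that needs them
spacy_nlp = spacy.load('en_core_web_sm', disable=['ner'])
sent_tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
shared_parser = TextParser(input_field='clean_text', spacy_nlp=spacy_nlp,
                           sent_tokenizer=sent_tokenizer, verbosity=50)
###Output
_____no_output_____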
###Markdown
parse output A parse produced by `TextParser` is serialized in text form. It is a list consisting of sentences, where each sentence is a dict with* `toks`: a list of tokens (i.e., words) in the sentence;* `rt`: the index of the root of the dependency tree (i.e., `sentence['toks'][sentence['rt']]` gives the root)Each token, in turn, contains the following:* `tok`: the text of the token;* `tag`: the tag;* `up`: the index of the parent of the token in the dependency tree (no entry for the root);* `down`: the indices of the children of the token;* `dep`: the dependency of the edge between the token and its parent.
###Code
test_parse = utt.get_info('parsed')
test_parse[0]
###Output
_____no_output_____
###Markdown
If we didn't want to go through the trouble of dependency-parsing (which could be expensive) we could initialize `TextParser` with `mode='tag'`, which only POS-tags tokens:
###Code
texttagger = TextParser(output_field='tagged', input_field='clean_text', mode='tag')
corpus = texttagger.transform(corpus)
utt.get_info('tagged')[0]
###Output
_____no_output_____
###Markdown
Storing and loading corpora We've now computed a bunch of utterance-level attributes.
###Code
list(utt.meta.keys())
###Output
_____no_output_____
###Markdown
By default, calling `corpus.dump` will write all of these attributes to disk, within the file that stores utterances; later calling `corpus.load` will load all of these attributes back into a new corpus. For big objects like parses, this incurs a high computational burden (especially if in a later use case you might not even need to look at parses). To avoid this, `corpus.dump` takes an optional argument `fields_to_skip`, which is a dict of object type (`'utterance'`, `'conversation'`, `'speaker'`, `'corpus'`) to a list of fields that we do not want to write to disk. The following call will write the corpus to disk, without any of the preprocessing output we generated above:
###Code
corpus.dump(os.path.basename(OUT_DIR), base_path=os.path.dirname(OUT_DIR),
fields_to_skip={'utterance': ['parsed','tagged','clean_text']})
###Output
_____no_output_____
###Markdown
For attributes we want to keep around, but that we don't want to read and write to disk in a big batch with all the other corpus data, `corpus.dump_info` will dump fields of a Corpus object into separate files. This takes the following arguments as input:* `obj_type`: which type of Corpus object you're dealing with.* `fields`: a list of the fields to write. * `dir_name`: which directory to write to; by default will write to the directory you read the corpus from. This function will write each field in `fields` to a separate file called `info.<field>.jsonl`, where each line of the file is a json-serialized dict: `{"id": <id>, "value": <value>}`.
###Code
corpus.dump_info('utterance',['parsed','tagged'], dir_name = OUT_DIR)
###Output
_____no_output_____
###Markdown
As expected, we now have the following files in the output directory:
###Code
ls $OUT_DIR
###Output
conversations.json index.json info.tagged.jsonl utterances.jsonl
corpus.json info.parsed.jsonl speakers.json
###Markdown
If we now initialize a new corpus by reading from this directory:
###Code
new_corpus = convokit.Corpus(OUT_DIR)
new_utt = new_corpus.get_utterance(test_utt_id)
###Output
_____no_output_____
###Markdown
We see that things that we've omitted in the `corpus.dump` call will not be read.
###Code
new_utt.meta.keys()
###Output
_____no_output_____
###Markdown
As a counterpart to `corpus.dump_info` we can also load auxiliary information on-demand. Here, this call will look for `info.<field>.jsonl` in the directory of `new_corpus` (or an optionally-specified `dir_name`) and attach the value specified in each line of the file to the utterance with the associated id:
###Code
new_corpus.load_info('utterance',['parsed'])
new_utt.get_info('parsed')
###Output
_____no_output_____
###Markdown
Per-utterance calls `TextProcessor` objects also support calls per-utterance via `TextProcessor.transform_utterance()`. These calls take in raw strings as well as utterances, and will return an utterance:
###Code
test_str = "I played -- a tennis match."
prep.transform_utterance(test_str)
from convokit.model import Utterance
adhoc_utt = Utterance(text=test_str)
adhoc_utt = prep.transform_utterance(adhoc_utt)
adhoc_utt.get_info('clean_text')
###Output
_____no_output_____
###Markdown
Pipelines Finally, we can string together multiple transformers, and hence `TextProcessors`, into a pipeline, using a `ConvokitPipeline` object. This is analogous to (and in fact inherits from) scikit-learn's `Pipeline` class.
###Code
from convokit.convokitPipeline import ConvokitPipeline
###Output
_____no_output_____
###Markdown
As an example, suppose we want to both clean the text and parse it. We can chain the required steps to get there by initializing `ConvokitPipeline` with a list of steps, represented as a tuple of `(<name>, initialized transformer-like object)`:* `'prep'`, our de-hyphenator* `'parse'`, our parser
###Code
parse_pipe = ConvokitPipeline([('prep', TextProcessor(preprocess_text, 'clean_text_pipe')),
('parse', TextParser('parsed_pipe', input_field='clean_text_pipe',
verbosity=50))])
corpus = parse_pipe.transform(corpus)
utt.get_info('parsed_pipe')
###Output
_____no_output_____
###Markdown
As promised, the pipeline also works to transform utterances.
###Code
test_utt = parse_pipe.transform_utterance(test_str)
test_utt.get_info('parsed_pipe')
###Output
_____no_output_____
###Markdown
Some advanced usage: playing around with parameters The point of the following is to demonstrate more elaborate calls to `TextProcessor`. As an example, we will count words in an utterance.First, we'll initialize a `TextProcessor` that does wordcounts (i.e., `len(x.split())`) on just the raw text (`utt.text`), writing output to field `wc_raw`.
###Code
wc_raw = TextProcessor(proc_fn=lambda x: len(x.split()), output_field='wc_raw')
corpus = wc_raw.transform(corpus)
utt.get_info('wc_raw')
###Output
_____no_output_____
###Markdown
If we instead wanted to wordcount our preprocessed text, with the hyphens removed, we can specify `input_field='clean_text'` -- as such, the `TextProcessor` will read from `utt.get_info('clean_text')` instead.
###Code
wc = TextProcessor(proc_fn=lambda x: len(x.split()), output_field='wc', input_field='clean_text')
corpus = wc.transform(corpus)
###Output
_____no_output_____
###Markdown
Here we see that we are no longer counting the extra hyphen.
###Code
utt.get_info('wc')
###Output
_____no_output_____
###Markdown
Likewise, we can count characters:
###Code
chars = TextProcessor(proc_fn=lambda x: len(x), output_field='ch', input_field='clean_text')
corpus = chars.transform(corpus)
utt.get_info('ch')
###Output
_____no_output_____
###Markdown
Suppose that for some reason we now wanted to calculate:* characters per word* words per character (the reciprocal)This requires:* a `TextProcessor` that takes in multiple input fields, `'ch'` and `'wc'`;* and that writes to multiple output fields, `'char_per_word'` and `'word_per_char'`.Here's how the resultant object, `char_per_word`, handles this:* in `transform()`, we pass `proc_fn` a dict mapping input field name to value, e.g., `{'wc': 22, 'ch': 120}`* `proc_fn` will be written to return a tuple, where each element of that tuple corresponds to each element of the list we've passed to `output_field`, e.g., ```out0, out1 = proc_fn(input); utt.set_info('char_per_word', out0); utt.set_info('word_per_char', out1)```
###Code
char_per_word = TextProcessor(proc_fn=lambda x: (x['ch']/x['wc'], x['wc']/x['ch']),
output_field=['char_per_word', 'word_per_char'], input_field=['ch','wc'])
corpus = char_per_word.transform(corpus)
utt.get_info('char_per_word')
utt.get_info('word_per_char')
###Output
_____no_output_____
###Markdown
Some advanced usage: input filters Just for the sake of demonstration, suppose we wished to save some computation time and only parse the questions in a corpus. We can do this by specifying `input_filter` (which, recall discussion above, takes as argument an `Utterance` object).
###Code
def is_question(utt, aux={}):
return utt.meta['is_question']
qparser = TextParser(output_field='qparsed', input_field='clean_text', input_filter=is_question, verbosity=50)
corpus = qparser.transform(corpus)
###Output
050/200 utterances processed
100/200 utterances processed
150/200 utterances processed
200/200 utterances processed
###Markdown
Since our test utterance is not a question, `qparser.transform()` will skip over it, and hence the utterance won't have the 'qparsed' attribute (and `get_info` returns `None`):
###Code
utt.get_info('qparsed')
###Output
_____no_output_____
###Markdown
However, if we take an utterance that's a question, we see that it is indeed parsed:
###Code
q_utt_id = '1681_14.q'
q_utt = corpus.get_utterance(q_utt_id)
q_utt.text
q_utt.get_info('qparsed')
###Output
_____no_output_____ |
PennyLane/Data Reuploading Classifier/4_QConv2ent_QFC2_Branch (best).ipynb | ###Markdown
Loading Raw Data
###Code
import tensorflow as tf
import numpy as np  # re-bound to pennylane's wrapped numpy in a later cell

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[:, 0:27, 0:27]
x_test = x_test[:, 0:27, 0:27]
x_train_flatten = x_train.reshape(x_train.shape[0], x_train.shape[1]*x_train.shape[2])/255.0
x_test_flatten = x_test.reshape(x_test.shape[0], x_test.shape[1]*x_test.shape[2])/255.0
print(x_train_flatten.shape, y_train.shape)
print(x_test_flatten.shape, y_test.shape)
x_train_0 = x_train_flatten[y_train == 0]
x_train_1 = x_train_flatten[y_train == 1]
x_train_2 = x_train_flatten[y_train == 2]
x_train_3 = x_train_flatten[y_train == 3]
x_train_4 = x_train_flatten[y_train == 4]
x_train_5 = x_train_flatten[y_train == 5]
x_train_6 = x_train_flatten[y_train == 6]
x_train_7 = x_train_flatten[y_train == 7]
x_train_8 = x_train_flatten[y_train == 8]
x_train_9 = x_train_flatten[y_train == 9]
x_train_list = [x_train_0, x_train_1, x_train_2, x_train_3, x_train_4, x_train_5, x_train_6, x_train_7, x_train_8, x_train_9]
print(x_train_0.shape)
print(x_train_1.shape)
print(x_train_2.shape)
print(x_train_3.shape)
print(x_train_4.shape)
print(x_train_5.shape)
print(x_train_6.shape)
print(x_train_7.shape)
print(x_train_8.shape)
print(x_train_9.shape)
x_test_0 = x_test_flatten[y_test == 0]
x_test_1 = x_test_flatten[y_test == 1]
x_test_2 = x_test_flatten[y_test == 2]
x_test_3 = x_test_flatten[y_test == 3]
x_test_4 = x_test_flatten[y_test == 4]
x_test_5 = x_test_flatten[y_test == 5]
x_test_6 = x_test_flatten[y_test == 6]
x_test_7 = x_test_flatten[y_test == 7]
x_test_8 = x_test_flatten[y_test == 8]
x_test_9 = x_test_flatten[y_test == 9]
x_test_list = [x_test_0, x_test_1, x_test_2, x_test_3, x_test_4, x_test_5, x_test_6, x_test_7, x_test_8, x_test_9]
print(x_test_0.shape)
print(x_test_1.shape)
print(x_test_2.shape)
print(x_test_3.shape)
print(x_test_4.shape)
print(x_test_5.shape)
print(x_test_6.shape)
print(x_test_7.shape)
print(x_test_8.shape)
print(x_test_9.shape)
###Output
(980, 729)
(1135, 729)
(1032, 729)
(1010, 729)
(982, 729)
(892, 729)
(958, 729)
(1028, 729)
(974, 729)
(1009, 729)
###Markdown
Selecting the dataset Output: X_train, Y_train, X_test, Y_test
###Code
n_train_sample_per_class = 200
n_class = 4
X_train = x_train_list[0][:n_train_sample_per_class, :]
Y_train = np.zeros((X_train.shape[0]*n_class,), dtype=int)
for i in range(n_class-1):
X_train = np.concatenate((X_train, x_train_list[i+1][:n_train_sample_per_class, :]), axis=0)
Y_train[(i+1)*n_train_sample_per_class:(i+2)*n_train_sample_per_class] = i+1
X_train.shape, Y_train.shape
n_test_sample_per_class = int(0.25*n_train_sample_per_class)
X_test = x_test_list[0][:n_test_sample_per_class, :]
Y_test = np.zeros((X_test.shape[0]*n_class,), dtype=int)
for i in range(n_class-1):
X_test = np.concatenate((X_test, x_test_list[i+1][:n_test_sample_per_class, :]), axis=0)
Y_test[(i+1)*n_test_sample_per_class:(i+2)*n_test_sample_per_class] = i+1
X_test.shape, Y_test.shape
###Output
_____no_output_____
###Markdown
Dataset Preprocessing
###Code
X_train = X_train.reshape(X_train.shape[0], 27, 27)
X_test = X_test.reshape(X_test.shape[0], 27, 27)
X_train.shape, X_test.shape
Y_train_dict = []
for i in range(np.unique(Y_train).shape[0]):
temp_Y = np.zeros(Y_train.shape)
temp_Y[Y_train == i] = 0 # positive class
temp_Y[Y_train != i] = 1 # negative class
temp_Y = to_categorical(temp_Y)
Y_train_dict += [('Y' + str(i), temp_Y)]
Y_train_dict = dict(Y_train_dict)
Y_test_dict = []
for i in range(np.unique(Y_test).shape[0]):
temp_Y = np.zeros(Y_test.shape)
temp_Y[Y_test == i] = 0 # positive class
temp_Y[Y_test != i] = 1 # negative class
temp_Y = to_categorical(temp_Y)
Y_test_dict += [('Y' + str(i), temp_Y)]
Y_test_dict = dict(Y_test_dict)
Y_train_dict['Y1'].shape, Y_test_dict['Y0'].shape
###Output
_____no_output_____
###Markdown
Quantum
###Code
import pennylane as qml
from pennylane import numpy as np
from pennylane.optimize import AdamOptimizer, GradientDescentOptimizer
qml.enable_tape()
from tensorflow.keras.utils import to_categorical
# Set a random seed
np.random.seed(2020)
# Define output labels as quantum state vectors
def density_matrix(state):
"""Calculates the density matrix representation of a state.
Args:
state (array[complex]): array representing a quantum state vector
Returns:
dm: (array[complex]): array representing the density matrix
"""
return state * np.conj(state).T
label_0 = [[1], [0]]
label_1 = [[0], [1]]
state_labels = [label_0, label_1]
n_qubits = 2
dev_fc = qml.device("default.qubit", wires=n_qubits)
@qml.qnode(dev_fc)
def q_fc(params, inputs):
"""A variational quantum circuit representing the DRC.
Args:
params (array[float]): array of parameters
inputs = [x, y]
x (array[float]): 1-d input vector
y (array[float]): single output state density matrix
Returns:
float: fidelity between output state and input
"""
# layer iteration
for l in range(len(params[0])):
# qubit iteration
for q in range(n_qubits):
# gate iteration
for g in range(int(len(inputs)/3)):
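                # data re-uploading: each Rot consumes 3 input features, rescaled by params[0] and shifted by params[1]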
qml.Rot(*(params[0][l][3*g:3*(g+1)] * inputs[3*g:3*(g+1)] + params[1][l][3*g:3*(g+1)]), wires=q)
return [qml.expval(qml.Hermitian(density_matrix(state_labels[i]), wires=[i])) for i in range(n_qubits)]
dev_conv = qml.device("default.qubit", wires=9)
@qml.qnode(dev_conv)
def q_conv(conv_params, inputs):
"""A variational quantum circuit representing the Universal classifier + Conv.
Args:
params (array[float]): array of parameters
x (array[float]): 2-d input vector
y (array[float]): single output state density matrix
Returns:
float: fidelity between output state and input
"""
# layer iteration
for l in range(len(conv_params[0])):
# RY layer
# height iteration
for i in range(3):
# width iteration
for j in range(3):
qml.RY((conv_params[0][l][3*i+j] * inputs[i, j] + conv_params[1][l][3*i+j]), wires=(3*i+j))
# entangling layer
for i in range(9):
if i != (9-1):
qml.CNOT(wires=[i, i+1])
return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1) @ qml.PauliZ(2) @ qml.PauliZ(3) @ qml.PauliZ(4) @ qml.PauliZ(5) @ qml.PauliZ(6) @ qml.PauliZ(7) @ qml.PauliZ(8))
a = np.zeros((2, 1, 9))
q_conv(a, X_train[0, 0:3, 0:3])
a = np.zeros((2, 1, 9))
q_fc(a, X_train[0, 0, 0:9])
class class_weights(tf.keras.layers.Layer):
def __init__(self):
super(class_weights, self).__init__()
w_init = tf.random_normal_initializer()
self.w = tf.Variable(
initial_value=w_init(shape=(1, 2), dtype="float32"),
trainable=True,
)
def call(self, inputs):
return (inputs * self.w)
# Input image, size = 27 x 27
X = tf.keras.Input(shape=(27,27), name='Input_Layer')
# Specs for Conv
c_filter = 3
c_strides = 2
# First Quantum Conv Layer, trainable params = 18*L, output size = 13 x 13
num_conv_layer_1 = 2
q_conv_layer_1 = qml.qnn.KerasLayer(q_conv, {"conv_params": (2, num_conv_layer_1, 9)}, output_dim=(1), name='Quantum_Conv_Layer_1')
size_1 = int(1+(X.shape[1]-c_filter)/c_strides)
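# valid convolution with filter 3 and stride 2 on the 27x27 input: size_1 = 1 + (27 - 3)/2 = 13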
q_conv_layer_1_list = []
# height iteration
for i in range(size_1):
# width iteration
for j in range(size_1):
temp = q_conv_layer_1(X[:, 2*i:2*(i+1)+1, 2*j:2*(j+1)+1])
temp = tf.keras.layers.Reshape((1,))(temp)
q_conv_layer_1_list += [temp]
concat_layer_1 = tf.keras.layers.Concatenate(axis=1)(q_conv_layer_1_list)
reshape_layer_1 = tf.keras.layers.Reshape((size_1, size_1))(concat_layer_1)
# Second Quantum Conv Layer, trainable params = 18*L, output size = 6 x 6
num_conv_layer_2 = 2
q_conv_layer_2 = qml.qnn.KerasLayer(q_conv, {"conv_params": (2, num_conv_layer_2, 9)}, output_dim=(1), name='Quantum_Conv_Layer_2')
size_2 = int(1+(reshape_layer_1.shape[1]-c_filter)/c_strides)
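# applied to the 13x13 map from layer 1: size_2 = 1 + (13 - 3)/2 = 6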
q_conv_layer_2_list = []
# height iteration
for i in range(size_2):
# width iteration
for j in range(size_2):
temp = q_conv_layer_2(reshape_layer_1[:, 2*i:2*(i+1)+1, 2*j:2*(j+1)+1])
temp = tf.keras.layers.Reshape((1,))(temp)
q_conv_layer_2_list += [temp]
concat_layer_2 = tf.keras.layers.Concatenate(axis=1)(q_conv_layer_2_list)
reshape_layer_2 = tf.keras.layers.Reshape((size_2, size_2, 1))(concat_layer_2)
# Max Pooling Layer, output size = 9
max_pool_layer = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=None, name='Max_Pool_Layer')(reshape_layer_2)
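# 2x2 max pooling reduces the 6x6 map to 3x3, i.e. 9 values feeding the quantum FC branches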
reshape_layer_3 = tf.keras.layers.Reshape((9,))(max_pool_layer)
# Quantum FC Layer, trainable params = 18*L*n_class + 2, output size = 2
num_fc_layer = 2
q_fc_layer_0 = qml.qnn.KerasLayer(q_fc, {"params": (2, num_fc_layer, 9)}, output_dim=2)(reshape_layer_3)
q_fc_layer_1 = qml.qnn.KerasLayer(q_fc, {"params": (2, num_fc_layer, 9)}, output_dim=2)(reshape_layer_3)
q_fc_layer_2 = qml.qnn.KerasLayer(q_fc, {"params": (2, num_fc_layer, 9)}, output_dim=2)(reshape_layer_3)
q_fc_layer_3 = qml.qnn.KerasLayer(q_fc, {"params": (2, num_fc_layer, 9)}, output_dim=2)(reshape_layer_3)
# Alpha Layer
alpha_layer_0 = class_weights()(q_fc_layer_0)
alpha_layer_1 = class_weights()(q_fc_layer_1)
alpha_layer_2 = class_weights()(q_fc_layer_2)
alpha_layer_3 = class_weights()(q_fc_layer_3)
model = tf.keras.Model(inputs=X, outputs=[alpha_layer_0, alpha_layer_1, alpha_layer_2, alpha_layer_3])
for i in range(len(Y_train_dict)):
new_key = model.layers[len(model.layers)-4+i].name
old_key = "Y" + str(i)
Y_train_dict[new_key] = Y_train_dict.pop(old_key)
Y_test_dict[new_key] = Y_test_dict.pop(old_key)
Y_train_dict
Y_test_dict
model(X_train[0:5, :, :])
model.summary()
losses = {
model.layers[len(model.layers)-4+0].name: "mse",
model.layers[len(model.layers)-4+1].name: "mse",
model.layers[len(model.layers)-4+2].name: "mse",
model.layers[len(model.layers)-4+3].name: "mse"
}
#lossWeights = {"Y0": 1.0, "Y1": 1.0, "Y2": 1.0, "Y3": 1.0}
print(losses)
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate=0.1,
decay_steps=int(len(X_train)/32),
decay_rate=0.95,
staircase=True)
opt = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
model.compile(opt, loss=losses, metrics=["accuracy"])
cp_val_acc = tf.keras.callbacks.ModelCheckpoint(filepath="./Model/4_QConv2ent_2QFC_valacc.hdf5",
monitor='val_accuracy', verbose=1, save_weights_only=True, save_best_only=True, mode='max')
cp_val_loss = tf.keras.callbacks.ModelCheckpoint(filepath="./Model/4_QConv2ent_2QFC_valloss.hdf5",
monitor='val_loss', verbose=1, save_weights_only=True, save_best_only=True, mode='min')
H = model.fit(X_train, Y_train_dict, epochs=10, batch_size=32,
validation_data=(X_test, Y_test_dict), verbose=1, initial_epoch=0,
callbacks=[cp_val_acc, cp_val_loss])
# model weights with best val loss
model.load_weights('./Model/4_QConv2ent_2QFC_valloss.hdf5')
model.weights
# next 10 epochs after lr decay
model.weights
# next 10 epochs after lr decay
H.history
# first 10 epochs before lr decay
H.history
# first 10 epochs before lr decay
model.weights
###Output
_____no_output_____
###Markdown
Result Analysis
###Code
# model weights with best val loss
model.load_weights('./Model/4_QConv2ent_2QFC_valloss.hdf5')
model.weights
test_res = model.predict(X_test)
train_res = model.predict(X_train)
test_res[0][0]
train_res[0].shape
def ave_loss(class_pred):
return ((class_pred[0] - 1)**2 + (class_pred[1] - 0)**2)
train_pred = np.zeros((len(train_res[0]), ))
# samples loop
for i in range(len(train_res[0])):
temp_max = 0
class_max = None
# class loop
for j in range(4):
# check positive class
if temp_max < train_res[j][i][0]:
temp_max = train_res[j][i][0]
class_max = j
train_pred[i] = class_max
((Y_train == train_pred).sum())/(len(train_pred))
train_pred = np.zeros((len(train_res[0]), ))
# samples loop
for i in range(len(train_res[0])):
temp_min = 100
class_min = None
# class loop
for j in range(4):
# check loss value
if temp_min > ave_loss(train_res[j][i]):
temp_min = ave_loss(train_res[j][i])
class_min = j
train_pred[i] = class_min
((Y_train == train_pred).sum())/(len(train_pred))
# best val loss weights
# lowest mse
# wrong train sample
np.where((Y_train == train_pred) == False)[0]
# method of determining true class
# weights after 10 epochs lr decay
# highest positive value: train 0.90125, test 0.83
# lowest mse: train 0.90375, test 0.83
# best val loss weights
# highest positive value: train 0.8975, test 0.865
# lowest mse: train 0.9, test 0.865
test_pred = np.zeros((len(test_res[0]), ))
# samples loop
for i in range(len(test_res[0])):
temp_max = 0
class_max = None
# class loop
for j in range(4):
# check positive class
if temp_max < test_res[j][i][0]:
temp_max = test_res[j][i][0]
class_max = j
test_pred[i] = class_max
((Y_test == test_pred).sum())/(len(test_pred))
test_pred = np.zeros((len(test_res[0]), ))
# samples loop
for i in range(len(test_res[0])):
temp_min = 100
class_min = None
# class loop
for j in range(4):
# check loss value
if temp_min > ave_loss(test_res[j][i]):
temp_min = ave_loss(test_res[j][i])
class_min = j
test_pred[i] = class_min
((Y_test == test_pred).sum())/(len(test_pred))
# best val loss weights
# lowest mse
# wrong test sample
np.where((Y_test == test_pred) == False)[0]
###Output
_____no_output_____
###Markdown
Exploring the results
###Code
# model weights with best val loss
model.load_weights('./Model/4_QConv2ent_2QFC_valloss.hdf5')
model.weights
###Output
_____no_output_____
###Markdown
First Layer
###Code
qconv_1_weights = np.array([[[-3.5068619e-01, -7.3729032e-01, 4.9220048e-02, -1.3541983e+00,
6.9659483e-01, 2.0142789e+00, -1.1912005e-01, 4.4253272e-01,
8.0796504e-01],
[ 2.8853995e-01, 2.5525689e-03, -7.5066173e-01, -5.1612389e-01,
-7.6931775e-01, 3.9495945e-02, -2.9847270e-01, -2.9303998e-01,
5.8868647e-01]],
[[ 1.7989293e+00, 2.7588477e+00, 1.4450849e+00, -1.1718978e+00,
-2.5184264e-02, 1.3628511e+00, -7.9603838e-03, -4.2075574e-01,
5.2138257e-01],
[ 1.3797082e+00, -1.3904944e-01, -3.8255316e-01, -8.2376450e-02,
-1.5615442e-01, 3.5362953e-01, 2.2989626e-01, 2.2489822e-01,
5.5747521e-01]]])
qconv_1_weights.shape
# Input image, size = 27 x 27
X = tf.keras.Input(shape=(27,27), name='Input_Layer')
# Specs for Conv
c_filter = 3
c_strides = 2
# First Quantum Conv Layer, trainable params = 18*L, output size = 13 x 13
num_conv_layer_1 = 2
q_conv_layer_1 = qml.qnn.KerasLayer(q_conv, {"conv_params": (2, num_conv_layer_1, 9)}, output_dim=(1), name='Quantum_Conv_Layer_1')
size_1 = int(1+(X.shape[1]-c_filter)/c_strides)
q_conv_layer_1_list = []
# height iteration
for i in range(size_1):
# width iteration
for j in range(size_1):
temp = q_conv_layer_1(X[:, 2*i:2*(i+1)+1, 2*j:2*(j+1)+1])
temp = tf.keras.layers.Reshape((1,))(temp)
q_conv_layer_1_list += [temp]
concat_layer_1 = tf.keras.layers.Concatenate(axis=1)(q_conv_layer_1_list)
reshape_layer_1 = tf.keras.layers.Reshape((size_1, size_1))(concat_layer_1)
qconv1_model = tf.keras.Model(inputs=X, outputs=reshape_layer_1)
qconv1_model(X_train[0:1])
qconv1_model.get_layer('Quantum_Conv_Layer_1').set_weights([qconv_1_weights])
np.isclose(qconv1_model.get_weights()[0], qconv_1_weights).sum()
preprocessed_img_train = qconv1_model(X_train)
preprocessed_img_test = qconv1_model(X_test)
data_train = preprocessed_img_train.numpy().reshape(-1, 13*13)
np.savetxt('./4_QConv2ent_QFC2_Branch-Filter1_Image_Train.txt', data_train)
data_test = preprocessed_img_test.numpy().reshape(-1, 13*13)
np.savetxt('./4_QConv2ent_QFC2_Branch-Filter1_Image_Test.txt', data_test)
print(data_train.shape, data_test.shape)
###Output
(800, 169) (200, 169)
###Markdown
Second Layer
###Code
qconv_2_weights = np.array([[[ 2.8364928 , -2.4098628 , -1.5612396 , -1.9017003 ,
1.9548664 , -0.37646097, -4.222284 , 0.26775557,
0.18441878],
[ 0.7034124 , 0.4393435 , -0.32212317, -0.17706996,
0.2777927 , -0.40236515, -0.33229282, 0.35953867,
-1.9918324 ]],
[[-1.6619883 , 0.33638576, 0.49042726, 0.6765302 ,
0.22028887, -0.72008365, 2.4235497 , 0.13619412,
-0.69446284],
[-0.54379666, 0.40716565, 0.07379556, -0.01504666,
0.5636293 , 0.11656392, -0.08756571, 0.3454725 ,
0.37661582]]])
qconv_2_weights.shape
# Input image, size = 27 x 27
X = tf.keras.Input(shape=(27,27), name='Input_Layer')
# Specs for Conv
c_filter = 3
c_strides = 2
# First Quantum Conv Layer, trainable params = 18*L, output size = 13 x 13
num_conv_layer_1 = 2
q_conv_layer_1 = qml.qnn.KerasLayer(q_conv, {"conv_params": (2, num_conv_layer_1, 9)}, output_dim=(1), name='Quantum_Conv_Layer_1')
size_1 = int(1+(X.shape[1]-c_filter)/c_strides)
q_conv_layer_1_list = []
# height iteration
for i in range(size_1):
# width iteration
for j in range(size_1):
temp = q_conv_layer_1(X[:, 2*i:2*(i+1)+1, 2*j:2*(j+1)+1])
temp = tf.keras.layers.Reshape((1,))(temp)
q_conv_layer_1_list += [temp]
concat_layer_1 = tf.keras.layers.Concatenate(axis=1)(q_conv_layer_1_list)
reshape_layer_1 = tf.keras.layers.Reshape((size_1, size_1))(concat_layer_1)
# Second Quantum Conv Layer, trainable params = 18*L, output size = 6 x 6
num_conv_layer_2 = 2
q_conv_layer_2 = qml.qnn.KerasLayer(q_conv, {"conv_params": (2, num_conv_layer_2, 9)}, output_dim=(1), name='Quantum_Conv_Layer_2')
size_2 = int(1+(reshape_layer_1.shape[1]-c_filter)/c_strides)
q_conv_layer_2_list = []
# height iteration
for i in range(size_2):
# width iteration
for j in range(size_2):
temp = q_conv_layer_2(reshape_layer_1[:, 2*i:2*(i+1)+1, 2*j:2*(j+1)+1])
temp = tf.keras.layers.Reshape((1,))(temp)
q_conv_layer_2_list += [temp]
concat_layer_2 = tf.keras.layers.Concatenate(axis=1)(q_conv_layer_2_list)
reshape_layer_2 = tf.keras.layers.Reshape((size_2, size_2, 1))(concat_layer_2)
qconv2_model = tf.keras.Model(inputs=X, outputs=reshape_layer_2)
qconv2_model(X_train[0:1])
qconv2_model.get_layer('Quantum_Conv_Layer_1').set_weights([qconv_1_weights])
qconv2_model.get_layer('Quantum_Conv_Layer_2').set_weights([qconv_2_weights])
np.isclose(qconv2_model.get_weights()[0], qconv_1_weights).sum()
np.isclose(qconv2_model.get_weights()[1], qconv_2_weights).sum()
preprocessed_img_train = qconv2_model(X_train)
preprocessed_img_test = qconv2_model(X_test)
data_train = preprocessed_img_train.numpy().reshape(-1, 6*6)
np.savetxt('./4_QConv2ent_QFC2_Branch-Filter2_Image_Train.txt', data_train)
data_test = preprocessed_img_test.numpy().reshape(-1, 6*6)
np.savetxt('./4_QConv2ent_QFC2_Branch-Filter2_Image_Test.txt', data_test)
print(data_train.shape, data_test.shape)
###Output
(800, 36) (200, 36)
###Markdown
Quantum States
###Code
q_fc_weights_0 = np.array([[[ 0.34919307, 1.280577 , -0.40389746, 2.7567825 ,
-1.8981032 , -0.58490497, -2.6140049 , 0.55854434,
-0.14549442],
[ 1.8742485 , 1.3923526 , -0.48553988, -4.0282655 ,
-1.0092568 , -1.726109 , 0.28595045, 0.35788605,
0.13558954]],
[[-0.06619656, 0.29138508, 0.34191862, 0.7155059 ,
-0.20389102, -1.6070857 , -1.5218158 , 1.034849 ,
-0.06948825],
[-0.16024663, 0.61659706, 0.14865518, -0.59474736,
1.3341626 , -0.05620752, 0.3439594 , -0.09109917,
-0.01229791]]])
q_fc_weights_1 = np.array([[[ 0.28862742, 0.8386173 , -1.0520895 , -0.76006484,
1.6054868 , -0.8180273 , -1.3015922 , 0.146214 ,
-2.9870028 ],
[-1.1344436 , -1.3247255 , 0.58105224, 0.66553676,
2.252441 , -0.13002443, -1.3606563 , 0.9464437 ,
-0.31959775]],
[[ 0.20303592, 0.5243242 , -0.9218817 , -1.370076 ,
0.7210135 , -0.6125907 , -0.33028948, 0.49510303,
-0.53149074],
[-0.5199091 , -1.8823092 , -0.45752335, -0.5516297 ,
-1.2591928 , -0.37027845, -0.88656336, -0.14877637,
0.04090607]]])
q_fc_weights_2 = np.array([[[ 0.32965487, -0.48179072, 0.59025586, -3.1451197 ,
2.5917895 , -0.71461445, -1.5514388 , -1.2567754 ,
0.03566303],
[-2.6445682 , 0.18470715, 0.8170568 , -1.2547797 ,
1.6798987 , -0.895823 , -2.0204744 , 2.1893585 ,
0.38608813]],
[[-0.46725035, -0.88657665, 0.08115988, -0.33190268,
0.3567504 , -0.06429264, 0.4678363 , 1.11554 ,
-0.7310539 ],
[-0.2545552 , 0.45082113, -0.31482646, -0.3524591 ,
0.19939618, -0.83299035, -1.3128988 , -0.33097702,
0.36383504]]])
q_fc_weights_3 = np.array([[[ 0.43416622, -0.5376355 , -0.48654264, 4.231484 ,
-0.8790685 , 1.179932 , -1.6252736 , -2.3226252 ,
2.8246262 ],
[ 0.46730754, 0.44019 , 0.5064762 , -2.5414548 ,
0.8346419 , 0.67727995, -1.7355382 , 3.571513 ,
-0.22530685]],
[[-0.21687755, -0.71872264, 1.7950757 , 1.1021243 ,
-1.156439 , 0.4487198 , 0.40195227, -0.9239927 ,
0.26137996],
[ 0.30011192, -1.3315674 , -0.7748441 , -1.0567622 ,
-0.95007855, -2.145618 , -1.6848673 , -0.6859795 ,
-0.507362 ]]])
q_fc_weights_0.shape, q_fc_weights_1.shape, q_fc_weights_2.shape, q_fc_weights_3.shape
# Input image, size = 27 x 27
X = tf.keras.Input(shape=(27,27), name='Input_Layer')
# Specs for Conv
c_filter = 3
c_strides = 2
# First Quantum Conv Layer, trainable params = 18*L, output size = 13 x 13
num_conv_layer_1 = 2
q_conv_layer_1 = qml.qnn.KerasLayer(q_conv, {"conv_params": (2, num_conv_layer_1, 9)}, output_dim=(1), name='Quantum_Conv_Layer_1')
size_1 = int(1+(X.shape[1]-c_filter)/c_strides)
q_conv_layer_1_list = []
# height iteration
for i in range(size_1):
# width iteration
for j in range(size_1):
temp = q_conv_layer_1(X[:, 2*i:2*(i+1)+1, 2*j:2*(j+1)+1])
temp = tf.keras.layers.Reshape((1,))(temp)
q_conv_layer_1_list += [temp]
concat_layer_1 = tf.keras.layers.Concatenate(axis=1)(q_conv_layer_1_list)
reshape_layer_1 = tf.keras.layers.Reshape((size_1, size_1))(concat_layer_1)
# Second Quantum Conv Layer, trainable params = 18*L, output size = 6 x 6
num_conv_layer_2 = 2
q_conv_layer_2 = qml.qnn.KerasLayer(q_conv, {"conv_params": (2, num_conv_layer_2, 9)}, output_dim=(1), name='Quantum_Conv_Layer_2')
size_2 = int(1+(reshape_layer_1.shape[1]-c_filter)/c_strides)
q_conv_layer_2_list = []
# height iteration
for i in range(size_2):
# width iteration
for j in range(size_2):
temp = q_conv_layer_2(reshape_layer_1[:, 2*i:2*(i+1)+1, 2*j:2*(j+1)+1])
temp = tf.keras.layers.Reshape((1,))(temp)
q_conv_layer_2_list += [temp]
concat_layer_2 = tf.keras.layers.Concatenate(axis=1)(q_conv_layer_2_list)
reshape_layer_2 = tf.keras.layers.Reshape((size_2, size_2, 1))(concat_layer_2)
# Max Pooling Layer, output size = 9
max_pool_layer = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=None, name='Max_Pool_Layer')(reshape_layer_2)
reshape_layer_3 = tf.keras.layers.Reshape((9,))(max_pool_layer)
maxpool_model = tf.keras.Model(inputs=X, outputs=reshape_layer_3)
maxpool_model(X_train[0:1])
maxpool_model.get_layer('Quantum_Conv_Layer_1').set_weights([qconv_1_weights])
maxpool_model.get_layer('Quantum_Conv_Layer_2').set_weights([qconv_2_weights])
maxpool_train = maxpool_model(X_train)
maxpool_test = maxpool_model(X_test)
maxpool_train.shape, maxpool_test.shape
n_qubits = 1 # number of class
dev_state = qml.device("default.qubit", wires=n_qubits)
@qml.qnode(dev_state)
def q_fc_state(params, inputs):
# layer iteration
for l in range(len(params[0])):
# qubit iteration
for q in range(n_qubits):
# gate iteration
for g in range(int(len(inputs)/3)):
qml.Rot(*(params[0][l][3*g:3*(g+1)] * inputs[3*g:3*(g+1)] + params[1][l][3*g:3*(g+1)]), wires=q)
#return [qml.expval(qml.Hermitian(density_matrix(state_labels[i]), wires=[i])) for i in range(n_qubits)]
return qml.expval(qml.Hermitian(density_matrix(state_labels[0]), wires=[0]))
q_fc_state(np.zeros((2,1,9)), maxpool_train[0])
# branch 0
train_state = np.zeros((len(X_train), 2), dtype=np.complex_)
test_state = np.zeros((len(X_test), 2), dtype=np.complex_)
for i in range(len(train_state)):
q_fc_state(q_fc_weights_0, maxpool_train[i])
temp = np.flip(dev_state._state)
train_state[i, :] = temp
for i in range(len(test_state)):
q_fc_state(q_fc_weights_0, maxpool_test[i])
temp = np.flip(dev_state._state)
test_state[i, :] = temp
train_state.shape, test_state.shape
# sanity check
print(((np.conj(train_state) @ density_matrix(state_labels[0])) * train_state)[:, 0] > 0.5)
print(((np.conj(test_state) @ density_matrix(state_labels[0])) * test_state)[:, 0] > 0.5)
np.savetxt('./4_0_QConv2ent_QFC2_Branch-State_Train.txt', train_state)
np.savetxt('./4_0_QConv2ent_QFC2_Branch-State_Test.txt', test_state)
# branch 1
train_state = np.zeros((len(X_train), 2), dtype=np.complex_)
test_state = np.zeros((len(X_test), 2), dtype=np.complex_)
for i in range(len(train_state)):
q_fc_state(q_fc_weights_1, maxpool_train[i])
temp = np.flip(dev_state._state)
train_state[i, :] = temp
for i in range(len(test_state)):
q_fc_state(q_fc_weights_1, maxpool_test[i])
temp = np.flip(dev_state._state)
test_state[i, :] = temp
train_state.shape, test_state.shape
# sanity check
print(((np.conj(train_state) @ density_matrix(state_labels[0])) * train_state)[:, 0] > 0.5)
print(((np.conj(test_state) @ density_matrix(state_labels[0])) * test_state)[:, 0] > 0.5)
np.savetxt('./4_1_QConv2ent_QFC2_Branch-State_Train.txt', train_state)
np.savetxt('./4_1_QConv2ent_QFC2_Branch-State_Test.txt', test_state)
# branch 2
train_state = np.zeros((len(X_train), 2), dtype=np.complex_)
test_state = np.zeros((len(X_test), 2), dtype=np.complex_)
for i in range(len(train_state)):
q_fc_state(q_fc_weights_2, maxpool_train[i])
temp = np.flip(dev_state._state)
train_state[i, :] = temp
for i in range(len(test_state)):
q_fc_state(q_fc_weights_2, maxpool_test[i])
temp = np.flip(dev_state._state)
test_state[i, :] = temp
train_state.shape, test_state.shape
# sanity check
print(((np.conj(train_state) @ density_matrix(state_labels[0])) * train_state)[:, 0] > 0.5)
print(((np.conj(test_state) @ density_matrix(state_labels[0])) * test_state)[:, 0] > 0.5)
np.savetxt('./4_2_QConv2ent_QFC2_Branch-State_Train.txt', train_state)
np.savetxt('./4_2_QConv2ent_QFC2_Branch-State_Test.txt', test_state)
# branch 3
train_state = np.zeros((len(X_train), 2), dtype=np.complex_)
test_state = np.zeros((len(X_test), 2), dtype=np.complex_)
for i in range(len(train_state)):
q_fc_state(q_fc_weights_3, maxpool_train[i])
temp = np.flip(dev_state._state)
train_state[i, :] = temp
for i in range(len(test_state)):
q_fc_state(q_fc_weights_3, maxpool_test[i])
temp = np.flip(dev_state._state)
test_state[i, :] = temp
train_state.shape, test_state.shape
# sanity check
print(((np.conj(train_state) @ density_matrix(state_labels[0])) * train_state)[:, 0] > 0.5)
print(((np.conj(test_state) @ density_matrix(state_labels[0])) * test_state)[:, 0] > 0.5)
np.savetxt('./4_3_QConv2ent_QFC2_Branch-State_Train.txt', train_state)
np.savetxt('./4_3_QConv2ent_QFC2_Branch-State_Test.txt', test_state)
###Output
_____no_output_____
###Markdown
Saving trained max pool output
###Code
maxpool_train.shape, maxpool_test.shape
np.savetxt('./4_QConv2ent_QFC2_Branch-TrainedMaxPool_Train.txt', maxpool_train)
np.savetxt('./4_QConv2ent_QFC2_Branch-TrainedMaxPool_Test.txt', maxpool_test)
###Output
_____no_output_____
###Markdown
Random Starting State
###Code
# Input image, size = 27 x 27
X = tf.keras.Input(shape=(27,27), name='Input_Layer')
# Specs for Conv
c_filter = 3
c_strides = 2
# First Quantum Conv Layer, trainable params = 18*L, output size = 13 x 13
num_conv_layer_1 = 2
q_conv_layer_1 = qml.qnn.KerasLayer(q_conv, {"conv_params": (2, num_conv_layer_1, 9)}, output_dim=(1), name='Quantum_Conv_Layer_1')
size_1 = int(1+(X.shape[1]-c_filter)/c_strides)
q_conv_layer_1_list = []
# height iteration
for i in range(size_1):
# width iteration
for j in range(size_1):
temp = q_conv_layer_1(X[:, 2*i:2*(i+1)+1, 2*j:2*(j+1)+1])
temp = tf.keras.layers.Reshape((1,))(temp)
q_conv_layer_1_list += [temp]
concat_layer_1 = tf.keras.layers.Concatenate(axis=1)(q_conv_layer_1_list)
reshape_layer_1 = tf.keras.layers.Reshape((size_1, size_1))(concat_layer_1)
# Second Quantum Conv Layer, trainable params = 18*L, output size = 6 x 6
num_conv_layer_2 = 2
q_conv_layer_2 = qml.qnn.KerasLayer(q_conv, {"conv_params": (2, num_conv_layer_2, 9)}, output_dim=(1), name='Quantum_Conv_Layer_2')
size_2 = int(1+(reshape_layer_1.shape[1]-c_filter)/c_strides)
q_conv_layer_2_list = []
# height iteration
for i in range(size_2):
# width iteration
for j in range(size_2):
temp = q_conv_layer_2(reshape_layer_1[:, 2*i:2*(i+1)+1, 2*j:2*(j+1)+1])
temp = tf.keras.layers.Reshape((1,))(temp)
q_conv_layer_2_list += [temp]
concat_layer_2 = tf.keras.layers.Concatenate(axis=1)(q_conv_layer_2_list)
reshape_layer_2 = tf.keras.layers.Reshape((size_2, size_2, 1))(concat_layer_2)
# Max Pooling Layer, output size = 9
max_pool_layer = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=None, name='Max_Pool_Layer')(reshape_layer_2)
reshape_layer_3 = tf.keras.layers.Reshape((9,))(max_pool_layer)
# Quantum FC Layer, trainable params = 18*L*n_class + 2, output size = 2
num_fc_layer = 2
q_fc_layer_0 = qml.qnn.KerasLayer(q_fc, {"params": (2, num_fc_layer, 9)}, output_dim=2)(reshape_layer_3)
q_fc_layer_1 = qml.qnn.KerasLayer(q_fc, {"params": (2, num_fc_layer, 9)}, output_dim=2)(reshape_layer_3)
q_fc_layer_2 = qml.qnn.KerasLayer(q_fc, {"params": (2, num_fc_layer, 9)}, output_dim=2)(reshape_layer_3)
q_fc_layer_3 = qml.qnn.KerasLayer(q_fc, {"params": (2, num_fc_layer, 9)}, output_dim=2)(reshape_layer_3)
# Alpha Layer
alpha_layer_0 = class_weights()(q_fc_layer_0)
alpha_layer_1 = class_weights()(q_fc_layer_1)
alpha_layer_2 = class_weights()(q_fc_layer_2)
alpha_layer_3 = class_weights()(q_fc_layer_3)
model_random = tf.keras.Model(inputs=X, outputs=[alpha_layer_0, alpha_layer_1, alpha_layer_2, alpha_layer_3])
model_maxpool_random = tf.keras.Model(inputs=X, outputs=reshape_layer_3)
model_random(X_train[0:1])
model_random.weights
random_weights_0 = np.array([[[-0.38205916, -0.32157356, -0.36946476, -0.14519015,
-0.1741243 , -0.14436567, 0.41515827, 0.46430767,
0.05232906],
[ 0.45858866, -0.27274096, 0.09459215, 0.1331594 ,
0.26793003, 0.35317045, -0.25254235, 0.35575753,
-0.00269699]],
[[ 0.20839894, -0.06481433, -0.389221 , 0.18636137,
0.0322125 , -0.4043268 , -0.23117393, 0.2731933 ,
-0.33924854],
[ 0.00189614, 0.47282887, -0.47041848, -0.2506976 ,
0.23154783, 0.5169259 , -0.38120353, -0.29712826,
-0.3661686 ]]])
random_weights_1 = np.array([[[-3.8573855e-01, 1.2338161e-04, -3.4994566e-01, 1.6507030e-02,
2.7931094e-02, 1.4965594e-01, 1.9558185e-01, 3.7240016e-01,
4.1224837e-01],
[-2.0730710e-01, 1.4665091e-01, 2.2953910e-01, -1.8294707e-01,
-2.9422033e-01, -1.0954219e-01, -4.8812094e-01, 2.3804653e-01,
1.2762904e-02]],
[[-3.7277770e-01, 4.7162807e-01, 1.7469132e-01, 1.9624650e-01,
6.5971136e-02, -3.0559468e-01, 5.2143711e-01, 2.9053259e-01,
-3.3940887e-01],
[ 7.6271355e-02, 2.2447646e-02, -1.9267979e-01, -3.3340788e-01,
3.0921632e-01, -8.3895922e-03, -4.2881757e-02, -1.0280296e-01,
1.6796750e-01]]])
random_weights_2 = np.array([[[ 0.3375085 , -0.5039589 , -0.12458649, 0.03081298,
-0.3590887 , 0.10382867, 0.40024424, -0.36897716,
0.31312758],
[ 0.42523754, -0.03742361, 0.06040829, -0.06957746,
0.30570823, -0.11539704, -0.40476683, -0.23915961,
-0.0829832 ]],
[[ 0.3559941 , 0.3155442 , 0.08222359, 0.41432273,
0.01732248, -0.26297218, -0.01981091, -0.04592776,
0.39101595],
[ 0.3062536 , -0.08849475, -0.20818016, -0.44495705,
0.06605953, -0.13090187, -0.3172878 , -0.5133143 ,
-0.4394003 ]]])
random_weights_3 = np.array([[[ 0.21720552, -0.46527594, -0.01723516, 0.32298315,
-0.17747 , -0.26591384, -0.43713358, 0.08005935,
-0.44423178],
[-0.37649596, 0.41977262, 0.15621603, -0.3686198 ,
0.34089315, -0.07570398, 0.30436516, -0.04764476,
-0.3341527 ]],
[[-0.19360352, 0.0107705 , 0.05996364, -0.30747455,
0.3622191 , 0.27814162, -0.01553947, 0.0343135 ,
0.09682399],
[-0.37713268, -0.2690144 , -0.34324157, -0.16356263,
0.24849337, -0.23426789, -0.02752119, 0.22051013,
-0.14259636]]])
maxpool_train = model_maxpool_random(X_train)
maxpool_test = model_maxpool_random(X_test)
maxpool_train.shape, maxpool_test.shape
np.savetxt('./4_QConv2ent_QFC2_Branch-RandomMaxPool_Train.txt', maxpool_train)
np.savetxt('./4_QConv2ent_QFC2_Branch-RandomMaxPool_Test.txt', maxpool_test)
n_qubits = 1 # number of class
dev_state = qml.device("default.qubit", wires=n_qubits)
@qml.qnode(dev_state)
def q_fc_state(params, inputs):
# layer iteration
for l in range(len(params[0])):
# qubit iteration
for q in range(n_qubits):
# gate iteration
for g in range(int(len(inputs)/3)):
qml.Rot(*(params[0][l][3*g:3*(g+1)] * inputs[3*g:3*(g+1)] + params[1][l][3*g:3*(g+1)]), wires=q)
#return [qml.expval(qml.Hermitian(density_matrix(state_labels[i]), wires=[i])) for i in range(n_qubits)]
return qml.expval(qml.Hermitian(density_matrix(state_labels[0]), wires=[0]))
q_fc_state(np.zeros((2,1,9)), maxpool_train[0])
# branch 0
train_state = np.zeros((len(X_train), 2), dtype=np.complex_)
test_state = np.zeros((len(X_test), 2), dtype=np.complex_)
for i in range(len(train_state)):
q_fc_state(random_weights_0, maxpool_train[i])
temp = np.flip(dev_state._state)
train_state[i, :] = temp
for i in range(len(test_state)):
q_fc_state(random_weights_0, maxpool_test[i])
temp = np.flip(dev_state._state)
test_state[i, :] = temp
train_state.shape, test_state.shape
# sanity check
print(((np.conj(train_state) @ density_matrix(state_labels[0])) * train_state)[:, 0] > 0.5)
print(((np.conj(test_state) @ density_matrix(state_labels[0])) * test_state)[:, 0] > 0.5)
np.savetxt('./4_0_QConv2ent_QFC2_Branch-RandomState_Train.txt', train_state)
np.savetxt('./4_0_QConv2ent_QFC2_Branch-RandomState_Test.txt', test_state)
# branch 1
train_state = np.zeros((len(X_train), 2), dtype=np.complex_)
test_state = np.zeros((len(X_test), 2), dtype=np.complex_)
for i in range(len(train_state)):
q_fc_state(random_weights_1, maxpool_train[i])
temp = np.flip(dev_state._state)
train_state[i, :] = temp
for i in range(len(test_state)):
q_fc_state(random_weights_1, maxpool_test[i])
temp = np.flip(dev_state._state)
test_state[i, :] = temp
train_state.shape, test_state.shape
# sanity check
print(((np.conj(train_state) @ density_matrix(state_labels[0])) * train_state)[:, 0] > 0.5)
print(((np.conj(test_state) @ density_matrix(state_labels[0])) * test_state)[:, 0] > 0.5)
np.savetxt('./4_1_QConv2ent_QFC2_Branch-RandomState_Train.txt', train_state)
np.savetxt('./4_1_QConv2ent_QFC2_Branch-RandomState_Test.txt', test_state)
# branch 2
train_state = np.zeros((len(X_train), 2), dtype=np.complex_)
test_state = np.zeros((len(X_test), 2), dtype=np.complex_)
for i in range(len(train_state)):
q_fc_state(random_weights_2, maxpool_train[i])
temp = np.flip(dev_state._state)
train_state[i, :] = temp
for i in range(len(test_state)):
q_fc_state(random_weights_2, maxpool_test[i])
temp = np.flip(dev_state._state)
test_state[i, :] = temp
train_state.shape, test_state.shape
# sanity check
print(((np.conj(train_state) @ density_matrix(state_labels[0])) * train_state)[:, 0] > 0.5)
print(((np.conj(test_state) @ density_matrix(state_labels[0])) * test_state)[:, 0] > 0.5)
np.savetxt('./4_2_QConv2ent_QFC2_Branch-RandomState_Train.txt', train_state)
np.savetxt('./4_2_QConv2ent_QFC2_Branch-RandomState_Test.txt', test_state)
# branch 3
train_state = np.zeros((len(X_train), 2), dtype=np.complex_)
test_state = np.zeros((len(X_test), 2), dtype=np.complex_)
for i in range(len(train_state)):
q_fc_state(random_weights_3, maxpool_train[i])
temp = np.flip(dev_state._state)
train_state[i, :] = temp
for i in range(len(test_state)):
q_fc_state(random_weights_3, maxpool_test[i])
temp = np.flip(dev_state._state)
test_state[i, :] = temp
train_state.shape, test_state.shape
# sanity check
print(((np.conj(train_state) @ density_matrix(state_labels[0])) * train_state)[:, 0] > 0.5)
print(((np.conj(test_state) @ density_matrix(state_labels[0])) * test_state)[:, 0] > 0.5)
np.savetxt('./4_3_QConv2ent_QFC2_Branch-RandomState_Train.txt', train_state)
np.savetxt('./4_3_QConv2ent_QFC2_Branch-RandomState_Test.txt', test_state)
###Output
_____no_output_____ |
Samples/PythonInterop/tomography-sample.ipynb | ###Markdown
Quantum Process Tomography with Q# and Python Abstract In this sample, we will demonstrate interoperability between Q# and Python by using the QInfer and QuTiP libraries for Python to characterize and verify quantum processes implemented in Q#. In particular, this sample will use *quantum process tomography* to learn about the behavior of a "noisy" Hadamard operation from the results of random Pauli measurements. Preamble When working with Q# from languages other than C#, we normally need to separate Q# operations and functions into their own project. We can then add the project containing our new operations as a *reference* to our interoperability project in order to run from within Visual Studio. For running from Jupyter, though, we must manually add the other project to `sys.path` so that Python.NET can find the assemblies containing our new operations, as well as the assemblies from the rest of the Quantum Development Kit that get included as dependencies.
###Code
import qsharp
from qsharp.tomography import single_qubit_process_tomography
import warnings; warnings.filterwarnings('ignore')
import sys
sys.path.append('./bin/Debug/netstandard2.0')
###Output
_____no_output_____
###Markdown
With the new assemblies added to `sys.path`, we can reference them using the Python.NET `clr` package.
###Code
import clr
clr.AddReference("Microsoft.Quantum.Canon")
clr.AddReference("PythonInterop")
###Output
_____no_output_____
###Markdown
Next, we import plotting support and the QuTiP library, since these will be helpful to us in manipulating the quantum objects returned by the quantum process tomography functionality that we call later.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import qutip as qt
qt.settings.colorblind_safe = True
###Output
_____no_output_____
###Markdown
Setting up the Simulator This sample is provided along with a small Python package to help facilitate interoperability. We can import this package like any other Python package, provided that we have set `sys.path` correctly. In particular, the `qsharp` package provides a `QuantumSimulator` class which wraps the `Microsoft.Quantum.Simulation.Simulators.QuantumSimulator` .NET class provided with the Quantum Development Kit. This wrapper provides a few useful convenience features that we will use along the way.
###Code
qsim = qsharp.QuantumSimulator()
qsim
###Output
_____no_output_____
###Markdown
We can use our new simulator instance to run operations and functions defined in Q#. To do so, we import the operation and function names as though the Q# namespaces were Python packages.
###Code
from Microsoft.Quantum.Samples.Python import (
HelloWorld, NoisyHadamardChannel
)
###Output
_____no_output_____
###Markdown
Once we've imported the new names, we can then ask our simulator to run each function and operation using the `run` method. Jupyter will denote messages emitted by the simulator with a blue sidebar to separate them from normal Python output.
###Code
hello_world = qsim.get(HelloWorld)
hello_world
qsim.run(hello_world, qsharp.Pauli.Z).Wait()
###Output
_____no_output_____
###Markdown
To obtain the output from a Q# operation or function, we must use the `result` function, since outputs are wrapped in .NET asynchronous tasks. These tasks are wrapped in a future-like object for convenience.
###Code
noisy_h = qsim.run(NoisyHadamardChannel, 0.1).result()
noisy_h
###Output
_____no_output_____
###Markdown
Tomography The `qsharp` interoperability package also comes with a `single_qubit_process_tomography` function which uses the QInfer library for Python to learn the channels corresponding to single-qubit Q# operations. Here, we ask for 10,000 measurements from the noisy Hadamard operation that we defined above.
###Code
estimation_results = single_qubit_process_tomography(qsim, noisy_h, n_measurements=10000)
###Output
Preparing tomography model...
Performing tomography...
###Markdown
To visualize the results, it's helpful to compare to the actual channel, which we can find exactly in QuTiP.
###Code
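# The "actual" channel below mixes the completely depolarizing channel (weight 0.1, matching the
# argument passed to NoisyHadamardChannel above) with an ideal Hadamard (weight 0.9) at the level
# of Choi matrices, giving the exact channel we expect tomography to recover.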
depolarizing_channel = sum(map(qt.to_super, [qt.qeye(2), qt.sigmax(), qt.sigmay(), qt.sigmaz()])) / 4.0
actual_noisy_h = 0.1 * qt.to_choi(depolarizing_channel) + 0.9 * qt.to_choi(qt.hadamard_transform())
###Output
_____no_output_____
###Markdown
We then plot the estimated and actual channels as Hinton diagrams, showing how each acts on the Pauli operators $X$, $Y$ and $Z$.
###Code
fig, (left, right) = plt.subplots(ncols=2, figsize=(12, 4))
plt.sca(left)
plt.xlabel('Estimated', fontsize='x-large')
qt.visualization.hinton(estimation_results['est_channel'], ax=left)
plt.sca(right)
plt.xlabel('Actual', fontsize='x-large')
qt.visualization.hinton(actual_noisy_h, ax=right)
###Output
_____no_output_____
###Markdown
We also obtain a wealth of other information, such as the covariance matrix over each parameter of the resulting channel. This shows us which parameters we are least certain about, as well as how those parameters are correlated with each other.
###Code
plt.figure(figsize=(10, 10))
estimation_results['posterior'].plot_covariance()
plt.xticks(rotation=90)
###Output
_____no_output_____
###Markdown
Diagnostics
###Code
import sys
print("""
Python version: {}
Quantum Development Kit version: {}
""".format(sys.version, qsharp.__version__))
###Output
Python version: 3.6.4 | packaged by conda-forge | (default, Dec 24 2017, 10:11:43) [MSC v.1900 64 bit (AMD64)]
Quantum Development Kit version: 0.2.1806.3001
|
numerical_integration_with_python.ipynb | ###Markdown
Numerical integration with Python **Author:** Simon Mutch **Date:** 2017-08-02 --- Update 2017-08-03* More information about Jupyter notebooks can be found [here](http://jupyter-notebook.readthedocs.io/en/stable/notebook.html).* The notebook used to generate this page can be downloaded [here](https://github.com/smutch/numerical_integration_tut).Good luck with your assignments everyone! 👍---In this notebook we'll very briefly cover how to do simple numerical integration of a function with Python.I'm going to assume that you have some basic knowledge of Python, but don't worry if you don't. This notebook is very simple and will act as a basic introduction in itself. For a proper introduction to Python, the [Software Carpentry course](http://swcarpentry.github.io/python-novice-inflammation/) is a decent place to start. There are a few prerequisite packages that we'll need for doing our simple numerical integration:* [Numpy](http://www.numpy.org) - "the fundamental package for scientific computing with Python"* [Scipy](https://scipy.org/scipylib/index.html) - "many user-friendly and efficient numerical routines such as routines for numerical integration and optimization"* [matplotlib](http://matplotlib.org) - "Matplotlib is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments"Many Python environments will have these pre-installed. If you get errors when trying to import them below, then the chances are they aren't installed in your current Python environment. If this is the case, please see the installation instructions for each package by following the links above or, alternatively, consider trying the excellent [Anaconda distribution](https://www.continuum.io/what-is-anaconda) (recommended).For reference, here are the versions of the relevant packages used to create this notebook:> numpy 1.12.1 > scipy 0.19.0 > matplotlib 2.0.2 The first thing we need to do is import our packages...
###Code
# unfortunately the Windows machines in this lab only have Python v2 (not v3)
# and so there are a couple of things we need to do to get everything to run smoothly...
from __future__ import print_function, division
# import the necessary packages
import numpy as np
import matplotlib.pyplot as plt
from scipy import integrate
# set a few options that will make the plots generated in this notebook look better
%matplotlib inline
%config InlineBackend.print_figure_kwargs={'bbox_inches': 'tight'}
plt.rcParams['figure.dpi'] = 100
###Output
_____no_output_____
###Markdown
Setting up the problem Let's take a contrived example where we can easily calculate an analytic solution to check the validity of our result. Let's consider the area under a $\sin$ wave between $\theta=0$ and $\pi$.
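For reference, the analytic value we expect is $\int_0^{\pi} \sin(\theta)\, d\theta = [-\cos(\theta)]_0^{\pi} = 2$, which we will use later to check the numerical answer.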
###Code
theta = np.linspace(0, np.pi, 50)
y = np.sin(theta)
fig, ax = plt.subplots()
ax.plot(theta, y)
ax.fill_between(theta, y, alpha=0.5)
ax.set(xlabel=r'$\theta$', ylabel=r'$\sin(\theta)$', ylim=(0, 1.05));
###Output
_____no_output_____
###Markdown
Doing the integral To numerically integrate this, we first define the function that we want to integrate. In this case it's rather simple; in practice, however, this could be an arbitrarily complex function.
###Code
def f(theta):
return np.sin(theta)
###Output
_____no_output_____
###Markdown
Next we can use the [scipy.integrate](https://docs.scipy.org/doc/scipy/reference/integrate.html) module to do a simple numerical integration of this function. Note that we imported this module as `integrate` above.
###Code
numerical_result = integrate.quad(f, 0, np.pi)
print(numerical_result)
###Output
(2.0, 2.220446049250313e-14)
###Markdown
The first number here is the result, whilst the second number is an estimate of the absolute error. Let's throw away the latter.
###Code
numerical_result = numerical_result[0]
###Output
_____no_output_____
###Markdown
Checking the result Lastly, let's compare this with the analytic solution.
###Code
analytic_result = -np.cos(np.pi) + np.cos(0)
print("The analytic result is {:.2f}".format(analytic_result))
print("Identical to numerical result within available precision? ", np.isclose(analytic_result, numerical_result))
###Output
The analytic result is 2.00
Identical to numerical result within available precision? True
|
3_2_ Robot_localisation/.ipynb_checkpoints/2. Move Function, solution-checkpoint.ipynb | ###Markdown
Move Function Now that you know how a robot uses sensor measurements to update its idea of its own location, let's see how we can incorporate motion into this location estimate. In this notebook, let's go over the steps a robot takes to help localize itself from an initial, uniform distribution to sensing, moving and updating that distribution. We include the `sense` function that you've seen, which updates an initial distribution based on whether a robot senses a grid color: red or green. Next, you're tasked with writing a function `move` that incorporates motion into the distribution. As seen below, **one motion `U = 1` to the right causes all values in a distribution to shift one grid cell to the right.** First let's include our usual resource imports and display function.
###Code
# importing resources
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
A helper function for visualizing a distribution.
###Code
def display_map(grid, bar_width=1):
if(len(grid) > 0):
x_labels = range(len(grid))
plt.bar(x_labels, height=grid, width=bar_width, color='b')
plt.xlabel('Grid Cell')
plt.ylabel('Probability')
plt.ylim(0, 1) # range of 0-1 for probability values
plt.title('Probability of the robot being at each cell in the grid')
plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))
plt.show()
else:
print('Grid is empty')
###Output
_____no_output_____
###Markdown
You are given the initial variables and the complete `sense` function, below.
###Code
# given initial variables
p=[0, 1, 0, 0, 0]
# the color of each grid cell in the 1D world
world=['green', 'red', 'red', 'green', 'green']
# Z, the sensor reading ('red' or 'green')
Z = 'red'
pHit = 0.6
pMiss = 0.2
# You are given the complete sense function
def sense(p, Z):
''' Takes in a current probability distribution, p, and a sensor reading, Z.
Returns a *normalized* distribution after the sensor measurement has been made, q.
This should be accurate whether Z is 'red' or 'green'. '''
q=[]
# loop through all grid cells
for i in range(len(p)):
# check if the sensor reading is equal to the color of the grid cell
# if so, hit = 1
# if not, hit = 0
hit = (Z == world[i])
q.append(p[i] * (hit * pHit + (1-hit) * pMiss))
# sum up all the components
s = sum(q)
# divide all elements of q by the sum to normalize
for i in range(len(p)):
q[i] = q[i] / s
return q
# Commented out code for measurements
# for k in range(len(measurements)):
# p = sense(p, measurements)
###Output
_____no_output_____
###Markdown
QUIZ: Program a function that returns a new distribution q, shifted to the right by the motion (U) units. This function should shift a distribution with the motion, U. Keep in mind that this world is cyclic and that if U=0, q should be the same as the given p. You should see all the values in `p` are moved to the right by 1, for U=1.
###Code
## TODO: Complete this move function so that it shifts a probability distribution, p
## by a given motion, U
def move(p, U):
q=[]
# iterate through all values in p
for i in range(len(p)):
# use the modulo operator to find the new location for a p value
# this finds an index that is shifted by the correct amount
index = (i-U) % len(p)
# append the correct value of p to q
q.append(p[index])
return q
p = move(p,1)
print(p)
display_map(p)
###Output
[0, 0, 1, 0, 0]
|
3.4_Multilayer_Perceptron.ipynb | ###Markdown
Multilayer Perceptron
###Code
# Import MINST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("tmp/data/", one_hot=True)
import tensorflow as tf
# Parameters
learning_rate = 0.01
training_epochs = 15
batch_size = 100
display_step = 1
# Network Parameters
n_hidden_1 = 256 # 1st layer number of features
n_hidden_2 = 256 # 2nd layer number of features
n_input = 784 # MNIST data input (img shape: 28*28)
n_classes = 10 # MNIST total classes (0-9 digits)
# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
# Create model
def multilayer_perceptron(x, weights, biases):
# Hidden layer with RELU activation
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
layer_1 = tf.nn.relu(layer_1)
# Hidden layer with RELU activation
layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
layer_2 = tf.nn.relu(layer_2)
# Output layer with linear activation
out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
return out_layer
# Store layers weight & bias
weights = {
'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1])),
'b2': tf.Variable(tf.random_normal([n_hidden_2])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
# Construct model
pred = multilayer_perceptron(x, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Initializing the variables
init = tf.global_variables_initializer()
# Launch the graph
with tf.Session() as sess:
sess.run(init)
# Training cycle
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(mnist.train.num_examples/batch_size)
# Loop over all batches
for i in range(total_batch):
batch_x, batch_y = mnist.train.next_batch(batch_size)
# Run optimization op (backprop) and cost op (to get loss value)
_, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
y: batch_y})
# Compute average loss
avg_cost += c / total_batch
# Display logs per epoch step
if epoch % display_step == 0:
print "Epoch:", '%04d' % (epoch+1), "cost=", \
"{:.9f}".format(avg_cost)
print "Optimization Finished!"
# Test model
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
# Calculate accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print "Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels})
###Output
Epoch: 0001 cost= 43.282013203
Epoch: 0002 cost= 8.349755084
Epoch: 0003 cost= 4.672669222
Epoch: 0004 cost= 3.394152646
Epoch: 0005 cost= 2.467624623
Epoch: 0006 cost= 2.193587589
Epoch: 0007 cost= 2.039622562
Epoch: 0008 cost= 1.736807735
Epoch: 0009 cost= 1.386611417
Epoch: 0010 cost= 1.591948932
Epoch: 0011 cost= 1.320480854
Epoch: 0012 cost= 1.160013903
Epoch: 0013 cost= 1.042314344
Epoch: 0014 cost= 0.775135798
Epoch: 0015 cost= 0.914223716
Optimization Finished!
Accuracy: 0.9594
###Markdown
Exercise 1 Modify the architecture of the network. You can add extra hidden layers and/or change the number of neurons per layer. How do you obtain the best results?
###Code
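# One possible starting point (a sketch, not the only answer): add a third hidden
# layer with its own weights/biases, then rebuild pred/cost/optimizer and rerun
# the training cycle above to compare accuracies. Layer sizes here are arbitrary.
n_hidden_3 = 256  # 3rd layer number of features
weights_deep = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'h3': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_3])),
    'out': tf.Variable(tf.random_normal([n_hidden_3, n_classes]))
}
biases_deep = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'b3': tf.Variable(tf.random_normal([n_hidden_3])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}
def deeper_perceptron(x, weights, biases):
    # Three hidden layers with RELU activation
    layer_1 = tf.nn.relu(tf.add(tf.matmul(x, weights['h1']), biases['b1']))
    layer_2 = tf.nn.relu(tf.add(tf.matmul(layer_1, weights['h2']), biases['b2']))
    layer_3 = tf.nn.relu(tf.add(tf.matmul(layer_2, weights['h3']), biases['b3']))
    # Output layer with linear activation
    return tf.matmul(layer_3, weights['out']) + biases['out']
# pred = deeper_perceptron(x, weights_deep, biases_deep)
# ...then redefine cost/optimizer and rerun the training loop as in the cells above.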
###Output
_____no_output_____ |
advanced_functionality/inference_pipeline_sparkml_xgboost_abalone/inference_pipeline_sparkml_xgboost_abalone.ipynb | ###Markdown
Feature processing with Spark, training with XGBoost and deploying as Inference Pipeline Typically, a Machine Learning (ML) process consists of a few steps: gathering data with various ETL jobs, pre-processing the data, featurizing the dataset by incorporating standard techniques or prior knowledge, and finally training an ML model using an algorithm. In many cases, when the trained model is used for processing real time or batch prediction requests, the model receives data in a format which needs to be pre-processed (e.g. featurized) before it can be passed to the algorithm. In the following notebook, we will demonstrate how you can build your ML Pipeline leveraging Spark Feature Transformers and the SageMaker XGBoost algorithm and, after the model is trained, deploy the Pipeline (Feature Transformer and XGBoost) as an Inference Pipeline behind a single Endpoint for real-time inference and for batch inferences using Amazon SageMaker Batch Transform. In this notebook, we use AWS Glue to run serverless Spark. Though the notebook demonstrates the end-to-end flow on a small dataset, the setup can be seamlessly used to scale to larger datasets. Objective: predict the age of an Abalone from its physical measurements The dataset is available from [UCI Machine Learning](https://archive.ics.uci.edu/ml/datasets/abalone). The aim for this task is to determine the age of an Abalone (a kind of shellfish) from its physical measurements. At the core, it's a regression problem. The dataset contains several features - `sex` (categorical), `length` (continuous), `diameter` (continuous), `height` (continuous), `whole_weight` (continuous), `shucked_weight` (continuous), `viscera_weight` (continuous), `shell_weight` (continuous) and `rings` (integer). Our goal is to predict the variable `rings`, which is a good approximation for age (age is `rings` + 1.5). We'll use SparkML to process the dataset (apply one or many feature transformers) and upload the transformed dataset to S3 so that it can be used for training with XGBoost. Methodologies The Notebook consists of a few high-level steps:* Using AWS Glue for executing the SparkML feature processing job.* Using SageMaker XGBoost to train on the processed dataset produced by the SparkML job.* Building an Inference Pipeline consisting of SparkML & XGBoost models for a realtime inference endpoint.* Building an Inference Pipeline consisting of SparkML & XGBoost models for a single Batch Transform job. Using AWS Glue for executing the SparkML job We'll be running the SparkML job using [AWS Glue](https://aws.amazon.com/glue). AWS Glue is a serverless ETL service which can be used to execute standard Spark/PySpark jobs. Glue currently only supports `Python 2.7`, hence we'll write the script in `Python 2.7`. Permission setup for invoking AWS Glue from this Notebook In order to enable this Notebook to run AWS Glue jobs, we need to add one additional permission to the default execution role of this notebook. We will be using the SageMaker Python SDK to retrieve the default execution role and then you have to go to the [IAM Dashboard](https://console.aws.amazon.com/iam/home) to edit the Role and add the AWS Glue specific permission. Finding out the current execution role of the Notebook We are using the SageMaker Python SDK to retrieve the current role for this Notebook, which needs to be enhanced.
###Code
# Import SageMaker Python SDK to get the Session and execution_role
import sagemaker
from sagemaker import get_execution_role
sess = sagemaker.Session()
role = get_execution_role()
print(role[role.rfind("/") + 1 :])
###Output
_____no_output_____
###Markdown
Adding AWS Glue as an additional trusted entity to this role This step is needed if you want to pass the execution role of this Notebook while calling Glue APIs as well, without creating an additional **Role**. If you have not used AWS Glue before, then this step is mandatory. If you have used AWS Glue previously, then you should have an already existing role that can be used to invoke Glue APIs. In that case, you can pass that role while calling Glue (later in this notebook) and skip this next step. On the IAM dashboard, please click on **Roles** on the left sidenav and search for this Role. Once the Role appears, click on the Role to go to its **Summary** page. Click on the **Trust relationships** tab on the **Summary** page to add AWS Glue as an additional trusted entity. Click on **Edit trust relationship** and replace the JSON with this JSON.```{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "sagemaker.amazonaws.com", "glue.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ]}```Once this is complete, click on **Update Trust Policy** and you are done. Downloading dataset and uploading to S3 The SageMaker team has downloaded the dataset from UCI and uploaded it to one of the S3 buckets in our account. In this Notebook, we will download from that bucket and upload to your bucket so that AWS Glue can access the data. The default AWS Glue permissions we just added expect the data to be present in a bucket with the string `aws-glue` in its name. Hence, after we download the dataset, we will create an S3 bucket in your account with a valid name and then upload the data to S3.
###Code
!wget https://s3-us-west-2.amazonaws.com/sparkml-mleap/data/abalone/abalone.csv
###Output
_____no_output_____
###Markdown
Creating an S3 bucket and uploading this dataset Next we will create an S3 bucket with the `aws-glue` string in the name and upload this data to the S3 bucket. In case you want to use some existing bucket to run your Spark job via AWS Glue, you can use that bucket to upload your data provided the `Role` has access permission to upload and download from that bucket. Once the bucket is created, the following cell will also upload the locally downloaded `abalone.csv` file to this bucket under the `input/abalone` prefix.
###Code
import boto3
import botocore
from botocore.exceptions import ClientError
boto_session = sess.boto_session
s3 = boto_session.resource("s3")
account = boto_session.client("sts").get_caller_identity()["Account"]
region = boto_session.region_name
default_bucket = "aws-glue-{}-{}".format(account, region)
try:
if region == "us-east-1":
s3.create_bucket(Bucket=default_bucket)
else:
s3.create_bucket(
Bucket=default_bucket, CreateBucketConfiguration={"LocationConstraint": region}
)
except ClientError as e:
error_code = e.response["Error"]["Code"]
message = e.response["Error"]["Message"]
if error_code == "BucketAlreadyOwnedByYou":
print("A bucket with the same name already exists in your account - using the same bucket.")
pass
# Uploading the training data to S3
sess.upload_data(path="abalone.csv", bucket=default_bucket, key_prefix="input/abalone")
###Output
_____no_output_____
###Markdown
Writing the feature processing script using SparkML The code for feature transformation using SparkML can be found in the `abalone_processing.py` file in the same directory. You can go through the code itself to see how it uses standard SparkML constructs to define the Pipeline for featurizing the data. Once the Spark ML Pipeline `fit` and `transform` are done, the script splits our dataset into an 80-20 train & validation split and uploads both to S3 so that they can be used with XGBoost for training. Serializing the trained Spark ML Model with [MLeap](https://github.com/combust/mleap) Apache Spark is best suited for batch processing workloads. In order to use the Spark ML model we trained for low latency inference, we need to use the MLeap library to serialize it to an MLeap bundle and later use the [SageMaker SparkML Serving](https://github.com/aws/sagemaker-sparkml-serving-container) to perform realtime and batch inference. By using the `SerializeToBundle()` method from MLeap in the script, we are serializing the ML Pipeline into an MLeap bundle and uploading it to S3 in `tar.gz` format, as SageMaker expects. Uploading the code and other dependencies to S3 for AWS Glue Unlike with SageMaker, we do not need to prepare a Docker image in order to run our code in AWS Glue. We can upload the code and dependencies directly to S3 and pass those locations while invoking the Glue job. Upload the SparkML script to S3 We will be uploading the `abalone_processing.py` script to S3 now so that Glue can use it to run the PySpark job. You can replace it with your own script if needed. If your code has multiple files, you need to zip those files and upload them to S3 instead of uploading a single file as is done here.
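The processing script itself is not reproduced in this notebook, so purely as a rough sketch (the column handling, Spark 2.x-style `OneHotEncoder`, paths and MLeap import shown here are illustrative assumptions, not the exact contents of `abalone_processing.py`), the kind of pipeline it builds looks roughly like this:
###Code
# Hedged sketch of a SparkML feature pipeline plus MLeap serialization; paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, OneHotEncoder, VectorAssembler
from mleap.pyspark.spark_support import SimpleSparkSerializer  # adds serializeToBundle()
spark = SparkSession.builder.appName("AbaloneProcessing").getOrCreate()
# Read the raw CSV and name the columns as described in the dataset overview above
df = spark.read.csv("s3://<your-bucket>/input/abalone/abalone.csv", inferSchema=True).toDF(
    "sex", "length", "diameter", "height", "whole_weight",
    "shucked_weight", "viscera_weight", "shell_weight", "rings")
# Encode the categorical column and assemble all features into a single vector
sex_indexer = StringIndexer(inputCol="sex", outputCol="sex_index")
sex_encoder = OneHotEncoder(inputCol="sex_index", outputCol="sex_vec")
assembler = VectorAssembler(
    inputCols=["sex_vec", "length", "diameter", "height", "whole_weight",
               "shucked_weight", "viscera_weight", "shell_weight"],
    outputCol="features")
pipeline = Pipeline(stages=[sex_indexer, sex_encoder, assembler])
model = pipeline.fit(df)
transformed = model.transform(df)
# 80-20 train/validation split (the real script then writes these out to S3 as CSV)
train_df, validation_df = transformed.randomSplit([0.8, 0.2], seed=42)
# Serialize the fitted pipeline with MLeap so it can be served for low-latency inference later
model.serializeToBundle("jar:file:/tmp/model.zip", transformed)
###Output
_____no_output_____
###Markdown
With that picture in mind, we upload the actual script to S3 so that Glue can run it.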
###Code
script_location = sess.upload_data(
path="abalone_processing.py", bucket=default_bucket, key_prefix="codes"
)
###Output
_____no_output_____
###Markdown
Upload MLeap dependencies to S3 For our job, we will also have to pass MLeap dependencies to Glue. MLeap is an additional library we are using which does not come bundled with default Spark. Similar to most of the packages in the Spark ecosystem, MLeap is also implemented as a Scala package with a front-end wrapper written in Python so that it can be used from PySpark. We need to make sure that the MLeap Python library as well as the JAR is available within the Glue job environment. In the following cell, we will download the MLeap Python dependency & JAR from a SageMaker hosted bucket and upload to the S3 bucket we created above in your account. If you are using some other Python libraries like `nltk` in your code, you need to download the wheel file from PyPI and upload to S3 in the same way. At this point, Glue only supports passing pure Python libraries in this way (e.g. you can not pass `Pandas` or `OpenCV`). However, you can use `NumPy` & `SciPy` without having to pass these as packages because these are pre-installed in the Glue environment.
###Code
!wget https://s3-us-west-2.amazonaws.com/sparkml-mleap/0.9.6/python/python.zip
!wget https://s3-us-west-2.amazonaws.com/sparkml-mleap/0.9.6/jar/mleap_spark_assembly.jar
python_dep_location = sess.upload_data(
path="python.zip", bucket=default_bucket, key_prefix="dependencies/python"
)
jar_dep_location = sess.upload_data(
path="mleap_spark_assembly.jar", bucket=default_bucket, key_prefix="dependencies/jar"
)
###Output
_____no_output_____
###Markdown
Defining output locations for the data and model Next we define the output location where the transformed dataset should be uploaded. We are also specifying a model location where the MLeap serialized model will be uploaded. These locations should be consumed as part of the Spark script using the `getResolvedOptions` method of the AWS Glue library (see `abalone_processing.py` for details). By designing our code in that way, we can re-use these variables as part of other SageMaker operations from this Notebook (details below).
###Code
from time import gmtime, strftime
import time
timestamp_prefix = strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Input location of the data, We uploaded our train.csv file to input key previously
s3_input_bucket = default_bucket
s3_input_key_prefix = "input/abalone"
# Output location of the data. The input data will be split, transformed, and
# uploaded to output/train and output/validation
s3_output_bucket = default_bucket
s3_output_key_prefix = timestamp_prefix + "/abalone"
# the MLeap serialized SparkML model will be uploaded to output/mleap
s3_model_bucket = default_bucket
s3_model_key_prefix = s3_output_key_prefix + "/mleap"
###Output
_____no_output_____
###Markdown
Calling Glue APIs Next we'll be creating a Glue client via Boto so that we can invoke the `create_job` API of Glue. The `create_job` API will create a job definition which can be used to execute your jobs in Glue. The job definition created here is mutable. While creating the job, we are also passing the code location as well as the dependencies location to Glue. The `AllocatedCapacity` parameter controls the hardware resources that Glue will use to execute this job. It is measured in units of `DPU`. For more information on `DPU`, please see [here](https://docs.aws.amazon.com/glue/latest/dg/add-job.html).
###Code
glue_client = boto_session.client("glue")
job_name = "sparkml-abalone-" + timestamp_prefix
response = glue_client.create_job(
Name=job_name,
Description="PySpark job to featurize the Abalone dataset",
Role=role, # you can pass your existing AWS Glue role here if you have used Glue before
ExecutionProperty={"MaxConcurrentRuns": 1},
Command={"Name": "glueetl", "ScriptLocation": script_location},
DefaultArguments={
"--job-language": "python",
"--extra-jars": jar_dep_location,
"--extra-py-files": python_dep_location,
},
AllocatedCapacity=5,
Timeout=60,
)
glue_job_name = response["Name"]
print(glue_job_name)
###Output
_____no_output_____
###Markdown
The job defined above will now be executed by calling the `start_job_run` API. This API creates an immutable run/execution corresponding to the job definition created above. We will need the `job_run_id` of this particular execution to check its status. We'll pass the data and model locations as part of the job execution parameters.
###Code
job_run_id = glue_client.start_job_run(
JobName=job_name,
Arguments={
"--S3_INPUT_BUCKET": s3_input_bucket,
"--S3_INPUT_KEY_PREFIX": s3_input_key_prefix,
"--S3_OUTPUT_BUCKET": s3_output_bucket,
"--S3_OUTPUT_KEY_PREFIX": s3_output_key_prefix,
"--S3_MODEL_BUCKET": s3_model_bucket,
"--S3_MODEL_KEY_PREFIX": s3_model_key_prefix,
},
)["JobRunId"]
print(job_run_id)
###Output
_____no_output_____
###Markdown
Checking Glue job status Now we will poll the job status to see whether it has `succeeded`, `failed` or `stopped`. Once the job succeeds, the transformed data is available in S3 in CSV format and can be used with XGBoost for training. If the job fails, you can go to the [AWS Glue console](https://us-west-2.console.aws.amazon.com/glue/home), click on the **Jobs** tab on the left, and then click on this particular job; the CloudWatch logs link (under **Logs**) can help you see what exactly went wrong in the job execution.
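If the run ends in `FAILED`, the failure reason can also be read programmatically once the polling loop below finishes; a small sketch (the `ErrorMessage` field is typically only populated for failed runs):

```python
# Optional: print the Glue failure reason without leaving the notebook.
final_run = glue_client.get_job_run(JobName=job_name, RunId=job_run_id)["JobRun"]
if final_run["JobRunState"] == "FAILED":
    print(final_run.get("ErrorMessage", "No error message returned"))
```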
###Code
job_run_status = glue_client.get_job_run(JobName=job_name, RunId=job_run_id)["JobRun"][
"JobRunState"
]
while job_run_status not in ("FAILED", "SUCCEEDED", "STOPPED"):
job_run_status = glue_client.get_job_run(JobName=job_name, RunId=job_run_id)["JobRun"][
"JobRunState"
]
print(job_run_status)
time.sleep(30)
###Output
_____no_output_____
###Markdown
Using SageMaker XGBoost to train on the processed dataset produced by SparkML job Now we will use the SageMaker XGBoost algorithm to train on this dataset. We already know the S3 location where the preprocessed training data was uploaded as part of the Glue job. We need to retrieve the XGBoost algorithm imageWe will retrieve the XGBoost built-in algorithm image so that it can be leveraged for the training job.
###Code
from sagemaker.amazon.amazon_estimator import get_image_uri
training_image = get_image_uri(sess.boto_region_name, "xgboost", repo_version="latest")
print(training_image)
###Output
_____no_output_____
###Markdown
Next XGBoost model parameters and dataset details will be set properlyWe have parameterized this Notebook so that the same data location which was used in the PySpark script can now be passed to XGBoost Estimator as well.
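Note that the next cell uses SageMaker Python SDK v1 names (`train_instance_count`, `s3_input`, `get_image_uri`). If you happen to be on SDK v2, a rough equivalent would look like the following sketch (illustrative only, not used by the rest of this notebook, and reusing the S3 locations defined in the cell below):

```python
# Rough SageMaker Python SDK v2 equivalent (assumes v2 is installed).
from sagemaker import image_uris
from sagemaker.inputs import TrainingInput

xgb_image_v2 = image_uris.retrieve("xgboost", sess.boto_region_name, version="1.0-1")
xgb_model_v2 = sagemaker.estimator.Estimator(
    xgb_image_v2,
    role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    volume_size=20,
    max_run=3600,
    output_path=s3_output_location,
    sagemaker_session=sess,
)
train_input_v2 = TrainingInput(s3_train_data, content_type="text/csv")
```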
###Code
s3_train_data = "s3://{}/{}/{}".format(s3_output_bucket, s3_output_key_prefix, "train")
s3_validation_data = "s3://{}/{}/{}".format(s3_output_bucket, s3_output_key_prefix, "validation")
s3_output_location = "s3://{}/{}/{}".format(s3_output_bucket, s3_output_key_prefix, "xgboost_model")
xgb_model = sagemaker.estimator.Estimator(
training_image,
role,
train_instance_count=1,
train_instance_type="ml.m5.xlarge",
train_volume_size=20,
train_max_run=3600,
input_mode="File",
output_path=s3_output_location,
sagemaker_session=sess,
)
xgb_model.set_hyperparameters(
objective="reg:linear",
eta=0.2,
gamma=4,
max_depth=5,
num_round=10,
subsample=0.7,
silent=0,
min_child_weight=6,
)
train_data = sagemaker.session.s3_input(
s3_train_data, distribution="FullyReplicated", content_type="text/csv", s3_data_type="S3Prefix"
)
validation_data = sagemaker.session.s3_input(
s3_validation_data,
distribution="FullyReplicated",
content_type="text/csv",
s3_data_type="S3Prefix",
)
data_channels = {"train": train_data, "validation": validation_data}
###Output
_____no_output_____
###Markdown
Finally XGBoost training will be performed.
###Code
xgb_model.fit(inputs=data_channels, logs=True)
###Output
_____no_output_____
###Markdown
Building an Inference Pipeline consisting of SparkML & XGBoost models for a realtime inference endpoint Next we will proceed with deploying the models in SageMaker to create an Inference Pipeline. You can create an Inference Pipeline with up to five containers.Deploying a model in SageMaker requires two components:* Docker image residing in ECR.* Model artifacts residing in S3.**SparkML**For SparkML, the Docker image for MLeap based SparkML serving is provided by the SageMaker team. For more information on this, please see [SageMaker SparkML Serving](https://github.com/aws/sagemaker-sparkml-serving-container). The MLeap serialized SparkML model was uploaded to S3 as part of the SparkML job we executed in AWS Glue.**XGBoost**For XGBoost, we will use the same Docker image we used for training. The model artifacts for XGBoost were uploaded as part of the training job we just ran. Passing the schema of the payload via environment variableThe SparkML serving container needs to know the schema of the request that will be passed to it when the `predict` method is called. To avoid having to pass the schema with every request, `sagemaker-sparkml-serving` allows you to pass it via an environment variable while creating the model definitions. This schema definition will be required in our next step for creating a model.We will see later that you can overwrite this schema on a per-request basis by passing it as part of the individual request payload as well.
###Code
import json
schema = {
"input": [
{"name": "sex", "type": "string"},
{"name": "length", "type": "double"},
{"name": "diameter", "type": "double"},
{"name": "height", "type": "double"},
{"name": "whole_weight", "type": "double"},
{"name": "shucked_weight", "type": "double"},
{"name": "viscera_weight", "type": "double"},
{"name": "shell_weight", "type": "double"},
],
"output": {"name": "features", "type": "double", "struct": "vector"},
}
schema_json = json.dumps(schema)
print(schema_json)
###Output
_____no_output_____
###Markdown
Creating a `PipelineModel` which comprises the SparkML and XGBoost models in the right orderNext we'll create a SageMaker `PipelineModel` with SparkML and XGBoost.The `PipelineModel` will ensure that both containers get deployed behind a single API endpoint in the correct order. The same model will later be used for Batch Transform as well, to ensure that a single job is sufficient to run predictions against the Pipeline. Here, during the `Model` creation for SparkML, we will pass the schema definition that we built in the previous cell.
###Code
from sagemaker.model import Model
from sagemaker.pipeline import PipelineModel
from sagemaker.sparkml.model import SparkMLModel
sparkml_data = "s3://{}/{}/{}".format(s3_model_bucket, s3_model_key_prefix, "model.tar.gz")
# passing the schema defined above by using an environment variable that sagemaker-sparkml-serving understands
sparkml_model = SparkMLModel(model_data=sparkml_data, env={"SAGEMAKER_SPARKML_SCHEMA": schema_json})
xgb_model = Model(model_data=xgb_model.model_data, image=training_image)
model_name = "inference-pipeline-" + timestamp_prefix
sm_model = PipelineModel(name=model_name, role=role, models=[sparkml_model, xgb_model])
###Output
_____no_output_____
###Markdown
Deploying the `PipelineModel` to an endpoint for realtime inferenceNext we will deploy the model we just created with the `deploy()` method to start an inference endpoint and we will send some requests to the endpoint to verify that it works as expected.
###Code
endpoint_name = "inference-pipeline-ep-" + timestamp_prefix
sm_model.deploy(initial_instance_count=1, instance_type="ml.c4.xlarge", endpoint_name=endpoint_name)
###Output
_____no_output_____
###Markdown
Invoking the newly created inference endpoint with a payload to transform the dataNow we will invoke the endpoint with a valid payload that SageMaker SparkML Serving can recognize. There are three ways in which the input payload can be passed in the request:* Pass it as a valid CSV string. In this case, the schema passed via the environment variable will be used to parse the input. For CSV format, every column in the input has to be a basic datatype (e.g. int, double, string) and it cannot be a Spark `Array` or `Vector`.* Pass it as a valid JSON string. In this case as well, the schema passed via the environment variable will be used to infer the schema. With JSON format, every column in the input can be a basic datatype or a Spark `Vector` or `Array` provided that the corresponding entry in the schema mentions the correct value.* Pass the request in JSON format along with the schema and the data. In this case, the schema passed in the payload will take precedence over the one passed via the environment variable (if any). Passing the payload in CSV formatWe will first see how the payload can be passed to the endpoint in CSV format.
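A side note on SDK versions: the cells below use the v1 `RealTimePredictor` and serializer objects. If you are on SageMaker Python SDK v2, a rough equivalent for the CSV case would look like the following sketch (illustrative only, assuming v2 is installed):

```python
# Rough SageMaker Python SDK v2 equivalent of the CSV predictor below.
from sagemaker.predictor import Predictor
from sagemaker.serializers import CSVSerializer
from sagemaker.deserializers import CSVDeserializer

predictor_v2 = Predictor(
    endpoint_name=endpoint_name,
    sagemaker_session=sess,
    serializer=CSVSerializer(),
    deserializer=CSVDeserializer(),
)
# predictor_v2.predict("F,0.515,0.425,0.14,0.766,0.304,0.1725,0.255")
```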
###Code
from sagemaker.predictor import (
json_serializer,
csv_serializer,
json_deserializer,
RealTimePredictor,
)
from sagemaker.content_types import CONTENT_TYPE_CSV, CONTENT_TYPE_JSON
payload = "F,0.515,0.425,0.14,0.766,0.304,0.1725,0.255"
predictor = RealTimePredictor(
endpoint=endpoint_name,
sagemaker_session=sess,
serializer=csv_serializer,
content_type=CONTENT_TYPE_CSV,
accept=CONTENT_TYPE_CSV,
)
print(predictor.predict(payload))
###Output
_____no_output_____
###Markdown
Passing the payload in JSON formatWe will now pass a different payload in JSON format.
###Code
payload = {"data": ["F", 0.515, 0.425, 0.14, 0.766, 0.304, 0.1725, 0.255]}
predictor = RealTimePredictor(
endpoint=endpoint_name,
sagemaker_session=sess,
serializer=json_serializer,
content_type=CONTENT_TYPE_JSON,
accept=CONTENT_TYPE_CSV,
)
print(predictor.predict(payload))
###Output
_____no_output_____
###Markdown
[Optional] Passing the payload with both schema and the dataNext we will pass an input payload comprising both the schema and the data. If you look carefully, this schema is slightly different from the one we passed via the environment variable. The positions of the `length` and `sex` columns have been swapped, and so has the data. The server parses the payload with this schema and works properly.
###Code
payload = {
"schema": {
"input": [
{"name": "length", "type": "double"},
{"name": "sex", "type": "string"},
{"name": "diameter", "type": "double"},
{"name": "height", "type": "double"},
{"name": "whole_weight", "type": "double"},
{"name": "shucked_weight", "type": "double"},
{"name": "viscera_weight", "type": "double"},
{"name": "shell_weight", "type": "double"},
],
"output": {"name": "features", "type": "double", "struct": "vector"},
},
"data": [0.515, "F", 0.425, 0.14, 0.766, 0.304, 0.1725, 0.255],
}
predictor = RealTimePredictor(
endpoint=endpoint_name,
sagemaker_session=sess,
serializer=json_serializer,
content_type=CONTENT_TYPE_JSON,
accept=CONTENT_TYPE_CSV,
)
print(predictor.predict(payload))
###Output
_____no_output_____
###Markdown
[Optional] Deleting the EndpointIf you do not plan to use this endpoint, then it is a good practice to delete the endpoint so that you do not incur the cost of running it.
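Deleting the endpoint alone leaves the endpoint configuration and the models behind. If you want a fuller clean-up, you can remove those too after running the cell below; a sketch (assuming the default names created by `deploy()` above):

```python
# Optional fuller clean-up; names assume the defaults used earlier in this notebook.
sm_client.delete_endpoint_config(EndpointConfigName=endpoint_name)
sm_client.delete_model(ModelName=model_name)
```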
###Code
sm_client = boto_session.client("sagemaker")
sm_client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
###Markdown
Building an Inference Pipeline consisting of SparkML & XGBoost models for a single Batch Transform jobSageMaker Batch Transform also supports chaining multiple containers together when deploying an Inference Pipeline and performing a single batch transform job to transform your data for a batch use-case, similar to the real-time use-case we have seen above. Preparing data for Batch TransformBatch Transform requires data in the same format described above, with one CSV or JSON record per line. For this Notebook, the SageMaker team has created a sample input in CSV format which Batch Transform can process. The input is a CSV file similar to the training file, the only difference being that it does not contain the label (``rings``) field.Next we will download a sample of this data (named `batch_input_abalone.csv`) from one of the SageMaker buckets and upload it to your S3 bucket. We will also inspect the first five rows of the data after downloading.
###Code
!wget https://s3-us-west-2.amazonaws.com/sparkml-mleap/data/batch_input_abalone.csv
!printf "\n\nShowing first five lines\n\n"
!head -n 5 batch_input_abalone.csv
!printf "\n\nAs we can see, it is identical to the training file apart from the label being absent here.\n\n"
batch_input_loc = sess.upload_data(
path="batch_input_abalone.csv", bucket=default_bucket, key_prefix="batch"
)
###Output
_____no_output_____
###Markdown
Invoking the Transform API to create a Batch Transform jobNext we will create a Batch Transform job using the `Transformer` class from the SageMaker Python SDK.
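Once the transform job below completes, Batch Transform writes one output object per input file, named `<input-file>.out`, under the output path; a small sketch for peeking at the first few predictions (assumes the default output naming, a successful job, and the `s3` resource and bucket variables defined earlier in this notebook):

```python
# Peek at the Batch Transform output after transformer.wait() returns.
output_key = "batch_output/abalone/{}/batch_input_abalone.csv.out".format(timestamp_prefix)
body = s3.Object(default_bucket, output_key).get()["Body"].read().decode("utf-8")
print("\n".join(body.splitlines()[:5]))
```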
###Code
input_data_path = "s3://{}/{}/{}".format(default_bucket, "batch", "batch_input_abalone.csv")
output_data_path = "s3://{}/{}/{}".format(default_bucket, "batch_output/abalone", timestamp_prefix)
job_name = "serial-inference-batch-" + timestamp_prefix
transformer = sagemaker.transformer.Transformer(
# This was the model created using PipelineModel and it contains feature processing and XGBoost
model_name=model_name,
instance_count=1,
instance_type="ml.m5.xlarge",
strategy="SingleRecord",
assemble_with="Line",
output_path=output_data_path,
base_transform_job_name="serial-inference-batch",
sagemaker_session=sess,
accept=CONTENT_TYPE_CSV,
)
transformer.transform(
data=input_data_path, job_name=job_name, content_type=CONTENT_TYPE_CSV, split_type="Line"
)
transformer.wait()
###Output
_____no_output_____
###Markdown
Feature processing with Spark, training with XGBoost and deploying as Inference PipelineTypically a Machine Learning (ML) process consists of few steps: gathering data with various ETL jobs, pre-processing the data, featurizing the dataset by incorporating standard techniques or prior knowledge, and finally training an ML model using an algorithm.In many cases, when the trained model is used for processing real time or batch prediction requests, the model receives data in a format which needs to pre-processed (e.g. featurized) before it can be passed to the algorithm. In the following notebook, we will demonstrate how you can build your ML Pipeline leveraging Spark Feature Transformers and SageMaker XGBoost algorithm & after the model is trained, deploy the Pipeline (Feature Transformer and XGBoost) as an Inference Pipeline behind a single Endpoint for real-time inference and for batch inferences using Amazon SageMaker Batch Transform.In this notebook, we use Amazon Glue to run serverless Spark. Though the notebook demonstrates the end-to-end flow on a small dataset, the setup can be seamlessly used to scale to larger datasets. Objective: predict the age of an Abalone from its physical measurement The dataset is available from [UCI Machine Learning](https://archive.ics.uci.edu/ml/datasets/abalone). The aim for this task is to determine age of an Abalone (a kind of shellfish) from its physical measurements. At the core, it's a regression problem. The dataset contains several features - `sex` (categorical), `length` (continuous), `diameter` (continuous), `height` (continuous), `whole_weight` (continuous), `shucked_weight` (continuous), `viscera_weight` (continuous), `shell_weight` (continuous) and `rings` (integer).Our goal is to predict the variable `rings` which is a good approximation for age (age is `rings` + 1.5). We'll use SparkML to process the dataset (apply one or many feature transformers) and upload the transformed dataset to S3 so that it can be used for training with XGBoost. MethodologiesThe Notebook consists of a few high-level steps:* Using AWS Glue for executing the SparkML feature processing job.* Using SageMaker XGBoost to train on the processed dataset produced by SparkML job.* Building an Inference Pipeline consisting of SparkML & XGBoost models for a realtime inference endpoint.* Building an Inference Pipeline consisting of SparkML & XGBoost models for a single Batch Transform job. Using AWS Glue for executing the SparkML job We'll be running the SparkML job using [AWS Glue](https://aws.amazon.com/glue). AWS Glue is a serverless ETL service which can be used to execute standard Spark/PySpark jobs. Glue currently only supports `Python 2.7`, hence we'll write the script in `Python 2.7`. Permission setup for invoking AWS Glue from this NotebookIn order to enable this Notebook to run AWS Glue jobs, we need to add one additional permission to the default execution role of this notebook. We will be using SageMaker Python SDK to retrieve the default execution role and then you have to go to [IAM Dashboard](https://console.aws.amazon.com/iam/home) to edit the Role to add AWS Glue specific permission. Finding out the current execution role of the NotebookWe are using SageMaker Python SDK to retrieve the current role for this Notebook which needs to be enhanced.
###Code
# Import SageMaker Python SDK to get the Session and execution_role
import sagemaker
from sagemaker import get_execution_role
sess = sagemaker.Session()
role = get_execution_role()
print(role[role.rfind('/') + 1:])
###Output
_____no_output_____
###Markdown
Adding AWS Glue as an additional trusted entity to this roleThis step is needed if you want to pass the execution role of this Notebook while calling Glue APIs as well without creating an additional **Role**. If you have not used AWS Glue before, then this step is mandatory. If you have used AWS Glue previously, then you should have an already existing role that can be used to invoke Glue APIs. In that case, you can pass that role while calling Glue (later in this notebook) and skip this next step. On the IAM dashboard, please click on **Roles** on the left sidenav and search for this Role. Once the Role appears, click on the Role to go to its **Summary** page. Click on the **Trust relationships** tab on the **Summary** page to add AWS Glue as an additional trusted entity. Click on **Edit trust relationship** and replace the JSON with this JSON.```{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "sagemaker.amazonaws.com", "glue.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ]}```Once this is complete, click on **Update Trust Policy** and you are done. Downloading dataset and uploading to S3SageMaker team has downloaded the dataset from UCI and uploaded to one of the S3 buckets in our account. In this Notebook, we will download from that bucket and upload to your bucket so that AWS Glue can access the data. The default AWS Glue permissions we just added expects the data to be present in a bucket with the string `aws-glue`. Hence, after we download the dataset, we will create an S3 bucket in your account with a valid name and then upload the data to S3.
###Code
!wget https://s3-us-west-2.amazonaws.com/sparkml-mleap/data/abalone/abalone.csv
###Output
_____no_output_____
###Markdown
Creating an S3 bucket and uploading this datasetNext we will create an S3 bucket with the `aws-glue` string in the name and upload this data to the S3 bucket. In case you want to use some existing bucket to run your Spark job via AWS Glue, you can use that bucket to upload your data provided the `Role` has permission to upload and download from that bucket.Once the bucket is created, the following cell will also upload the `abalone.csv` file downloaded locally to this bucket under the `input/abalone` prefix.
###Code
import boto3
import botocore
from botocore.exceptions import ClientError
boto_session = sess.boto_session
s3 = boto_session.resource('s3')
account = boto_session.client('sts').get_caller_identity()['Account']
region = boto_session.region_name
default_bucket = 'aws-glue-{}-{}'.format(account, region)
try:
if region == 'us-east-1':
s3.create_bucket(Bucket=default_bucket)
else:
s3.create_bucket(Bucket=default_bucket, CreateBucketConfiguration={'LocationConstraint': region})
except ClientError as e:
error_code = e.response['Error']['Code']
message = e.response['Error']['Message']
if error_code == 'BucketAlreadyOwnedByYou':
print ('A bucket with the same name already exists in your account - using the same bucket.')
pass
# Uploading the training data to S3
sess.upload_data(path='abalone.csv', bucket=default_bucket, key_prefix='input/abalone')
###Output
_____no_output_____
###Markdown
Writing the feature processing script using SparkMLThe code for feature transformation using SparkML can be found in the `abalone_processing.py` file in the same directory. You can go through the code itself to see how it uses standard SparkML constructs to define the Pipeline for featurizing the data.Once the Spark ML Pipeline `fit` and `transform` is done, we split our dataset into 80-20 train & validation as part of the script and upload it to S3 so that it can be used with XGBoost for training. Serializing the trained Spark ML Model with [MLeap](https://github.com/combust/mleap)Apache Spark is best suited for batch processing workloads. In order to use the Spark ML model we trained for low latency inference, we need to use the MLeap library to serialize it to an MLeap bundle and later use the [SageMaker SparkML Serving](https://github.com/aws/sagemaker-sparkml-serving-container) to perform realtime and batch inference. By using the `SerializeToBundle()` method from MLeap in the script, we serialize the ML Pipeline into an MLeap bundle and upload it to S3 in `tar.gz` format as SageMaker expects. Uploading the code and other dependencies to S3 for AWS GlueUnlike SageMaker, in order to run your code in AWS Glue, we do not need to prepare a Docker image. We can upload the code and dependencies directly to S3 and pass those locations while invoking the Glue job. Upload the SparkML script to S3We will now upload the `abalone_processing.py` script to S3 so that Glue can use it to run the PySpark job. You can replace it with your own script if needed. If your code has multiple files, you need to zip those files and upload them to S3 instead of uploading a single file as is done here.
###Code
script_location = sess.upload_data(path='abalone_processing.py', bucket=default_bucket, key_prefix='codes')
###Output
_____no_output_____
###Markdown
Upload MLeap dependencies to S3 For our job, we will also have to pass the MLeap dependencies to Glue. MLeap is an additional library we are using that does not come bundled with default Spark.Like most packages in the Spark ecosystem, MLeap is implemented as a Scala package with a front-end wrapper written in Python so that it can be used from PySpark. We need to make sure that both the MLeap Python library and the JAR are available within the Glue job environment. In the following cell, we will download the MLeap Python dependency & JAR from a SageMaker hosted bucket and upload them to the S3 bucket we created above in your account. If you are using other pure Python libraries like `nltk` in your code, you need to download the wheel file from PyPI and upload it to S3 in the same way. At this point, Glue only supports passing pure Python libraries this way (e.g. you cannot pass `Pandas` or `OpenCV`). However, you can use `NumPy` & `SciPy` without passing them as packages because they are pre-installed in the Glue environment.
###Code
!wget https://s3-us-west-2.amazonaws.com/sparkml-mleap/0.9.6/python/python.zip
!wget https://s3-us-west-2.amazonaws.com/sparkml-mleap/0.9.6/jar/mleap_spark_assembly.jar
python_dep_location = sess.upload_data(path='python.zip', bucket=default_bucket, key_prefix='dependencies/python')
jar_dep_location = sess.upload_data(path='mleap_spark_assembly.jar', bucket=default_bucket, key_prefix='dependencies/jar')
###Output
_____no_output_____
###Markdown
Defining output locations for the data and modelNext we define the output location where the transformed dataset should be uploaded. We also specify a model location where the MLeap serialized model will be uploaded. These locations should be consumed as part of the Spark script using the `getResolvedOptions` method of the AWS Glue library (see `abalone_processing.py` for details).By designing our code that way, we can re-use these variables as part of other SageMaker operations from this Notebook (details below).
###Code
from time import gmtime, strftime
import time
timestamp_prefix = strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Input location of the data. We uploaded our abalone.csv file to the input/abalone key prefix previously
s3_input_bucket = default_bucket
s3_input_key_prefix = 'input/abalone'
# Output location of the data. The input data will be split, transformed, and
# uploaded to output/train and output/validation
s3_output_bucket = default_bucket
s3_output_key_prefix = timestamp_prefix + '/abalone'
# the MLeap serialized SparkML model will be uploaded to output/mleap
s3_model_bucket = default_bucket
s3_model_key_prefix = s3_output_key_prefix + '/mleap'
###Output
_____no_output_____
###Markdown
Calling Glue APIs Next we'll create a Glue client via Boto so that we can invoke the `create_job` API of Glue. The `create_job` API creates a job definition which can be used to execute your jobs in Glue. The job definition created here is mutable. While creating the job, we also pass the code location as well as the dependencies location to Glue.The `AllocatedCapacity` parameter controls the hardware resources that Glue will use to execute this job. It is measured in units of `DPU`. For more information on `DPU`, please see [here](https://docs.aws.amazon.com/glue/latest/dg/add-job.html).
###Code
glue_client = boto_session.client('glue')
job_name = 'sparkml-abalone-' + timestamp_prefix
response = glue_client.create_job(
Name=job_name,
Description='PySpark job to featurize the Abalone dataset',
Role=role, # you can pass your existing AWS Glue role here if you have used Glue before
ExecutionProperty={
'MaxConcurrentRuns': 1
},
Command={
'Name': 'glueetl',
'ScriptLocation': script_location
},
DefaultArguments={
'--job-language': 'python',
'--extra-jars' : jar_dep_location,
'--extra-py-files': python_dep_location
},
AllocatedCapacity=5,
Timeout=60,
)
glue_job_name = response['Name']
print(glue_job_name)
###Output
_____no_output_____
###Markdown
The aforementioned job will be executed now by calling `start_job_run` API. This API creates an immutable run/execution corresponding to the job definition created above. We will require the `job_run_id` for the particular job execution to check for status. We'll pass the data and model locations as part of the job execution parameters.
###Code
job_run_id = glue_client.start_job_run(JobName=job_name,
Arguments = {
'--S3_INPUT_BUCKET': s3_input_bucket,
'--S3_INPUT_KEY_PREFIX': s3_input_key_prefix,
'--S3_OUTPUT_BUCKET': s3_output_bucket,
'--S3_OUTPUT_KEY_PREFIX': s3_output_key_prefix,
'--S3_MODEL_BUCKET': s3_model_bucket,
'--S3_MODEL_KEY_PREFIX': s3_model_key_prefix
})['JobRunId']
print(job_run_id)
###Output
_____no_output_____
###Markdown
Checking Glue job status Now we will poll the job status to see whether it has `succeeded`, `failed` or `stopped`. Once the job succeeds, the transformed data is available in S3 in CSV format and can be used with XGBoost for training. If the job fails, you can go to the [AWS Glue console](https://us-west-2.console.aws.amazon.com/glue/home), click on the **Jobs** tab on the left, and then click on this particular job; the CloudWatch logs link (under **Logs**) can help you see what exactly went wrong in the job execution.
###Code
job_run_status = glue_client.get_job_run(JobName=job_name,RunId=job_run_id)['JobRun']['JobRunState']
while job_run_status not in ('FAILED', 'SUCCEEDED', 'STOPPED'):
job_run_status = glue_client.get_job_run(JobName=job_name,RunId=job_run_id)['JobRun']['JobRunState']
print (job_run_status)
time.sleep(30)
###Output
_____no_output_____
###Markdown
Using SageMaker XGBoost to train on the processed dataset produced by SparkML job Now we will use the SageMaker XGBoost algorithm to train on this dataset. We already know the S3 location where the preprocessed training data was uploaded as part of the Glue job. We need to retrieve the XGBoost algorithm imageWe will retrieve the XGBoost built-in algorithm image so that it can be leveraged for the training job.
###Code
from sagemaker.amazon.amazon_estimator import get_image_uri
training_image = get_image_uri(sess.boto_region_name, 'xgboost', repo_version="latest")
print (training_image)
###Output
_____no_output_____
###Markdown
Next XGBoost model parameters and dataset details will be set properlyWe have parameterized this Notebook so that the same data location which was used in the PySpark script can now be passed to XGBoost Estimator as well.
###Code
s3_train_data = 's3://{}/{}/{}'.format(s3_output_bucket, s3_output_key_prefix, 'train')
s3_validation_data = 's3://{}/{}/{}'.format(s3_output_bucket, s3_output_key_prefix, 'validation')
s3_output_location = 's3://{}/{}/{}'.format(s3_output_bucket, s3_output_key_prefix, 'xgboost_model')
xgb_model = sagemaker.estimator.Estimator(training_image,
role,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
train_volume_size = 20,
train_max_run = 3600,
input_mode= 'File',
output_path=s3_output_location,
sagemaker_session=sess)
xgb_model.set_hyperparameters(objective = "reg:linear",
eta = .2,
gamma = 4,
max_depth = 5,
num_round = 10,
subsample = 0.7,
silent = 0,
min_child_weight = 6)
train_data = sagemaker.session.s3_input(s3_train_data, distribution='FullyReplicated',
content_type='text/csv', s3_data_type='S3Prefix')
validation_data = sagemaker.session.s3_input(s3_validation_data, distribution='FullyReplicated',
content_type='text/csv', s3_data_type='S3Prefix')
data_channels = {'train': train_data, 'validation': validation_data}
###Output
_____no_output_____
###Markdown
Finally XGBoost training will be performed.
###Code
xgb_model.fit(inputs=data_channels, logs=True)
###Output
_____no_output_____
###Markdown
Building an Inference Pipeline consisting of SparkML & XGBoost models for a realtime inference endpoint Next we will proceed with deploying the models in SageMaker to create an Inference Pipeline. You can create an Inference Pipeline with up to five containers.Deploying a model in SageMaker requires two components:* Docker image residing in ECR.* Model artifacts residing in S3.**SparkML**For SparkML, the Docker image for MLeap based SparkML serving is provided by the SageMaker team. For more information on this, please see [SageMaker SparkML Serving](https://github.com/aws/sagemaker-sparkml-serving-container). The MLeap serialized SparkML model was uploaded to S3 as part of the SparkML job we executed in AWS Glue.**XGBoost**For XGBoost, we will use the same Docker image we used for training. The model artifacts for XGBoost were uploaded as part of the training job we just ran. Passing the schema of the payload via environment variableThe SparkML serving container needs to know the schema of the request that will be passed to it when the `predict` method is called. To avoid having to pass the schema with every request, `sagemaker-sparkml-serving` allows you to pass it via an environment variable while creating the model definitions. This schema definition will be required in our next step for creating a model.We will see later that you can overwrite this schema on a per-request basis by passing it as part of the individual request payload as well.
###Code
import json
schema = {
"input": [
{
"name": "sex",
"type": "string"
},
{
"name": "length",
"type": "double"
},
{
"name": "diameter",
"type": "double"
},
{
"name": "height",
"type": "double"
},
{
"name": "whole_weight",
"type": "double"
},
{
"name": "shucked_weight",
"type": "double"
},
{
"name": "viscera_weight",
"type": "double"
},
{
"name": "shell_weight",
"type": "double"
},
],
"output":
{
"name": "features",
"type": "double",
"struct": "vector"
}
}
schema_json = json.dumps(schema)
print(schema_json)
###Output
_____no_output_____
###Markdown
Creating a `PipelineModel` which comprises the SparkML and XGBoost models in the right order. Next we'll create a SageMaker `PipelineModel` with SparkML and XGBoost. The `PipelineModel` will ensure that both containers get deployed behind a single API endpoint in the correct order. The same model will later be used for Batch Transform as well, to ensure that a single job is sufficient to run predictions against the Pipeline. Here, during the `Model` creation for SparkML, we will pass the schema definition that we built in the previous cell.
###Code
from sagemaker.model import Model
from sagemaker.pipeline import PipelineModel
from sagemaker.sparkml.model import SparkMLModel
sparkml_data = 's3://{}/{}/{}'.format(s3_model_bucket, s3_model_key_prefix, 'model.tar.gz')
# passing the schema defined above by using an environment variable that sagemaker-sparkml-serving understands
sparkml_model = SparkMLModel(model_data=sparkml_data, env={'SAGEMAKER_SPARKML_SCHEMA' : schema_json})
xgb_model = Model(model_data=xgb_model.model_data, image=training_image)
model_name = 'inference-pipeline-' + timestamp_prefix
sm_model = PipelineModel(name=model_name, role=role, models=[sparkml_model, xgb_model])
###Output
_____no_output_____
###Markdown
Deploying the `PipelineModel` to an endpoint for realtime inference. Next we will deploy the model we just created with the `deploy()` method to start an inference endpoint, and we will send some requests to the endpoint to verify that it works as expected.
###Code
endpoint_name = 'inference-pipeline-ep-' + timestamp_prefix
sm_model.deploy(initial_instance_count=1, instance_type='ml.c4.xlarge', endpoint_name=endpoint_name)
###Output
_____no_output_____
###Markdown
Invoking the newly created inference endpoint with a payload to transform the data. Now we will invoke the endpoint with a valid payload that SageMaker SparkML Serving can recognize. There are three ways in which the input payload can be passed in the request:* Pass it as a valid CSV string. In this case, the schema passed via the environment variable will be used to determine the schema. For CSV format, every column in the input has to be a basic datatype (e.g. int, double, string) and it cannot be a Spark `Array` or `Vector`.* Pass it as a valid JSON string. In this case as well, the schema passed via the environment variable will be used to infer the schema. With JSON format, every column in the input can be a basic datatype or a Spark `Vector` or `Array`, provided that the corresponding entry in the schema mentions the correct value.* Pass the request in JSON format along with the schema and the data. In this case, the schema passed in the payload will take precedence over the one passed via the environment variable (if any). Passing the payload in CSV format. We will first see how the payload can be passed to the endpoint in CSV format.
###Code
from sagemaker.predictor import json_serializer, csv_serializer, json_deserializer, RealTimePredictor
from sagemaker.content_types import CONTENT_TYPE_CSV, CONTENT_TYPE_JSON
payload = "F,0.515,0.425,0.14,0.766,0.304,0.1725,0.255"
predictor = RealTimePredictor(endpoint=endpoint_name, sagemaker_session=sess, serializer=csv_serializer,
content_type=CONTENT_TYPE_CSV, accept=CONTENT_TYPE_CSV)
print(predictor.predict(payload))
###Output
_____no_output_____
###Markdown
Passing the payload in JSON format. We will now pass a different payload in JSON format.
###Code
payload = {"data": ["F",0.515,0.425,0.14,0.766,0.304,0.1725,0.255]}
predictor = RealTimePredictor(endpoint=endpoint_name, sagemaker_session=sess, serializer=json_serializer,
content_type=CONTENT_TYPE_JSON, accept=CONTENT_TYPE_CSV)
print(predictor.predict(payload))
###Output
_____no_output_____
###Markdown
[Optional] Passing the payload with both the schema and the data. Next we will pass an input payload comprising both the schema and the data. If you look carefully, this schema is slightly different from the one we passed via the environment variable: the positions of the `length` and `sex` columns have been swapped, and so have the corresponding values in the data. The server parses the payload with this schema and works properly.
###Code
payload = {
"schema": {
"input": [
{
"name": "length",
"type": "double"
},
{
"name": "sex",
"type": "string"
},
{
"name": "diameter",
"type": "double"
},
{
"name": "height",
"type": "double"
},
{
"name": "whole_weight",
"type": "double"
},
{
"name": "shucked_weight",
"type": "double"
},
{
"name": "viscera_weight",
"type": "double"
},
{
"name": "shell_weight",
"type": "double"
},
],
"output":
{
"name": "features",
"type": "double",
"struct": "vector"
}
},
"data": [0.515,"F",0.425,0.14,0.766,0.304,0.1725,0.255]
}
predictor = RealTimePredictor(endpoint=endpoint_name, sagemaker_session=sess, serializer=json_serializer,
content_type=CONTENT_TYPE_JSON, accept=CONTENT_TYPE_CSV)
print(predictor.predict(payload))
###Output
_____no_output_____
###Markdown
[Optional] Deleting the Endpoint. If you do not plan to use this endpoint, then it is a good practice to delete the endpoint so that you do not incur the cost of running it.
###Code
sm_client = boto_session.client('sagemaker')
sm_client.delete_endpoint(EndpointName=endpoint_name)
###Output
_____no_output_____
###Markdown
Building an Inference Pipeline consisting of SparkML & XGBoost models for a single Batch Transform job. SageMaker Batch Transform also supports chaining multiple containers together when deploying an Inference Pipeline, performing a single batch transform job to transform your data for a batch use-case similar to the real-time use-case we have seen above. Preparing data for Batch Transform. Batch Transform requires data in the same format described above, with one CSV or JSON record per line. For this Notebook, the SageMaker team has created a sample input in CSV format which Batch Transform can process. The input is essentially the same CSV file as the training file, the only difference being that it does not contain the label (``rings``) field. Next we will download a sample of this data from one of the SageMaker buckets (named `batch_input_abalone.csv`) and upload it to your S3 bucket. We will also inspect the first five rows of the data after downloading it.
###Code
!wget https://s3-us-west-2.amazonaws.com/sparkml-mleap/data/batch_input_abalone.csv
!printf "\n\nShowing first five lines\n\n"
!head -n 5 batch_input_abalone.csv
!printf "\n\nAs we can see, it is identical to the training file apart from the label being absent here.\n\n"
batch_input_loc = sess.upload_data(path='batch_input_abalone.csv', bucket=default_bucket, key_prefix='batch')
###Output
_____no_output_____
###Markdown
Invoking the Transform API to create a Batch Transform job. Next we will create a Batch Transform job using the `Transformer` class from the Python SDK.
###Code
input_data_path = 's3://{}/{}/{}'.format(default_bucket, 'batch', 'batch_input_abalone.csv')
output_data_path = 's3://{}/{}/{}'.format(default_bucket, 'batch_output/abalone', timestamp_prefix)
job_name = 'serial-inference-batch-' + timestamp_prefix
transformer = sagemaker.transformer.Transformer(
# This was the model created using PipelineModel and it contains feature processing and XGBoost
model_name = model_name,
instance_count = 1,
instance_type = 'ml.m4.xlarge',
strategy = 'SingleRecord',
assemble_with = 'Line',
output_path = output_data_path,
base_transform_job_name='serial-inference-batch',
sagemaker_session=sess,
accept = CONTENT_TYPE_CSV
)
transformer.transform(data = input_data_path,
job_name = job_name,
content_type = CONTENT_TYPE_CSV,
split_type = 'Line')
transformer.wait()
###Output
_____no_output_____ |
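###Markdown
 [Optional] Inspecting the Batch Transform output. Once the transform job completes, we could download its output from S3 and look at the first few predictions. The sketch below assumes the usual Batch Transform naming convention, where the output object is the input file name with a `.out` suffix under the configured output path; adjust the key if your job wrote a different name.
###Code
# a minimal sketch: download the (assumed) output object and preview the first few predictions
s3_client = boto_session.client('s3')
output_key = 'batch_output/abalone/{}/{}'.format(timestamp_prefix, 'batch_input_abalone.csv.out')
s3_client.download_file(default_bucket, output_key, 'batch_output_abalone.csv')
!head -n 5 batch_output_abalone.csv
###Output
_____no_output_____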
prediction/RecSys-Part3-New-NegSampling-Predict-Replies-Retweets.ipynb | ###Markdown
Data Sequence Generators
###Code
from transformers import BertTokenizer, TFBertModel
from tensorflow.keras.models import Model
from spektral.utils import normalized_adjacency
import tensorflow as tf
from tensorflow.keras.utils import Sequence
from tensorflow.keras import backend as K  # used by the custom loss defined later
import numpy as np
import os
import pickle
import pymongo
from tqdm import tqdm
import random
import gridfs
import math
import functools
class TwitterDataset(Sequence):
def __init__(self, user_id, users,
replies, mentions, retweets, full_graph, graph_test,
max_tweets, batch_size, date_limit, db, filename='user_tweets.np'):
self.users_id = user_id
self.id_users = [None] * len(self.users_id)
for u_id, idx in self.users_id.items():
self.id_users[idx] = u_id
self.graph_replies = replies
self.graph_mentions = mentions
self.graph_retweets = retweets
self.graph_full = full_graph
self.max_tweets = max_tweets
self.batch_size = batch_size
self.valid_users = list()
self.target_users = list(user_id.keys())
self.target_users.sort()
for u in self.target_users:
if u not in graph_test.nodes:
continue
if len(list(graph_test.neighbors(u))) > 0:
self.valid_users.append(u)
self.valid_users.sort()
#empty tweet representation
#bert_model = TFBertModel.from_pretrained("bert-base-uncased")
#tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
#self.empty_tweet = bert_model(**tokenizer('', return_tensors='tf'))['pooler_output'].numpy()
self.empty_tweet = None
self.date_limit = date_limit
self.gridfs = gridfs.GridFS(db, collection='fsProcessedTweets')
#del bert_model
#del tokenizer
self.filename = filename
self._init_tweet_cache()
self.current_target = -1
self.batch_per_pass = math.ceil(len(self.target_users)/ self.batch_size)
pass
def create_data(self):
self.user_data = []
        print('Preprocessing batches')
for i in tqdm(range(0, len(self.valid_users))):
self.user_data.append(self._get_instance(self.valid_users[i]))
#data = [self._get_instance(self.valid_users[i])]
#max_users = max([len(instance[0]) for instance in data])
#self.user_data.append(self._to_batch(data, max_users))
self.internal_get_item = self.internal_get_item_cache
pass
def _init_tweet_cache(self):
if not os.path.exists('training_tweets.npy'):
self.tweets = np.zeros((len(self.id_users), 768), dtype=np.float32)
for i, t in tqdm(enumerate(self.id_users), total=len(self.id_users)):
self.tweets[i, ...] = self._get_tweets_bert_base(t)
np.save('training_tweets.npy', self.tweets)
return
self.tweets = np.load('training_tweets.npy')
self.tweets = np.mean(self.tweets, axis=1)
pass
def __len__(self):
return len(self.valid_users) * self.batch_per_pass
def _get_graph_for_node(self, node):
user = node#self.user_id[node]
node_map = {user: 0}
        # Map all the 1-level (direct neighbor) nodes to build the adjacency matrices
for neighbor in self.graph_replies.neighbors(node):
if neighbor not in node_map:
node_map[neighbor] = len(node_map)
for neighbor in self.graph_retweets.neighbors(node):
if neighbor not in node_map:
node_map[neighbor] = len(node_map)
        # Create the adjacency matrices (replies and retweets)
replies = np.eye(len(node_map))
retweets = np.eye(len(node_map))
        # fill the adjacency matrices for every mapped node
for node, node_id in node_map.items():
for neighbor in self.graph_replies.neighbors(node):
if neighbor in node_map:
replies[node_id, node_map[neighbor]] = 1
for neighbor in self.graph_retweets.neighbors(node):
if neighbor in node_map:
retweets[node_id, node_map[neighbor]] = 1
replies = normalized_adjacency(replies)
retweets = normalized_adjacency(retweets)
#Create the embedding vector
embeddings = np.zeros((len(node_map)))
for k, v in node_map.items():
            # Convert the Twitter user id to the index used by the embedding layer
embeddings[v] = self.users_id[k]
return embeddings, replies, retweets
def _get_tweets_bert(self, node):
idx = int(node)
return self.tweets[idx, ...]
def _get_tweets_bert_db(self, node):
user_id = node
query = {'userId': int(user_id)}
if self.date_limit is not None:
query['created'] = {'$lte': self.date_limit}
cursor = (
self.gridfs.
find(query).
sort([('created', pymongo.DESCENDING)]).
limit(self.max_tweets)
)
result = np.empty((self.max_tweets, 768))
i = 0
for file in cursor:
result[i, :] = np.load(file)['pooler_output']
i += 1
while i < self.max_tweets:
result[i, :] = self.empty_tweet
i += 1
return result
def _get_instance(self, node):
embeddings, replies, retweets = self._get_graph_for_node(node)
bert_emb = np.empty((embeddings.shape[0], 768))
for i, node in enumerate(embeddings):
bert_emb[i, ...] = self._get_tweets_bert(node)
return embeddings, replies[:1, :], retweets[:1, :], bert_emb
def _to_batch(self, instances, max_users, batch_size):
user_i = np.zeros((batch_size, max_users))
user_replies = np.zeros((batch_size, 1, max_users))
user_retweet = np.zeros((batch_size, 1, max_users))
user_bert = np.zeros((batch_size, max_users, 768))
for i, (embeddings, replies, retweets, bert_emb) in enumerate(instances):
user_i[i, :embeddings.shape[0]] = embeddings
user_replies[i, :replies.shape[0], :replies.shape[1]] = replies
user_retweet[i, :retweets.shape[0], :retweets.shape[1]] = retweets
user_bert[i, :bert_emb.shape[0], ...] = bert_emb
return [user_i, user_replies, user_retweet, user_bert]
def _to_batch_single(self, instance, repeat):
user_i = instance[0]
user_replies = instance[1]
user_retweet = instance[2]
user_bert = instance[3]
user_i = np.expand_dims(user_i, axis=0)
user_replies = np.expand_dims(user_replies, axis=0)
user_retweet = np.expand_dims(user_retweet, axis=0)
user_bert = np.expand_dims(user_bert, axis=0)
user_i = np.repeat(user_i, repeat, axis=0)
user_replies = np.repeat(user_replies, repeat, axis=0)
user_retweet = np.repeat(user_retweet, repeat, axis=0)
user_bert = np.repeat(user_bert, repeat, axis=0)
return [user_i, user_replies, user_retweet, user_bert]
def internal_get_item_cache(self, idx):
current_user = idx % len(self.valid_users)
current_target = idx // len(self.valid_users)
if current_target != self.current_target:
target_list = self.target_users[current_target * self.batch_size :
(current_target + 1) * self.batch_size]
target_list = [self._get_instance(idx) for idx in target_list]
max_user = max([len(instance[0]) for instance in target_list])
self.current_target_data = self._to_batch(target_list, max_user, len(target_list))
self.current_target = current_target
target_batch = self.current_target_data
        # Fetch the user data and tile it to the size of the target batch
user_data = self._to_batch_single(self.user_data[current_user], target_batch[0].shape[0])
return user_data + target_batch
def internal_get_item(self, idx):
current_user = self.valid_users[idx % len(self.valid_users)]
current_target = idx // len(self.valid_users)
if current_target != self.current_target:
target_list = self.target_users[current_target * self.batch_size :
(current_target + 1) * self.batch_size]
target_list = [self._get_instance(idx) for idx in target_list]
max_user = max([len(instance[0]) for instance in target_list])
self.current_target_data = self._to_batch(target_list, max_user, len(target_list))
self.current_target = current_target
target_batch = self.current_target_data
        # Fetch the user data and tile it to the size of the target batch
user_data = self._to_batch_single(self._get_instance(current_user), target_batch[0].shape[0])
return user_data + target_batch
def __getitem__(self, idx):
return self.internal_get_item(idx)
def gen_users_pairs(self, idx):
current_user = idx % len(self.valid_users)
current_target = idx // len(self.valid_users)
target_list = self.target_users[current_target * self.batch_size :
(current_target + 1) * self.batch_size]
current_user = self.valid_users[current_user]
return [(current_user, d) for d in target_list]
max_tweets = 15
batch_size = 50
with open('test_ds.pickle', 'rb') as f:
dataset = pickle.load(f)
user_id = dataset.users_id
for i in dataset[0]:
print(i.shape)
###Output
(500, 16)
(500, 1, 16)
(500, 1, 16)
(500, 16, 768)
(500, 254)
(500, 1, 254)
(500, 1, 254)
(500, 254, 768)
###Markdown
Neural Network
###Code
from transformers import BertTokenizer, TFBertModel, BertConfig
from tensorflow.keras.layers import LSTM, Bidirectional, Input, Embedding, Concatenate, \
TimeDistributed, Lambda, Dot, Attention, GlobalMaxPool1D, Dense
from tensorflow.keras.models import Model
from spektral.layers.convolutional import GCNConv
import tensorflow as tf
def loss(y_true, y_pred):
    # y_true packs the target value (column 0) and a distance weight (column 1)
    # the loss is the distance-weighted squared error of y_pred against log2(2 * v_true)
v_true, dist = y_true[:, 0], y_true[:, 1]
return K.mean(dist * K.square(y_pred - K.log(2 * v_true) / K.log(2.0)))
emb_size = 64
kernels = 32
deep = 1
embedded = Embedding(len(user_id), emb_size, name='user_embeddings')
user_i = Input(shape=(None,), name='user_list', dtype=tf.int32)
emb_user = embedded(user_i)
target_i = Input(shape=(None,), name='target_list', dtype=tf.int32)
emb_target = embedded(target_i)
replies_user_i = Input(shape=(None, None), name='replies_user', dtype=tf.float32)
retweets_user_i = Input(shape=(None, None), name='retweets_user', dtype=tf.float32)
replies_target_i = Input(shape=(None, None), name='replies_target', dtype=tf.float32)
retweets_target_i = Input(shape=(None, None), name='retweets_target', dtype=tf.float32)
user_tweets_bert = Input(shape=(None, 768), name='user_tweets_bert')
target_tweets_bert = Input(shape=(None, 768), name='target_tweets_bert')
user_bert = Dense(emb_size, name='user_bert_dense')(user_tweets_bert)
target_bert = Dense(emb_size, name='target_bert_dense')(target_tweets_bert)
user_emb = Concatenate(name='user_emb_plus_bert', axis=-1)([emb_user, user_bert])
target_emb = Concatenate(name='target_emb_plus_bert', axis=-1)([emb_target, target_bert])
emb_rep, emb_men, emb_rt = user_emb, user_emb, user_emb
emb_t_rep, emb_t_men, emb_t_rt = target_emb, target_emb, target_emb
for i in range(deep):
emb_rep = GCNConv(kernels, name='gcn_replies_{}'.format(i))([emb_rep, replies_user_i])
emb_rt = GCNConv(kernels, name='gcn_retweets_{}'.format(i))([emb_rt, retweets_user_i])
emb_t_rep = GCNConv(kernels, name='gcn_t_replies_{}'.format(i))([emb_t_rep, replies_target_i])
emb_t_rt = GCNConv(kernels, name='gcn_t_retweets_{}'.format(i))([emb_t_rt, retweets_target_i])
mat = Concatenate(name='user_gnc')([emb_rep, emb_rt])
mat = Lambda(lambda x: x[:, 0, :], name='user_row')(mat)
mat_t = Concatenate(name='target_gnc')([emb_t_rep, emb_t_rt])
mat_t = Lambda(lambda x: x[:, 0, :], name='target_row')(mat_t)
#Wide
user_wide = Lambda(lambda x: x[:, 0, :], name='user_wide')(emb_user)
target_wide = Lambda(lambda x: x[:, 0, :], name='target_wide')(emb_target)
wide = Concatenate(name='reps_concat')([user_wide, target_wide])
wide = Dense(1)(wide)
# TODO: still need to combine this with BERT
mat = Concatenate(name='graph_reps_concat')([mat, mat_t])
mat = Dense(kernels)(mat)#, [0, 2, 1]
mat = Dense(1)(mat)
mat = mat + wide
model = Model([user_i, replies_user_i, retweets_user_i, user_tweets_bert,
target_i, replies_target_i, retweets_target_i,target_tweets_bert], mat)
model.summary()
model.compile(loss=loss, optimizer='adam')
model.load_weights('connected-neg-replies-retweets/model_rec-neg.h5')
import csv
class OffsetLimitedDs(Sequence):
def __init__(self, ds, offset, limit):
self.ds = ds
self.offset = offset
self.limit = limit
self.len = min(self.limit, len(self.ds) - self.offset)
def __len__(self):
return self.len
def __getitem__(self, idx):
return self.ds[idx + self.offset]
partial = 100
with open('connected-neg-replies-retweets/predictions-neg.csv', 'w', newline='') as csvfile:
csvwriter = csv.writer(csvfile)
csvwriter.writerow(['Origin', 'Destiny', 'Prediction'])
for offset in tqdm(range(0, len(dataset), partial)):
c_ds = OffsetLimitedDs(dataset, offset, partial)
pred = model.predict(c_ds, max_queue_size=10)
j = 0
for i in range(len(c_ds)):
pairs = dataset.gen_users_pairs(offset + i)
for o, d in pairs:
csvwriter.writerow([o, d, pred[j, 0]])
j = j + 1
print(len(dataset))
###Output
_____no_output_____ |
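###Markdown
 As a small follow-up sketch (the column names match the header written above; keeping the top 10 targets per user is an arbitrary choice), we can load the predictions file we just wrote and keep, for every origin user, the highest-scoring target users.
###Code
import pandas as pd

# load the predictions written above and keep the 10 highest-scoring targets per origin user
preds = pd.read_csv('connected-neg-replies-retweets/predictions-neg.csv')
top10 = (preds.sort_values('Prediction', ascending=False)
              .groupby('Origin', sort=False)
              .head(10))
print(top10.head(20))
###Output
_____no_output_____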
DV0101EN-2-2-1-Area-Plots-Histograms-and-Bar-Charts-py-v2.0.ipynb | ###Markdown
Area Plots, Histograms, and Bar Plots IntroductionIn this lab, we will continue exploring the Matplotlib library and will learn how to create additional plots, namely area plots, histograms, and bar charts. Table of Contents1. [Exploring Datasets with *pandas*](0)2. [Downloading and Prepping Data](2)3. [Visualizing Data using Matplotlib](4) 4. [Area Plots](6) 5. [Histograms](8) 6. [Bar Charts](10) Exploring Datasets with *pandas* and MatplotlibToolkits: The course heavily relies on [*pandas*](http://pandas.pydata.org/) and [**Numpy**](http://www.numpy.org/) for data wrangling, analysis, and visualization. The primary plotting library that we are exploring in the course is [Matplotlib](http://matplotlib.org/).Dataset: Immigration to Canada from 1980 to 2013 - [International migration flows to and from selected countries - The 2015 revision](http://www.un.org/en/development/desa/population/migration/data/empirical2/migrationflows.shtml) from United Nation's website.The dataset contains annual data on the flows of international migrants as recorded by the countries of destination. The data presents both inflows and outflows according to the place of birth, citizenship or place of previous / next residence both for foreigners and nationals. For this lesson, we will focus on the Canadian Immigration data. Downloading and Prepping Data Import Primary Modules. The first thing we'll do is import two key data analysis modules: *pandas* and **Numpy**.
###Code
import numpy as np # useful for many scientific computing in Python
import pandas as pd # primary data structure library
!conda install -c anaconda xlrd --yes
###Output
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.5.11
latest version: 4.7.12
Please update conda by running
$ conda update -n base -c defaults conda
## Package Plan ##
environment location: /home/jupyterlab/conda/envs/python
added / updated specs:
- xlrd
The following packages will be downloaded:
package | build
---------------------------|-----------------
numpy-base-1.15.4 | py36h81de0dd_0 4.2 MB anaconda
numpy-1.15.4 | py36h1d66e8a_0 35 KB anaconda
certifi-2019.9.11 | py36_0 154 KB anaconda
openssl-1.1.1 | h7b6447c_0 5.0 MB anaconda
mkl_fft-1.0.6 | py36h7dd41cf_0 150 KB anaconda
blas-1.0 | mkl 6 KB anaconda
scipy-1.1.0 | py36hfa4b5c9_1 18.0 MB anaconda
xlrd-1.2.0 | py_0 108 KB anaconda
mkl_random-1.0.1 | py36h4414c95_1 373 KB anaconda
scikit-learn-0.20.1 | py36h4989274_0 5.7 MB anaconda
------------------------------------------------------------
Total: 33.7 MB
The following packages will be UPDATED:
certifi: 2019.9.11-py36_0 conda-forge --> 2019.9.11-py36_0 anaconda
mkl_fft: 1.0.4-py37h4414c95_1 --> 1.0.6-py36h7dd41cf_0 anaconda
mkl_random: 1.0.1-py37h4414c95_1 --> 1.0.1-py36h4414c95_1 anaconda
numpy-base: 1.15.1-py37h81de0dd_0 --> 1.15.4-py36h81de0dd_0 anaconda
openssl: 1.1.1d-h516909a_0 conda-forge --> 1.1.1-h7b6447c_0 anaconda
xlrd: 1.1.0-py37_1 --> 1.2.0-py_0 anaconda
The following packages will be DOWNGRADED:
blas: 1.1-openblas conda-forge --> 1.0-mkl anaconda
numpy: 1.16.2-py36_blas_openblash1522bff_0 conda-forge [blas_openblas] --> 1.15.4-py36h1d66e8a_0 anaconda
scikit-learn: 0.20.1-py36_blas_openblashebff5e3_1200 conda-forge [blas_openblas] --> 0.20.1-py36h4989274_0 anaconda
scipy: 1.2.1-py36_blas_openblash1522bff_0 conda-forge [blas_openblas] --> 1.1.0-py36hfa4b5c9_1 anaconda
Downloading and Extracting Packages
numpy-base-1.15.4 | 4.2 MB | ##################################### | 100%
numpy-1.15.4 | 35 KB | ##################################### | 100%
certifi-2019.9.11 | 154 KB | ##################################### | 100%
openssl-1.1.1 | 5.0 MB | ##################################### | 100%
mkl_fft-1.0.6 | 150 KB | ##################################### | 100%
blas-1.0 | 6 KB | ##################################### | 100%
scipy-1.1.0 | 18.0 MB | ##################################### | 100%
xlrd-1.2.0 | 108 KB | ##################################### | 100%
mkl_random-1.0.1 | 373 KB | ##################################### | 100%
scikit-learn-0.20.1 | 5.7 MB | ##################################### | 100%
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
###Markdown
Let's download and import our primary Canadian Immigration dataset using *pandas* `read_excel()` method. Normally, before we can do that, we would need to download a module which *pandas* requires to read in excel files. This module is **xlrd**. For your convenience, we have pre-installed this module, so you would not have to worry about that. Otherwise, you would need to run the following line of code to install the **xlrd** module:```!conda install -c anaconda xlrd --yes``` Download the dataset and read it into a *pandas* dataframe.
###Code
df_can = pd.read_excel('https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DV0101EN/labs/Data_Files/Canada.xlsx',
sheet_name='Canada by Citizenship',
skiprows=range(20),
skipfooter=2
)
print('Data downloaded and read into a dataframe!')
###Output
Data downloaded and read into a dataframe!
###Markdown
Let's take a look at the first five items in our dataset.
###Code
df_can.head()
###Output
_____no_output_____
###Markdown
Let's find out how many entries there are in our dataset.
###Code
# print the dimensions of the dataframe
print(df_can.shape)
###Output
(195, 43)
###Markdown
Clean up data. We will make some modifications to the original dataset to make it easier to create our visualizations. Refer to the `Introduction to Matplotlib and Line Plots` lab for the rationale and a detailed description of the changes. 1. Clean up the dataset to remove columns that are not informative to us for visualization (e.g. Type, AREA, REG).
###Code
df_can.drop(['AREA', 'REG', 'DEV', 'Type', 'Coverage'], axis=1, inplace=True)
# let's view the first five elements and see how the dataframe was changed
df_can.head()
###Output
_____no_output_____
###Markdown
Notice how the columns Type, Coverage, AREA, REG, and DEV got removed from the dataframe. 2. Rename some of the columns so that they make sense.
###Code
df_can.rename(columns={'OdName':'Country', 'AreaName':'Continent','RegName':'Region'}, inplace=True)
# let's view the first five elements and see how the dataframe was changed
df_can.head()
###Output
_____no_output_____
###Markdown
Notice how the column names now make much more sense, even to an outsider. 3. For consistency, ensure that all column labels are of type string.
###Code
# let's examine the types of the column labels
all(isinstance(column, str) for column in df_can.columns)
###Output
_____no_output_____
###Markdown
Notice how the above line of code returned *False* when we tested if all the column labels are of type **string**. So let's change them all to **string** type.
###Code
df_can.columns = list(map(str, df_can.columns))
# let's check the column labels types now
all(isinstance(column, str) for column in df_can.columns)
###Output
_____no_output_____
###Markdown
4. Set the country name as index - useful for quickly looking up countries using .loc method.
###Code
df_can.set_index('Country', inplace=True)
# let's view the first five elements and see how the dataframe was changed
df_can.head()
###Output
_____no_output_____
###Markdown
Notice how the country names now serve as indices. 5. Add total column.
###Code
df_can['Total'] = df_can.sum(axis=1)
# let's view the first five elements and see how the dataframe was changed
df_can.head()
###Output
_____no_output_____
###Markdown
Now the dataframe has an extra column that presents the total number of immigrants from each country in the dataset from 1980 - 2013. So if we print the dimension of the data, we get:
###Code
print ('data dimensions:', df_can.shape)
###Output
data dimensions: (195, 38)
###Markdown
So now our dataframe has 38 columns instead of 37 columns that we had before.
###Code
# finally, let's create a list of years from 1980 - 2013
# this will come in handy when we start plotting the data
years = list(map(str, range(1980, 2014)))
years
###Output
_____no_output_____
###Markdown
Visualizing Data using Matplotlib Import `Matplotlib` and **Numpy**.
###Code
# use the inline backend to generate the plots within the browser
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.style.use('ggplot') # optional: for ggplot-like style
# check for latest version of Matplotlib
print ('Matplotlib version: ', mpl.__version__) # >= 2.0.0
###Output
Matplotlib version: 3.1.1
###Markdown
Area Plots In the last module, we created a line plot that visualized the top 5 countries that contributed the most immigrants to Canada from 1980 to 2013. With a little modification to the code, we can visualize this plot as a cumulative plot, also known as a **Stacked Line Plot** or **Area plot**.
###Code
df_can.sort_values(['Total'], ascending=False, axis=0, inplace=True)
# get the top 5 entries
df_top5 = df_can.head()
# transpose the dataframe
df_top5 = df_top5[years].transpose()
df_top5.head()
###Output
_____no_output_____
###Markdown
Area plots are stacked by default. To produce a stacked area plot, each column must contain either all positive or all negative values (any NaN values will default to 0). To produce an unstacked plot, pass `stacked=False`.
###Code
df_top5.index = df_top5.index.map(int) # let's change the index values of df_top5 to type integer for plotting
df_top5.plot(kind='area',
stacked=False,
figsize=(20, 10), # pass a tuple (x, y) size
)
plt.title('Immigration Trend of Top 5 Countries')
plt.ylabel('Number of Immigrants')
plt.xlabel('Years')
plt.show()
###Output
_____no_output_____
###Markdown
The unstacked plot has a default transparency (alpha value) at 0.5. We can modify this value by passing in the `alpha` parameter.
###Code
df_top5.plot(kind='area',
alpha=0.25, # 0-1, default value a= 0.5
stacked=False,
figsize=(20, 10),
)
plt.title('Immigration Trend of Top 5 Countries')
plt.ylabel('Number of Immigrants')
plt.xlabel('Years')
plt.show()
###Output
_____no_output_____
###Markdown
Two types of plotting. As we discussed in the video lectures, there are two styles/options of plotting with `matplotlib`: plotting using the Artist layer and plotting using the scripting layer.**Option 1: Scripting layer (procedural method) - using matplotlib.pyplot as 'plt' **You can use `plt` i.e. `matplotlib.pyplot` and add more elements by calling different methods procedurally; for example, `plt.title(...)` to add a title or `plt.xlabel(...)` to add a label to the x-axis.```python Option 1: This is what we have been using so far df_top5.plot(kind='area', alpha=0.35, figsize=(20, 10)) plt.title('Immigration trend of top 5 countries') plt.ylabel('Number of immigrants') plt.xlabel('Years')``` **Option 2: Artist layer (Object oriented method) - using an `Axes` instance from Matplotlib (preferred) **You can use an `Axes` instance of your current plot and store it in a variable (e.g. `ax`). You can add more elements by calling methods with a little change in syntax (by adding "*set_*" to the previous methods). For example, use `ax.set_title()` instead of `plt.title()` to add a title, or `ax.set_xlabel()` instead of `plt.xlabel()` to add a label to the x-axis. This option is sometimes more transparent and flexible to use for advanced plots (in particular when having multiple plots, as you will see later). In this course, we will stick to the **scripting layer**, except for some advanced visualizations where we will need to use the **artist layer** to manipulate advanced aspects of the plots.
###Code
# option 2: preferred option with more flexibility
ax = df_top5.plot(kind='area', alpha=0.35, figsize=(20, 10))
ax.set_title('Immigration Trend of Top 5 Countries')
ax.set_ylabel('Number of Immigrants')
ax.set_xlabel('Years')
###Output
_____no_output_____
###Markdown
**Question**: Use the scripting layer to create a stacked area plot of the 5 countries that contributed the least to immigration to Canada **from** 1980 to 2013. Use a transparency value of 0.45.
###Code
### type your answer here
# get the 5 countries with the least contribution
df_least5 = df_can.tail(5)
# transpose the dataframe
df_least5 = df_least5[years].transpose()
df_least5.head()
df_least5.index = df_least5.index.map(int) # let's change the index values of df_least5 to type integer for plotting
df_least5.plot(kind='area', alpha=0.45, figsize=(20, 10))
plt.title('Immigration Trend of 5 Countries with Least Contribution to Immigration')
plt.ylabel('Number of Immigrants')
plt.xlabel('Years')
plt.show()
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- The correct answer is:\\ get the 5 countries with the least contributiondf_least5 = df_can.tail(5)--><!--\\ transpose the dataframedf_least5 = df_least5[years].transpose() df_least5.head()--><!--df_least5.index = df_least5.index.map(int) let's change the index values of df_least5 to type integer for plottingdf_least5.plot(kind='area', alpha=0.45, figsize=(20, 10)) --><!--plt.title('Immigration Trend of 5 Countries with Least Contribution to Immigration')plt.ylabel('Number of Immigrants')plt.xlabel('Years')--><!--plt.show()--> **Question**: Use the artist layer to create an unstacked area plot of the 5 countries that contributed the least to immigration to Canada **from** 1980 to 2013. Use a transparency value of 0.55.
###Code
### type your answer here
# get the 5 countries with the least contribution
df_least5 = df_can.tail(5)
# transpose the dataframe
df_least5 = df_least5[years].transpose()
df_least5.head()
df_least5.index = df_least5.index.map(int) # let's change the index values of df_least5 to type integer for plotting
ax = df_least5.plot(kind='area', alpha=0.55, stacked=False, figsize=(20, 10))
ax.set_title('Immigration Trend of 5 Countries with Least Contribution to Immigration')
ax.set_ylabel('Number of Immigrants')
ax.set_xlabel('Years')
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- The correct answer is:\\ get the 5 countries with the least contributiondf_least5 = df_can.tail(5)--><!--\\ transpose the dataframedf_least5 = df_least5[years].transpose() df_least5.head()--><!--df_least5.index = df_least5.index.map(int) let's change the index values of df_least5 to type integer for plotting--><!--ax = df_least5.plot(kind='area', alpha=0.55, stacked=False, figsize=(20, 10))--><!--ax.set_title('Immigration Trend of 5 Countries with Least Contribution to Immigration')ax.set_ylabel('Number of Immigrants')ax.set_xlabel('Years')--> Histograms. A histogram is a way of representing the *frequency* distribution of a numeric dataset. The way it works is that it partitions the x-axis into *bins*, assigns each data point in our dataset to a bin, and then counts the number of data points that have been assigned to each bin. So the y-axis is the frequency, or the number of data points in each bin. Note that we can change the bin size; usually one needs to tweak it so that the distribution is displayed nicely. **Question:** What is the frequency distribution of the number (population) of new immigrants from the various countries to Canada in 2013? Before we proceed with creating the histogram plot, let's first examine the data split into intervals. To do this, we will use **Numpy**'s `histogram` method to get the bin ranges and frequency counts as follows:
###Code
# let's quickly view the 2013 data
df_can['2013'].head()
# np.histogram returns 2 values
count, bin_edges = np.histogram(df_can['2013'])
print(count) # frequency count
print(bin_edges) # bin ranges, default = 10 bins
###Output
[178 11 1 2 0 0 0 0 1 2]
[ 0. 3412.9 6825.8 10238.7 13651.6 17064.5 20477.4 23890.3 27303.2
30716.1 34129. ]
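###Markdown
 As a small convenience sketch (not required for the plots below), we can pair each frequency count with its bin interval to read the distribution more easily.
###Code
# print each bin range together with the number of countries that fall into it
for c, left, right in zip(count, bin_edges[:-1], bin_edges[1:]):
    print('{:3d} countries contributed between {:.1f} and {:.1f} immigrants'.format(int(c), left, right))
###Output
_____no_output_____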
###Markdown
By default, the `histogram` method breaks up the dataset into 10 bins. The list below summarizes the bin ranges and the frequency distribution of immigration in 2013. We can see that in 2013:* 178 countries contributed between 0 and 3412.9 immigrants * 11 countries contributed between 3412.9 and 6825.8 immigrants* 1 country contributed between 6825.8 and 10238.7 immigrants, and so on. We can easily graph this distribution by passing `kind='hist'` to `plot()`.
###Code
df_can['2013'].plot(kind='hist', figsize=(8, 5))
plt.title('Histogram of Immigration from 195 Countries in 2013') # add a title to the histogram
plt.ylabel('Number of Countries') # add y-label
plt.xlabel('Number of Immigrants') # add x-label
plt.show()
###Output
_____no_output_____
###Markdown
In the above plot, the x-axis represents the population range of immigrants in intervals of 3412.9. The y-axis represents the number of countries that contributed to the aforementioned population. Notice that the x-axis labels do not line up with the bin edges. This can be fixed by passing in an `xticks` keyword that contains the list of bin edges, as follows:
###Code
# 'bin_edges' is a list of bin intervals
count, bin_edges = np.histogram(df_can['2013'])
df_can['2013'].plot(kind='hist', figsize=(8, 5), xticks=bin_edges)
plt.title('Histogram of Immigration from 195 countries in 2013') # add a title to the histogram
plt.ylabel('Number of Countries') # add y-label
plt.xlabel('Number of Immigrants') # add x-label
plt.show()
###Output
_____no_output_____
###Markdown
*Side Note:* We could use `df_can['2013'].plot.hist()`, instead. In fact, throughout this lesson, using `some_data.plot(kind='type_plot', ...)` is equivalent to `some_data.plot.type_plot(...)`. That is, passing the type of the plot as argument or method behaves the same. See the *pandas* documentation for more info http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.plot.html. We can also plot multiple histograms on the same plot. For example, let's try to answer the following questions using a histogram.**Question**: What is the immigration distribution for Denmark, Norway, and Sweden for years 1980 - 2013?
###Code
# let's quickly view the dataset
df_can.loc[['Denmark', 'Norway', 'Sweden'], years]
# generate histogram
df_can.loc[['Denmark', 'Norway', 'Sweden'], years].plot.hist()
###Output
_____no_output_____
###Markdown
That does not look right! Don't worry, you'll often come across situations like this when creating plots. The solution often lies in how the underlying dataset is structured. Instead of plotting the population frequency distribution for the 3 countries, *pandas* plotted the population frequency distribution for the `years`. This can be easily fixed by first transposing the dataset, and then plotting as shown below.
###Code
# transpose dataframe
df_t = df_can.loc[['Denmark', 'Norway', 'Sweden'], years].transpose()
df_t.head()
# generate histogram
df_t.plot(kind='hist', figsize=(10, 6))
plt.title('Histogram of Immigration from Denmark, Norway, and Sweden from 1980 - 2013')
plt.ylabel('Number of Years')
plt.xlabel('Number of Immigrants')
plt.show()
###Output
_____no_output_____
###Markdown
Let's make a few modifications to improve the impact and aesthetics of the previous plot:* increase the number of bins to 15 by passing in the `bins` parameter* set transparency to 60% by passing in the `alpha` parameter* label the x-axis by calling `plt.xlabel`* change the colors of the plots by passing in the `color` parameter
###Code
# let's get the x-tick values
count, bin_edges = np.histogram(df_t, 15)
# un-stacked histogram
df_t.plot(kind ='hist',
figsize=(10, 6),
bins=15,
alpha=0.6,
xticks=bin_edges,
color=['coral', 'darkslateblue', 'mediumseagreen']
)
plt.title('Histogram of Immigration from Denmark, Norway, and Sweden from 1980 - 2013')
plt.ylabel('Number of Years')
plt.xlabel('Number of Immigrants')
plt.show()
###Output
_____no_output_____
###Markdown
Tip: For a full listing of colors available in Matplotlib, run the following code in your python shell:```pythonimport matplotlibfor name, hex in matplotlib.colors.cnames.items(): print(name, hex)``` If we do not want the plots to overlap each other, we can stack them using the `stacked` parameter. Let's also adjust the min and max x-axis labels to remove the extra gap on the edges of the plot. We can pass a tuple (min, max) using the `xlim` parameter, as shown below.
###Code
count, bin_edges = np.histogram(df_t, 15)
xmin = bin_edges[0] - 10 # first bin value is 31.0, adding buffer of 10 for aesthetic purposes
xmax = bin_edges[-1] + 10 # last bin value is 308.0, adding buffer of 10 for aesthetic purposes
# stacked Histogram
df_t.plot(kind='hist',
figsize=(10, 6),
bins=15,
xticks=bin_edges,
color=['coral', 'darkslateblue', 'mediumseagreen'],
stacked=True,
xlim=(xmin, xmax)
)
plt.title('Histogram of Immigration from Denmark, Norway, and Sweden from 1980 - 2013')
plt.ylabel('Number of Years')
plt.xlabel('Number of Immigrants')
plt.show()
###Output
_____no_output_____
###Markdown
**Question**: Use the scripting layer to display the immigration distribution for Greece, Albania, and Bulgaria for years 1980 - 2013? Use an overlapping plot with 15 bins and a transparency value of 0.35.
###Code
### type your answer here
#create a dataframe of the countries of interest (cof)
df_cof = df_can.loc[['Greece','Albania', 'Bulgaria'], years]
# transpose the dataframe
df_cof = df_cof.transpose()
# let's get the x-tick values
count, bin_edges = np.histogram(df_cof, 15)
# Un-stacked Histogram
df_cof.plot(kind ='hist',
figsize=(10, 6),
bins=15,
alpha=0.35,
xticks=bin_edges,
color=['coral', 'darkslateblue', 'mediumseagreen']
)
plt.title('Histogram of Immigration from Greece, Albania, and Bulgaria from 1980 - 2013')
plt.ylabel('Number of Years')
plt.xlabel('Number of Immigrants')
plt.show()
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- The correct answer is:\\ create a dataframe of the countries of interest (cof)df_cof = df_can.loc[['Greece', 'Albania', 'Bulgaria'], years]--><!--\\ transpose the dataframedf_cof = df_cof.transpose() --><!--\\ let's get the x-tick valuescount, bin_edges = np.histogram(df_cof, 15)--><!--\\ Un-stacked Histogramdf_cof.plot(kind ='hist', figsize=(10, 6), bins=15, alpha=0.35, xticks=bin_edges, color=['coral', 'darkslateblue', 'mediumseagreen'] )--><!--plt.title('Histogram of Immigration from Greece, Albania, and Bulgaria from 1980 - 2013')plt.ylabel('Number of Years')plt.xlabel('Number of Immigrants')--><!--plt.show()--> Bar Charts (Dataframe) A bar plot is a way of representing data where the *length* of the bars represents the magnitude/size of the feature/variable. Bar graphs usually represent numerical and categorical variables grouped in intervals. To create a bar plot, we can pass one of two arguments via `kind` parameter in `plot()`:* `kind=bar` creates a *vertical* bar plot* `kind=barh` creates a *horizontal* bar plot **Vertical bar plot**In vertical bar graphs, the x-axis is used for labelling, and the length of bars on the y-axis corresponds to the magnitude of the variable being measured. Vertical bar graphs are particuarly useful in analyzing time series data. One disadvantage is that they lack space for text labelling at the foot of each bar. **Let's start off by analyzing the effect of Iceland's Financial Crisis:**The 2008 - 2011 Icelandic Financial Crisis was a major economic and political event in Iceland. Relative to the size of its economy, Iceland's systemic banking collapse was the largest experienced by any country in economic history. The crisis led to a severe economic depression in 2008 - 2011 and significant political unrest.**Question:** Let's compare the number of Icelandic immigrants (country = 'Iceland') to Canada from year 1980 to 2013.
###Code
# step 1: get the data
df_iceland = df_can.loc['Iceland', years]
df_iceland.head()
# step 2: plot data
df_iceland.plot(kind='bar', figsize=(10, 6))
plt.xlabel('Year') # add to x-label to the plot
plt.ylabel('Number of immigrants') # add y-label to the plot
plt.title('Icelandic immigrants to Canada from 1980 to 2013') # add title to the plot
plt.show()
###Output
_____no_output_____
###Markdown
The bar plot above shows the total number of immigrants broken down by each year. We can clearly see the impact of the financial crisis; the number of immigrants to Canada started increasing rapidly after 2008. Let's annotate this on the plot using the `annotate` method of the **scripting layer** or the **pyplot interface**. We will pass in the following parameters:- `s`: str, the text of the annotation.- `xy`: Tuple specifying the (x,y) point to annotate (in this case, end point of arrow).- `xytext`: Tuple specifying the (x,y) point to place the text (in this case, start point of arrow).- `xycoords`: The coordinate system that xy is given in - 'data' uses the coordinate system of the object being annotated (default).- `arrowprops`: Takes a dictionary of properties to draw the arrow: - `arrowstyle`: Specifies the arrow style, `'->'` is a standard arrow. - `connectionstyle`: Specifies the connection type. `arc3` is a straight line. - `color`: Specifies the color of the arrow. - `lw`: Specifies the line width. I encourage you to read the Matplotlib documentation for more details on annotations: http://matplotlib.org/api/pyplot_api.htmlmatplotlib.pyplot.annotate.
###Code
df_iceland.plot(kind='bar', figsize=(10, 6), rot=90) # rotate the bars by 90 degrees
plt.xlabel('Year')
plt.ylabel('Number of Immigrants')
plt.title('Icelandic Immigrants to Canada from 1980 to 2013')
# Annotate arrow
plt.annotate('', # s: str. Will leave it blank for no text
xy=(32, 70), # place head of the arrow at point (year 2012 , pop 70)
xytext=(28, 20), # place base of the arrow at point (year 2008 , pop 20)
xycoords='data', # will use the coordinate system of the object being annotated
arrowprops=dict(arrowstyle='->', connectionstyle='arc3', color='blue', lw=2)
)
plt.show()
###Output
_____no_output_____
###Markdown
Let's also annotate a text to go over the arrow. We will pass in the following additional parameters:- `rotation`: rotation angle of text in degrees (counter clockwise)- `va`: vertical alignment of text [‘center’ | ‘top’ | ‘bottom’ | ‘baseline’]- `ha`: horizontal alignment of text [‘center’ | ‘right’ | ‘left’]
###Code
df_iceland.plot(kind='bar', figsize=(10, 6), rot=90)
plt.xlabel('Year')
plt.ylabel('Number of Immigrants')
plt.title('Icelandic Immigrants to Canada from 1980 to 2013')
# Annotate arrow
plt.annotate('', # s: str. will leave it blank for no text
xy=(32, 70), # place head of the arrow at point (year 2012 , pop 70)
xytext=(28, 20), # place base of the arrow at point (year 2008 , pop 20)
xycoords='data', # will use the coordinate system of the object being annotated
arrowprops=dict(arrowstyle='->', connectionstyle='arc3', color='blue', lw=2)
)
# Annotate Text
plt.annotate('2008 - 2011 Financial Crisis', # text to display
xy=(28, 30), # start the text at at point (year 2008 , pop 30)
rotation=72.5, # based on trial and error to match the arrow
va='bottom', # want the text to be vertically 'bottom' aligned
ha='left', # want the text to be horizontally 'left' algned.
)
plt.show()
###Output
_____no_output_____
###Markdown
**Horizontal Bar Plot**Sometimes it is more practical to represent the data horizontally, especially if you need more room for labelling the bars. In horizontal bar graphs, the y-axis is used for labelling, and the length of bars on the x-axis corresponds to the magnitude of the variable being measured. As you will see, there is more room on the y-axis to label categorical variables.**Question:** Using the scripting layer and the `df_can` dataset, create a *horizontal* bar plot showing the *total* number of immigrants to Canada from the top 15 countries, for the period 1980 - 2013. Label each country with the total immigrant count. Step 1: Get the data pertaining to the top 15 countries.
###Code
### type your answer here
# sort dataframe on 'Total' column (descending)
df_can.sort_values(by='Total', ascending=True, inplace=True)
# get top 15 countries
df_top15 = df_can['Total'].tail(15)
df_top15
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- The correct answer is:\\ sort dataframe on 'Total' column (descending)df_can.sort_values(by='Total', ascending=True, inplace=True)--><!--\\ get top 15 countriesdf_top15 = df_can['Total'].tail(15)df_top15--> Step 2: Plot data: 1. Use `kind='barh'` to generate a bar chart with horizontal bars. 2. Make sure to choose a good size for the plot and to label your axes and to give the plot a title. 3. Loop through the countries and annotate the immigrant population using the annotate function of the scripting interface.
###Code
### type your answer here
# generate plot
df_top15.plot(kind='barh', figsize=(12, 12), color='steelblue')
plt.xlabel('Number of Immigrants')
plt.title('Top 15 Countries Contributing to the Immigration to Canada between 1980 - 2013')
# annotate value labels to each country
for index, value in enumerate(df_top15):
label = format(int(value), ',') # format int with commas
# place text at the end of bar (subtracting 47000 from x, and 0.1 from y to make it fit within the bar)
plt.annotate(label, xy=(value - 47000, index - 0.10), color='white')
plt.show()
###Output
_____no_output_____
###Markdown
Area Plots, Histograms, and Bar Plots IntroductionIn this lab, we will continue exploring the Matplotlib library and will learn how to create additional plots, namely area plots, histograms, and bar charts. Table of Contents1. [Exploring Datasets with *pandas*](0)2. [Downloading and Prepping Data](2)3. [Visualizing Data using Matplotlib](4) 4. [Area Plots](6) 5. [Histograms](8) 6. [Bar Charts](10) Exploring Datasets with *pandas* and MatplotlibToolkits: The course heavily relies on [*pandas*](http://pandas.pydata.org/) and [**Numpy**](http://www.numpy.org/) for data wrangling, analysis, and visualization. The primary plotting library that we are exploring in the course is [Matplotlib](http://matplotlib.org/).Dataset: Immigration to Canada from 1980 to 2013 - [International migration flows to and from selected countries - The 2015 revision](http://www.un.org/en/development/desa/population/migration/data/empirical2/migrationflows.shtml) from United Nation's website.The dataset contains annual data on the flows of international migrants as recorded by the countries of destination. The data presents both inflows and outflows according to the place of birth, citizenship or place of previous / next residence both for foreigners and nationals. For this lesson, we will focus on the Canadian Immigration data. Downloading and Prepping Data Import Primary Modules. The first thing we'll do is import two key data analysis modules: *pandas* and **Numpy**.
###Code
import numpy as np # useful for many scientific computing in Python
import pandas as pd # primary data structure library
###Output
_____no_output_____
###Markdown
Let's download and import our primary Canadian Immigration dataset using *pandas* `read_excel()` method. Normally, before we can do that, we would need to download a module which *pandas* requires to read in excel files. This module is **xlrd**. For your convenience, we have pre-installed this module, so you would not have to worry about that. Otherwise, you would need to run the following line of code to install the **xlrd** module:```!conda install -c anaconda xlrd --yes``` Download the dataset and read it into a *pandas* dataframe.
###Code
!conda install -c anaconda xlrd --yes
df_can = pd.read_excel('https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DV0101EN/labs/Data_Files/Canada.xlsx',
sheet_name='Canada by Citizenship',
skiprows=range(20),
skipfooter=2
)
print('Data downloaded and read into a dataframe!')
###Output
Data downloaded and read into a dataframe!
###Markdown
Let's take a look at the first five items in our dataset.
###Code
df_can.head()
###Output
_____no_output_____
###Markdown
Let's find out how many entries there are in our dataset.
###Code
# print the dimensions of the dataframe
print(df_can.shape)
###Output
(195, 43)
###Markdown
Clean up data. We will make some modifications to the original dataset to make it easier to create our visualizations. Refer to the `Introduction to Matplotlib and Line Plots` lab for the rationale and a detailed description of the changes. 1. Clean up the dataset to remove columns that are not informative to us for visualization (e.g. Type, AREA, REG).
###Code
df_can.drop(['AREA', 'REG', 'DEV', 'Type', 'Coverage'], axis=1, inplace=True)
# let's view the first five elements and see how the dataframe was changed
df_can.head()
###Output
_____no_output_____
###Markdown
Notice how the columns Type, Coverage, AREA, REG, and DEV got removed from the dataframe. 2. Rename some of the columns so that they make sense.
###Code
df_can.rename(columns={'OdName':'Country', 'AreaName':'Continent','RegName':'Region'}, inplace=True)
# let's view the first five elements and see how the dataframe was changed
df_can.head()
###Output
_____no_output_____
###Markdown
Notice how the column names now make much more sense, even to an outsider. 3. For consistency, ensure that all column labels are of type string.
###Code
# let's examine the types of the column labels
all(isinstance(column, str) for column in df_can.columns)
###Output
_____no_output_____
###Markdown
Notice how the above line of code returned *False* when we tested if all the column labels are of type **string**. So let's change them all to **string** type.
###Code
df_can.columns = list(map(str, df_can.columns))
# let's check the column labels types now
all(isinstance(column, str) for column in df_can.columns)
###Output
_____no_output_____
###Markdown
4. Set the country name as index - useful for quickly looking up countries using .loc method.
###Code
df_can.set_index('Country', inplace=True)
# let's view the first five elements and see how the dataframe was changed
df_can.head()
###Output
_____no_output_____
###Markdown
Notice how the country names now serve as indices. 5. Add total column.
###Code
df_can['Total'] = df_can.sum(axis=1)
# let's view the first five elements and see how the dataframe was changed
df_can.head()
###Output
_____no_output_____
###Markdown
Now the dataframe has an extra column that presents the total number of immigrants from each country in the dataset from 1980 - 2013. So if we print the dimension of the data, we get:
###Code
print ('data dimensions:', df_can.shape)
###Output
data dimensions: (195, 38)
###Markdown
So now our dataframe has 38 columns instead of 37 columns that we had before.
###Code
# finally, let's create a list of years from 1980 - 2013
# this will come in handy when we start plotting the data
years = list(map(str, range(1980, 2014)))
years
###Output
_____no_output_____
###Markdown
Visualizing Data using Matplotlib Import `Matplotlib` and **Numpy**.
###Code
# use the inline backend to generate the plots within the browser
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.style.use('ggplot') # optional: for ggplot-like style
# check for latest version of Matplotlib
print ('Matplotlib version: ', mpl.__version__) # >= 2.0.0
###Output
Matplotlib version: 3.1.0
###Markdown
Area Plots In the last module, we created a line plot that visualized the top 5 countries that contributed the most immigrants to Canada from 1980 to 2013. With a little modification to the code, we can visualize this plot as a cumulative plot, also known as a **Stacked Line Plot** or **Area plot**.
###Code
df_can.sort_values(['Total'], ascending=False, axis=0, inplace=True)
# get the top 5 entries
df_top5 = df_can.head()
# transpose the dataframe
df_top5 = df_top5[years].transpose()
df_top5.head()
###Output
_____no_output_____
###Markdown
Area plots are stacked by default. To produce a stacked area plot, each column must contain either all positive or all negative values (any NaN values will default to 0). To produce an unstacked plot, pass `stacked=False`.
###Code
df_top5.index = df_top5.index.map(int) # let's change the index values of df_top5 to type integer for plotting
df_top5.plot(kind='area',
stacked=False,
figsize=(20, 10), # pass a tuple (x, y) size
)
plt.title('Immigration Trend of Top 5 Countries')
plt.ylabel('Number of Immigrants')
plt.xlabel('Years')
plt.show()
###Output
_____no_output_____
###Markdown
The unstacked plot has a default transparency (alpha value) at 0.5. We can modify this value by passing in the `alpha` parameter.
###Code
df_top5.plot(kind='area',
alpha=0.25, # 0-1, default value a= 0.5
stacked=False,
figsize=(20, 10),
)
plt.title('Immigration Trend of Top 5 Countries')
plt.ylabel('Number of Immigrants')
plt.xlabel('Years')
plt.show()
###Output
_____no_output_____
###Markdown
Two types of plotting. As we discussed in the video lectures, there are two styles/options of plotting with `matplotlib`: plotting using the Artist layer and plotting using the scripting layer.**Option 1: Scripting layer (procedural method) - using matplotlib.pyplot as 'plt' **You can use `plt` i.e. `matplotlib.pyplot` and add more elements by calling different methods procedurally; for example, `plt.title(...)` to add a title or `plt.xlabel(...)` to add a label to the x-axis.```python Option 1: This is what we have been using so far df_top5.plot(kind='area', alpha=0.35, figsize=(20, 10)) plt.title('Immigration trend of top 5 countries') plt.ylabel('Number of immigrants') plt.xlabel('Years')``` **Option 2: Artist layer (Object oriented method) - using an `Axes` instance from Matplotlib (preferred) **You can use an `Axes` instance of your current plot and store it in a variable (e.g. `ax`). You can add more elements by calling methods with a little change in syntax (by adding "*set_*" to the previous methods). For example, use `ax.set_title()` instead of `plt.title()` to add a title, or `ax.set_xlabel()` instead of `plt.xlabel()` to add a label to the x-axis. This option is sometimes more transparent and flexible to use for advanced plots (in particular when having multiple plots, as you will see later). In this course, we will stick to the **scripting layer**, except for some advanced visualizations where we will need to use the **artist layer** to manipulate advanced aspects of the plots.
###Code
# option 2: preferred option with more flexibility
ax = df_top5.plot(kind='area', alpha=0.35, figsize=(20, 10))
ax.set_title('Immigration Trend of Top 5 Countries')
ax.set_ylabel('Number of Immigrants')
ax.set_xlabel('Years')
###Output
_____no_output_____
###Markdown
**Question**: Use the scripting layer to create a stacked area plot of the 5 countries that contributed the least to immigration to Canada **from** 1980 to 2013. Use a transparency value of 0.45.
###Code
# get the 5 countries with the least contribution
df_least5 = df_can.tail(5)
# transpose the dataframe
df_least5 = df_least5[years].transpose()
df_least5.head()
df_least5.index = df_least5.index.map(int) # let's change the index values of df_least5 to type integer for plotting
df_least5.plot(kind='area', alpha=0.45, figsize=(20, 10))
plt.title('Immigration Trend of 5 Countries with Least Contribution to Immigration')
plt.ylabel('Number of Immigrants')
plt.xlabel('Years')
plt.show()
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- The correct answer is:\\ get the 5 countries with the least contributiondf_least5 = df_can.tail(5)--><!--\\ transpose the dataframedf_least5 = df_least5[years].transpose() df_least5.head()--><!--df_least5.index = df_least5.index.map(int) let's change the index values of df_least5 to type integer for plottingdf_least5.plot(kind='area', alpha=0.45, figsize=(20, 10)) --><!--plt.title('Immigration Trend of 5 Countries with Least Contribution to Immigration')plt.ylabel('Number of Immigrants')plt.xlabel('Years')--><!--plt.show()--> **Question**: Use the artist layer to create an unstacked area plot of the 5 countries that contributed the least to immigration to Canada **from** 1980 to 2013. Use a transparency value of 0.55.
###Code
### type your answer here
df_least5=df_can.tail(5)
df_least5=df_least5[years].transpose()
df_least5.head()
ax = df_least5.plot(kind='area', alpha=0.55, stacked=False, figsize=(20, 10))  # assign to ax so the artist-layer calls below work
ax.set_title('Immigration Trend of 5 Countries with Least Contribution to Immigration')
ax.set_ylabel('Number of Immigrants')
ax.set_xlabel('Years')
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- The correct answer is:\\ get the 5 countries with the least contributiondf_least5 = df_can.tail(5)--><!--\\ transpose the dataframedf_least5 = df_least5[years].transpose() df_least5.head()--><!--df_least5.index = df_least5.index.map(int) let's change the index values of df_least5 to type integer for plotting--><!--ax = df_least5.plot(kind='area', alpha=0.55, stacked=False, figsize=(20, 10))--><!--ax.set_title('Immigration Trend of 5 Countries with Least Contribution to Immigration')ax.set_ylabel('Number of Immigrants')ax.set_xlabel('Years')--> Histograms A histogram is a way of representing the *frequency* distribution of a numeric dataset. The way it works is it partitions the x-axis into *bins*, assigns each data point in our dataset to a bin, and then counts the number of data points that have been assigned to each bin. So the y-axis is the frequency or the number of data points in each bin. Note that we can change the bin size and usually one needs to tweak it so that the distribution is displayed nicely. **Question:** What is the frequency distribution of the number (population) of new immigrants from the various countries to Canada in 2013? Before we proceed with creating the histogram plot, let's first examine the data split into intervals. To do this, we will use **Numpy**'s `histogram` method to get the bin ranges and frequency counts as follows:
###Code
# let's quickly view the 2013 data
df_can['2013'].head()
# np.histogram returns 2 values
count, bin_edges = np.histogram(df_can['2013'])
print(count) # frequency count
print(bin_edges) # bin ranges, default = 10 bins
###Output
[178 11 1 2 0 0 0 0 1 2]
[ 0. 3412.9 6825.8 10238.7 13651.6 17064.5 20477.4 23890.3 27303.2
30716.1 34129. ]
###Markdown
By default, the `histogram` method breaks up the dataset into 10 bins. The figure below summarizes the bin ranges and the frequency distribution of immigration in 2013. We can see that in 2013: * 178 countries contributed between 0 to 3412.9 immigrants * 11 countries contributed between 3412.9 to 6825.8 immigrants * 1 country contributed between 6825.8 to 10238.7 immigrants, and so on. We can easily graph this distribution by passing `kind=hist` to `plot()`.
###Code
df_can['2013'].plot(kind='hist', figsize=(8, 5))
plt.title('Histogram of Immigration from 195 Countries in 2013') # add a title to the histogram
plt.ylabel('Number of Countries') # add y-label
plt.xlabel('Number of Immigrants') # add x-label
plt.show()
###Output
_____no_output_____
###Markdown
In the above plot, the x-axis represents the population range of immigrants in intervals of 3412.9. The y-axis represents the number of countries that contributed to the aforementioned population. Notice that the x-axis labels do not match with the bin size. This can be fixed by passing in an `xticks` keyword that contains the list of bin edges, as follows:
###Code
# 'bin_edges' is a list of bin intervals
count, bin_edges = np.histogram(df_can['2013'])
df_can['2013'].plot(kind='hist', figsize=(8, 5), xticks=bin_edges)
plt.title('Histogram of Immigration from 195 countries in 2013') # add a title to the histogram
plt.ylabel('Number of Countries') # add y-label
plt.xlabel('Number of Immigrants') # add x-label
plt.show()
###Output
_____no_output_____
###Markdown
*Side Note:* We could use `df_can['2013'].plot.hist()`, instead. In fact, throughout this lesson, using `some_data.plot(kind='type_plot', ...)` is equivalent to `some_data.plot.type_plot(...)`. That is, passing the type of the plot as argument or method behaves the same. See the *pandas* documentation for more info http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.plot.html. We can also plot multiple histograms on the same plot. For example, let's try to answer the following questions using a histogram.**Question**: What is the immigration distribution for Denmark, Norway, and Sweden for years 1980 - 2013?
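As a quick illustration of the side note above (a small sketch reusing the `df_can['2013']` series), the two calling styles are interchangeable:
```python
# Both lines draw the same histogram; only the calling style differs.
df_can['2013'].plot(kind='hist', figsize=(8, 5))
df_can['2013'].plot.hist(figsize=(8, 5))
```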
###Code
# let's quickly view the dataset
df_can.loc[['Denmark', 'Norway', 'Sweden'], years]
# generate histogram
df_can.loc[['Denmark', 'Norway', 'Sweden'], years].plot.hist()
###Output
_____no_output_____
###Markdown
That does not look right! Don't worry, you'll often come across situations like this when creating plots. The solution often lies in how the underlying dataset is structured. Instead of plotting the frequency distribution of the population for the 3 countries, *pandas* instead plotted the frequency distribution for the `years`. This can be easily fixed by first transposing the dataset, and then plotting as shown below.
###Code
# transpose dataframe
df_t = df_can.loc[['Denmark', 'Norway', 'Sweden'], years].transpose()
df_t.head()
# generate histogram
df_t.plot(kind='hist', figsize=(10, 6))
plt.title('Histogram of Immigration from Denmark, Norway, and Sweden from 1980 - 2013')
plt.ylabel('Number of Years')
plt.xlabel('Number of Immigrants')
plt.show()
###Output
_____no_output_____
###Markdown
Let's make a few modifications to improve the impact and aesthetics of the previous plot: * increase the number of bins to 15 by passing in the `bins` parameter * set the `alpha` (transparency) parameter to 0.6 * label the x-axis by calling `plt.xlabel` * change the colors of the plots by passing in the `color` parameter
###Code
# let's get the x-tick values
count, bin_edges = np.histogram(df_t, 15)
# un-stacked histogram
df_t.plot(kind ='hist',
figsize=(10, 6),
bins=15,
alpha=0.6,
xticks=bin_edges,
color=['coral', 'darkslateblue', 'mediumseagreen']
)
plt.title('Histogram of Immigration from Denmark, Norway, and Sweden from 1980 - 2013')
plt.ylabel('Number of Years')
plt.xlabel('Number of Immigrants')
plt.show()
###Output
_____no_output_____
###Markdown
Tip: For a full listing of colors available in Matplotlib, run the following code in your python shell:
```python
import matplotlib
for name, hex in matplotlib.colors.cnames.items():
    print(name, hex)
```
If we do not want the plots to overlap each other, we can stack them using the `stacked` parameter. Let's also adjust the min and max x-axis labels to remove the extra gap on the edges of the plot. We can pass a tuple (min, max) using the `xlim` parameter, as shown below.
###Code
count, bin_edges = np.histogram(df_t, 15)
xmin = bin_edges[0] - 10 # first bin value is 31.0, adding buffer of 10 for aesthetic purposes
xmax = bin_edges[-1] + 10 # last bin value is 308.0, adding buffer of 10 for aesthetic purposes
# stacked Histogram
df_t.plot(kind='hist',
figsize=(10, 6),
bins=15,
xticks=bin_edges,
color=['coral', 'darkslateblue', 'mediumseagreen'],
stacked=True,
xlim=(xmin, xmax)
)
plt.title('Histogram of Immigration from Denmark, Norway, and Sweden from 1980 - 2013')
plt.ylabel('Number of Years')
plt.xlabel('Number of Immigrants')
plt.show()
###Output
_____no_output_____
###Markdown
**Question**: Use the scripting layer to display the immigration distribution for Greece, Albania, and Bulgaria for years 1980 - 2013? Use an overlapping plot with 15 bins and a transparency value of 0.35.
###Code
# create a dataframe of the countries of interest (cof)
df_cof = df_can.loc[['Greece', 'Albania', 'Bulgaria'], years]
# transpose the dataframe
df_cof = df_cof.transpose()
# let's get the x-tick values
count, bin_edges = np.histogram(df_cof, 15)
# Un-stacked Histogram
df_cof.plot(kind ='hist',
figsize=(10, 6),
bins=15,
alpha=0.35,
xticks=bin_edges,
color=['coral', 'darkslateblue', 'mediumseagreen']
)
plt.title('Histogram of Immigration from Greece, Albania, and Bulgaria from 1980 - 2013')
plt.ylabel('Number of Years')
plt.xlabel('Number of Immigrants')
plt.show()
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- The correct answer is:\\ create a dataframe of the countries of interest (cof)df_cof = df_can.loc[['Greece', 'Albania', 'Bulgaria'], years]--><!--\\ transpose the dataframedf_cof = df_cof.transpose() --><!--\\ let's get the x-tick valuescount, bin_edges = np.histogram(df_cof, 15)--><!--\\ Un-stacked Histogramdf_cof.plot(kind ='hist', figsize=(10, 6), bins=15, alpha=0.35, xticks=bin_edges, color=['coral', 'darkslateblue', 'mediumseagreen'] )--><!--plt.title('Histogram of Immigration from Greece, Albania, and Bulgaria from 1980 - 2013')plt.ylabel('Number of Years')plt.xlabel('Number of Immigrants')--><!--plt.show()-->
###Code
df_cof.head()
###Output
_____no_output_____
###Markdown
Bar Charts (Dataframe) A bar plot is a way of representing data where the *length* of the bars represents the magnitude/size of the feature/variable. Bar graphs usually represent numerical and categorical variables grouped in intervals. To create a bar plot, we can pass one of two arguments via the `kind` parameter in `plot()`: * `kind=bar` creates a *vertical* bar plot * `kind=barh` creates a *horizontal* bar plot **Vertical bar plot** In vertical bar graphs, the x-axis is used for labelling, and the length of bars on the y-axis corresponds to the magnitude of the variable being measured. Vertical bar graphs are particularly useful in analyzing time series data. One disadvantage is that they lack space for text labelling at the foot of each bar. **Let's start off by analyzing the effect of Iceland's Financial Crisis:** The 2008 - 2011 Icelandic Financial Crisis was a major economic and political event in Iceland. Relative to the size of its economy, Iceland's systemic banking collapse was the largest experienced by any country in economic history. The crisis led to a severe economic depression in 2008 - 2011 and significant political unrest. **Question:** Let's compare the number of Icelandic immigrants (country = 'Iceland') to Canada from year 1980 to 2013.
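As a minimal sketch of the two orientations (an aside, assuming the `df_can` dataframe and `years` list defined earlier), only the `kind` argument changes:
```python
# Total immigration per year, drawn once with vertical and once with horizontal bars.
totals_per_year = df_can[years].sum(axis=0)
totals_per_year.plot(kind='bar', figsize=(12, 4))    # vertical bars: years on the x-axis
plt.show()
totals_per_year.plot(kind='barh', figsize=(6, 10))   # horizontal bars: years on the y-axis
plt.show()
```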
###Code
# step 1: get the data
df_iceland = df_can.loc['Iceland', years]
df_iceland.head()
# step 2: plot data
df_iceland.plot(kind='bar', figsize=(10, 6))
plt.xlabel('Year') # add to x-label to the plot
plt.ylabel('Number of immigrants') # add y-label to the plot
plt.title('Icelandic immigrants to Canada from 1980 to 2013') # add title to the plot
plt.show()
###Output
_____no_output_____
###Markdown
The bar plot above shows the total number of immigrants broken down by each year. We can clearly see the impact of the financial crisis; the number of immigrants to Canada started increasing rapidly after 2008. Let's annotate this on the plot using the `annotate` method of the **scripting layer** or the **pyplot interface**. We will pass in the following parameters:- `s`: str, the text of annotation.- `xy`: Tuple specifying the (x,y) point to annotate (in this case, end point of arrow).- `xytext`: Tuple specifying the (x,y) point to place the text (in this case, start point of arrow).- `xycoords`: The coordinate system that xy is given in - 'data' uses the coordinate system of the object being annotated (default).- `arrowprops`: Takes a dictionary of properties to draw the arrow: - `arrowstyle`: Specifies the arrow style, `'->'` is a standard arrow. - `connectionstyle`: Specifies the connection type. `arc3` is a straight line. - `color`: Specifies the color of the arrow. - `lw`: Specifies the line width. I encourage you to read the Matplotlib documentation for more details on annotations: http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.annotate.
###Code
df_iceland.plot(kind='bar', figsize=(10, 6), rot=90) # rotate the bars by 90 degrees
plt.xlabel('Year')
plt.ylabel('Number of Immigrants')
plt.title('Icelandic Immigrants to Canada from 1980 to 2013')
# Annotate arrow
plt.annotate('', # s: str. Will leave it blank for no text
xy=(32, 70), # place head of the arrow at point (year 2012 , pop 70)
xytext=(28, 20), # place base of the arrow at point (year 2008 , pop 20)
xycoords='data', # will use the coordinate system of the object being annotated
arrowprops=dict(arrowstyle='->', connectionstyle='arc3', color='blue', lw=2)
)
plt.show()
###Output
_____no_output_____
###Markdown
Let's also annotate a text to go over the arrow. We will pass in the following additional parameters:- `rotation`: rotation angle of text in degrees (counter clockwise)- `va`: vertical alignment of text [‘center’ | ‘top’ | ‘bottom’ | ‘baseline’]- `ha`: horizontal alignment of text [‘center’ | ‘right’ | ‘left’]
###Code
df_iceland.plot(kind='bar', figsize=(10, 6), rot=90)
plt.xlabel('Year')
plt.ylabel('Number of Immigrants')
plt.title('Icelandic Immigrants to Canada from 1980 to 2013')
# Annotate arrow
plt.annotate('', # s: str. will leave it blank for no text
xy=(32, 70), # place head of the arrow at point (year 2012 , pop 70)
xytext=(28, 20), # place base of the arrow at point (year 2008 , pop 20)
xycoords='data', # will use the coordinate system of the object being annotated
arrowprops=dict(arrowstyle='->', connectionstyle='arc3', color='blue', lw=2)
)
# Annotate Text
plt.annotate('2008 - 2011 Financial Crisis', # text to display
xy=(28, 30), # start the text at at point (year 2008 , pop 30)
rotation=72.5, # based on trial and error to match the arrow
va='bottom', # want the text to be vertically 'bottom' aligned
             ha='left',                          # want the text to be horizontally 'left' aligned.
)
plt.show()
###Output
_____no_output_____
###Markdown
**Horizontal Bar Plot** Sometimes it is more practical to represent the data horizontally, especially if you need more room for labelling the bars. In horizontal bar graphs, the y-axis is used for labelling, and the length of bars on the x-axis corresponds to the magnitude of the variable being measured. As you will see, there is more room on the y-axis to label categorical variables. **Question:** Using the scripting layer and the `df_can` dataset, create a *horizontal* bar plot showing the *total* number of immigrants to Canada from the top 15 countries, for the period 1980 - 2013. Label each country with the total immigrant count. Step 1: Get the data pertaining to the top 15 countries.
###Code
### type your answer here
df_can.sort_values(by='Total', ascending=True, inplace=True)
df_top15 = df_can['Total'].tail(15)
df_top15
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- The correct answer is:\\ sort dataframe on 'Total' column (descending)df_can.sort_values(by='Total', ascending=True, inplace=True)--><!--\\ get top 15 countriesdf_top15 = df_can['Total'].tail(15)df_top15--> Step 2: Plot data: 1. Use `kind='barh'` to generate a bar chart with horizontal bars. 2. Make sure to choose a good size for the plot and to label your axes and to give the plot a title. 3. Loop through the countries and annotate the immigrant population using the annotate function of the scripting interface.
###Code
### type your answer here
df_top15.plot(kind='barh', figsize=(12, 12), color='steelblue')
plt.xlabel('Number of Immigrants')
plt.title('Top 15 Countries Contributing to the Immigration to Canada between 1980 - 2013')
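# Step 3 of the question also asks to label each country with its total immigrant count.
# A possible annotation loop (a sketch, not necessarily the official course solution):
for index, value in enumerate(df_top15):
    label = format(int(value), ',')  # e.g. 691904 -> '691,904'
    # place the label slightly inside the right end of each bar (offset chosen by eye)
    plt.annotate(label, xy=(value - 47000, index - 0.10), color='white')
plt.show()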
###Output
_____no_output_____ |
tutorials/Certification_Trainings/Public/databricks_notebooks/2. Pretrained pipelines for Grammar, NER and Sentiment.ipynb | ###Markdown
 2. Pretrained pipelines for Grammar, NER and Sentiment
###Code
import sparknlp
print("Spark NLP version", sparknlp.version())
print("Apache Spark version:", spark.version)
spark
###Output
_____no_output_____
###Markdown
Using Pretrained Pipelines https://github.com/JohnSnowLabs/spark-nlp-modelshttps://nlp.johnsnowlabs.com/models
###Code
from sparknlp.pretrained import PretrainedPipeline
testDoc = '''Peter is a very good persn.
My life in Russia is very intersting.
John and Peter are brthers. However they don't support each other that much.
Lucas Nogal Dunbercker is no longer happy. He has a good car though.
Europe is very culture rich. There are huge churches! and big houses!
'''
testDoc
###Output
_____no_output_____
###Markdown
Explain Document DL **Stages**- DocumentAssembler- SentenceDetector- Tokenizer- NER (NER with GloVe 100D embeddings, CoNLL2003 dataset)- Lemmatizer- Stemmer- Part of Speech- SpellChecker (Norvig)
###Code
pipeline_dl = PretrainedPipeline('explain_document_dl', lang='en')
pipeline_dl.model.stages
pipeline_dl.model.stages[-2].getStorageRef()
pipeline_dl.model.stages[-2].getClasses()
result = pipeline_dl.annotate(testDoc)
result.keys()
result['entities']
import pandas as pd
df = pd.DataFrame({'token':result['token'], 'ner_label':result['ner'],
'spell_corrected':result['checked'], 'POS':result['pos'],
'lemmas':result['lemma'], 'stems':result['stem']})
df
###Output
_____no_output_____
###Markdown
Recognize Entities DL
###Code
recognize_entities = PretrainedPipeline('recognize_entities_dl', lang='en')
testDoc = '''
Peter is a very good persn.
My life in Russia is very intersting.
John and Peter are brthers. However they don't support each other that much.
Lucas Nogal Dunbercker is no longer happy. He has a good car though.
Europe is very culture rich. There are huge churches! and big houses!
'''
result = recognize_entities.annotate(testDoc)
list(zip(result['token'], result['ner']))
###Output
_____no_output_____
###Markdown
Clean Stop Words
###Code
clean_stop = PretrainedPipeline('clean_stop', lang='en')
result = clean_stop.annotate(testDoc)
result.keys()
' '.join(result['cleanTokens'])
###Output
_____no_output_____
###Markdown
Spell Checker (Norvig Algo)ref: https://norvig.com/spell-correct.html
###Code
spell_checker = PretrainedPipeline('check_spelling', lang='en')
testDoc = '''
Peter is a very good persn.
My life in Russia is very intersting.
John and Peter are brthers. However they don't support each other that much.
Lucas Nogal Dunbercker is no longer happy. He has a good car though.
Europe is very culture rich. There are huge churches! and big houses!
'''
result = spell_checker.annotate(testDoc)
result.keys()
list(zip(result['token'], result['checked']))
###Output
_____no_output_____
###Markdown
Parsing a list of texts
###Code
testDoc_list = ['French author who helped pioner the science-fiction genre.',
'Verne wrate about space, air, and underwater travel before navigable aircrast',
'Practical submarines were invented, and before any means of space travel had been devised.']
testDoc_list
pipeline_dl = PretrainedPipeline('explain_document_dl', lang='en')
result_list = pipeline_dl.annotate(testDoc_list)
len(result_list)
result_list[0]
###Output
_____no_output_____
###Markdown
Using fullAnnotate to get more details ```annotatorType: String, begin: Int, end: Int, result: String, (this is what annotate returns) metadata: Map[String, String], embeddings: Array[Float]```
###Code
text = 'Peter Parker is a nice guy and lives in New York'
# pipeline_dl >> explain_document_dl
detailed_result = pipeline_dl.fullAnnotate(text)
detailed_result
detailed_result[0]['entities']
detailed_result[0]['entities'][0].result
import pandas as pd
chunks=[]
entities=[]
for n in detailed_result[0]['entities']:
chunks.append(n.result)
entities.append(n.metadata['entity'])
df = pd.DataFrame({'chunks':chunks, 'entities':entities})
df
tuples = []
for x,y,z in zip(detailed_result[0]["token"], detailed_result[0]["pos"], detailed_result[0]["ner"]):
tuples.append((int(x.metadata['sentence']), x.result, x.begin, x.end, y.result, z.result))
df = pd.DataFrame(tuples, columns=['sent_id','token','start','end','pos', 'ner'])
df
###Output
_____no_output_____
###Markdown
Sentiment Analysis Vivek algo Paper: `Fast and accurate sentiment classification using an enhanced Naive Bayes model` https://arxiv.org/abs/1305.6143 Code: `https://github.com/vivekn/sentiment`
###Code
sentiment = PretrainedPipeline('analyze_sentiment', lang='en')
result = sentiment.annotate("The movie I watched today was not a good one")
result['sentiment']
###Output
_____no_output_____
###Markdown
DL version (trained on imdb) `analyze_sentimentdl_use_imdb`: A pre-trained pipeline to classify IMDB reviews in neg and pos classes using tfhub_use embeddings.`analyze_sentimentdl_glove_imdb`: A pre-trained pipeline to classify IMDB reviews in neg and pos classes using glove_100d embeddings.
###Code
sentiment_imdb_glove = PretrainedPipeline('analyze_sentimentdl_glove_imdb', lang='en')
comment = '''
It's a very scary film but what impressed me was how true the film sticks to the original's tricks; it isn't filled with loud in-your-face jump scares, in fact, a lot of what makes this film scary is the slick cinematography and intricate shadow play. The use of lighting and creation of atmosphere is what makes this film so tense, which is why it's perfectly suited for those who like Horror movies but without the obnoxious gore.
'''
result = sentiment_imdb_glove.annotate(comment)
result['sentiment']
sentiment_imdb_glove.fullAnnotate(comment)[0]['sentiment']
###Output
_____no_output_____
###Markdown
DL version (trained on twitter dataset)
###Code
sentiment_twitter = PretrainedPipeline('analyze_sentimentdl_use_twitter', lang='en')
result = sentiment_twitter.annotate("The movie I watched today was a good one.")
result['sentiment']
sentiment_twitter.fullAnnotate("The movie I watched today was a good one.")[0]['sentiment']
###Output
_____no_output_____
###Markdown
 2. Pretrained pipelines for Grammar, NER and Sentiment
###Code
import sparknlp
print("Spark NLP version", sparknlp.version())
print("Apache Spark version:", spark.version)
spark
###Output
_____no_output_____
###Markdown
Using Pretrained Pipelines https://github.com/JohnSnowLabs/spark-nlp-modelshttps://nlp.johnsnowlabs.com/models
###Code
from sparknlp.pretrained import PretrainedPipeline
testDoc = '''Peter is a very good persn.
My life in Russia is very intersting.
John and Peter are brthers. However they don't support each other that much.
Lucas Nogal Dunbercker is no longer happy. He has a good car though.
Europe is very culture rich. There are huge churches! and big houses!
'''
testDoc
###Output
_____no_output_____
###Markdown
Explain Document ML **Stages**- DocumentAssembler- SentenceDetector- Tokenizer- Lemmatizer- Stemmer- Part of Speech- SpellChecker (Norvig)
###Code
pipeline = PretrainedPipeline('explain_document_ml', lang='en')
pipeline.model.stages
result = pipeline.annotate(testDoc)
result.keys()
result['sentence']
result['token']
list(zip(result['token'], result['pos']))
list(zip(result['token'], result['lemmas'], result['stems'], result['spell']))
import pandas as pd
df = pd.DataFrame({'token':result['token'],
'corrected':result['spell'], 'POS':result['pos'],
'lemmas':result['lemmas'], 'stems':result['stems']})
df
###Output
_____no_output_____
###Markdown
Explain Document DL **Stages**- DocumentAssembler- SentenceDetector- Tokenizer- NER (NER with GloVe 100D embeddings, CoNLL2003 dataset)- Lemmatizer- Stemmer- Part of Speech- SpellChecker (Norvig)
###Code
pipeline_dl = PretrainedPipeline('explain_document_dl', lang='en')
pipeline_dl.model.stages
pipeline_dl.model.stages[-2].getStorageRef()
pipeline_dl.model.stages[-2].getClasses()
result = pipeline_dl.annotate(testDoc)
result.keys()
result['entities']
df = pd.DataFrame({'token':result['token'], 'ner_label':result['ner'],
'spell_corrected':result['checked'], 'POS':result['pos'],
'lemmas':result['lemma'], 'stems':result['stem']})
df
###Output
_____no_output_____
###Markdown
Recognize Entities DL
###Code
recognize_entities = PretrainedPipeline('recognize_entities_dl', lang='en')
testDoc = '''
Peter is a very good persn.
My life in Russia is very intersting.
John and Peter are brthers. However they don't support each other that much.
Lucas Nogal Dunbercker is no longer happy. He has a good car though.
Europe is very culture rich. There are huge churches! and big houses!
'''
result = recognize_entities.annotate(testDoc)
list(zip(result['token'], result['ner']))
###Output
_____no_output_____
###Markdown
Clean Stop Words
###Code
clean_stop = PretrainedPipeline('clean_stop', lang='en')
result = clean_stop.annotate(testDoc)
result.keys()
' '.join(result['cleanTokens'])
###Output
_____no_output_____
###Markdown
Spell Checker (Norvig Algo)ref: https://norvig.com/spell-correct.html
###Code
spell_checker = PretrainedPipeline('check_spelling', lang='en')
testDoc = '''
Peter is a very good persn.
My life in Russia is very intersting.
John and Peter are brthers. However they don't support each other that much.
Lucas Nogal Dunbercker is no longer happy. He has a good car though.
Europe is very culture rich. There are huge churches! and big houses!
'''
result = spell_checker.annotate(testDoc)
result.keys()
list(zip(result['token'], result['checked']))
###Output
_____no_output_____
###Markdown
Parsing a list of texts
###Code
testDoc_list = ['French author who helped pioner the science-fiction genre.',
'Verne wrate about space, air, and underwater travel before navigable aircrast',
'Practical submarines were invented, and before any means of space travel had been devised.']
testDoc_list
pipeline = PretrainedPipeline('explain_document_ml', lang='en')
result_list = pipeline.annotate(testDoc_list)
len (result_list)
result_list[0]
###Output
_____no_output_____
###Markdown
Using fullAnnotate to get more details ```annotatorType: String, begin: Int, end: Int, result: String, (this is what annotate returns) metadata: Map[String, String], embeddings: Array[Float]```
###Code
text = 'Peter Parker is a nice guy and lives in New York'
# pipeline_dl >> explain_document_dl
detailed_result = pipeline_dl.fullAnnotate(text)
detailed_result
detailed_result[0]['entities']
detailed_result[0]['entities'][0].result
chunks=[]
entities=[]
for n in detailed_result[0]['entities']:
chunks.append(n.result)
entities.append(n.metadata['entity'])
df = pd.DataFrame({'chunks':chunks, 'entities':entities})
df
tuples = []
for x,y,z in zip(detailed_result[0]["token"], detailed_result[0]["pos"], detailed_result[0]["ner"]):
tuples.append((int(x.metadata['sentence']), x.result, x.begin, x.end, y.result, z.result))
df = pd.DataFrame(tuples, columns=['sent_id','token','start','end','pos', 'ner'])
df
###Output
_____no_output_____
###Markdown
Sentiment Analysis Vivek algo Paper: `Fast and accurate sentiment classification using an enhanced Naive Bayes model` https://arxiv.org/abs/1305.6143 Code: `https://github.com/vivekn/sentiment`
###Code
sentiment = PretrainedPipeline('analyze_sentiment', lang='en')
result = sentiment.annotate("The movie I watched today was not a good one")
result['sentiment']
###Output
_____no_output_____
###Markdown
DL version (trained on imdb)
###Code
sentiment_imdb = PretrainedPipeline('analyze_sentimentdl_use_imdb', lang='en')
sentiment_imdb_glove = PretrainedPipeline('analyze_sentimentdl_glove_imdb', lang='en')
comment = '''
It's a very scary film but what impressed me was how true the film sticks to the original's tricks; it isn't filled with loud in-your-face jump scares, in fact, a lot of what makes this film scary is the slick cinematography and intricate shadow play. The use of lighting and creation of atmosphere is what makes this film so tense, which is why it's perfectly suited for those who like Horror movies but without the obnoxious gore.
'''
result = sentiment_imdb_glove.annotate(comment)
result['sentiment']
sentiment_imdb_glove.fullAnnotate(comment)[0]['sentiment']
###Output
_____no_output_____
###Markdown
DL version (trained on twitter dataset)
###Code
sentiment_twitter = PretrainedPipeline('analyze_sentimentdl_use_twitter', lang='en')
result = sentiment_twitter.annotate("The movie I watched today was a good one.")
result['sentiment']
###Output
_____no_output_____ |
courses/machine_learning/deepdive2/explainable_ai/labs/integrated_gradients.ipynb | ###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Integrated gradients A shorter version of this notebook is also available as a TensorFlow tutorial on TensorFlow.org. This tutorial demonstrates how to implement **Integrated Gradients (IG)**, an explainable AI technique described in the paper [Axiomatic Attribution for Deep Networks](https://arxiv.org/abs/1703.01365). IG aims to explain the relationship between a model's predictions in terms of its features. It has many use cases including understanding feature importances, identifying data skew, and debugging model performance. IG has become a popular interpretability technique due to its broad applicability to any differentiable model, ease of implementation, theoretical justifications, and computational efficiency relative to alternative approaches, which allows it to scale to large networks and feature spaces such as images. You will start by walking through an implementation of IG step-by-step. Next, you will apply IG attributions to understand the pixel feature importances of an image classifier and explore applied machine learning use cases. Lastly, you will conclude with a discussion of IG's properties, limitations, and suggestions for next steps in your learning journey. To motivate this tutorial, here is the result of using IG to highlight important pixels that were used to classify this [image](https://commons.wikimedia.org/wiki/File:San_Francisco_fireboat_showing_off.jpg) as a fireboat. Explaining an image classifier
###Code
import matplotlib.pylab as plt
import numpy as np
import math
import sys
import tensorflow as tf
import tensorflow_hub as hub
###Output
_____no_output_____
###Markdown
Download Inception V1 from TF-Hub **TensorFlow Hub Module** IG can be applied to any neural network. To mirror the paper's implementation, you will use a pre-trained version of [Inception V1](https://arxiv.org/abs/1409.4842) from [TensorFlow Hub](https://tfhub.dev/google/imagenet/inception_v1/classification/4).
###Code
inception_v1_url = "https://tfhub.dev/google/imagenet/inception_v1/classification/4"
inception_v1_classifier = tf.keras.Sequential([
hub.KerasLayer(name='inception_v1',
handle=inception_v1_url,
trainable=False),
])
inception_v1_classifier.build([None, 224, 224, 3])
inception_v1_classifier.summary()
###Output
_____no_output_____
###Markdown
From the TF Hub module page, you need to keep in mind the following about Inception V1 for image classification: **Inputs**: The expected input shape for the model is `(None, 224, 224, 3,)`. This is a dense 4D tensor of dtype float32 and shape `(batch_size, height, width, RGB channels)` whose elements are RGB color values of pixels normalized to the range [0, 1]. The first element is `None` to indicate that the model can take any integer batch size. **Outputs**: A `tf.Tensor` of logits in the shape of `(n_images, 1001)`. Each row represents the model's predicted score for each of ImageNet's 1,001 classes. For the model's top predicted class index you can use `tf.argmax(predictions, axis=-1)`. Furthermore, you can also convert the model's logit output to predicted probabilities across all classes using `tf.nn.softmax(predictions, axis=-1)` to quantify the model's uncertainty as well as explore similar predicted classes for debugging.
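For instance, a minimal sketch of turning the model's logits into a top predicted class (assuming `img` is a hypothetical float32 tensor of shape `(1, 224, 224, 3)` with values in [0, 1]):
```python
# img is a hypothetical preprocessed image batch; parse_image below shows how images are prepared.
logits = inception_v1_classifier(img)           # shape (1, 1001)
probs = tf.nn.softmax(logits, axis=-1)          # predicted probabilities over the 1,001 classes
top_class_idx = tf.argmax(probs, axis=-1)       # index of the most likely ImageNet class
top_class_prob = tf.reduce_max(probs, axis=-1)  # its predicted probability
```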
###Code
def load_imagenet_labels(file_path):
"""
Args:
file_path(str): A URL download path.
Returns:
imagenet_label_array(numpy.ndarray): Array of strings with shape (1001,).
"""
labels_file = tf.keras.utils.get_file('ImageNetLabels.txt', file_path)
with open(labels_file, "r") as reader:
f = reader.read()
labels = f.splitlines()
imagenet_label_array = np.array(labels)
return imagenet_label_array
imagenet_label_vocab = load_imagenet_labels('https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')
###Output
_____no_output_____
###Markdown
Load and preprocess images with `tf.image`You will illustrate IG using several images. Links to the original images are as follows ([Fireboat](https://commons.wikimedia.org/wiki/File:San_Francisco_fireboat_showing_off.jpg), [School Bus](https://commons.wikimedia.org/wiki/File:Thomas_School_Bus_Bus.jpg), [Giant Panda](https://commons.wikimedia.org/wiki/File:Giant_Panda_2.JPG), [Black Beetle](https://commons.wikimedia.org/wiki/File:Lucanus.JPG), [Golden Retriever](https://commons.wikimedia.org/wiki/File:Golden_retriever.jpg), [General Ulysses S. Grant](https://commons.wikimedia.org/wiki/Category:Ulysses_S._Grant/media/File:Portrait_of_Maj._Gen._Ulysses_S._Grant,_officer_of_the_Federal_Army_LOC_cwpb.06941.jpg), [Greece Presidential Guard](https://commons.wikimedia.org/wiki/File:Greek_guard_uniforms_1.jpg)).
###Code
def parse_image(file_name):
"""
This function downloads and standardizes input JPEG images for the
inception_v1 model. Its applies the following processing:
- Reads JPG file.
- Decodes JPG file into colored image.
- Converts data type to standard tf.float32.
- Resizes image to expected Inception V1 input dimension of
(224, 224, 3) with preserved aspect ratio. E.g. don't stretch image.
- Pad image to (224, 224, 3) shape with black pixels.
Args:
file_name(str): Direct URL path to the JPG image.
Returns:
image(Tensor): A Tensor of floats with shape (224, 224, 3).
label(str): A text label for display above the image.
"""
image = tf.io.read_file(file_name)
image = tf.image.decode_jpeg(image, channels=3)
image = tf.image.convert_image_dtype(image, tf.float32)
image = tf.image.resize(image, (224, 224), preserve_aspect_ratio=True)
image = tf.image.resize_with_pad(image, target_height=224, target_width=224)
return image
# img_name_url {image_name: origin_url}
img_name_url = {
'Fireboat': 'https://storage.googleapis.com/applied-dl/temp/San_Francisco_fireboat_showing_off.jpg',
'School Bus': 'https://storage.googleapis.com/applied-dl/temp/Thomas_School_Bus_Bus.jpg',
'Giant Panda': 'https://storage.googleapis.com/applied-dl/temp/Giant_Panda_2.jpeg',
'Black Beetle': 'https://storage.googleapis.com/applied-dl/temp/Lucanus.jpeg',
'Golden Retriever': 'https://storage.googleapis.com/applied-dl/temp/Golden_retriever.jpg',
'Yellow Labrador Retriever': 'https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg',
'Military Uniform (Grace Hopper)': 'https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg',
'Military Uniform (General Ulysses S. Grant)': 'https://storage.googleapis.com/applied-dl/temp/General_Ulysses_S._Grant%2C_Union_Army_(6186252896).jpg',
'Military Uniform (Greek Presidential Guard)': 'https://storage.googleapis.com/applied-dl/temp/Greek_guard_uniforms_1.jpg',
}
# img_name_path {image_name: downloaded_image_local_path}
img_name_path = {name: tf.keras.utils.get_file(name, url) for (name, url) in img_name_url.items()}
# img_name_tensors {image_name: parsed_image_tensor}
img_name_tensors = {name: parse_image(img_path) for (name, img_path) in img_name_path.items()}
plt.figure(figsize=(14,14))
for n, (name, img_tensors) in enumerate(img_name_tensors.items()):
ax = plt.subplot(3,3,n+1)
ax.imshow(img_tensors)
ax.set_title(name)
ax.axis('off')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Applying integrated gradients IG is an elegant and simple idea to explain a model's predictions in relation to its input. The basic intuition is to measure a feature's importance to your model by incrementally increasing the feature's intensity between its absence (baseline) and its input value, computing the change in your model's predictions with respect to the feature at each step, and averaging these incremental changes together. To gain a deeper understanding of how IG works, you will walk through its application over the sub-sections below. Step 1: Identify model input and output tensors IG is a post-hoc explanatory method that works with any differentiable model regardless of its implementation. As such, you can pass any input example tensor to a model to generate an output prediction tensor. Note that InceptionV1 outputs a multiclass un-normalized logits prediction tensor. So you will use a softmax operator to turn the logits tensor into an output tensor of softmax predicted probabilities, which is what you will use to compute IG feature attributions.
###Code
# stack images into a batch for processing.
image_titles = tf.convert_to_tensor(list(img_name_tensors.keys()))
image_batch = tf.convert_to_tensor(list(img_name_tensors.values()))
image_batch.shape
def top_k_predictions_scores_labels(model, img, label_vocab, top_k=3):
"""
Args:
model(tf.keras.Model): Trained Keras model.
img(tf.Tensor): A 4D tensor of floats with the shape
(img_n, img_height, img_width, 3).
label_vocab(numpy.ndarray): An array of strings with shape (1001,).
top_k(int): Number of results to return.
Returns:
k_predictions_idx(tf.Tensor): A tf.Tensor [n_images, top_k] of tf.int32
prediction indicies.
k_predictions_proba(tf.Tensor): A tf.Tensor [n_images, top_k] of tf.float32
prediction probabilities.
k_predictions_label(tf.Tensor): A tf.Tensor [n_images, top_k] of tf.string
prediction labels.
"""
# These are logits (unnormalized scores).
predictions = model(img)
# Convert logits into probabilities.
predictions_proba = tf.nn.softmax(predictions, axis=-1)
# Filter top k prediction probabilities and indices.
k_predictions_proba, k_predictions_idx = tf.math.top_k(
input=predictions_proba, k=top_k)
# Lookup top k prediction labels in label_vocab array.
k_predictions_label = tf.convert_to_tensor(
label_vocab[k_predictions_idx.numpy()],
dtype=tf.string)
return k_predictions_idx, k_predictions_label, k_predictions_proba
def plot_img_predictions(model, img, img_titles, label_vocab, top_k=3):
"""Plot images with top_k predictions.
Args:
model(tf.keras.Model): Trained Keras model.
img(Tensor): A 4D Tensor of floats with the shape
(img_n, img_height, img_width, 3).
img_titles(Tensor): A Tensor of strings with the shape
(img_n, img_height, img_width, 3).
label_vocab(numpy.ndarray): An array of strings with shape (1001,).
top_k(int): Number of results to return.
Returns:
fig(matplotlib.pyplot.figure): fig object to utilize for displaying, saving
plots.
"""
pred_idx, pred_label, pred_proba = \
top_k_predictions_scores_labels(
model=model,
img=img,
label_vocab=label_vocab,
top_k=top_k)
img_arr = img.numpy()
title_arr = img_titles.numpy()
pred_idx_arr = pred_idx.numpy()
pred_label_arr = pred_label.numpy()
pred_proba_arr = pred_proba.numpy()
n_rows = img_arr.shape[0]
# Preserve image height by converting pixels to inches based on dpi.
size = n_rows * (224 // 48)
fig, axs = plt.subplots(nrows=img_arr.shape[0], ncols=1, figsize=(size, size), squeeze=False)
for idx, image in enumerate(img_arr):
axs[idx, 0].imshow(image)
axs[idx, 0].set_title(title_arr[idx].decode('utf-8'), fontweight='bold')
axs[idx, 0].axis('off')
for k in range(top_k):
k_idx = pred_idx_arr[idx][k]
k_label = pred_label_arr[idx][k].decode('utf-8')
k_proba = pred_proba_arr[idx][k]
if k==0:
s = 'Prediction {:}: ({:}-{:}) Score: {:.1%}'.format(k+1, k_idx, k_label, k_proba)
axs[idx, 0].text(244 + size, 102+(k*40), s, fontsize=12, fontweight='bold')
else:
s = 'Prediction {:}: ({:}-{:}) Score: {:.1%}'.format(k+1, k_idx, k_label, k_proba)
axs[idx, 0].text(244 + size, 102+(k*20), s, fontsize=12)
plt.tight_layout()
return fig
_ = plot_img_predictions(
model=inception_v1_classifier,
img=image_batch,
img_titles=image_titles,
label_vocab=imagenet_label_vocab,
top_k=5
)
###Output
_____no_output_____
###Markdown
Step 2: establish baseline to compare inputs against Defining missingness or a starting point in the feature space for comparison is at the core of machine learning interpretability methods. For IG, this concept is encoded as a baseline. A **baseline** is an uninformative input used as a starting point for defining IG attributions; it is essential for interpreting IG prediction attributions as a function of individual input features. When selecting a baseline for neural networks, the goal is to choose a baseline such that the prediction at the baseline is near zero, to minimize aspects of the baseline impacting interpretation of the prediction attributions. For image classification networks, a baseline image with its pixels set to 0 meets this objective. For text networks, an all zero input embedding vector makes for a good baseline. Models with structured data that typically involve a mix of continuous numeric features will typically use the observed median value as a baseline because 0 is an informative value for these features. Note, however, that this changes the interpretation of the features to their importance in relation to the baseline value as opposed to the input data directly. The paper authors provide additional guidance on baseline selection for different input feature data types and models under a [How to Use Integrated Gradients Guide](https://github.com/ankurtaly/Integrated-Gradients/blob/master/howto.md#sanity-checking-baselines) on Github.
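As a quick sanity check (a sketch, assuming the `inception_v1_classifier` loaded above and target class index 389 for 'giant panda' in the label vocabulary), you can verify that a black baseline's predicted probability for the target class is indeed near zero:
```python
# Score a hypothetical all-black baseline image and inspect the target class probability.
black_baseline = tf.zeros(shape=(1, 224, 224, 3))
baseline_logits = inception_v1_classifier(black_baseline)
baseline_proba = tf.nn.softmax(baseline_logits, axis=-1)[0, 389]
print('p(giant panda | black baseline) =', baseline_proba.numpy())
```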
###Code
# name_baseline_tensors. Set random seed for reproducibility of random baseline image and associated attributions.
tf.random.set_seed(42)
name_baseline_tensors = {
'Baseline Image: Black': tf.zeros(shape=(224,224,3)),
'Baseline Image: Random': tf.random.uniform(shape=(224,224,3), minval=0.0, maxval=1.0),
'Baseline Image: White': tf.ones(shape=(224,224,3)),
}
plt.figure(figsize=(12,12))
for n, (name, baseline_tensor) in enumerate(name_baseline_tensors.items()):
ax = plt.subplot(1,3,n+1)
ax.imshow(baseline_tensor)
ax.set_title(name)
ax.axis('off')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Step 3: Integrated gradients in TensorFlow 2.x The exact formula for Integrated Gradients from the original paper is the following:$IntegratedGradients_{i}(x) ::= (x_{i} - x'_{i})\times\int_{\alpha=0}^1\frac{\partial F(x'+\alpha \times (x - x'))}{\partial x_i}{d\alpha}$where:$_{i}$ = feature $x$ = input $x'$ = baseline $\alpha$ = interpolation constant to perturbe features by However, in practice, computing a definite integral is not always numerically possible and computationally costly so you compute the following numerical approximation:$IntegratedGrads^{approx}_{i}(x)::=(x_{i}-x'_{i})\times\sum_{k=1}^{m}\frac{\partial F(x' + \frac{k}{m}\times(x - x'))}{\partial x_{i}} \times \frac{1}{m}$where:$_{i}$ = feature (individual pixel) $x$ = input (image tensor) $x'$ = baseline (image tensor) $k$ = scaled feature perturbation constant $m$ = number of steps in the Riemann sum approximation of the integral. This is covered in depth in the section *Compute integral approximation* below. You will walk through the intuition and implementation of the above equation in the sections below. Generate interpolated path inputs $IntegratedGrads^{approx}_{i}(x)::=(x_{i}-x'_{i})\times\sum_{k=1}^{m}\frac{\partial F(\overbrace{x' + \frac{k}{m}\times(x - x')}^\text{generate m interpolated images at k intervals})}{\partial x_{i}} \times \frac{1}{m}$ The first step is to generate a [linear interpolation](https://en.wikipedia.org/wiki/Linear_interpolation) path between your known baseline and input images. You can think of interpolated images as small steps in the feature space between each feature pixel between your baseline and input images. These steps are represented by $\alpha$ in the original equation. You will revisit $\alpha$ in greater depth in the subsequent section *Compute approximate integral* as its values are tied to the your choice of integration approximation method.For now, you can use the handy `tf.linspace` function to generate a `Tensor` with 20 m_steps at k linear intervals between 0 and 1 as an input to the `generate_path_inputs` function below.
###Code
m_steps=20
alphas = tf.linspace(start=0.0, stop=1.0, num=m_steps+1)
def generate_path_inputs(baseline,
input,
alphas):
"""Generate m interpolated inputs between baseline and input features.
Args:
baseline(Tensor): A 3D image tensor of floats with the shape
(img_height, img_width, 3).
input(Tensor): A 3D image tensor of floats with the shape
(img_height, img_width, 3).
alphas(Tensor): A 1D tensor of uniformly spaced floats with the shape
(m_steps,).
Returns:
path_inputs(Tensor): A 4D tensor of floats with the shape
(m_steps, img_height, img_width, 3).
"""
# Expand dimensions for vectorized computation of interpolations.
alphas_x = alphas[:, tf.newaxis, tf.newaxis, tf.newaxis]
baseline_x = tf.expand_dims(baseline, axis=0)
input_x = tf.expand_dims(input, axis=0)
delta = input_x - baseline_x
path_inputs = baseline_x + alphas_x * delta
return path_inputs
###Output
_____no_output_____
###Markdown
Generate interpolated images along a linear path at alpha intervals between a black baseline image and the example "Giant Panda" image.
###Code
path_inputs = generate_path_inputs(
baseline=name_baseline_tensors['Baseline Image: Black'],
input=img_name_tensors['Giant Panda'],
alphas=alphas)
path_inputs.shape
###Output
_____no_output_____
###Markdown
The interpolated images are visualized below. Note that another way of thinking about the $\alpha$ constant is that it is monotonically and consistently increasing each interpolated image's intensity.
###Code
fig, axs = plt.subplots(nrows=1, ncols=5, squeeze=False, figsize=(24, 24))
axs[0,0].set_title('Baseline \n alpha: {:.2f}'.format(alphas[0]))
axs[0,0].imshow(path_inputs[0])
axs[0,0].axis('off')
axs[0,1].set_title('=> Interpolated Image # 1 \n alpha: {:.2f}'.format(alphas[1]))
axs[0,1].imshow(path_inputs[1])
axs[0,1].axis('off')
axs[0,2].set_title('=> Interpolated Image # 2 \n alpha: {:.2f}'.format(alphas[2]))
axs[0,2].imshow(path_inputs[2])
axs[0,2].axis('off')
axs[0,3].set_title('... => Interpolated Image # 10 \n alpha: {:.2f}'.format(alphas[10]))
axs[0,3].imshow(path_inputs[10])
axs[0,3].axis('off')
axs[0,4].set_title('... => Input Image \n alpha: {:.2f}'.format(alphas[-1]))
axs[0,4].imshow(path_inputs[-1])
axs[0,4].axis('off')
plt.tight_layout();
###Output
_____no_output_____
###Markdown
Compute gradients Now that you generated 20 interpolated images between a black baseline and your example "Giant Panda" photo, let's take a look at how to calculate [gradients](https://en.wikipedia.org/wiki/Gradient) to measure the relationship between changes to your feature pixels and changes in your model's predictions. The gradient of F, your Inception V1 model function, represents the direction of maximum increase of your predictions with respect to your input. In the case of images, the gradient tells you which pixels have the steepest local slope of your model's predicted class probabilities with respect to the original pixels. $IntegratedGrads^{approx}_{i}(x)::=(x_{i}-x'_{i})\times\sum_{k=1}^{m}\frac{\overbrace{\partial F(\text{interpolated images})}^\text{Compute gradients}}{\partial x_{i}} \times \frac{1}{m}$where: $F()$ = your model's prediction function $\frac{\partial{F}}{\partial{x_i}}$ = gradient (vector of partial derivatives $\partial$) of your model F's prediction function relative to each feature $x_i$ TensorFlow 2.x makes computing gradients extremely easy for you with the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) object, which performantly computes and records gradient operations.
###Code
def compute_gradients(model, path_inputs, target_class_idx):
"""Compute gradients of model predicted probabilties with respect to inputs.
Args:
mode(tf.keras.Model): Trained Keras model.
path_inputs(Tensor): A 4D tensor of floats with the shape
(m_steps, img_height, img_width, 3).
target_class_idx(Tensor): A 0D tensor of an int corresponding to the correct
ImageNet target class index.
Returns:
gradients(Tensor): A 4D tensor of floats with the shape
(m_steps, img_height, img_width, 3).
"""
with tf.GradientTape() as tape:
tape.watch(path_inputs)
predictions = model(path_inputs)
# Note: IG requires softmax probabilities; converting Inception V1 logits.
outputs = tf.nn.softmax(predictions, axis=-1)[:, target_class_idx]
gradients = tape.gradient(outputs, path_inputs)
return gradients
###Output
_____no_output_____
###Markdown
Compute gradients between your model Inception V1's predicted probabilities for the target class on each interpolated image with respect to each interpolated input. Recall that your model returns a `(1, 1001)` shaped `Tensor` of logits that you will convert to predicted probabilities for every class. You need to pass the correct ImageNet target class index to the `compute_gradients` function below in order to identify the specific output tensor you wish to explain in relation to your input and baseline.
###Code
path_gradients = compute_gradients(
model=inception_v1_classifier,
path_inputs=path_inputs,
target_class_idx=389)
###Output
_____no_output_____
###Markdown
Note the output shape `(n_interpolated_images, img_height, img_width, RGB)`. Below you can see the local gradients visualized for the first 5 interpolated inputs relative to the input "Giant Panda" image as a series of ghostly shapes. You can think of these gradients as measuring the change in your model's predictions for each small step in the feature space. *The largest gradient magnitudes generally occur at the lowest alphas*.
###Code
fig, axs = plt.subplots(nrows=1, ncols=5, squeeze=False, figsize=(24, 24))
for i in range(5):
axs[0,i].imshow(tf.cast(255 * path_gradients[i], tf.uint8), cmap=plt.cm.inferno)
axs[0,i].axis('off')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
**Why not just use gradients for attribution? Saturation** You may be wondering at this point, why not just compute the gradients of the predictions with respect to the input as feature attributions? Why bother with slowly changing the intensity of the input image at all? The reason is that networks can *saturate*, meaning the magnitude of the local feature gradients can become extremely small and go toward zero, resulting in important features having a small gradient. *The implication is that saturation can result in discontinuous feature importances and missed important features.* This concept is visualized in the 2 graphs below:
###Code
pred = inception_v1_classifier(path_inputs)
pred_proba = tf.nn.softmax(pred, axis=-1)[:, 389]
plt.figure(figsize=(10,4))
ax1 = plt.subplot(1,2,1)
ax1.plot(alphas, pred_proba)
ax1.set_title('Target class predicted probability over alpha')
ax1.set_ylabel('model p(target class)')
ax1.set_xlabel('alpha')
ax1.set_ylim([0,1])
ax2 = plt.subplot(1,2,2)
# Average across interpolation steps
average_grads = tf.math.reduce_mean(path_gradients, axis=[1,2,3])
# Normalize average gradients to 0 to 1 scale. E.g. (x - min(x))/(max(x)-min(x))
average_grads_norm = (average_grads-tf.math.reduce_min(average_grads))/(tf.math.reduce_max(average_grads)-tf.reduce_min(average_grads))
ax2.plot(alphas, average_grads_norm)
ax2.set_title('Average pixel gradients (normalized) over alpha')
ax2.set_ylabel('Average pixel gradients')
ax2.set_xlabel('alpha')
ax2.set_ylim([0,1]);
###Output
_____no_output_____
###Markdown
Notice in the left plot above how the model prediction function quickly learns the correct "Giant Panda" class when alpha is between 0.0 and 0.3 and then largely flattens between 0.3 and 1.0. There could still be features that the model relies on for correct prediction that differ from the baseline, but the magnitudes of those feature gradients become really small and bounce around 0 starting from 0.3 to 1.0. Similarly, in the right plot of the average pixel gradients plotted over alpha, you can see the peak "aha" moment where the model learns the target "Giant Panda", but also that the gradient magnitudes quickly minimize toward 0 and even become discontinuous briefly around 0.6. In practice, this can cause gradient attributions to miss important features that differ between input and baseline and to focus on irrelevant features. **The beauty of IG is that it solves the problem of discontinuous gradient feature importances by taking small steps in the feature space to compute local gradients between predictions and inputs across the feature space and then averaging these gradients together to produce IG feature attributions.** Compute integral approximation There are many different ways you can go about computing the numeric approximation of an integral for IG, with different tradeoffs in accuracy and convergence across varying functions. A popular class of methods is called [Riemann sums](https://en.wikipedia.org/wiki/Riemann_sum). The code below shows the visual geometric interpretation of Left, Right, Midpoint, and Trapezoidal Riemann Sums for intuition:
###Code
def plot_riemann_sums(fn, start_val, end_val, m_steps=10):
"""
Plot Riemann Sum integral approximations for single variable functions.
Args:
fn(function): Any single variable function.
start_val(int): Minimum function value constraint.
end_val(int): Maximum function value constraint.
m_steps(int): Linear interpolation steps for approximation.
Returns:
fig(matplotlib.pyplot.figure): fig object to utilize for displaying, saving
plots.
"""
# fn plot args
x = tf.linspace(start_val, end_val, m_steps**2+1)
y = fn(x)
fig = plt.figure(figsize=(16,4))
# Left Riemann Sum
lr_ax = plt.subplot(1,4,1)
lr_ax.plot(x, y)
lr_x = tf.linspace(0.0, 1.0, m_steps+1)
lr_point = lr_x[:-1]
lr_height = fn(lr_x[:-1])
lr_ax.plot(lr_point, lr_height, 'b.', markersize=10)
lr_ax.bar(lr_point, lr_height, width=(end_val-start_val)/m_steps, alpha=0.2, align='edge', edgecolor='b')
lr_ax.set_title('Left Riemann Sum \n m_steps = {}'.format(m_steps))
lr_ax.set_xlabel('alpha')
# Right Riemann Sum
rr_ax = plt.subplot(1,4,2)
rr_ax.plot(x, y)
rr_x = tf.linspace(0.0, 1.0, m_steps+1)
rr_point = rr_x[1:]
rr_height = fn(rr_x[1:])
rr_ax.plot(rr_point, rr_height, 'b.', markersize=10)
rr_ax.bar(rr_point, rr_height, width=-(end_val-start_val)/m_steps, alpha=0.2, align='edge', edgecolor='b')
rr_ax.set_title('Right Riemann Sum \n m_steps = {}'.format(m_steps))
rr_ax.set_xlabel('alpha')
# Midpoint Riemann Sum
mr_ax = plt.subplot(1,4,3)
mr_ax.plot(x, y)
mr_x = tf.linspace(0.0, 1.0, m_steps+1)
mr_point = (mr_x[:-1] + mr_x[1:])/2
mr_height = fn(mr_point)
mr_ax.plot(mr_point, mr_height, 'b.', markersize=10)
mr_ax.bar(mr_point, mr_height, width=(end_val-start_val)/m_steps, alpha=0.2, edgecolor='b')
mr_ax.set_title('Midpoint Riemann Sum \n m_steps = {}'.format(m_steps))
mr_ax.set_xlabel('alpha')
# Trapezoidal Riemann Sum
tp_ax = plt.subplot(1,4,4)
tp_ax.plot(x, y)
tp_x = tf.linspace(0.0, 1.0, m_steps+1)
tp_y = fn(tp_x)
for i in range(m_steps):
xs = [tp_x[i], tp_x[i], tp_x[i+1], tp_x[i+1]]
ys = [0, tp_y[i], tp_y[i+1], 0]
tp_ax.plot(tp_x,tp_y,'b.',markersize=10)
tp_ax.fill_between(xs, ys, color='C0', edgecolor='blue', alpha=0.2)
tp_ax.set_title('Trapezoidal Riemann Sum \n m_steps = {}'.format(m_steps))
tp_ax.set_xlabel('alpha')
return fig
###Output
_____no_output_____
###Markdown
Recall that a feature's gradient will vary in magnitude over the interpolated images between the baseline and input. You want to choose a method that best approximates the area of difference, also known as the [integral](https://en.wikipedia.org/wiki/Integral), between your baseline and input in the feature space. Let's consider the downward-arching function $y = sin(x*\pi)$ over the interval 0 to 1 as a proxy for how a feature gradient could vary in magnitude and sign over different alphas. To implement IG, you care about approximation accuracy and convergence. Left, Right, and Midpoint Riemann Sums utilize rectangles to approximate areas under the function, while Trapezoidal Riemann Sums utilize trapezoids.
###Code
_ = plot_riemann_sums(lambda x: tf.math.sin(x*math.pi), 0.0, 1.0, m_steps=5)
_ = plot_riemann_sums(lambda x: tf.math.sin(x*math.pi), 0.0, 1.0, m_steps=10)
###Output
_____no_output_____
###Markdown
**Which integral approximation method should you choose for IG?**From the Riemann sum plots above, you can see that the Trapezoidal Riemann Sum clearly provides a more accurate approximation and converges more quickly over m_steps than the alternatives, e.g. there is less white space under the function left uncovered by the shapes. Consequently, it is presented as the default method in the code below, while the alternative methods are also shown for further study. Additional support for the Trapezoidal Riemann approximation for IG is presented in section 4 of ["Computing Linear Restrictions of Neural Networks"](https://arxiv.org/abs/1908.06214). Let us return to the $\alpha$ constant previously introduced in the *Generate interpolated path inputs* section for varying the intensity of the interpolated images between the baseline and input image. In the `generate_alphas` function below, you can see that $\alpha$ changes with each approximation method to reflect different start and end points and the underlying geometric shape, either a rectangle or a trapezoid, used to approximate the integral area. The function takes a `method` parameter and an `m_steps` parameter that controls the accuracy of the integral approximation.
###Code
def generate_alphas(m_steps=50,
method='riemann_trapezoidal'):
"""
Args:
m_steps(Tensor): A 0D tensor of an int corresponding to the number of linear
interpolation steps for computing an approximate integral. Default is 50.
method(str): A string representing the integral approximation method. The
following methods are implemented:
- riemann_trapezoidal(default)
- riemann_left
- riemann_midpoint
- riemann_right
Returns:
    alphas(Tensor): A 1D tensor of uniformly spaced floats with the shape
      (m_steps,), or (m_steps+1,) for the trapezoidal method.
"""
m_steps_float = tf.cast(m_steps, float) # cast to float for division operations.
if method == 'riemann_trapezoidal':
alphas = tf.linspace(0.0, 1.0, m_steps+1) # needed to make m_steps intervals.
elif method == 'riemann_left':
alphas = tf.linspace(0.0, 1.0 - (1.0 / m_steps_float), m_steps)
elif method == 'riemann_midpoint':
alphas = tf.linspace(1.0 / (2.0 * m_steps_float), 1.0 - 1.0 / (2.0 * m_steps_float), m_steps)
elif method == 'riemann_right':
alphas = tf.linspace(1.0 / m_steps_float, 1.0, m_steps)
else:
raise AssertionError("Provided Riemann approximation method is not valid.")
return alphas
alphas = generate_alphas(m_steps=20, method='riemann_trapezoidal')
alphas.shape
###Output
_____no_output_____
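###Markdown
To build intuition for how the start and end points differ between methods, the short sketch below (an illustrative addition, reusing the `generate_alphas` function defined above) prints the alphas each method produces for a small `m_steps`. Note how the trapezoidal method includes both endpoints 0.0 and 1.0, the left method excludes 1.0, the right method excludes 0.0, and the midpoint method sits between interval boundaries.
###Code
# Compare the alphas generated by each Riemann approximation method.
# A small m_steps is used so the endpoints are easy to read.
for method in ['riemann_trapezoidal', 'riemann_left', 'riemann_midpoint', 'riemann_right']:
  print('{:>20}: {}'.format(method, generate_alphas(m_steps=4, method=method).numpy()))
###Output
_____no_output_____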
###Markdown
$IntegratedGrads^{approx}_{i}(x)::=(x_{i}-x'_{i})\times \overbrace{\sum_{k=1}^{m}}^\text{4. Sum m local gradients}\text{gradients(interpolated images)} \times \overbrace{\frac{1}{m}}^\text{4. Divide by m steps}$ From the equation, you can see you are summing over m gradients and dividing by m steps. You can implement the two operations together for step 4 as an *average of the local gradients computed at the m interpolated inputs*.
###Code
def integral_approximation(gradients,
method='riemann_trapezoidal'):
"""Compute numerical approximation of integral from gradients.
Args:
gradients(Tensor): A 4D tensor of floats with the shape
(m_steps, img_height, img_width, 3).
method(str): A string representing the integral approximation method. The
following methods are implemented:
- riemann_trapezoidal(default)
- riemann_left
- riemann_midpoint
- riemann_right
Returns:
integrated_gradients(Tensor): A 3D tensor of floats with the shape
(img_height, img_width, 3).
"""
if method == 'riemann_trapezoidal':
grads = (gradients[:-1] + gradients[1:]) / tf.constant(2.0)
elif method == 'riemann_left':
grads = gradients
elif method == 'riemann_midpoint':
grads = gradients
elif method == 'riemann_right':
grads = gradients
else:
raise AssertionError("Provided Riemann approximation method is not valid.")
# Average integration approximation.
integrated_gradients = tf.math.reduce_mean(grads, axis=0)
return integrated_gradients
###Output
_____no_output_____
###Markdown
The `integral_approximation` function takes the gradients of the predicted probability of the "Giant Panda" class with respect to the interpolated images between the baseline and "Giant Panda" image.
###Code
ig = integral_approximation(
gradients=path_gradients,
method='riemann_trapezoidal')
###Output
_____no_output_____
###Markdown
You can confirm averaging across the gradients of m interpolated images returns an integrated gradients tensor with the same shape as the original "Giant Panda" image.
###Code
ig.shape
###Output
_____no_output_____
###Markdown
Putting it all together Now you will combine the previous steps into a single `integrated_gradients` function. To recap: $IntegratedGrads^{approx}_{i}(x)::=\overbrace{(x_{i}-x'_{i})}^\text{5.}\times \overbrace{\sum_{k=1}^{m}}^\text{4.} \frac{\partial \overbrace{F(\overbrace{x' + \overbrace{\frac{k}{m}}^\text{1.}\times(x - x'))}^\text{2.}}^\text{3.}}{\partial x_{i}} \times \overbrace{\frac{1}{m}}^\text{4.}$ 1. Generate alphas $\alpha$ 2. Generate interpolated path inputs = $(x' + \frac{k}{m}\times(x - x'))$ 3. Compute gradients of model output predictions with respect to input features = $\frac{\partial F(\text{interpolated path inputs})}{\partial x_{i}}$ 4. Integral approximation through averaging = $\sum_{k=1}^m \text{gradients} \times \frac{1}{m}$ 5. Scale integrated gradients with respect to the original image = $(x_{i}-x'_{i}) \times \text{average gradients}$
###Code
@tf.function
def integrated_gradients(model,
baseline,
input,
target_class_idx,
m_steps=50,
method='riemann_trapezoidal',
batch_size=32
):
"""
Args:
model(keras.Model): A trained model to generate predictions and inspect.
baseline(Tensor): A 3D image tensor with the shape
(image_height, image_width, 3) with the same shape as the input tensor.
input(Tensor): A 3D image tensor with the shape
(image_height, image_width, 3).
    target_class_idx(Tensor): An integer that corresponds to the correct
      ImageNet class index in the model's output predictions tensor.
    m_steps(Tensor): A 0D tensor of an integer corresponding to the number of
      linear interpolation steps for computing an approximate integral.
      Default is 50 steps.
method(str): A string representing the integral approximation method. The
following methods are implemented:
- riemann_trapezoidal(default)
- riemann_left
- riemann_midpoint
- riemann_right
batch_size(Tensor): A 0D tensor of an integer corresponding to a batch
size for alpha to scale computation and prevent OOM errors. Note: needs to
      be tf.int64 and should be < m_steps. Default value is 32.
Returns:
integrated_gradients(Tensor): A 3D tensor of floats with the same
shape as the input tensor (image_height, image_width, 3).
"""
# 1. Generate alphas.
alphas = generate_alphas(m_steps=m_steps,
method=method)
# Initialize TensorArray outside loop to collect gradients. Note: this data structure
  # is similar to a Python list but more performant and supports backpropagation.
# See https://www.tensorflow.org/api_docs/python/tf/TensorArray for additional details.
gradient_batches = tf.TensorArray(tf.float32, size=m_steps+1)
# Iterate alphas range and batch computation for speed, memory efficiency, and scaling to larger m_steps.
# Note: this implementation opted for lightweight tf.range iteration with @tf.function.
# Alternatively, you could also use tf.data, which adds performance overhead for the IG
# algorithm but provides more functionality for working with tensors and image data pipelines.
for alpha in tf.range(0, len(alphas), batch_size):
from_ = alpha
to = tf.minimum(from_ + batch_size, len(alphas))
alpha_batch = alphas[from_:to]
# 2. Generate interpolated inputs between baseline and input.
interpolated_path_input_batch = generate_path_inputs(baseline=baseline,
input=input,
alphas=alpha_batch)
# 3. Compute gradients between model outputs and interpolated inputs.
gradient_batch = compute_gradients(model=model,
path_inputs=interpolated_path_input_batch,
target_class_idx=target_class_idx)
# Write batch indices and gradients to TensorArray. Note: writing batch indices with
# scatter() allows for uneven batch sizes. Note: this operation is similar to a Python list extend().
# See https://www.tensorflow.org/api_docs/python/tf/TensorArray#scatter for additional details.
gradient_batches = gradient_batches.scatter(tf.range(from_, to), gradient_batch)
# Stack path gradients together row-wise into single tensor.
total_gradients = gradient_batches.stack()
# 4. Integral approximation through averaging gradients.
avg_gradients = integral_approximation(gradients=total_gradients,
method=method)
# 5. Scale integrated gradients with respect to input.
integrated_gradients = (input - baseline) * avg_gradients
return integrated_gradients
ig_attributions = integrated_gradients(model=inception_v1_classifier,
baseline=name_baseline_tensors['Baseline Image: Black'],
input=img_name_tensors['Giant Panda'],
target_class_idx=389,
m_steps=55,
method='riemann_trapezoidal')
ig_attributions.shape
###Output
_____no_output_____
###Markdown
Again, you can check that the IG feature attributions have the same shape as the input "Giant Panda" image. Step 4: checks to pick number of steps for IG approximation One of IG's nice theoretical properties is **completeness**. It is desirable because it guarantees that the IG feature attributions break down the entire model's output prediction. Each feature importance score captures a feature's individual contribution to the prediction, and when added together, you can recover the entire example prediction value itself as tidy bookkeeping. This provides a principled means to select the `m_steps` hyperparameter for IG.$\sum_{i}IntegratedGrads_i(x) = F(x) - F(x')$where:$F(x)$ = model's prediction on the input at the target class $F(x')$ = model's prediction on the baseline at the target class. You can translate this formula into a numeric score, with 0 representing convergence, as follows:$\delta = \sum_{i}{IntegratedGrads_i(x)} - (F(x) - F(x'))$ The original paper suggests the number of steps to range between 20 and 300 depending upon the example and application for the integral approximation. In practice, this can vary up to a few thousand `m_steps` to achieve an integral approximation within 5% error of the actual integral. Visual result convergence can generally be achieved with far fewer steps.
###Code
def convergence_check(model, attributions, baseline, input, target_class_idx):
"""
Args:
model(keras.Model): A trained model to generate predictions and inspect.
baseline(Tensor): A 3D image tensor with the shape
(image_height, image_width, 3) with the same shape as the input tensor.
input(Tensor): A 3D image tensor with the shape
(image_height, image_width, 3).
target_class_idx(Tensor): An integer that corresponds to the correct
ImageNet class index in the model's output predictions tensor. Default
value is 50 steps.
Returns:
(none): Prints scores and convergence delta to sys.stdout.
"""
# Your model's prediction on the baseline tensor. Ideally, the baseline score
# should be close to zero.
baseline_prediction = model(tf.expand_dims(baseline, 0))
baseline_score = tf.nn.softmax(tf.squeeze(baseline_prediction))[target_class_idx]
# Your model's prediction and score on the input tensor.
input_prediction = model(tf.expand_dims(input, 0))
input_score = tf.nn.softmax(tf.squeeze(input_prediction))[target_class_idx]
# Sum of your IG prediction attributions.
ig_score = tf.math.reduce_sum(attributions)
delta = ig_score - (input_score - baseline_score)
try:
# Test your IG score is <= 5% of the input minus baseline score.
tf.debugging.assert_near(ig_score, (input_score - baseline_score), rtol=0.05)
tf.print('Approximation accuracy within 5%.', output_stream=sys.stdout)
except tf.errors.InvalidArgumentError:
tf.print('Increase or decrease m_steps to increase approximation accuracy.', output_stream=sys.stdout)
tf.print('Baseline score: {:.3f}'.format(baseline_score))
tf.print('Input score: {:.3f}'.format(input_score))
tf.print('IG score: {:.3f}'.format(ig_score))
tf.print('Convergence delta: {:.3f}'.format(delta))
convergence_check(model=inception_v1_classifier,
attributions=ig_attributions,
baseline=name_baseline_tensors['Baseline Image: Black'],
input=img_name_tensors['Giant Panda'],
target_class_idx=389)
###Output
_____no_output_____
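###Markdown
If you want to apply the completeness check more systematically, a small sweep over candidate `m_steps` values (a minimal sketch, reusing the `integrated_gradients` and `convergence_check` functions and the tensors defined above; the candidate list is illustrative) lets you pick the smallest step count whose approximation error falls within your tolerance.
###Code
# Sweep candidate m_steps values for the "Giant Panda" image and report the
# convergence check for each.
for candidate_m_steps in [20, 50, 100, 200]:
  print('m_steps = {}'.format(candidate_m_steps))
  candidate_attributions = integrated_gradients(
      model=inception_v1_classifier,
      baseline=name_baseline_tensors['Baseline Image: Black'],
      input=img_name_tensors['Giant Panda'],
      target_class_idx=389,
      m_steps=candidate_m_steps,
      method='riemann_trapezoidal')
  convergence_check(model=inception_v1_classifier,
                    attributions=candidate_attributions,
                    baseline=name_baseline_tensors['Baseline Image: Black'],
                    input=img_name_tensors['Giant Panda'],
                    target_class_idx=389)
###Output
_____no_output_____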
###Markdown
Through utilizing the completeness axiom and the corresponding `convergence_check` function above, you were able to identify that you needed about 50 steps to approximate feature importances within 5% error for the "Giant Panda" image. Step 5: visualize IG attributions Finally, you are ready to visualize IG attributions. In order to visualize IG, you will utilize the plotting code below, which sums the absolute values of the IG attributions across the color channels for simplicity to return a greyscale attribution mask for standalone visualization and for overlaying on the original image. This plotting method captures the relative impact of pixels on the model's predictions well. Note that another visualization option for you to try is to preserve the direction of the gradient sign, e.g. + or -, on separate channels to more accurately represent how the features might combine; a minimal sketch of this idea appears after the plotting function below.
###Code
def plot_img_attributions(model,
baseline,
img,
target_class_idx,
m_steps=50,
cmap=None,
overlay_alpha=0.4):
"""
Args:
model(keras.Model): A trained model to generate predictions and inspect.
baseline(Tensor): A 3D image tensor with the shape
(image_height, image_width, 3) with the same shape as the input tensor.
img(Tensor): A 3D image tensor with the shape
(image_height, image_width, 3).
    target_class_idx(Tensor): An integer that corresponds to the correct
      ImageNet class index in the model's output predictions tensor.
    m_steps(Tensor): A 0D tensor of an integer corresponding to the number of
      linear interpolation steps for computing an approximate integral.
      Default is 50 steps.
cmap(matplotlib.cm): Defaults to None. Reference for colormap options -
https://matplotlib.org/3.2.1/tutorials/colors/colormaps.html. Interesting
options to try are None and high contrast 'inferno'.
overlay_alpha(float): A float between 0 and 1 that represents the intensity
of the original image overlay.
Returns:
fig(matplotlib.pyplot.figure): fig object to utilize for displaying, saving
plots.
"""
# Attributions
ig_attributions = integrated_gradients(model=model,
baseline=baseline,
input=img,
target_class_idx=target_class_idx,
m_steps=m_steps)
convergence_check(model, ig_attributions, baseline, img, target_class_idx)
# Per the original paper, take the absolute sum of the attributions across
# color channels for visualization. The attribution mask shape is a greyscale image
# with shape (224, 224).
attribution_mask = tf.reduce_sum(tf.math.abs(ig_attributions), axis=-1)
# Visualization
fig, axs = plt.subplots(nrows=2, ncols=2, squeeze=False, figsize=(8, 8))
axs[0,0].set_title('Baseline Image')
axs[0,0].imshow(baseline)
axs[0,0].axis('off')
axs[0,1].set_title('Original Image')
axs[0,1].imshow(img)
axs[0,1].axis('off')
axs[1,0].set_title('IG Attribution Mask')
axs[1,0].imshow(attribution_mask, cmap=cmap)
axs[1,0].axis('off')
axs[1,1].set_title('Original + IG Attribution Mask Overlay')
axs[1,1].imshow(attribution_mask, cmap=cmap)
axs[1,1].imshow(img, alpha=overlay_alpha)
axs[1,1].axis('off')
plt.tight_layout()
return fig
###Output
_____no_output_____
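###Markdown
As a side note, here is the signed-visualization idea mentioned above as a minimal sketch (an illustrative addition, reusing the `ig_attributions` computed earlier for the "Giant Panda" image): split the attributions into their positive and negative parts and display each as its own mask, rather than collapsing them with an absolute value.
###Code
# Separate positive and negative IG attributions and sum each across the
# color channels to produce two greyscale masks.
positive_mask = tf.reduce_sum(tf.nn.relu(ig_attributions), axis=-1)
negative_mask = tf.reduce_sum(tf.nn.relu(-ig_attributions), axis=-1)

fig, axs = plt.subplots(nrows=1, ncols=2, squeeze=False, figsize=(8, 4))
axs[0, 0].set_title('Positive IG attributions')
axs[0, 0].imshow(positive_mask, cmap=plt.cm.inferno)
axs[0, 0].axis('off')
axs[0, 1].set_title('Negative IG attributions')
axs[0, 1].imshow(negative_mask, cmap=plt.cm.inferno)
axs[0, 1].axis('off')
plt.tight_layout()
###Output
_____no_output_____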
###Markdown
Visual inspection of the IG attributions on the "Fireboat" image shows that Inception V1 identifies the water cannons and spouts as contributing to its correct prediction.
###Code
_ = plot_img_attributions(model=inception_v1_classifier,
img=img_name_tensors['Fireboat'],
baseline=name_baseline_tensors['Baseline Image: Black'],
target_class_idx=555,
m_steps=240,
cmap=plt.cm.inferno,
overlay_alpha=0.4)
###Output
_____no_output_____
###Markdown
IG attributions on the "School Bus" image highlight the shape, front lighting, and front stop sign.
###Code
_ = plot_img_attributions(model=inception_v1_classifier,
img=img_name_tensors['School Bus'],
baseline=name_baseline_tensors['Baseline Image: Black'],
target_class_idx=780,
m_steps=100,
cmap=None,
overlay_alpha=0.2)
###Output
_____no_output_____
###Markdown
Returning to the "Giant Panda" image, IG attributions hightlight the texture, nose shape, and white fur of the Panda's face.
###Code
_ = plot_img_attributions(model=inception_v1_classifier,
img=img_name_tensors['Giant Panda'],
baseline=name_baseline_tensors['Baseline Image: Black'],
target_class_idx=389,
m_steps=55,
cmap=None,
overlay_alpha=0.5)
###Output
_____no_output_____
###Markdown
How do different baselines impact interpretation of IG attributions? In the section **Step 2: Establish baseline to compare against inputs**, the explanation from the original IG paper and discussion recommended a black baseline image to "ignore" and allow for interpretation of the predictions solely as a function of the input pixels. To motivate the choice of a black baseline image for interpretation, let's take a look at how a random baseline influences IG attributions. Recall from above that with a black baseline on the fireboat image, the IG attributions were primarily focused on the right water cannon of the fireboat. Now, with a random baseline, the interpretation is much less clear. The IG attribution mask below shows a hazy attribution cloud of varying pixel intensity around the entire region of the water cannon streams. Are these truly significant features identified by the model, or artifacts of random dark pixels from the random baseline? It is inconclusive without more investigation. The random baseline has changed the interpretation of the pixel intensities from being solely in relation to the input features to input features plus spurious attributions from the baseline.
###Code
_ = plot_img_attributions(model=inception_v1_classifier,
img=img_name_tensors['Fireboat'],
baseline=name_baseline_tensors['Baseline Image: Random'],
target_class_idx=555,
m_steps=240,
cmap=None,
overlay_alpha=0.3)
###Output
_____no_output_____
###Markdown
Returning to the school bus image, a black baseline really highlighted the school bus shape and stop sign as strongly distinguishing features. In contrast, a random noise baseline makes interpretation of the IG attribution mask significantly more difficult. In particular, this attribution mask would wrongly lead you to believe that the model found a small area of pixels along the side of the bus significant.
###Code
_ = plot_img_attributions(model=inception_v1_classifier,
img=img_name_tensors['School Bus'],
baseline=name_baseline_tensors['Baseline Image: Random'],
target_class_idx=780,
m_steps=100,
cmap=None,
overlay_alpha=0.3)
###Output
_____no_output_____
###Markdown
**Are there any scenarios where you would prefer a non-black baseline? Yes.** Consider the photo below of an all-black beetle on a white background. With a black baseline, the beetle itself receives almost no pixel attribution; IG only highlights small bright portions of the beetle caused by glare, along with some spurious background and colored leg pixels. *For this example, black pixels are meaningful and do not provide an uninformative baseline.*
###Code
_ = plot_img_attributions(model=inception_v1_classifier,
img=img_name_tensors['Black Beetle'],
baseline=name_baseline_tensors['Baseline Image: Black'],
target_class_idx=307,
m_steps=200,
cmap=None,
overlay_alpha=0.3)
###Output
_____no_output_____
###Markdown
A white baseline is a better contrastive choice here to highlight the important pixels on the beetle.
###Code
_ = plot_img_attributions(model=inception_v1_classifier,
img=img_name_tensors['Black Beetle'],
baseline=name_baseline_tensors['Baseline Image: White'],
target_class_idx=307,
m_steps=200,
cmap=None,
overlay_alpha=0.3)
###Output
_____no_output_____
###Markdown
Ultimately, picking any constant color baseline has potential interpretation problems through visual inspection alone, without consideration of the underlying values and their signs. Baseline selection is still an area of active research, with various proposals (e.g. averaging multiple random baselines, blurred inputs, etc.) discussed in depth in the distill.pub article [Visualizing the Impact of Feature Attribution Baselines](https://distill.pub/2020/attribution-baselines/). Use cases IG is a model-agnostic interpretability method that can be applied to any differentiable model (e.g. neural networks) to understand its predictions in terms of its input features, whether they be images, video, text, or structured data.**At Google, IG has been applied in 20+ product areas to recommender system, classification, and regression models for feature importance and selection, model error analysis, train-test data skew monitoring, and explaining model behavior to stakeholders.** The subsections below present a non-exhaustive list of the most common use cases for IG, biased toward production machine learning workflows. Use case: understanding feature importances IG relative feature importances give both model builders and stakeholders a better understanding of your model's learned features, provide insight into the underlying data it was trained on, and provide a basis for feature selection. Let's take a look at an example of how IG relative feature importances can provide insight into the underlying input data. **What is the difference between a Golden Retriever and a Labrador Retriever?** Consider again the example images of the [Golden Retriever](https://en.wikipedia.org/wiki/Golden_Retriever) and the Yellow [Labrador Retriever](https://en.wikipedia.org/wiki/Labrador_Retriever) below. If you are not a domain expert familiar with these breeds, you might reasonably conclude these are 2 images of the same type of dog. They both have similar face and body shapes as well as coloring. Your model, Inception V1, already correctly identifies a Golden Retriever and a Labrador Retriever. In fact, it is quite confident about the Golden Retriever in the top image, even though there is a bit of lingering doubt about the Labrador Retriever as seen with its appearance in prediction 4. In comparison, the model is relatively less confident about its correct prediction of the Labrador Retriever in the second image and also sees some shades of similarity with the Golden Retriever, which also makes an appearance in the top 5 predictions.
###Code
_ = plot_img_predictions(
model=inception_v1_classifier,
img=tf.stack([img_name_tensors['Golden Retriever'],
img_name_tensors['Yellow Labrador Retriever']]),
img_titles=tf.stack(['Golden Retriever',
'Yellow Labrador Retriever']),
label_vocab=imagenet_label_vocab,
top_k=5
)
###Output
_____no_output_____
###Markdown
Without any prior understanding of how to differentiate these dogs or the features to do so, what can you learn from IG's feature importances? Review the Golden Retriever IG attribution mask and IG Overlay of the original image below. Notice how the pixel intensities are primarily highlighted on the face and shape of the dog but are brightest on the front and back legs and tail, in areas of *lengthy and wavy fur*. A quick Google search validates that this is indeed a key distinguishing feature of Golden Retrievers compared to Labrador Retrievers.
###Code
_ = plot_img_attributions(model=inception_v1_classifier,
img=img_name_tensors['Golden Retriever'],
baseline=name_baseline_tensors['Baseline Image: Black'],
target_class_idx=208,
m_steps=200,
cmap=None,
overlay_alpha=0.3)
###Output
_____no_output_____
###Markdown
Comparatively, IG also highlights the face and body shape of the Labrador Retriever, with a density of bright pixels on its *straight and short hair coat*. This provides additional evidence that the length and texture of the coats are key differentiators between these 2 breeds.
###Code
_ = plot_img_attributions(model=inception_v1_classifier,
img=img_name_tensors['Yellow Labrador Retriever'],
baseline=name_baseline_tensors['Baseline Image: Black'],
target_class_idx=209,
m_steps=100,
cmap=None,
overlay_alpha=0.3)
###Output
_____no_output_____
###Markdown
From visual inspection of the IG attributions, you now have insight into the underlying causal structure behind distinguishing Golden Retrievers and Yellow Labrador Retrievers without any prior knowledge. Going forward, you can use this insight to improve your model's performance further by refining its learned representations of these 2 breeds, retraining with additional examples of each dog breed and augmenting your training data through random perturbations of each dog's coat textures and colors. Use case: debugging data skew Training-serving data skew, a difference between performance during training and during model serving, is a hard-to-detect and widely prevalent issue impacting the performance of production machine learning systems. ML systems require dense samplings of input spaces in their training data to learn representations that generalize well to unseen data. To complement existing production ML monitoring of dataset and model performance statistics, tracking IG feature importances across time (e.g. "next day" splits) and data splits (e.g. train/dev/test splits) allows for meaningful monitoring of train-serving feature drift and skew. **Military uniforms change across space and time.** Recall from this tutorial's section on ImageNet that each class (e.g. military uniform) in the ILSVRC-2012-CLS training dataset is represented by an average of 1,000 images that Inception V1 could learn from. At present, there are about 195 countries around the world with significantly different military uniforms by service branch, climate, occasion, etc. Additionally, military uniforms have changed significantly over time within the same country. As a result, the potential input space for military uniforms is enormous, with many uniforms over-represented (e.g. US military) while others are sparsely represented (e.g. US Union Army) or absent from the training data altogether (e.g. Greece Presidential Guard).
###Code
_ = plot_img_predictions(
model=inception_v1_classifier,
img=tf.stack([img_name_tensors['Military Uniform (Grace Hopper)'],
img_name_tensors['Military Uniform (General Ulysses S. Grant)'],
img_name_tensors['Military Uniform (Greek Presidential Guard)']]),
img_titles=tf.stack(['Military Uniform (Grace Hopper)',
'Military Uniform (General Ulysses S. Grant)',
'Military Uniform (Greek Presidential Guard)']),
label_vocab=imagenet_label_vocab,
top_k=5
)
###Output
_____no_output_____
###Markdown
Inception V1 correctly classifies this image of [United States Rear Admiral and Computer Scientist, Grace Hopper](https://en.wikipedia.org/wiki/Grace_Hopper), under the class "military uniform" above. From visual inspection of the IG feature attributions, you can see that the brightest-intensity pixels are focused around the shirt collar and tie, the military insignia on the jacket and hat, and various pixel areas around her face. Note that there are also potentially spurious pixels highlighted in the background that are worth investigating empirically to refine the model's learned representation of military uniforms. However, IG does not provide insight into how these pixels were combined into the final prediction, so it's possible these pixels helped the model distinguish between military uniform and other similar classes such as windsor tie and suit.
###Code
_ = plot_img_attributions(model=inception_v1_classifier,
img=img_name_tensors['Military Uniform (Grace Hopper)'],
baseline=name_baseline_tensors['Baseline Image: Black'],
target_class_idx=653,
m_steps=200,
cmap=None,
overlay_alpha=0.3)
###Output
_____no_output_____
###Markdown
Below is an image of [United States General Ulysses S. Grant](https://en.wikipedia.org/wiki/Ulysses_S._Grant) circa 1865. He is wearing a military uniform of the same country as Rear Admiral Hopper above, but how well can the model identify a military uniform in this image, which has different coloring and was taken 120+ years earlier? From the model predictions above, you can see not very well, as the model incorrectly predicts a trench coat and suit above a military uniform. From visual inspection of the IG attribution mask, it is clear the model struggled to identify a military uniform in the faded black and white image, which lacks the contrastive range of a color image. Since this is a faded black and white image with prominent darker features, a white baseline is a better choice. The IG Overlay of the original image does suggest that the model identified the military insignia patch on the right shoulder, the face, collar, jacket buttons, and pixels around the edges of the coat. Using this insight, you can improve model performance by adding data augmentation to your input data pipeline to include additional colorless images and image translations, as well as additional example images with military coats.
###Code
_ = plot_img_attributions(model=inception_v1_classifier,
img=img_name_tensors['Military Uniform (General Ulysses S. Grant)'],
baseline=name_baseline_tensors['Baseline Image: White'],
target_class_idx=870,
m_steps=200,
cmap=None,
overlay_alpha=0.3)
###Output
_____no_output_____
###Markdown
Yikes! Inception V1 incorrectly predicted the image of a [Greek Presidential Guard](https://en.wikipedia.org/wiki/Presidential_Guard_(Greece)) as a vestment with low confidence. The underlying training data does not appear to have sufficient representation and density of Greek military uniforms. In fact, the lack of geo-diversity in large public image datasets, including ImageNet, was studied in the paper S. Shankar, Y. Halpern, E. Breck, J. Atwood, J. Wilson, and D. Sculley. ["No classification without representation: Assessing geodiversity issues in open datasets for the developing world."](https://arxiv.org/abs/1711.08536), 2017. The authors found "observable amerocentric and eurocentric representation bias" and strong differences in relative model performance across geographic areas.
###Code
_ = plot_img_attributions(model=inception_v1_classifier,
img=img_name_tensors['Military Uniform (Greek Presidential Guard)'],
baseline=name_baseline_tensors['Baseline Image: Black'],
target_class_idx=653,
m_steps=200,
cmap=None,
overlay_alpha=0.3)
###Output
_____no_output_____
###Markdown
Using the IG attributions above, you can see the model focused primarily on the face and the high-contrast white wavy kilt in the front and the vest, rather than the military insignia on the red hat or the sword hilt. While IG attributions alone will not identify or fix data skew or bias, when combined with model evaluation performance metrics and dataset statistics, IG attributions provide you with a guided path forward to collecting more, and more diverse, data to improve model performance. Re-training the model on this more diverse sampling of the input space of Greek military uniforms, in particular examples that emphasize military insignia, as well as utilizing weighting strategies during training, can help mitigate biased data and further refine model performance and generalization. Use case: debugging model performance IG feature attributions provide a useful debugging complement to dataset statistics and model performance evaluation metrics to better understand model quality. When using IG feature attributions for debugging, you are looking for insights into the following questions:* Which features are important? * How well do the model's learned features generalize? * Does the model learn "incorrect" or spurious features in the image beyond the true class object?* What features did my model miss?* Comparing correct and incorrect examples of the same class, what is the difference in the feature attributions? IG feature attributions are well suited for counterfactual reasoning to gain insight into your model's performance and limitations. This involves comparing feature attributions for images of the same class that receive different predictions. When combined with model performance metrics and dataset statistics, IG feature attributions give greater insight into model errors during debugging, helping you understand which features contributed to the incorrect prediction compared to the feature attributions on correct predictions. To go deeper on model debugging, see the Google AI [What-if tool](https://pair-code.github.io/what-if-tool/) to interactively inspect your dataset, model, and IG feature attributions. In the example below, you will apply 3 transformations to the "Yellow Labrador Retriever" image and contrast correct and incorrect IG feature attributions to gain insight into your model's limitations.
###Code
rotate90_labrador_retriever_img = tf.image.rot90(img_name_tensors['Yellow Labrador Retriever'])
upsidedown_labrador_retriever_img = tf.image.flip_up_down(img_name_tensors['Yellow Labrador Retriever'])
zoom_labrador_retriever_img = tf.keras.preprocessing.image.random_zoom(x=img_name_tensors['Yellow Labrador Retriever'], zoom_range=(0.45,0.45))
_ = plot_img_predictions(
model=inception_v1_classifier,
img=tf.stack([img_name_tensors['Yellow Labrador Retriever'],
rotate90_labrador_retriever_img,
upsidedown_labrador_retriever_img,
zoom_labrador_retriever_img]),
img_titles=tf.stack(['Yellow Labrador Retriever (original)',
'Yellow Labrador Retriever (rotated 90 degrees)',
'Yellow Labrador Retriever (flipped upsidedown)',
'Yellow Labrador Retriever (zoomed in)']),
label_vocab=imagenet_label_vocab,
top_k=5
)
###Output
_____no_output_____
###Markdown
These rotation and zooming examples serve to highlight an important limitation of convolutional neural networks like Inception V1: *CNNs are not naturally rotation or scale invariant.* All of these examples resulted in incorrect predictions. Now you will see an example of how comparing 2 attributions, one for an incorrect prediction vs. one for a known correct prediction, gives deeper feature-level insight into why the model made an error so that you can take corrective action.
###Code
labrador_retriever_attributions = integrated_gradients(model=inception_v1_classifier,
baseline=name_baseline_tensors['Baseline Image: Black'],
input=img_name_tensors['Yellow Labrador Retriever'],
target_class_idx=209,
m_steps=200,
method='riemann_trapezoidal')
zoom_labrador_retriever_attributions = integrated_gradients(model=inception_v1_classifier,
baseline=name_baseline_tensors['Baseline Image: Black'],
input=zoom_labrador_retriever_img,
target_class_idx=209,
m_steps=200,
method='riemann_trapezoidal')
###Output
_____no_output_____
###Markdown
Zooming in on the Labrador Retriever image causes Inception V1 to incorrectly predict a different dog breed, a [Saluki](https://en.wikipedia.org/wiki/Saluki). Compare the IG attributions on the incorrect and correct predictions below. You can see the IG attributions on the zoomed image still focus on the legs, but they are now much further apart and the midsection is proportionally narrower. Compared to the IG attributions on the original image, the visible head size is significantly smaller as well. Armed with this deeper feature-level understanding of your model's error, you can improve model performance by pursuing strategies such as training data augmentation to make your model more robust to changes in object proportions, or by checking that your image preprocessing code is the same during training and serving to prevent data skew introduced by zooming or resizing operations.
###Code
fig, axs = plt.subplots(nrows=1, ncols=3, squeeze=False, figsize=(16, 12))
axs[0,0].set_title('IG Attributions - Incorrect Prediction: Saluki')
axs[0,0].imshow(tf.reduce_sum(tf.abs(zoom_labrador_retriever_attributions), axis=-1), cmap=plt.cm.inferno)
axs[0,0].axis('off')
axs[0,1].set_title('IG Attributions - Correct Prediction: Labrador Retriever')
axs[0,1].imshow(tf.reduce_sum(tf.abs(labrador_retriever_attributions), axis=-1), cmap=None)
axs[0,1].axis('off')
axs[0,2].set_title('IG Attributions - both predictions overlayed')
axs[0,2].imshow(tf.reduce_sum(tf.abs(zoom_labrador_retriever_attributions), axis=-1), cmap=plt.cm.inferno, alpha=0.99)
axs[0,2].imshow(tf.reduce_sum(tf.abs(labrador_retriever_attributions), axis=-1), cmap=None, alpha=0.5)
axs[0,2].axis('off')
plt.tight_layout();
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Integrated gradients A shorter version of this notebook is also available as a TensorFlow tutorial. This tutorial demonstrates how to implement **Integrated Gradients (IG)**, an explainable AI technique described in the paper [Axiomatic Attribution for Deep Networks](https://arxiv.org/abs/1703.01365). IG aims to explain a model's predictions in terms of its features. It has many use cases including understanding feature importances, identifying data skew, and debugging model performance.IG has become a popular interpretability technique due to its broad applicability to any differentiable model, ease of implementation, theoretical justifications, and computational efficiency relative to alternative approaches, which allows it to scale to large networks and feature spaces such as images.You will start by walking through an implementation of IG step by step. Next, you will apply IG attributions to understand the pixel feature importances of an image classifier and explore applied machine learning use cases. Lastly, you will conclude with a discussion of IG's properties, limitations, and suggestions for next steps in your learning journey.To motivate this tutorial, here is the result of using IG to highlight important pixels that were used to classify this [image](https://commons.wikimedia.org/wiki/File:San_Francisco_fireboat_showing_off.jpg) as a fireboat. Explaining an image classifier
###Code
import matplotlib.pylab as plt
import numpy as np
import math
import sys
import tensorflow as tf
import tensorflow_hub as hub
###Output
_____no_output_____
###Markdown
Download Inception V1 from TF-Hub **TensorFlow Hub Module** IG can be applied to any neural network. To mirror the paper's implementation, you will use a pre-trained version of [Inception V1](https://arxiv.org/abs/1409.4842) from [TensorFlow Hub](https://tfhub.dev/google/imagenet/inception_v1/classification/4).
###Code
inception_v1_url = "https://tfhub.dev/google/imagenet/inception_v1/classification/4"
inception_v1_classifier = tf.keras.Sequential([
hub.KerasLayer(name='inception_v1',
handle=inception_v1_url,
trainable=False),
])
inception_v1_classifier.build([None, 224, 224, 3])
inception_v1_classifier.summary()
###Output
_____no_output_____
###Markdown
From the TF Hub module page, you need to keep in mind the following about Inception V1 for image classification:**Inputs**: The expected input shape for the model is `(None, 224, 224, 3,)`. This is a dense 4D tensor of dtype float32 and shape `(batch_size, height, width, RGB channels)` whose elements are RGB color values of pixels normalized to the range [0, 1]. The first element is `None` to indicate that the model can take any integer batch size.**Outputs**: A `tf.Tensor` of logits in the shape of `(n_images, 1001)`. Each row represents the model's predicted score for each of ImageNet's 1,001 classes. For the model's top predicted class index you can use `tf.argmax(predictions, axis=-1)`. Furthermore, you can also convert the model's logit output to predicted probabilities across all classes using `tf.nn.softmax(predictions, axis=-1)` to quantify the model's uncertainty as well as explore similar predicted classes for debugging.
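The short sanity check below (an illustrative addition, using the `inception_v1_classifier` built above) passes a random batch of the expected input shape through the model and inspects the output logits, the top predicted class index, and the softmax probabilities.
###Code
# Pass a random (1, 224, 224, 3) batch through the model and inspect outputs.
dummy_batch = tf.random.uniform(shape=(1, 224, 224, 3), minval=0.0, maxval=1.0)
logits = inception_v1_classifier(dummy_batch)
top_class_idx = tf.argmax(logits, axis=-1)
probabilities = tf.nn.softmax(logits, axis=-1)
print('Logits shape:', logits.shape)
print('Top predicted class index:', top_class_idx.numpy())
print('Top predicted probability: {:.4f}'.format(float(tf.reduce_max(probabilities))))
###Output
_____no_output_____
###Markdown
Next, download the ImageNet label vocabulary so that you can translate prediction indices into human-readable labels.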
###Code
def load_imagenet_labels(file_path):
"""
Args:
file_path(str): A URL download path.
Returns:
imagenet_label_array(numpy.ndarray): Array of strings with shape (1001,).
"""
labels_file = tf.keras.utils.get_file('ImageNetLabels.txt', file_path)
with open(labels_file, "r") as reader:
f = reader.read()
labels = f.splitlines()
imagenet_label_array = np.array(labels)
return imagenet_label_array
imagenet_label_vocab = load_imagenet_labels('https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')
###Output
_____no_output_____
###Markdown
Load and preprocess images with `tf.image`You will illustrate IG using several images. Links to the original images are as follows ([Fireboat](https://commons.wikimedia.org/wiki/File:San_Francisco_fireboat_showing_off.jpg), [School Bus](https://commons.wikimedia.org/wiki/File:Thomas_School_Bus_Bus.jpg), [Giant Panda](https://commons.wikimedia.org/wiki/File:Giant_Panda_2.JPG), [Black Beetle](https://commons.wikimedia.org/wiki/File:Lucanus.JPG), [Golden Retriever](https://commons.wikimedia.org/wiki/File:Golden_retriever.jpg), [General Ulysses S. Grant](https://commons.wikimedia.org/wiki/Category:Ulysses_S._Grant/media/File:Portrait_of_Maj._Gen._Ulysses_S._Grant,_officer_of_the_Federal_Army_LOC_cwpb.06941.jpg), [Greece Presidential Guard](https://commons.wikimedia.org/wiki/File:Greek_guard_uniforms_1.jpg)).
###Code
def parse_image(file_name):
"""
This function downloads and standardizes input JPEG images for the
  inception_v1 model. It applies the following processing:
- Reads JPG file.
- Decodes JPG file into colored image.
- Converts data type to standard tf.float32.
- Resizes image to expected Inception V1 input dimension of
(224, 224, 3) with preserved aspect ratio. E.g. don't stretch image.
- Pad image to (224, 224, 3) shape with black pixels.
Args:
file_name(str): Direct URL path to the JPG image.
Returns:
image(Tensor): A Tensor of floats with shape (224, 224, 3).
"""
image = tf.io.read_file(file_name)
image = tf.image.decode_jpeg(image, channels=3)
image = tf.image.convert_image_dtype(image, tf.float32)
image = tf.image.resize(image, (224, 224), preserve_aspect_ratio=True)
image = tf.image.resize_with_pad(image, target_height=224, target_width=224)
return image
# img_name_url {image_name: origin_url}
img_name_url = {
'Fireboat': 'https://storage.googleapis.com/applied-dl/temp/San_Francisco_fireboat_showing_off.jpg',
'School Bus': 'https://storage.googleapis.com/applied-dl/temp/Thomas_School_Bus_Bus.jpg',
'Giant Panda': 'https://storage.googleapis.com/applied-dl/temp/Giant_Panda_2.jpeg',
'Black Beetle': 'https://storage.googleapis.com/applied-dl/temp/Lucanus.jpeg',
'Golden Retriever': 'https://storage.googleapis.com/applied-dl/temp/Golden_retriever.jpg',
'Yellow Labrador Retriever': 'https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg',
'Military Uniform (Grace Hopper)': 'https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg',
'Military Uniform (General Ulysses S. Grant)': 'https://storage.googleapis.com/applied-dl/temp/General_Ulysses_S._Grant%2C_Union_Army_(6186252896).jpg',
'Military Uniform (Greek Presidential Guard)': 'https://storage.googleapis.com/applied-dl/temp/Greek_guard_uniforms_1.jpg',
}
# img_name_path {image_name: downloaded_image_local_path}
img_name_path = {name: tf.keras.utils.get_file(name, url) for (name, url) in img_name_url.items()}
# img_name_tensors {image_name: parsed_image_tensor}
img_name_tensors = {name: parse_image(img_path) for (name, img_path) in img_name_path.items()}
plt.figure(figsize=(14,14))
for n, (name, img_tensors) in enumerate(img_name_tensors.items()):
ax = plt.subplot(3,3,n+1)
ax.imshow(img_tensors)
ax.set_title(name)
ax.axis('off')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Applying integrated gradients IG is an elegant and simple idea to explain a model's predictions in relation to its input. The basic intuition is to measure a feature's importance to your model by incrementally increasing the feature's intensity between its absence (baseline) and its input value, computing the change in your model's predictions with respect to that feature at each step, and averaging these incremental changes together. To gain a deeper understanding of how IG works, you will walk through its application over the sub-sections below. Step 1: Identify model input and output tensors IG is a post-hoc explanatory method that works with any differentiable model regardless of its implementation. As such, you can pass any input example tensor to a model to generate an output prediction tensor. Note that Inception V1 outputs a multiclass un-normalized logits prediction tensor, so you will use a softmax operator to turn the logits tensor into a tensor of predicted probabilities, which you will use to compute IG feature attributions.
###Code
# stack images into a batch for processing.
image_titles = tf.convert_to_tensor(list(img_name_tensors.keys()))
image_batch = tf.convert_to_tensor(list(img_name_tensors.values()))
image_batch.shape
def top_k_predictions_scores_labels(model, img, label_vocab, top_k=3):
"""
Args:
model(tf.keras.Model): Trained Keras model.
img(tf.Tensor): A 4D tensor of floats with the shape
(img_n, img_height, img_width, 3).
label_vocab(numpy.ndarray): An array of strings with shape (1001,).
top_k(int): Number of results to return.
Returns:
k_predictions_idx(tf.Tensor): A tf.Tensor [n_images, top_k] of tf.int32
prediction indicies.
    k_predictions_label(tf.Tensor): A tf.Tensor [n_images, top_k] of tf.string
      prediction labels.
    k_predictions_proba(tf.Tensor): A tf.Tensor [n_images, top_k] of tf.float32
      prediction probabilities.
"""
# These are logits (unnormalized scores).
predictions = model(img)
# Convert logits into probabilities.
predictions_proba = tf.nn.softmax(predictions, axis=-1)
# Filter top k prediction probabilities and indices.
k_predictions_proba, k_predictions_idx = tf.math.top_k(
input=predictions_proba, k=top_k)
# Lookup top k prediction labels in label_vocab array.
k_predictions_label = tf.convert_to_tensor(
label_vocab[k_predictions_idx.numpy()],
dtype=tf.string)
return k_predictions_idx, k_predictions_label, k_predictions_proba
def plot_img_predictions(model, img, img_titles, label_vocab, top_k=3):
"""Plot images with top_k predictions.
Args:
model(tf.keras.Model): Trained Keras model.
img(Tensor): A 4D Tensor of floats with the shape
(img_n, img_height, img_width, 3).
    img_titles(Tensor): A 1D Tensor of strings with the shape (img_n,).
label_vocab(numpy.ndarray): An array of strings with shape (1001,).
top_k(int): Number of results to return.
Returns:
fig(matplotlib.pyplot.figure): fig object to utilize for displaying, saving
plots.
"""
pred_idx, pred_label, pred_proba = \
top_k_predictions_scores_labels(
model=model,
img=img,
label_vocab=label_vocab,
top_k=top_k)
img_arr = img.numpy()
title_arr = img_titles.numpy()
pred_idx_arr = pred_idx.numpy()
pred_label_arr = pred_label.numpy()
pred_proba_arr = pred_proba.numpy()
n_rows = img_arr.shape[0]
# Preserve image height by converting pixels to inches based on dpi.
size = n_rows * (224 // 48)
fig, axs = plt.subplots(nrows=img_arr.shape[0], ncols=1, figsize=(size, size), squeeze=False)
for idx, image in enumerate(img_arr):
axs[idx, 0].imshow(image)
axs[idx, 0].set_title(title_arr[idx].decode('utf-8'), fontweight='bold')
axs[idx, 0].axis('off')
for k in range(top_k):
k_idx = pred_idx_arr[idx][k]
k_label = pred_label_arr[idx][k].decode('utf-8')
k_proba = pred_proba_arr[idx][k]
if k==0:
s = 'Prediction {:}: ({:}-{:}) Score: {:.1%}'.format(k+1, k_idx, k_label, k_proba)
axs[idx, 0].text(244 + size, 102+(k*40), s, fontsize=12, fontweight='bold')
else:
s = 'Prediction {:}: ({:}-{:}) Score: {:.1%}'.format(k+1, k_idx, k_label, k_proba)
axs[idx, 0].text(244 + size, 102+(k*20), s, fontsize=12)
plt.tight_layout()
return fig
_ = plot_img_predictions(
model=inception_v1_classifier,
img=image_batch,
img_titles=image_titles,
label_vocab=imagenet_label_vocab,
top_k=5
)
###Output
_____no_output_____
###Markdown
Step 2: establish baseline to compare inputs against Defining missingness, or a starting point in feature space for comparison, is at the core of machine learning interpretability methods. For IG, this concept is encoded as a baseline. A **baseline** is an uninformative input used as a starting point for defining IG attributions; it is essential for interpreting IG prediction attributions as a function of individual input features. When selecting a baseline for neural networks, the goal is to choose a baseline such that the prediction at the baseline is near zero, to minimize the baseline's impact on the interpretation of the prediction attributions. For image classification networks, a baseline image with its pixels set to 0 meets this objective. For text networks, an all-zero input embedding vector makes for a good baseline. Models on structured data, which typically involve a mix of continuous numeric features, will often use the observed median value as a baseline because 0 is an informative value for these features. Note, however, that this changes the interpretation of the features to their importance in relation to the baseline value as opposed to the input data directly. The paper authors provide additional guidance on baseline selection for different input feature data types and models under a [How to Use Integrated Gradients Guide](https://github.com/ankurtaly/Integrated-Gradients/blob/master/howto.md#sanity-checking-baselines) on Github.
###Code
# name_baseline_tensors. Set random seed for reproducibility of random baseline image and associated attributions.
tf.random.set_seed(42)
name_baseline_tensors = {
'Baseline Image: Black': tf.zeros(shape=(224,224,3)),
'Baseline Image: Random': tf.random.uniform(shape=(224,224,3), minval=0.0, maxval=1.0),
'Baseline Image: White': tf.ones(shape=(224,224,3)),
}
plt.figure(figsize=(12,12))
for n, (name, baseline_tensor) in enumerate(name_baseline_tensors.items()):
ax = plt.subplot(1,3,n+1)
ax.imshow(baseline_tensor)
ax.set_title(name)
ax.axis('off')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Step 3: Integrated gradients in TensorFlow 2.x The exact formula for Integrated Gradients from the original paper is the following:$IntegratedGradients_{i}(x) ::= (x_{i} - x'_{i})\times\int_{\alpha=0}^1\frac{\partial F(x'+\alpha \times (x - x'))}{\partial x_i}{d\alpha}$where:$_{i}$ = feature $x$ = input $x'$ = baseline $\alpha$ = interpolation constant to perturb features by However, in practice, computing a definite integral is not always numerically possible and can be computationally costly, so you compute the following numerical approximation:$IntegratedGrads^{approx}_{i}(x)::=(x_{i}-x'_{i})\times\sum_{k=1}^{m}\frac{\partial F(x' + \frac{k}{m}\times(x - x'))}{\partial x_{i}} \times \frac{1}{m}$where:$_{i}$ = feature (individual pixel) $x$ = input (image tensor) $x'$ = baseline (image tensor) $k$ = scaled feature perturbation constant $m$ = number of steps in the Riemann sum approximation of the integral. This is covered in depth in the section *Compute integral approximation* below. You will walk through the intuition and implementation of the above equation in the sections below. Generate interpolated path inputs $IntegratedGrads^{approx}_{i}(x)::=(x_{i}-x'_{i})\times\sum_{k=1}^{m}\frac{\partial F(\overbrace{x' + \frac{k}{m}\times(x - x')}^\text{generate m interpolated images at k intervals})}{\partial x_{i}} \times \frac{1}{m}$ The first step is to generate a [linear interpolation](https://en.wikipedia.org/wiki/Linear_interpolation) path between your known baseline and input images. You can think of the interpolated images as small steps in the feature space, for each feature pixel, between your baseline and input images. These steps are represented by $\alpha$ in the original equation. You will revisit $\alpha$ in greater depth in the subsequent section *Compute approximate integral*, as its values are tied to your choice of integration approximation method. For now, you can use the handy `tf.linspace` function to generate a `Tensor` of alphas at m_steps=20 linear intervals between 0 and 1 as an input to the `generate_path_inputs` function below.
###Code
m_steps=20
alphas = tf.linspace(start=0.0, stop=1.0, num=m_steps+1)
def generate_path_inputs(baseline,
input,
alphas):
"""Generate m interpolated inputs between baseline and input features.
Args:
baseline(Tensor): A 3D image tensor of floats with the shape
(img_height, img_width, 3).
input(Tensor): A 3D image tensor of floats with the shape
(img_height, img_width, 3).
alphas(Tensor): A 1D tensor of uniformly spaced floats with the shape
(m_steps,).
Returns:
path_inputs(Tensor): A 4D tensor of floats with the shape
(m_steps, img_height, img_width, 3).
"""
# Expand dimensions for vectorized computation of interpolations.
alphas_x = alphas[:, tf.newaxis, tf.newaxis, tf.newaxis]
baseline_x = tf.expand_dims(baseline, axis=0)
input_x = tf.expand_dims(input, axis=0)
delta = input_x - baseline_x
path_inputs = baseline_x + alphas_x * delta
return path_inputs
###Output
_____no_output_____
###Markdown
Generate interpolated images along a linear path at alpha intervals between a black baseline image and the example "Giant Panda" image.
###Code
path_inputs = generate_path_inputs(
baseline=name_baseline_tensors['Baseline Image: Black'],
input=img_name_tensors['Giant Panda'],
alphas=alphas)
path_inputs.shape
###Output
_____no_output_____
###Markdown
The interpolated images are visualized below. Note that another way of thinking about the $\alpha$ constant is that it is monotonically and consistently increasing each interpolated image's intensity.
###Code
fig, axs = plt.subplots(nrows=1, ncols=5, squeeze=False, figsize=(24, 24))
axs[0,0].set_title('Baseline \n alpha: {:.2f}'.format(alphas[0]))
axs[0,0].imshow(path_inputs[0])
axs[0,0].axis('off')
axs[0,1].set_title('=> Interpolated Image # 1 \n alpha: {:.2f}'.format(alphas[1]))
axs[0,1].imshow(path_inputs[1])
axs[0,1].axis('off')
axs[0,2].set_title('=> Interpolated Image # 2 \n alpha: {:.2f}'.format(alphas[2]))
axs[0,2].imshow(path_inputs[2])
axs[0,2].axis('off')
axs[0,3].set_title('... => Interpolated Image # 10 \n alpha: {:.2f}'.format(alphas[10]))
axs[0,3].imshow(path_inputs[10])
axs[0,3].axis('off')
axs[0,4].set_title('... => Input Image \n alpha: {:.2f}'.format(alphas[-1]))
axs[0,4].imshow(path_inputs[-1])
axs[0,4].axis('off')
plt.tight_layout();
###Output
_____no_output_____
###Markdown
Compute gradients Now that you have generated 20 interpolated images between a black baseline and your example "Giant Panda" photo, let's take a look at how to calculate [gradients](https://en.wikipedia.org/wiki/Gradient) to measure the relationship between changes to your feature pixels and changes in your model's predictions. The gradient of F, your Inception V1 model function, represents the direction of maximum increase of your predictions with respect to your input. In the case of images, the gradient tells you which pixels have the steepest local slope of your model's predicted class probability with respect to the original pixels. $IntegratedGrads^{approx}_{i}(x)::=(x_{i}-x'_{i})\times\sum_{k=1}^{m}\frac{\overbrace{\partial F(\text{interpolated images})}^\text{Compute gradients}}{\partial x_{i}} \times \frac{1}{m}$where: $F()$ = your model's prediction function $\frac{\partial{F}}{\partial{x_i}}$ = gradient (vector of partial derivatives $\partial$) of your model F's prediction function relative to each feature $x_i$ TensorFlow 2.x makes computing gradients extremely easy for you with the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) object, which performantly computes and records gradient operations.
###Code
def compute_gradients(model, path_inputs, target_class_idx):
"""Compute gradients of model predicted probabilties with respect to inputs.
Args:
model(tf.keras.Model): Trained Keras model.
path_inputs(Tensor): A 4D tensor of floats with the shape
(m_steps, img_height, img_width, 3).
target_class_idx(Tensor): A 0D tensor of an int corresponding to the correct
ImageNet target class index.
Returns:
gradients(Tensor): A 4D tensor of floats with the shape
(m_steps, img_height, img_width, 3).
"""
with tf.GradientTape() as tape:
tape.watch(path_inputs)
predictions = model(path_inputs)
# Note: IG requires softmax probabilities; converting Inception V1 logits.
outputs = tf.nn.softmax(predictions, axis=-1)[:, target_class_idx]
gradients = tape.gradient(outputs, path_inputs)
return gradients
###Output
_____no_output_____
###Markdown
Compute the gradients of Inception V1's predicted probability for the target class with respect to each interpolated input. Recall that your model returns a `(1, 1001)` shaped `Tensor` of logits that you will convert to predicted probabilities for every class. You need to pass the correct ImageNet target class index to the `compute_gradients` function below in order to identify the specific output tensor you wish to explain in relation to your input and baseline.
###Code
path_gradients = compute_gradients(
model=inception_v1_classifier,
path_inputs=path_inputs,
target_class_idx=389)
###Output
_____no_output_____
###Markdown
Note the output shape `(n_interpolated_images, img_height, img_width, RGB)`. Below you can see the local gradients visualized for the first 5 interpolated inputs relative to the input "Giant Panda" image as a series of ghostly shapes. You can think of these gradients as measuring the change in your model's predictions for each small step in the feature space. *The largest gradient magnitudes generally occur at the lowest alphas*.
###Code
fig, axs = plt.subplots(nrows=1, ncols=5, squeeze=False, figsize=(24, 24))
for i in range(5):
axs[0,i].imshow(tf.cast(255 * path_gradients[i], tf.uint8), cmap=plt.cm.inferno)
axs[0,i].axis('off')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
**Why not just use gradients for attribution? Saturation** You may be wondering at this point, why not just compute the gradients of the predictions with respect to the input as feature attributions? Why bother with slowly changing the intensity of the input image at all? The reason is that networks can *saturate*: the magnitude of the local feature gradients can become extremely small and go toward zero, resulting in important features having a small gradient. *The implication is that saturation can result in discontinuous feature importances and miss important features.* This concept is visualized in the 2 graphs below:
###Code
pred = inception_v1_classifier(path_inputs)
pred_proba = tf.nn.softmax(pred, axis=-1)[:, 389]
plt.figure(figsize=(10,4))
ax1 = plt.subplot(1,2,1)
ax1.plot(alphas, pred_proba)
ax1.set_title('Target class predicted probability over alpha')
ax1.set_ylabel('model p(target class)')
ax1.set_xlabel('alpha')
ax1.set_ylim([0,1])
ax2 = plt.subplot(1,2,2)
# Average across interpolation steps
average_grads = tf.math.reduce_mean(path_gradients, axis=[1,2,3])
# Normalize average gradients to 0 to 1 scale. E.g. (x - min(x))/(max(x)-min(x))
average_grads_norm = (average_grads-tf.math.reduce_min(average_grads))/(tf.math.reduce_max(average_grads)-tf.reduce_min(average_grads))
ax2.plot(alphas, average_grads_norm)
ax2.set_title('Average pixel gradients (normalized) over alpha')
ax2.set_ylabel('Average pixel gradients')
ax2.set_xlabel('alpha')
ax2.set_ylim([0,1]);
###Output
_____no_output_____
###Markdown
Notice in the left plot above how the model prediction function quickly learns the correct "Giant Panda" class when alpha is between 0.0 and 0.3 and then largely flattens between 0.3 and 1.0. There could still be features that the model relies on for correct prediction that differ from the baseline, but the magnitudes of those feature gradients become really small and bounce around 0 starting from 0.3 to 1.0. Similarly, in the right plot of the average pixel gradients plotted over alpha, you can see the peak "aha" moment where the model learns the target "Giant Panda", but also that the gradient magnitudes quickly shrink toward 0 and even become discontinuous briefly around 0.6. In practice, this can cause gradient attributions to miss important features that differ between input and baseline and to focus on irrelevant features. **The beauty of IG is that it solves the problem of discontinuous gradient feature importances by taking small steps in the feature space to compute local gradients between predictions and inputs across the feature space and then averages these gradients together to produce IG feature attributions.** Compute integral approximation There are many different ways you can go about computing the numeric approximation of an integral for IG with different tradeoffs in accuracy and convergence across varying functions. A popular class of methods is called [Riemann sums](https://en.wikipedia.org/wiki/Riemann_sum). The code below shows the geometric interpretation of Left, Right, Midpoint, and Trapezoidal Riemann sums for intuition:
###Code
def plot_riemann_sums(fn, start_val, end_val, m_steps=10):
"""
Plot Riemann Sum integral approximations for single variable functions.
Args:
fn(function): Any single variable function.
start_val(float): Start of the interval to plot and integrate over.
end_val(float): End of the interval to plot and integrate over.
m_steps(int): Linear interpolation steps for approximation.
Returns:
fig(matplotlib.pyplot.figure): fig object to utilize for displaying, saving
plots.
"""
# fn plot args
x = tf.linspace(start_val, end_val, m_steps**2+1)
y = fn(x)
fig = plt.figure(figsize=(16,4))
# Left Riemann Sum
lr_ax = plt.subplot(1,4,1)
lr_ax.plot(x, y)
lr_x = tf.linspace(0.0, 1.0, m_steps+1)
lr_point = lr_x[:-1]
lr_height = fn(lr_x[:-1])
lr_ax.plot(lr_point, lr_height, 'b.', markersize=10)
lr_ax.bar(lr_point, lr_height, width=(end_val-start_val)/m_steps, alpha=0.2, align='edge', edgecolor='b')
lr_ax.set_title('Left Riemann Sum \n m_steps = {}'.format(m_steps))
lr_ax.set_xlabel('alpha')
# Right Riemann Sum
rr_ax = plt.subplot(1,4,2)
rr_ax.plot(x, y)
rr_x = tf.linspace(0.0, 1.0, m_steps+1)
rr_point = rr_x[1:]
rr_height = fn(rr_x[1:])
rr_ax.plot(rr_point, rr_height, 'b.', markersize=10)
rr_ax.bar(rr_point, rr_height, width=-(end_val-start_val)/m_steps, alpha=0.2, align='edge', edgecolor='b')
rr_ax.set_title('Right Riemann Sum \n m_steps = {}'.format(m_steps))
rr_ax.set_xlabel('alpha')
# Midpoint Riemann Sum
mr_ax = plt.subplot(1,4,3)
mr_ax.plot(x, y)
mr_x = tf.linspace(0.0, 1.0, m_steps+1)
mr_point = (mr_x[:-1] + mr_x[1:])/2
mr_height = fn(mr_point)
mr_ax.plot(mr_point, mr_height, 'b.', markersize=10)
mr_ax.bar(mr_point, mr_height, width=(end_val-start_val)/m_steps, alpha=0.2, edgecolor='b')
mr_ax.set_title('Midpoint Riemann Sum \n m_steps = {}'.format(m_steps))
mr_ax.set_xlabel('alpha')
# Trapezoidal Riemann Sum
tp_ax = plt.subplot(1,4,4)
tp_ax.plot(x, y)
tp_x = tf.linspace(0.0, 1.0, m_steps+1)
tp_y = fn(tp_x)
for i in range(m_steps):
xs = [tp_x[i], tp_x[i], tp_x[i+1], tp_x[i+1]]
ys = [0, tp_y[i], tp_y[i+1], 0]
tp_ax.plot(tp_x,tp_y,'b.',markersize=10)
tp_ax.fill_between(xs, ys, color='C0', edgecolor='blue', alpha=0.2)
tp_ax.set_title('Trapezoidal Riemann Sum \n m_steps = {}'.format(m_steps))
tp_ax.set_xlabel('alpha')
return fig
###Output
_____no_output_____
###Markdown
Recall that a feature's gradient will vary in magnitude over the interpolated images between the baseline and input. You want to choose a method to best approximate the area of difference, also known as the [integral](https://en.wikipedia.org/wiki/Integral), between your baseline and input in the feature space. Let's consider the downward-arching function $y = \sin(\pi x)$ on the interval $[0, 1]$ as a proxy for how a feature gradient could vary in magnitude and sign over different alphas. To implement IG, you care about approximation accuracy and convergence. Left, Right, and Midpoint Riemann Sums utilize rectangles to approximate areas under the function while Trapezoidal Riemann Sums utilize trapezoids.
###Code
_ = plot_riemann_sums(lambda x: tf.math.sin(x*math.pi), 0.0, 1.0, m_steps=5)
_ = plot_riemann_sums(lambda x: tf.math.sin(x*math.pi), 0.0, 1.0, m_steps=10)
###Output
_____no_output_____
###Markdown
**Which integral approximation method should you choose for IG?** From the Riemann sum plots above you can see that the Trapezoidal Riemann Sum clearly provides a more accurate approximation and converges more quickly over m_steps than the alternatives (e.g. less white space under the function left uncovered by the shapes). Consequently, it is presented as the default method in the code below while also showing alternative methods for further study. Additional support for Trapezoidal Riemann approximation for IG is presented in section 4 of ["Computing Linear Restrictions of Neural Networks"](https://arxiv.org/abs/1908.06214). Let us return to the $\alpha$ constant previously introduced in the *Generate interpolated path inputs* section for varying the intensity of the interpolated images between the baseline and input image. In the `generate_alphas` function below, you can see that $\alpha$ changes with each approximation method to reflect different start and end points and the underlying geometric shape, either a rectangle or a trapezoid, used to approximate the integral area. It takes a `method` parameter and an `m_steps` parameter that controls the accuracy of the integral approximation.
###Code
def generate_alphas(m_steps=50,
method='riemann_trapezoidal'):
"""
Args:
m_steps(Tensor): A 0D tensor of an int corresponding to the number of linear
interpolation steps for computing an approximate integral. Default is 50.
method(str): A string representing the integral approximation method. The
following methods are implemented:
- riemann_trapezoidal(default)
- riemann_left
- riemann_midpoint
- riemann_right
Returns:
alphas(Tensor): A 1D tensor of uniformly spaced floats with the shape
(m_steps,), or (m_steps+1,) for the riemann_trapezoidal method.
"""
m_steps_float = tf.cast(m_steps, float) # cast to float for division operations.
if method == 'riemann_trapezoidal':
alphas = tf.linspace(0.0, 1.0, m_steps+1) # needed to make m_steps intervals.
elif method == 'riemann_left':
alphas = tf.linspace(0.0, 1.0 - (1.0 / m_steps_float), m_steps)
elif method == 'riemann_midpoint':
alphas = tf.linspace(1.0 / (2.0 * m_steps_float), 1.0 - 1.0 / (2.0 * m_steps_float), m_steps)
elif method == 'riemann_right':
alphas = tf.linspace(1.0 / m_steps_float, 1.0, m_steps)
else:
raise AssertionError("Provided Riemann approximation method is not valid.")
return alphas
alphas = generate_alphas(m_steps=20, method='riemann_trapezoidal')
alphas.shape
###Output
_____no_output_____
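###Markdown
To make the differences between the methods concrete, you can print the alphas each one produces for a small number of steps; `m_steps=5` below is an arbitrary illustrative choice.
###Code
# Compare the alphas produced by each approximation method for a small m_steps.
for method in ['riemann_trapezoidal', 'riemann_left', 'riemann_midpoint', 'riemann_right']:
    method_alphas = generate_alphas(m_steps=5, method=method)
    print('{:<20} {}'.format(method, method_alphas.numpy().round(3)))
###Output
_____no_output_____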
###Markdown
$IntegratedGrads^{approx}_{i}(x)::=(x_{i}-x'_{i})\times \overbrace{\sum_{k=1}^{m}}^\text{4. Sum m local gradients}\text{gradients(interpolated images)} \times \overbrace{\frac{1}{m}}^\text{4. Divide by m steps}$ From the equation, you can see you are summing over m gradients and dividing by m steps. You can implement the two operations together for step 4 as an *average of the local gradients over the m interpolated images*.
###Code
def integral_approximation(gradients,
method='riemann_trapezoidal'):
"""Compute numerical approximation of integral from gradients.
Args:
gradients(Tensor): A 4D tensor of floats with the shape
(m_steps, img_height, img_width, 3).
method(str): A string representing the integral approximation method. The
following methods are implemented:
- riemann_trapezoidal(default)
- riemann_left
- riemann_midpoint
- riemann_right
Returns:
integrated_gradients(Tensor): A 3D tensor of floats with the shape
(img_height, img_width, 3).
"""
if method == 'riemann_trapezoidal':
grads = (gradients[:-1] + gradients[1:]) / tf.constant(2.0)
elif method == 'riemann_left':
grads = gradients
elif method == 'riemann_midpoint':
grads = gradients
elif method == 'riemann_right':
grads = gradients
else:
raise AssertionError("Provided Riemann approximation method is not valid.")
# Average integration approximation.
integrated_gradients = tf.math.reduce_mean(grads, axis=0)
return integrated_gradients
###Output
_____no_output_____
###Markdown
The `integral_approximation` function takes the gradients of the predicted probability of the "Giant Panda" class with respect to the interpolated images between the baseline and "Giant Panda" image.
###Code
ig = integral_approximation(
gradients=path_gradients,
method='riemann_trapezoidal')
###Output
_____no_output_____
###Markdown
You can confirm averaging across the gradients of m interpolated images returns an integrated gradients tensor with the same shape as the original "Giant Panda" image.
###Code
ig.shape
###Output
_____no_output_____
###Markdown
Putting it all together Now you will combine the previous steps together into an `integrated_gradients` function. To recap: $IntegratedGrads^{approx}_{i}(x)::=\overbrace{(x_{i}-x'_{i})}^\text{5.}\times \overbrace{\sum_{k=1}^{m}}^\text{4.} \frac{\partial \overbrace{F(\overbrace{x' + \overbrace{\frac{k}{m}}^\text{1.}\times(x - x'))}^\text{2.}}^\text{3.}}{\partial x_{i}} \times \overbrace{\frac{1}{m}}^\text{4.}$ 1. Generate alphas $\alpha$ 2. Generate interpolated path inputs = $(x' + \frac{k}{m}\times(x - x'))$ 3. Compute gradients of model output predictions with respect to input features = $\frac{\partial F(\text{interpolated path inputs})}{\partial x_{i}}$ 4. Integral approximation through averaging = $\sum_{k=1}^m \text{gradients} \times \frac{1}{m}$ 5. Scale integrated gradients with respect to original image = $(x_{i}-x'_{i}) \times \text{average gradients}$
###Code
@tf.function
def integrated_gradients(model,
baseline,
input,
target_class_idx,
m_steps=50,
method='riemann_trapezoidal',
batch_size=32
):
"""
Args:
model(keras.Model): A trained model to generate predictions and inspect.
baseline(Tensor): A 3D image tensor with the shape
(image_height, image_width, 3) with the same shape as the input tensor.
input(Tensor): A 3D image tensor with the shape
(image_height, image_width, 3).
target_class_idx(Tensor): An integer that corresponds to the correct
ImageNet class index in the model's output predictions tensor.
m_steps(Tensor): A 0D tensor of an integer corresponding to the number of
linear interpolation steps for computing an approximate integral. Default
value is 50.
method(str): A string representing the integral approximation method. The
following methods are implemented:
- riemann_trapezoidal(default)
- riemann_left
- riemann_midpoint
- riemann_right
batch_size(Tensor): A 0D tensor of an integer corresponding to a batch
size for alpha to scale computation and prevent OOM errors. Note: needs to
be tf.int64 and should be < m_steps. Default value is 32.
Returns:
integrated_gradients(Tensor): A 3D tensor of floats with the same
shape as the input tensor (image_height, image_width, 3).
"""
# 1. Generate alphas.
alphas = generate_alphas(m_steps=m_steps,
method=method)
# Initialize TensorArray outside loop to collect gradients. Note: this data structure
# is similar to a Python list but more performant and supports backpropagation.
# See https://www.tensorflow.org/api_docs/python/tf/TensorArray for additional details.
gradient_batches = tf.TensorArray(tf.float32, size=m_steps+1)
# Iterate alphas range and batch computation for speed, memory efficiency, and scaling to larger m_steps.
# Note: this implementation opted for lightweight tf.range iteration with @tf.function.
# Alternatively, you could also use tf.data, which adds performance overhead for the IG
# algorithm but provides more functionality for working with tensors and image data pipelines.
for alpha in tf.range(0, len(alphas), batch_size):
from_ = alpha
to = tf.minimum(from_ + batch_size, len(alphas))
alpha_batch = alphas[from_:to]
# 2. Generate interpolated inputs between baseline and input.
interpolated_path_input_batch = generate_path_inputs(baseline=baseline,
input=input,
alphas=alpha_batch)
# 3. Compute gradients between model outputs and interpolated inputs.
gradient_batch = compute_gradients(model=model,
path_inputs=interpolated_path_input_batch,
target_class_idx=target_class_idx)
# Write batch indices and gradients to TensorArray. Note: writing batch indices with
# scatter() allows for uneven batch sizes. Note: this operation is similar to a Python list extend().
# See https://www.tensorflow.org/api_docs/python/tf/TensorArray#scatter for additional details.
gradient_batches = gradient_batches.scatter(tf.range(from_, to), gradient_batch)
# Stack path gradients together row-wise into single tensor.
total_gradients = gradient_batches.stack()
# 4. Integral approximation through averaging gradients.
avg_gradients = integral_approximation(gradients=total_gradients,
method=method)
# 5. Scale integrated gradients with respect to input.
integrated_gradients = (input - baseline) * avg_gradients
return integrated_gradients
ig_attributions = integrated_gradients(model=inception_v1_classifier,
baseline=name_baseline_tensors['Baseline Image: Black'],
input=img_name_tensors['Giant Panda'],
target_class_idx=389,
m_steps=55,
method='riemann_trapezoidal')
ig_attributions.shape
###Output
_____no_output_____
###Markdown
Again, you can check that the IG feature attributions have the same shape as the input "Giant Panda" image. Step 4: checks to pick number of steps for IG approximation One of IG's nice theoretical properties is **completeness**. It is desirable because it holds that IG feature attributions break down the entire model's output prediction. Each feature importance score captures each feature's individual contribution to the prediction, and when added together, you can recover the entire example prediction value itself as tidy bookkeeping. This provides a principled means to select the `m_steps` hyperparameter for IG.$\sum_{i} IntegratedGrads_{i}(x) = F(x) - F(x')$where:$F(x)$ = model's prediction on the input at the target class $F(x')$ = model's prediction on the baseline at the target class. You can translate this formula to return a numeric score, with 0 representing convergence, through the following:$\delta = \sum_{i} IntegratedGrads_{i}(x) - (F(x) - F(x'))$ The original paper suggests the number of steps range between 20 and 300 depending upon the example and application for the integral approximation. In practice, this can vary up to a few thousand `m_steps` to achieve an integral approximation within 5% error of the actual integral. Visual result convergence can generally be achieved with far fewer steps.
###Code
def convergence_check(model, attributions, baseline, input, target_class_idx):
"""
Args:
model(keras.Model): A trained model to generate predictions and inspect.
baseline(Tensor): A 3D image tensor with the shape
(image_height, image_width, 3) with the same shape as the input tensor.
input(Tensor): A 3D image tensor with the shape
(image_height, image_width, 3).
target_class_idx(Tensor): An integer that corresponds to the correct
ImageNet class index in the model's output predictions tensor.
Returns:
(none): Prints scores and convergence delta to sys.stdout.
"""
# Your model's prediction on the baseline tensor. Ideally, the baseline score
# should be close to zero.
baseline_prediction = model(tf.expand_dims(baseline, 0))
baseline_score = tf.nn.softmax(tf.squeeze(baseline_prediction))[target_class_idx]
# Your model's prediction and score on the input tensor.
input_prediction = model(tf.expand_dims(input, 0))
input_score = tf.nn.softmax(tf.squeeze(input_prediction))[target_class_idx]
# Sum of your IG prediction attributions.
ig_score = tf.math.reduce_sum(attributions)
delta = ig_score - (input_score - baseline_score)
try:
# Test that your IG score is within 5% of the input minus baseline score.
tf.debugging.assert_near(ig_score, (input_score - baseline_score), rtol=0.05)
tf.print('Approximation accuracy within 5%.', output_stream=sys.stdout)
except tf.errors.InvalidArgumentError:
tf.print('Increase or decrease m_steps to increase approximation accuracy.', output_stream=sys.stdout)
tf.print('Baseline score: {:.3f}'.format(baseline_score))
tf.print('Input score: {:.3f}'.format(input_score))
tf.print('IG score: {:.3f}'.format(ig_score))
tf.print('Convergence delta: {:.3f}'.format(delta))
convergence_check(model=inception_v1_classifier,
attributions=ig_attributions,
baseline=name_baseline_tensors['Baseline Image: Black'],
input=img_name_tensors['Giant Panda'],
target_class_idx=389)
###Output
_____no_output_____
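###Markdown
If you want to choose `m_steps` more systematically, one option is to sweep a few candidate values and watch the convergence delta shrink. The sketch below reuses the functions and tensors defined above; the candidate values are arbitrary illustrative choices, and larger values take longer to compute.
###Code
# Sketch: sweep candidate m_steps values and report the convergence delta for each.
target_class_idx = 389  # "Giant Panda"
baseline_score = tf.nn.softmax(tf.squeeze(inception_v1_classifier(
    tf.expand_dims(name_baseline_tensors['Baseline Image: Black'], 0))))[target_class_idx]
input_score = tf.nn.softmax(tf.squeeze(inception_v1_classifier(
    tf.expand_dims(img_name_tensors['Giant Panda'], 0))))[target_class_idx]

for candidate_m_steps in [20, 50, 100, 200]:
    candidate_attributions = integrated_gradients(
        model=inception_v1_classifier,
        baseline=name_baseline_tensors['Baseline Image: Black'],
        input=img_name_tensors['Giant Panda'],
        target_class_idx=target_class_idx,
        m_steps=candidate_m_steps,
        method='riemann_trapezoidal')
    delta = tf.math.reduce_sum(candidate_attributions) - (input_score - baseline_score)
    print('m_steps: {:>4} \tconvergence delta: {:.4f}'.format(
        candidate_m_steps, float(delta)))
###Output
_____no_output_____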
###Markdown
Through utilizing the completeness axiom and the corresponding `convergence_check` function above, you were able to identify that you needed about 50 steps to approximate feature importances within 5% error for the "Giant Panda" image. Step 5: visualize IG attributions Finally, you are ready to visualize IG attributions. In order to visualize IG, you will utilize the plotting code below, which sums the absolute values of the IG attributions across the color channels for simplicity to return a greyscale attribution mask for standalone visualization and for overlaying on the original image. This plotting method captures the relative impact of pixels on the model's predictions well. Note that another visualization option for you to try is to preserve the direction of the gradient sign (e.g. positive vs. negative attributions) on different channels to more accurately represent how the features might combine.
###Code
def plot_img_attributions(model,
baseline,
img,
target_class_idx,
m_steps=50,
cmap=None,
overlay_alpha=0.4):
"""
Args:
model(keras.Model): A trained model to generate predictions and inspect.
baseline(Tensor): A 3D image tensor with the shape
(image_height, image_width, 3) with the same shape as the input tensor.
img(Tensor): A 3D image tensor with the shape
(image_height, image_width, 3).
target_class_idx(Tensor): An integer that corresponds to the correct
ImageNet class index in the model's output predictions tensor.
m_steps(Tensor): A 0D tensor of an integer corresponding to the number of
linear interpolation steps for computing an approximate integral. Default
value is 50.
cmap(matplotlib.cm): Defaults to None. Reference for colormap options -
https://matplotlib.org/3.2.1/tutorials/colors/colormaps.html. Interesting
options to try are None and high contrast 'inferno'.
overlay_alpha(float): A float between 0 and 1 that represents the intensity
of the original image overlay.
Returns:
fig(matplotlib.pyplot.figure): fig object to utilize for displaying, saving
plots.
"""
# Attributions
ig_attributions = integrated_gradients(model=model,
baseline=baseline,
input=img,
target_class_idx=target_class_idx,
m_steps=m_steps)
convergence_check(model, ig_attributions, baseline, img, target_class_idx)
# Per the original paper, take the absolute sum of the attributions across
# color channels for visualization. The attribution mask shape is a greyscale image
# with shape (224, 224).
attribution_mask = tf.reduce_sum(tf.math.abs(ig_attributions), axis=-1)
# Visualization
fig, axs = plt.subplots(nrows=2, ncols=2, squeeze=False, figsize=(8, 8))
axs[0,0].set_title('Baseline Image')
axs[0,0].imshow(baseline)
axs[0,0].axis('off')
axs[0,1].set_title('Original Image')
axs[0,1].imshow(img)
axs[0,1].axis('off')
axs[1,0].set_title('IG Attribution Mask')
axs[1,0].imshow(attribution_mask, cmap=cmap)
axs[1,0].axis('off')
axs[1,1].set_title('Original + IG Attribution Mask Overlay')
axs[1,1].imshow(attribution_mask, cmap=cmap)
axs[1,1].imshow(img, alpha=overlay_alpha)
axs[1,1].axis('off')
plt.tight_layout()
return fig
###Output
_____no_output_____
###Markdown
Visual inspection of the IG attributions on the "Fireboat" image shows that Inception V1 identifies the water cannons and spouts as contributing to its correct prediction.
###Code
_ = plot_img_attributions(model=inception_v1_classifier,
img=img_name_tensors['Fireboat'],
baseline=name_baseline_tensors['Baseline Image: Black'],
target_class_idx=555,
m_steps=240,
cmap=plt.cm.inferno,
overlay_alpha=0.4)
###Output
_____no_output_____
###Markdown
IG attributions on the "School Bus" image highlight the shape, front lighting, and front stop sign.
###Code
_ = plot_img_attributions(model=inception_v1_classifier,
img=img_name_tensors['School Bus'],
baseline=name_baseline_tensors['Baseline Image: Black'],
target_class_idx=780,
m_steps=100,
cmap=None,
overlay_alpha=0.2)
###Output
_____no_output_____
###Markdown
Returning to the "Giant Panda" image, IG attributions hightlight the texture, nose shape, and white fur of the Panda's face.
###Code
_ = plot_img_attributions(model=inception_v1_classifier,
img=img_name_tensors['Giant Panda'],
baseline=name_baseline_tensors['Baseline Image: Black'],
target_class_idx=389,
m_steps=55,
cmap=None,
overlay_alpha=0.5)
###Output
_____no_output_____
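###Markdown
As a rough sketch of the signed-visualization option mentioned above, you can split the "Giant Panda" attributions (the `ig_attributions` tensor computed earlier) by sign and view the positive and negative evidence separately; the color map choice is arbitrary.
###Code
# Sketch: visualize positive and negative IG attributions separately instead of
# collapsing them into a single absolute-value mask.
positive_mask = tf.reduce_sum(tf.nn.relu(ig_attributions), axis=-1)
negative_mask = tf.reduce_sum(tf.nn.relu(-ig_attributions), axis=-1)

fig, axs = plt.subplots(nrows=1, ncols=2, squeeze=False, figsize=(12, 6))
axs[0, 0].set_title('Positive IG attributions')
axs[0, 0].imshow(positive_mask, cmap=plt.cm.inferno)
axs[0, 0].axis('off')
axs[0, 1].set_title('Negative IG attributions')
axs[0, 1].imshow(negative_mask, cmap=plt.cm.inferno)
axs[0, 1].axis('off')
plt.tight_layout();
###Output
_____no_output_____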
###Markdown
How do different baselines impact interpretation of IG attributions? In the section **Step 2: Establish baseline to compare against inputs**, the explanation from the original IG paper and discussion recommended a black baseline image to "ignore" and allow for interpretation of the predictions solely as a function of the input pixels. To motivate the choice of a black baseline image for interpretation, let's take a look at how a random baseline influences IG attributions. Recall from above that with a black baseline on the fireboat image, the IG attributions were primarily focused on the right water cannon of the fireboat. Now with a random baseline, the interpretation is much less clear. The IG attribution mask below shows a hazy attribution cloud of varying pixel intensity around the entire region of the water cannon streams. Are these truly significant features identified by the model or artifacts of random dark pixels from the random baseline? That is inconclusive without more investigation. The random baseline has changed interpretation of the pixel intensities from being solely in relation to the input features to input features plus spurious attributions from the baseline.
###Code
_ = plot_img_attributions(model=inception_v1_classifier,
img=img_name_tensors['Fireboat'],
baseline=name_baseline_tensors['Baseline Image: Random'],
target_class_idx=555,
m_steps=240,
cmap=None,
overlay_alpha=0.3)
###Output
_____no_output_____
###Markdown
Returning to the school bus image, a black baseline really highlighted the school bus shape and stop sign as strongly distinguishing features. In contrast, a random noise baseline makes interpretation of the IG attribution mask significantly more difficult. In particular, this attribution mask would wrongly lead you to believe that the model found a small area of pixels along the side of the bus significant.
###Code
_ = plot_img_attributions(model=inception_v1_classifier,
img=img_name_tensors['School Bus'],
baseline=name_baseline_tensors['Baseline Image: Random'],
target_class_idx=780,
m_steps=100,
cmap=None,
overlay_alpha=0.3)
###Output
_____no_output_____
###Markdown
**Are there any scenarios where you prefer a non-black baseline? Yes.** Consider the photo below of an all black beetle on a white background. With a black baseline, the beetle primarily receives 0 pixel attribution; only small bright portions of the beetle caused by glare, along with some spurious background and colored leg pixels, are highlighted. *For this example, black pixels are meaningful and do not provide an uninformative baseline.*
###Code
_ = plot_img_attributions(model=inception_v1_classifier,
img=img_name_tensors['Black Beetle'],
baseline=name_baseline_tensors['Baseline Image: Black'],
target_class_idx=307,
m_steps=200,
cmap=None,
overlay_alpha=0.3)
###Output
_____no_output_____
###Markdown
A white baseline is a better contrastive choice here to highlight the important pixels on the beetle.
###Code
_ = plot_img_attributions(model=inception_v1_classifier,
img=img_name_tensors['Black Beetle'],
baseline=name_baseline_tensors['Baseline Image: White'],
target_class_idx=307,
m_steps=200,
cmap=None,
overlay_alpha=0.3)
###Output
_____no_output_____
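###Markdown
Baseline selection is discussed further just below; one proposal from that literature is to average IG attributions computed against several random baselines. A minimal sketch of that averaging idea for the "Fireboat" image follows; the number of baselines and `m_steps` are arbitrary illustrative choices, and this assumes the image tensors are floats scaled to [0, 1].
###Code
# Sketch: average IG attributions over several random baselines.
num_random_baselines = 4
random_baseline_attributions = []
for seed in range(num_random_baselines):
    random_baseline = tf.random.uniform(
        shape=img_name_tensors['Fireboat'].shape, minval=0.0, maxval=1.0, seed=seed)
    attributions = integrated_gradients(
        model=inception_v1_classifier,
        baseline=random_baseline,
        input=img_name_tensors['Fireboat'],
        target_class_idx=555,
        m_steps=50,
        method='riemann_trapezoidal')
    random_baseline_attributions.append(attributions)

averaged_attributions = tf.reduce_mean(tf.stack(random_baseline_attributions), axis=0)

plt.figure(figsize=(6, 6))
plt.title('IG attributions averaged over {} random baselines'.format(num_random_baselines))
plt.imshow(tf.reduce_sum(tf.abs(averaged_attributions), axis=-1), cmap=plt.cm.inferno)
plt.axis('off');
###Output
_____no_output_____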
###Markdown
Ultimately, picking any constant color baseline has potential interpretation problems through just visual inspection alone without consideration of the underlying values and their signs. Baseline selection is still an area of active research with various proposals, e.g. averaging multiple random baselines, blurred inputs, etc., discussed in depth in the distill.pub article [Visualizing the Impact of Feature Attribution Baselines](https://distill.pub/2020/attribution-baselines/). Use cases IG is a model-agnostic interpretability method that can be applied to any differentiable model (e.g. neural networks) to understand its predictions in terms of its input features; whether they be images, video, text, or structured data. **At Google, IG has been applied in 20+ product areas to recommender system, classification, and regression models for feature importance and selection, model error analysis, train-test data skew monitoring, and explaining model behavior to stakeholders.** The subsections below present a non-exhaustive list of the most common use cases for IG, biased toward production machine learning workflows. Use case: understanding feature importances IG relative feature importances give both model builders and stakeholders a better understanding of your model's learned features, provide insight into the underlying data it was trained on, and serve as a basis for feature selection. Let's take a look at an example of how IG relative feature importances can provide insight into the underlying input data. **What is the difference between a Golden Retriever and Labrador Retriever?** Consider again the example images of the [Golden Retriever](https://en.wikipedia.org/wiki/Golden_Retriever) and the Yellow [Labrador Retriever](https://en.wikipedia.org/wiki/Labrador_Retriever) below. If you are not a domain expert familiar with these breeds, you might reasonably conclude these are 2 images of the same type of dog. They both have similar face and body shapes as well as coloring. Your model, Inception V1, already correctly identifies a Golden Retriever and Labrador Retriever. In fact, it is quite confident about the Golden Retriever in the top image, even though there is a bit of lingering doubt about the Labrador Retriever as seen with its appearance in prediction 4. In comparison, the model is relatively less confident about its correct prediction of the Labrador Retriever in the second image and also sees some shades of similarity with the Golden Retriever, which also makes an appearance in the top 5 predictions.
###Code
_ = plot_img_predictions(
model=inception_v1_classifier,
img=tf.stack([img_name_tensors['Golden Retriever'],
img_name_tensors['Yellow Labrador Retriever']]),
img_titles=tf.stack(['Golden Retriever',
'Yellow Labrador Retriever']),
label_vocab=imagenet_label_vocab,
top_k=5
)
###Output
_____no_output_____
###Markdown
Without any prior understanding of how to differentiate these dogs or the features to do so, what can you learn from IG's feature importances? Review the Golden Retriever IG attribution mask and IG Overlay of the original image below. Notice how the pixel intensities are primarily highlighted on the face and shape of the dog but are brightest on the front and back legs and tail in areas of *lengthy and wavy fur*. A quick Google search validates that this is indeed a key distinguishing feature of Golden Retrievers compared to Labrador Retrievers.
###Code
_ = plot_img_attributions(model=inception_v1_classifier,
img=img_name_tensors['Golden Retriever'],
baseline=name_baseline_tensors['Baseline Image: Black'],
target_class_idx=208,
m_steps=200,
cmap=None,
overlay_alpha=0.3)
###Output
_____no_output_____
###Markdown
Comparatively, IG also highlights the face and body shape of the Labrador Retriever with a density of bright pixels on its *straight and short hair coat*. This provides additional evidence toward the length and texture of the coats being key differentiators between these 2 breeds.
###Code
_ = plot_img_attributions(model=inception_v1_classifier,
img=img_name_tensors['Yellow Labrador Retriever'],
baseline=name_baseline_tensors['Baseline Image: Black'],
target_class_idx=209,
m_steps=100,
cmap=None,
overlay_alpha=0.3)
###Output
_____no_output_____
###Markdown
From visual inspection of the IG attributions, you now have insight into the underlying causal structure behind distinguishing Golden Retrievers and Yellow Labrador Retrievers without any prior knowledge. Going forward, you can use this insight to improve your model's performance further through refining its learned representations of these 2 breeds by retraining with additional examples of each dog breed and augmenting your training data through random perturbations of each dog's coat textures and colors. Use case: debugging data skew Training-serving data skew, a difference between performance during training and during model serving, is a hard-to-detect and widely prevalent issue impacting the performance of production machine learning systems. ML systems require dense samplings of input spaces in their training data to learn representations that generalize well to unseen data. To complement existing production ML monitoring of dataset and model performance statistics, tracking IG feature importances across time (e.g. "next day" splits) and data splits (e.g. train/dev/test splits) allows for meaningful monitoring of train-serving feature drift and skew. **Military uniforms change across space and time.** Recall from this tutorial's section on ImageNet that each class (e.g. military uniform) in the ILSVRC-2012-CLS training dataset is represented by an average of 1,000 images that Inception V1 could learn from. At present, there are about 195 countries around the world with significantly different military uniforms by service branch, climate, occasion, etc. Additionally, military uniforms have changed significantly over time within the same country. As a result, the potential input space for military uniforms is enormous, with many uniforms over-represented (e.g. US military) while others are sparsely represented (e.g. US Union Army) or absent from the training data altogether (e.g. Greece Presidential Guard).
###Code
_ = plot_img_predictions(
model=inception_v1_classifier,
img=tf.stack([img_name_tensors['Military Uniform (Grace Hopper)'],
img_name_tensors['Military Uniform (General Ulysses S. Grant)'],
img_name_tensors['Military Uniform (Greek Presidential Guard)']]),
img_titles=tf.stack(['Military Uniform (Grace Hopper)',
'Military Uniform (General Ulysses S. Grant)',
'Military Uniform (Greek Presidential Guard)']),
label_vocab=imagenet_label_vocab,
top_k=5
)
###Output
_____no_output_____
###Markdown
Inception V1 correctly classifies this image of [United States Rear Admiral and Computer Scientist, Grace Hopper](https://en.wikipedia.org/wiki/Grace_Hopper), under the class "military uniform" above. From visual inspection of the IG feature attributions, you can see that the brightest pixels are focused around the shirt collar and tie, military insignia on the jacket and hat, and various pixel areas around her face. Note that there are potentially spurious pixels also highlighted in the background worth investigating empirically to refine the model's learned representation of military uniforms. However, IG does not provide insight into how these pixels were combined into the final prediction, so it's possible these pixels helped the model distinguish between military uniform and other similar classes such as the windsor tie and suit.
###Code
_ = plot_img_attributions(model=inception_v1_classifier,
img=img_name_tensors['Military Uniform (Grace Hopper)'],
baseline=name_baseline_tensors['Baseline Image: Black'],
target_class_idx=653,
m_steps=200,
cmap=None,
overlay_alpha=0.3)
###Output
_____no_output_____
###Markdown
Below is an image of the [United States General Ulysses S. Grant](https://en.wikipedia.org/wiki/Ulysses_S._Grant) circa 1865. He is wearing a military uniform for the same country as Rear Admiral Hopper above, but how well can the model identify a military uniform in this differently colored image taken 120+ years earlier? From the model predictions above, you can see not very well: the model incorrectly predicts a trench coat and suit above a military uniform. From visual inspection of the IG attribution mask, it is clear the model struggled to identify a military uniform in the faded black and white image, which lacks the contrastive range of a color image. Since this is a faded black and white image with prominent darker features, a white baseline is a better choice. The IG Overlay of the original image does suggest that the model identified the military insignia patch on the right shoulder, the face, collar, jacket buttons, and pixels around the edges of the coat. Using this insight, you can improve model performance by adding data augmentation to your input data pipeline to include additional colorless images and image translations, as well as additional example images with military coats.
###Code
_ = plot_img_attributions(model=inception_v1_classifier,
img=img_name_tensors['Military Uniform (General Ulysses S. Grant)'],
baseline=name_baseline_tensors['Baseline Image: White'],
target_class_idx=870,
m_steps=200,
cmap=None,
overlay_alpha=0.3)
###Output
_____no_output_____
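###Markdown
The data-augmentation suggestion above can be sketched with basic `tf.image` ops: convert an example to grayscale and apply a small random translation. This is only an illustrative sketch; it assumes 224x224 float images scaled to [0, 1], and the shift size and example image are arbitrary choices.
###Code
# Sketch of the augmentation idea above: grayscale conversion plus a small random
# translation implemented as a pad-then-random-crop.
def augment_colorless_and_translate(image, max_shift=16):
    # Grayscale, then back to 3 channels so the model input shape is unchanged.
    grayscale = tf.image.grayscale_to_rgb(tf.image.rgb_to_grayscale(image))
    # Pad the image and randomly crop back to 224x224 to simulate a translation.
    padded = tf.image.pad_to_bounding_box(
        grayscale, max_shift, max_shift, 224 + 2 * max_shift, 224 + 2 * max_shift)
    return tf.image.random_crop(padded, size=(224, 224, 3))

augmented_example = augment_colorless_and_translate(
    img_name_tensors['Military Uniform (Grace Hopper)'])

plt.figure(figsize=(4, 4))
plt.title('Augmented example: grayscale + random translation')
plt.imshow(augmented_example)
plt.axis('off');
###Output
_____no_output_____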
###Markdown
Yikes! Inception V1 incorrectly predicted the image of a [Greek Presidential Guard](https://en.wikipedia.org/wiki/Presidential_Guard_(Greece)) as a vestment with low confidence. The underlying training data does not appear to have sufficient representation and density of Greek military uniforms. In fact, the lack of geo-diversity in large public image datasets, including ImageNet, was studied in the paper S. Shankar, Y. Halpern, E. Breck, J. Atwood, J. Wilson, and D. Sculley. ["No classification without representation: Assessing geodiversity issues in open datasets for the developing world."](https://arxiv.org/abs/1711.08536), 2017. The authors found "observable amerocentric and eurocentric representation bias" and strong differences in relative model performance across geographic areas.
###Code
_ = plot_img_attributions(model=inception_v1_classifier,
img=img_name_tensors['Military Uniform (Greek Presidential Guard)'],
baseline=name_baseline_tensors['Baseline Image: Black'],
target_class_idx=653,
m_steps=200,
cmap=None,
overlay_alpha=0.3)
###Output
_____no_output_____
###Markdown
Using the IG attributions above, you can see the model focused primarily on the face and the high contrast white wavy kilt in the front and vest rather than the military insignia on the red hat or sword hilt. While IG attributions alone will not identify or fix data skew or bias, when combined with model evaluation performance metrics and dataset statistics, IG attributions provide you with a guided path forward to collecting more and diverse data to improve model performance. Re-training the model on this more diverse sampling of the input space of Greek military uniforms, in particular those that emphasize military insignia, as well as utilizing weighting strategies during training, can help mitigate biased data and further refine model performance and generalization. Use case: debugging model performance IG feature attributions provide a useful debugging complement to dataset statistics and model performance evaluation metrics to better understand model quality. When using IG feature attributions for debugging, you are looking for insights into the following questions: * Which features are important? * How well do the model's learned features generalize? * Does the model learn "incorrect" or spurious features in the image beyond the true class object? * What features did my model miss? * Comparing correct and incorrect examples of the same class, what is the difference in the feature attributions? IG feature attributions are well suited for counterfactual reasoning to gain insight into your model's performance and limitations. This involves comparing feature attributions for images of the same class that receive different predictions. When combined with model performance metrics and dataset statistics, IG feature attributions give greater insight into model errors during debugging to understand which features contributed to the incorrect prediction when compared to feature attributions on correct predictions. To go deeper on model debugging, see the Google AI [What-if tool](https://pair-code.github.io/what-if-tool/) to interactively inspect your dataset, model, and IG feature attributions. In the example below, you will apply 3 transformations to the "Yellow Labrador Retriever" image and contrast correct and incorrect IG feature attributions to gain insight into your model's limitations.
###Code
rotate90_labrador_retriever_img = tf.image.rot90(img_name_tensors['Yellow Labrador Retriever'])
upsidedown_labrador_retriever_img = tf.image.flip_up_down(img_name_tensors['Yellow Labrador Retriever'])
zoom_labrador_retriever_img = tf.keras.preprocessing.image.random_zoom(x=img_name_tensors['Yellow Labrador Retriever'], zoom_range=(0.45,0.45))
_ = plot_img_predictions(
model=inception_v1_classifier,
img=tf.stack([img_name_tensors['Yellow Labrador Retriever'],
rotate90_labrador_retriever_img,
upsidedown_labrador_retriever_img,
zoom_labrador_retriever_img]),
img_titles=tf.stack(['Yellow Labrador Retriever (original)',
'Yellow Labrador Retriever (rotated 90 degrees)',
'Yellow Labrador Retriever (flipped upsidedown)',
'Yellow Labrador Retriever (zoomed in)']),
label_vocab=imagenet_label_vocab,
top_k=5
)
###Output
_____no_output_____
###Markdown
These rotation and zooming examples serve to highlight an important limitation of convolutional neural networks like Inception V1: *CNNs are not naturally rotationally or scale invariant.* All of these examples resulted in incorrect predictions. Now you will see an example of how comparing 2 example attributions, one incorrect prediction vs. one known correct prediction, gives deeper feature-level insight into why the model made an error so you can take corrective action.
###Code
labrador_retriever_attributions = integrated_gradients(model=inception_v1_classifier,
baseline=name_baseline_tensors['Baseline Image: Black'],
input=img_name_tensors['Yellow Labrador Retriever'],
target_class_idx=209,
m_steps=200,
method='riemann_trapezoidal')
zoom_labrador_retriever_attributions = integrated_gradients(model=inception_v1_classifier,
baseline=name_baseline_tensors['Baseline Image: Black'],
input=zoom_labrador_retriever_img,
target_class_idx=209,
m_steps=200,
method='riemann_trapezoidal')
###Output
_____no_output_____
###Markdown
Zooming in on the Labrador Retriever image causes Inception V1 to incorrectly predict a different dog breed, a [Saluki](https://en.wikipedia.org/wiki/Saluki). Compare the IG attributions on the incorrect and correct predictions below. You can see the IG attributions on the zoomed image still focus on the legs, but they are now much further apart and the midsection is proportionally narrower. Compared to the IG attributions on the original image, the visible head size is significantly smaller as well. Armed with deeper feature-level understanding of your model's error, you can improve model performance by pursuing strategies such as training data augmentation to make your model more robust to changes in object proportions, or checking that your image preprocessing code is the same during training and serving to prevent data skew introduced by zooming or resizing operations.
###Code
fig, axs = plt.subplots(nrows=1, ncols=3, squeeze=False, figsize=(16, 12))
axs[0,0].set_title('IG Attributions - Incorrect Prediction: Saluki')
axs[0,0].imshow(tf.reduce_sum(tf.abs(zoom_labrador_retriever_attributions), axis=-1), cmap=plt.cm.inferno)
axs[0,0].axis('off')
axs[0,1].set_title('IG Attributions - Correct Prediction: Labrador Retriever')
axs[0,1].imshow(tf.reduce_sum(tf.abs(labrador_retriever_attributions), axis=-1), cmap=None)
axs[0,1].axis('off')
axs[0,2].set_title('IG Attributions - both predictions overlayed')
axs[0,2].imshow(tf.reduce_sum(tf.abs(zoom_labrador_retriever_attributions), axis=-1), cmap=plt.cm.inferno, alpha=0.99)
axs[0,2].imshow(tf.reduce_sum(tf.abs(labrador_retriever_attributions), axis=-1), cmap=None, alpha=0.5)
axs[0,2].axis('off')
plt.tight_layout();
###Output
_____no_output_____ |
doc/source/ray-core/examples/plot_parameter_server.ipynb | ###Markdown
Parameter Server```{tip}For a production-grade implementation of distributed training, use [Ray Train](https://docs.ray.io/en/master/train/train.html).```The parameter server is a framework for distributed machine learning training. In the parameter server framework, a centralized server (or group of server nodes) maintains global shared parameters of a machine-learning model (e.g., a neural network) while the data and computation of calculating updates (i.e., gradient descent updates) are distributed over worker nodes.```{image} /ray-core/images/param_actor.png:align: center```Parameter servers are a core part of many machine learning applications. This document walks through how to implement simple synchronous and asynchronous parameter servers using Ray actors.To run the application, first install some dependencies.```bash pip install torch torchvision filelock```Let's first define some helper functions and import some dependencies.
###Code
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import datasets, transforms
from filelock import FileLock
import numpy as np
import ray
def get_data_loader():
"""Safely downloads data. Returns training/validation set dataloader."""
mnist_transforms = transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
)
# We add FileLock here because multiple workers will want to
# download data, and this may cause overwrites since
# DataLoader is not threadsafe.
with FileLock(os.path.expanduser("~/data.lock")):
train_loader = torch.utils.data.DataLoader(
datasets.MNIST(
"~/data", train=True, download=True, transform=mnist_transforms
),
batch_size=128,
shuffle=True,
)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST("~/data", train=False, transform=mnist_transforms),
batch_size=128,
shuffle=True,
)
return train_loader, test_loader
def evaluate(model, test_loader):
"""Evaluates the accuracy of the model on a validation dataset."""
model.eval()
correct = 0
total = 0
with torch.no_grad():
for batch_idx, (data, target) in enumerate(test_loader):
# This is only set to finish evaluation faster.
if batch_idx * len(data) > 1024:
break
outputs = model(data)
_, predicted = torch.max(outputs.data, 1)
total += target.size(0)
correct += (predicted == target).sum().item()
return 100.0 * correct / total
###Output
_____no_output_____
###Markdown
Setup: Defining the Neural Network We define a small neural network to use in training. We provide some helper functions for obtaining data, including getter/setter methods for gradients and weights.
###Code
class ConvNet(nn.Module):
"""Small ConvNet for MNIST."""
def __init__(self):
super(ConvNet, self).__init__()
self.conv1 = nn.Conv2d(1, 3, kernel_size=3)
self.fc = nn.Linear(192, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 3))
x = x.view(-1, 192)
x = self.fc(x)
return F.log_softmax(x, dim=1)
def get_weights(self):
return {k: v.cpu() for k, v in self.state_dict().items()}
def set_weights(self, weights):
self.load_state_dict(weights)
def get_gradients(self):
grads = []
for p in self.parameters():
grad = None if p.grad is None else p.grad.data.cpu().numpy()
grads.append(grad)
return grads
def set_gradients(self, gradients):
for g, p in zip(gradients, self.parameters()):
if g is not None:
p.grad = torch.from_numpy(g)
###Output
_____no_output_____
###Markdown
Defining the Parameter Server The parameter server will hold a copy of the model. During training, it will: 1. Receive gradients and apply them to its model. 2. Send the updated model back to the workers. The ``@ray.remote`` decorator defines a remote process. It wraps the ParameterServer class and allows users to instantiate it as a remote actor.
###Code
@ray.remote
class ParameterServer(object):
def __init__(self, lr):
self.model = ConvNet()
self.optimizer = torch.optim.SGD(self.model.parameters(), lr=lr)
def apply_gradients(self, *gradients):
summed_gradients = [
np.stack(gradient_zip).sum(axis=0) for gradient_zip in zip(*gradients)
]
self.optimizer.zero_grad()
self.model.set_gradients(summed_gradients)
self.optimizer.step()
return self.model.get_weights()
def get_weights(self):
return self.model.get_weights()
###Output
_____no_output_____
###Markdown
Defining the Worker The worker will also hold a copy of the model. During training, it will continuously evaluate data and send gradients to the parameter server. The worker will synchronize its model with the Parameter Server model weights.
###Code
@ray.remote
class DataWorker(object):
def __init__(self):
self.model = ConvNet()
self.data_iterator = iter(get_data_loader()[0])
def compute_gradients(self, weights):
self.model.set_weights(weights)
try:
data, target = next(self.data_iterator)
except StopIteration: # When the epoch ends, start a new epoch.
self.data_iterator = iter(get_data_loader()[0])
data, target = next(self.data_iterator)
self.model.zero_grad()
output = self.model(data)
loss = F.nll_loss(output, target)
loss.backward()
return self.model.get_gradients()
###Output
_____no_output_____
###Markdown
Synchronous Parameter Server TrainingWe'll now create a synchronous parameter server training scheme. We'll firstinstantiate a process for the parameter server, along with multipleworkers.
###Code
iterations = 200
num_workers = 2
ray.init(ignore_reinit_error=True)
ps = ParameterServer.remote(1e-2)
workers = [DataWorker.remote() for i in range(num_workers)]
###Output
_____no_output_____
###Markdown
We'll also instantiate a model on the driver process to evaluate the testaccuracy during training.
###Code
model = ConvNet()
test_loader = get_data_loader()[1]
###Output
_____no_output_____
###Markdown
Training alternates between:1. Computing the gradients given the current weights from the server2. Updating the parameter server's weights with the gradients.
###Code
print("Running synchronous parameter server training.")
current_weights = ps.get_weights.remote()
for i in range(iterations):
gradients = [worker.compute_gradients.remote(current_weights) for worker in workers]
# Calculate update after all gradients are available.
current_weights = ps.apply_gradients.remote(*gradients)
if i % 10 == 0:
# Evaluate the current model.
model.set_weights(ray.get(current_weights))
accuracy = evaluate(model, test_loader)
print("Iter {}: \taccuracy is {:.1f}".format(i, accuracy))
print("Final accuracy is {:.1f}.".format(accuracy))
# Clean up Ray resources and processes before the next example.
ray.shutdown()
###Output
_____no_output_____
###Markdown
Asynchronous Parameter Server TrainingWe'll now create an asynchronous parameter server training scheme. We'll first instantiate a process for the parameter server, along with multiple workers.
###Code
print("Running Asynchronous Parameter Server Training.")
ray.init(ignore_reinit_error=True)
ps = ParameterServer.remote(1e-2)
workers = [DataWorker.remote() for i in range(num_workers)]
###Output
_____no_output_____
###Markdown
Here, each worker will asynchronously compute gradients given its current weights and send these gradients to the parameter server as soon as they are ready. When the parameter server finishes applying the new gradients, it will send back a copy of the current weights to that worker. The worker will then update its weights and repeat.
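The loop below relies on ``ray.wait``, which splits a list of object references into those whose results are ready and those still pending. A minimal sketch of that call on some hypothetical tasks (not part of the training loop):

```python
import ray

@ray.remote
def slow_square(x):
    return x * x

refs = [slow_square.remote(i) for i in range(4)]
# Block until at least one task finishes; the others keep running in the background.
ready_refs, pending_refs = ray.wait(refs, num_returns=1)
first_result = ray.get(ready_refs[0])
```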
###Code
current_weights = ps.get_weights.remote()
gradients = {}
for worker in workers:
gradients[worker.compute_gradients.remote(current_weights)] = worker
for i in range(iterations * num_workers):
ready_gradient_list, _ = ray.wait(list(gradients))
ready_gradient_id = ready_gradient_list[0]
worker = gradients.pop(ready_gradient_id)
# Compute and apply gradients.
current_weights = ps.apply_gradients.remote(*[ready_gradient_id])
gradients[worker.compute_gradients.remote(current_weights)] = worker
if i % 10 == 0:
# Evaluate the current model after every 10 updates.
model.set_weights(ray.get(current_weights))
accuracy = evaluate(model, test_loader)
print("Iter {}: \taccuracy is {:.1f}".format(i, accuracy))
print("Final accuracy is {:.1f}.".format(accuracy))
###Output
_____no_output_____ |
Kaggle_Panda_Curso.ipynb | ###Markdown
https://www.kaggle.com/residentmario/creating-reading-and-writing **Pandas Home Page**
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Creating data¶There are two core objects in pandas: the DataFrame and the Series.DataFrameA DataFrame is a table. It contains an array of individual entries, each of which has a certain value. Each entry corresponds to a row (or record) and a column.For example, consider the following simple DataFrame:
###Code
pd.DataFrame({'Yes': [50, 21], 'No': [131, 2]})
###Output
_____no_output_____
###Markdown
In this example, the "0, No" entry has the value of 131. The "0, Yes" entry has a value of 50, and so on.DataFrame entries are not limited to integers. For instance, here's a DataFrame whose values are strings:
###Code
pd.DataFrame({'Bob': ['I liked it.', 'It was awful.'], 'Sue': ['Pretty good.', 'Bland.']})
###Output
_____no_output_____
###Markdown
The dictionary-list constructor assigns values to the column labels, but just uses an ascending count from 0 (0, 1, 2, 3, ...) for the row labels. Sometimes this is OK, but oftentimes we will want to assign these labels ourselves.The list of row labels used in a DataFrame is known as an Index. We can assign values to it by using an index parameter in our constructor:
###Code
pd.DataFrame({'Bob': ['I liked it.', 'It was awful.'],
'Sue': ['Pretty good.', 'Bland.']},
index=['Product A', 'Product B'])
###Output
_____no_output_____
###Markdown
SeriesA Series, by contrast, is a sequence of data values. If a DataFrame is a table, a Series is a list. And in fact you can create one with nothing more than a list
###Code
pd.Series([1, 2, 3, 4, 5])
###Output
_____no_output_____
###Markdown
A Series is, in essence, a single column of a DataFrame. So you can assign column values to the Series the same way as before, using an index parameter. However, a Series does not have a column name, it only has one overall name:
###Code
pd.Series([30, 35, 40], index=['2015 Sales', '2016 Sales', '2017 Sales'], name='Product A')
###Output
_____no_output_____
###Markdown
The Series and the DataFrame are intimately related. It's helpful to think of a DataFrame as actually being just a bunch of Series "glued together". We'll see more of this in the next section of this tutorial. **Reading data files**Being able to create a DataFrame or Series by hand is handy. But, most of the time, we won't actually be creating our own data by hand. Instead, we'll be working with data that already exists.Data can be stored in any of a number of different forms and formats. By far the most basic of these is the humble CSV file. When you open a CSV file you get something that looks like this:Product A,Product B,Product C,30,21,9,35,34,1,41,11,11 So a CSV file is a table of values separated by commas. Hence the name: "Comma-Separated Values", or CSV.Let's now set aside our toy datasets and see what a real dataset looks like when we read it into a DataFrame. We'll use the pd.read_csv() function to read the data into a DataFrame. This goes thusly:
###Code
# wine_reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv")
###Output
_____no_output_____
###Markdown
To download the data: https://www.kaggle.com/luisrivera/exercise-creating-reading-and-writing/edit To upload the data to Google Colaboratory: D:\kaggle\Cursos\Panda
###Code
wine_reviews = pd.read_csv("winemag-data-130k-v2.csv")
###Output
_____no_output_____
###Markdown
We can use the shape attribute to check how large the resulting DataFrame is:
###Code
wine_reviews.shape
###Output
_____no_output_____
###Markdown
So our new DataFrame has 130,000 records split across 14 different columns. That's almost 2 million entries!We can examine the contents of the resultant DataFrame using the head() command, which grabs the first five rows:
###Code
wine_reviews.head()
###Output
_____no_output_____
###Markdown
The pd.read_csv() function is well-endowed, with over 30 optional parameters you can specify. For example, you can see in this dataset that the CSV file has a built-in index, which pandas did not pick up on automatically. To make pandas use that column for the index (instead of creating a new one from scratch), we can specify an index_col.
###Code
# wine_reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0)
# wine_reviews.head()
wine_reviews = pd.read_csv("winemag-data-130k-v2.csv", index_col=0)
wine_reviews.head()
###Output
_____no_output_____
###Markdown
To practice directly on Kaggle: https://www.kaggle.com/luisrivera/exercise-creating-reading-and-writing/edit
###Code
import pandas as pd
pd.set_option('max_rows', 5)
# from learntools.core import binder; binder.bind(globals())
# from learntools.pandas.creating_reading_and_writing import *
# print("Setup complete.")
###Output
_____no_output_____
###Markdown
1.In the cell below, create a DataFrame `fruits` that looks like this:
###Code
# Your code goes here. Create a dataframe matching the above diagram and assign it to the variable fruits.
fruits = pd.DataFrame({'Apples': ['30'],
'Bananas': ['21']})
#q1.check()
fruits
fruits = pd.DataFrame([[30, 21]], columns=['Apples', 'Bananas'])
fruits
###Output
_____no_output_____
###Markdown
2.Create a dataframe `fruit_sales` that matches the diagram below:
###Code
fruit_sales = pd.DataFrame({'Apples': ['35', '41'],
'Bananas': ['21', '34' ]},
index=['2017 Sales', '2018 Sales'])
fruit_sales
###Output
_____no_output_____
###Markdown
3.Create a variable `ingredients` with a Series that looks like:```Flour 4 cupsMilk 1 cupEggs 2 largeSpam 1 canName: Dinner, dtype: object```
###Code
quantities = ['4 cups', '1 cup', '2 large', '1 can']
items = ['Flour', 'Milk', 'Eggs', 'Spam']
recipe = pd.Series(quantities, index=items, name='Dinner')
recipe
###Output
_____no_output_____
###Markdown
4.Read the following csv dataset of wine reviews into a DataFrame called `reviews`:The filepath to the csv file is `../input/wine-reviews/winemag-data_first150k.csv`. The first few lines look like:```,country,description,designation,points,price,province,region_1,region_2,variety,winery0,US,"This tremendous 100% varietal wine[...]",Martha's Vineyard,96,235.0,California,Napa Valley,Napa,Cabernet Sauvignon,Heitz1,Spain,"Ripe aromas of fig, blackberry and[...]",Carodorum Selección Especial Reserva,96,110.0,Northern Spain,Toro,,Tinta de Toro,Bodega Carmen Rodríguez```
###Code
#reviews = pd.read_csv("../input/wine-reviews/winemag-data_first150k.csv", index_col=0)
reviews = pd.read_csv("winemag-data_first150k.csv", index_col=0)
reviews
###Output
_____no_output_____
###Markdown
5.Run the cell below to create and display a DataFrame called `animals`:
###Code
animals = pd.DataFrame({'Cows': [12, 20], 'Goats': [22, 19]},
index=['Year 1', 'Year 2'])
animals
###Output
_____no_output_____
###Markdown
In the cell below, write code to save this DataFrame to disk as a csv file with the name `cows_and_goats.csv`.
###Code
animals.to_csv("cows_and_goats.csv")
###Output
_____no_output_____
###Markdown
https://www.kaggle.com/residentmario/indexing-selecting-assigning **Naive accessors**Native Python objects provide good ways of indexing data. Pandas carries all of these over, which helps make it easy to start with.Consider this DataFrame:
###Code
reviews
###Output
_____no_output_____
###Markdown
In Python, we can access the property of an object by accessing it as an attribute. A book object, for example, might have a title property, which we can access by calling book.title. Columns in a pandas DataFrame work in much the same way.Hence to access the country property of reviews we can use:
###Code
reviews.country
###Output
_____no_output_____
###Markdown
If we have a Python dictionary, we can access its values using the indexing ([]) operator. We can do the same with columns in a DataFrame:
###Code
reviews['country']
###Output
_____no_output_____
###Markdown
These are the two ways of selecting a specific Series out of a DataFrame. Neither of them is more or less syntactically valid than the other, but the indexing operator [] does have the advantage that it can handle column names with reserved characters in them (e.g. if we had a country providence column, reviews.country providence wouldn't work).Doesn't a pandas Series look kind of like a fancy dictionary? It pretty much is, so it's no surprise that, to drill down to a single specific value, we need only use the indexing operator [] once more:
###Code
reviews['country'][0]
###Output
_____no_output_____
###Markdown
**Indexing in pandas**The indexing operator and attribute selection are nice because they work just like they do in the rest of the Python ecosystem. As a novice, this makes them easy to pick up and use. However, pandas has its own accessor operators, loc and iloc. For more advanced operations, these are the ones you're supposed to be using.Index-based selectionPandas indexing works in one of two paradigms. The first is index-based selection: selecting data based on its numerical position in the data. iloc follows this paradigm.To select the first row of data in a DataFrame, we may use the following:
###Code
reviews.iloc[0]
###Output
_____no_output_____
###Markdown
Both loc and iloc are row-first, column-second. This is the opposite of what we do in native Python, which is column-first, row-second.This means that it's marginally easier to retrieve rows, and marginally harder to get retrieve columns. To get a column with iloc, we can do the following:
###Code
reviews.iloc[:, 0]
# On its own, the : operator, which also comes from native Python, means "everything".
# When combined with other selectors, however, it can be used to indicate a range of
# values. For example, to select the country column from just the first, second, and
# third row, we would do:
reviews.iloc[:3, 0]
###Output
_____no_output_____
###Markdown
Or, to select just the second and third entries, we would do:
###Code
reviews.iloc[1:3, 0]
###Output
_____no_output_____
###Markdown
It's also possible to pass a list:
###Code
reviews.iloc[[0, 1, 2], 0]
###Output
_____no_output_____
###Markdown
Finally, it's worth knowing that negative numbers can be used in selection. This will start counting forwards from the end of the values. So for example here are the last five elements of the dataset.
###Code
reviews.iloc[-5:]
###Output
_____no_output_____
###Markdown
Label-based selectionThe second paradigm for attribute selection is the one followed by the loc operator: label-based selection. In this paradigm, it's the data index value, not its position, which matters.For example, to get the first entry in reviews, we would now do the following:
###Code
reviews.loc[0, 'country']
###Output
_____no_output_____
###Markdown
iloc is conceptually simpler than loc because it ignores the dataset's indices. When we use iloc we treat the dataset like a big matrix (a list of lists), one that we have to index into by position. loc, by contrast, uses the information in the indices to do its work. Since your dataset usually has meaningful indices, it's usually easier to do things using loc instead. For example, here's one operation that's much easier using loc:
###Code
reviews.loc[:, ['taster_name', 'taster_twitter_handle', 'points']]
###Output
/usr/local/lib/python3.6/dist-packages/pandas/core/indexing.py:1418: FutureWarning:
Passing list-likes to .loc or [] with any missing label will raise
KeyError in the future, you can use .reindex() as an alternative.
See the documentation here:
https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#deprecate-loc-reindex-listlike
return self._getitem_tuple(key)
###Markdown
Choosing between loc and ilocWhen choosing or transitioning between loc and iloc, there is one "gotcha" worth keeping in mind, which is that the two methods use slightly different indexing schemes.iloc uses the Python stdlib indexing scheme, where the first element of the range is included and the last one excluded. So 0:10 will select entries 0,...,9. loc, meanwhile, indexes inclusively. So 0:10 will select entries 0,...,10.Why the change? Remember that loc can index any stdlib type: strings, for example. If we have a DataFrame with index values Apples, ..., Potatoes, ..., and we want to select "all the alphabetical fruit choices between Apples and Potatoes", then it's a lot more convenient to index df.loc['Apples':'Potatoes'] than it is to index something like df.loc['Apples':'Potatoet'] (t coming after s in the alphabet).This is particularly confusing when the DataFrame index is a simple numerical list, e.g. 0,...,1000. In this case df.iloc[0:1000] will return 1000 entries, while df.loc[0:1000] returns 1001 of them! To get 1000 elements using loc, you will need to go one lower and ask for df.loc[0:999].Otherwise, the semantics of using loc are the same as those for iloc. **Manipulating the index**Label-based selection derives its power from the labels in the index. Critically, the index we use is not immutable. We can manipulate the index in any way we see fit.The set_index() method can be used to do the job. Here is what happens when we set_index to the title field:
###Code
# reviews.set_index("title")  # raises an error: this dataset has no 'title' column
reviews.set_index("variety")
###Output
_____no_output_____
###Markdown
This is useful if you can come up with an index for the dataset which is better than the current one.**Conditional selection**So far we've been indexing various strides of data, using structural properties of the DataFrame itself. To do interesting things with the data, however, we often need to ask questions based on conditions.For example, suppose that we're interested specifically in better-than-average wines produced in Italy.We can start by checking if each wine is Italian or not:
###Code
reviews.country == 'Italy'
###Output
_____no_output_____
###Markdown
This operation produced a Series of True/False booleans based on the country of each record. This result can then be used inside of loc to select the relevant data:
###Code
reviews.loc[reviews.country == 'Italy']
###Output
_____no_output_____
###Markdown
This DataFrame has ~20,000 rows. The original had ~130,000. That means that around 15% of wines originate from Italy.We also wanted to know which ones are better than average. Wines are reviewed on a 80-to-100 point scale, so this could mean wines that accrued at least 90 points.We can use the ampersand (&) to bring the two questions together:
###Code
reviews.loc[(reviews.country == 'Italy') & (reviews.points >= 90)]
###Output
_____no_output_____
###Markdown
Suppose we'll buy any wine that's made in Italy or which is rated above average. For this we use a pipe (|):
###Code
reviews.loc[(reviews.country == 'Italy') | (reviews.points >= 90)]
###Output
_____no_output_____
###Markdown
Pandas comes with a few built-in conditional selectors, two of which we will highlight here.The first is isin. isin is lets you select data whose value "is in" a list of values. For example, here's how we can use it to select wines only from Italy or France:
###Code
reviews.loc[reviews.country.isin(['Italy', 'France'])]
###Output
_____no_output_____
###Markdown
The second is isnull (and its companion notnull). These methods let you highlight values which are (or are not) empty (NaN). For example, to filter out wines lacking a price tag in the dataset, here's what we would do:
###Code
reviews.loc[reviews.price.notnull()]
###Output
_____no_output_____
###Markdown
**Assigning data**Going the other way, assigning data to a DataFrame is easy. You can assign either a constant value:
###Code
reviews['critic'] = 'everyone'
reviews['critic']
###Output
_____no_output_____
###Markdown
Or with an iterable of values:
###Code
reviews['index_backwards'] = range(len(reviews), 0, -1)
reviews['index_backwards']
###Output
_____no_output_____
###Markdown
https://www.kaggle.com/luisrivera/exercise-indexing-selecting-assigning/edit **Ejercicios** Run the following cell to load your data and some utility functions (including code to check your answers).
###Code
import pandas as pd
# reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0)
reviews = pd.read_csv("winemag-data-130k-v2.csv", index_col=0)
pd.set_option("display.max_rows", 5)
# from learntools.core import binder; binder.bind(globals())
# from learntools.pandas.indexing_selecting_and_assigning import *
# print("Setup complete.")
###Output
_____no_output_____
###Markdown
Look at an overview of your data by running the following line.
###Code
reviews.head()
###Output
_____no_output_____
###Markdown
 **Exercises** 1.Select the `description` column from `reviews` and assign the result to the variable `desc`.
###Code
desc = reviews['description']
desc = reviews.description
#or
#desc = reviews["description"]
# desc is a pandas Series object, with an index matching the reviews DataFrame. In general, when we select a single column from a DataFrame, we'll get a Series.
desc.head(10)
###Output
_____no_output_____
###Markdown
2.Select the first value from the description column of `reviews`, assigning it to variable `first_description`.
###Code
first_description = reviews["description"][0]
# q2.check()
first_description
# Solution:
first_description = reviews.description.iloc[0]
# Note that while this is the preferred way to obtain the entry in the DataFrame, many other options will return a valid result,
# such as reviews.description.loc[0], reviews.description[0], and more!
first_description
###Output
_____no_output_____
###Markdown
3. Select the first row of data (the first record) from `reviews`, assigning it to the variable `first_row`.
###Code
first_row = reviews.iloc[0]
# q3.check()
first_row
# Solution:
first_row = reviews.iloc[0]
###Output
_____no_output_____
###Markdown
4.Select the first 10 values from the `description` column in `reviews`, assigning the result to variable `first_descriptions`.Hint: format your output as a pandas Series.
###Code
first_descriptions = reviews.iloc[:10, 1]
# first_descriptions = reviews.description.iloc[0:9]
# first_descriptions = reviews.description.loc[0:9,'description']
# q4.check()
first_descriptions
# Solution:
first_descriptions = reviews.description.iloc[:10]
# Note that many other options will return a valid result, such as desc.head(10) and reviews.loc[:9, "description"].
first_descriptions
###Output
_____no_output_____
###Markdown
5.Select the records with index labels `1`, `2`, `3`, `5`, and `8`, assigning the result to the variable `sample_reviews`.In other words, generate the following DataFrame:
###Code
sample_reviews = reviews.iloc[[1,2,3,5,8],]
# q5.check()
sample_reviews
# Solution:
indices = [1, 2, 3, 5, 8]
sample_reviews = reviews.loc[indices]
sample_reviews
###Output
_____no_output_____
###Markdown
6.Create a variable `df` containing the `country`, `province`, `region_1`, and `region_2` columns of the records with the index labels `0`, `1`, `10`, and `100`. In other words, generate the following DataFrame:
###Code
df = reviews.loc[[0,1,10,100],['country', 'province', 'region_1', 'region_2']]
# q6.check()
df
# Solution:
cols = ['country', 'province', 'region_1', 'region_2']
indices = [0, 1, 10, 100]
df = reviews.loc[indices, cols]
df
###Output
_____no_output_____
###Markdown
7.Create a variable `df` containing the `country` and `variety` columns of the first 100 records. Hint: you may use `loc` or `iloc`. When working on the answer to this question and several of the ones that follow, keep the following "gotcha" described in the tutorial:> `iloc` uses the Python stdlib indexing scheme, where the first element of the range is included and the last one excluded. `loc`, meanwhile, indexes inclusively. > This is particularly confusing when the DataFrame index is a simple numerical list, e.g. `0,...,1000`. In this case `df.iloc[0:1000]` will return 1000 entries, while `df.loc[0:1000]` returns 1001 of them! To get 1000 elements using `loc`, you will need to go one lower and ask for `df.loc[0:999]`.
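The off-by-one difference is easy to see on a small toy Series (a sketch, separate from the exercise data):

```python
import pandas as pd

s = pd.Series(list("abcdef"))
s.iloc[0:3]  # positions 0, 1, 2 -> 'a', 'b', 'c'       (end excluded)
s.loc[0:3]   # labels 0, 1, 2, 3 -> 'a', 'b', 'c', 'd'  (end included)
```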
###Code
df = reviews.loc[0:99,['country', 'variety']]
# q7.check()
df
# # Correct:
cols = ['country', 'variety']
df = reviews.loc[:99, cols]
# or
# cols_idx = [0, 11]
# df = reviews.iloc[:100, cols_idx]
df
###Output
_____no_output_____
###Markdown
8.Create a DataFrame `italian_wines` containing reviews of wines made in `Italy`. Hint: `reviews.country` equals what?
###Code
italian_wines = reviews.loc[reviews.country == 'Italy']
# q8.check()
italian_wines
# Solution:
italian_wines = reviews[reviews.country == 'Italy']
italian_wines
###Output
_____no_output_____
###Markdown
9.Create a DataFrame `top_oceania_wines` containing all reviews with at least 95 points (out of 100) for wines from Australia or New Zealand.
###Code
top_oceania_wines = reviews.loc[reviews.country.isin(['Australia', 'New Zealand'])
& (reviews.points >= 95)]
# q9.check()
top_oceania_wines
# Solution:
top_oceania_wines = reviews.loc[
(reviews.country.isin(['Australia', 'New Zealand']))
& (reviews.points >= 95)
]
###Output
_____no_output_____
###Markdown
https://www.kaggle.com/residentmario/summary-functions-and-maps **Funciones y Mapas**
###Code
reviews
###Output
_____no_output_____
###Markdown
Summary functionsPandas provides many simple "summary functions" (not an official name) which restructure the data in some useful way. For example, consider the describe() method:
###Code
reviews.points.describe()
###Output
_____no_output_____
###Markdown
This method generates a high-level summary of the attributes of the given column. It is type-aware, meaning that its output changes based on the data type of the input. The output above only makes sense for numerical data; for string data here's what we get:
###Code
reviews.taster_name.describe()
###Output
_____no_output_____
###Markdown
If you want to get some particular simple summary statistic about a column in a DataFrame or a Series, there is usually a helpful pandas function that makes it happen.For example, to see the mean of the points allotted (e.g. how well an averagely rated wine does), we can use the mean() function:
###Code
reviews.points.mean()
###Output
_____no_output_____
###Markdown
To see a list of unique values we can use the unique() function:
###Code
# reviews.taster_name.unique()  # takes a long time to run??
###Output
_____no_output_____
###Markdown
To see a list of unique values and how often they occur in the dataset, we can use the value_counts() method:
###Code
reviews.taster_name.value_counts()
###Output
_____no_output_____
###Markdown
**Maps**A map is a term, borrowed from mathematics, for a function that takes one set of values and "maps" them to another set of values. In data science we often have a need for creating new representations from existing data, or for transforming data from the format it is in now to the format that we want it to be in later. Maps are what handle this work, making them extremely important for getting your work done!There are two mapping methods that you will use often.map() is the first, and slightly simpler one. For example, suppose that we wanted to remean the scores the wines received to 0. We can do this as follows:
###Code
review_points_mean = reviews.points.mean()
reviews.points.map(lambda p: p - review_points_mean)
###Output
_____no_output_____
###Markdown
The function you pass to map() should expect a single value from the Series (a point value, in the above example), and return a transformed version of that value. map() returns a new Series where all the values have been transformed by your function.apply() is the equivalent method if we want to transform a whole DataFrame by calling a custom method on each row.
###Code
def remean_points(row):
row.points = row.points - review_points_mean
return row
reviews.apply(remean_points, axis='columns')
###Output
_____no_output_____
###Markdown
If we had called reviews.apply() with axis='index', then instead of passing a function to transform each row, we would need to give a function to transform each column.Note that map() and apply() return new, transformed Series and DataFrames, respectively. They don't modify the original data they're called on. If we look at the first row of reviews, we can see that it still has its original points value.
###Code
reviews.head(1)
###Output
_____no_output_____
###Markdown
Pandas provides many common mapping operations as built-ins. For example, here's a faster way of remeaning our points column:
###Code
review_points_mean = reviews.points.mean()
reviews.points - review_points_mean
###Output
_____no_output_____
###Markdown
In this code we are performing an operation between a lot of values on the left-hand side (everything in the Series) and a single value on the right-hand side (the mean value). Pandas looks at this expression and figures out that we must mean to subtract that mean value from every value in the dataset.Pandas will also understand what to do if we perform these operations between Series of equal length. For example, an easy way of combining country and region information in the dataset would be to do the following:
###Code
reviews.country + " - " + reviews.region_1
###Output
_____no_output_____
###Markdown
These operators are faster than map() or apply() because they use speed-ups built into pandas. All of the standard Python operators (>, <, ==, and so on) work in this manner.However, they are not as flexible as map() or apply(), which can do more advanced things, like applying conditional logic, which cannot be done with addition and subtraction alone. https://www.kaggle.com/residentmario/grouping-and-sorting **Groupwise analysis**One function we've been using heavily thus far is the value_counts() function. We can replicate what value_counts() does by doing the following:
###Code
# Replicate value_counts(): count how many reviews received each score
reviews.groupby('points').points.count()
# Cheapest wine for each point value
reviews.groupby('points').price.min()
# Title of the first wine reviewed from each winery
reviews.groupby('winery').apply(lambda df: df.title.iloc[0])
# Best-rated wine for each country/province combination
reviews.groupby(['country', 'province']).apply(lambda df: df.loc[df.points.idxmax()])
# Several summary statistics of price at once with agg()
reviews.groupby(['country']).price.agg([len, min, max])
###Output
_____no_output_____
###Markdown
**Multi-indexes**In all of the examples we've seen thus far we've been working with DataFrame or Series objects with a single-label index. groupby() is slightly different in the fact that, depending on the operation we run, it will sometimes result in what is called a multi-index.A multi-index differs from a regular index in that it has multiple levels. For example:
###Code
countries_reviewed = reviews.groupby(['country', 'province']).description.agg([len])
countries_reviewed
mi = countries_reviewed.index
type(mi)
countries_reviewed.reset_index()
###Output
_____no_output_____
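One thing not shown above is how to pull rows back out of a multi-index: a tuple of labels works with loc (a sketch; it assumes the ('Italy', 'Tuscany') pair is present in the grouped data):

```python
# A single (country, province) row is addressed with a tuple of labels...
countries_reviewed.loc[('Italy', 'Tuscany')]
# ...while the outer label alone returns every province of that country.
countries_reviewed.loc['Italy']
```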
###Markdown
**Sorting**Looking again at countries_reviewed we can see that grouping returns data in index order, not in value order. That is to say, when outputting the result of a groupby, the order of the rows is dependent on the values in the index, not in the data.To get data in the order want it in we can sort it ourselves. The sort_values() method is handy for this.
###Code
# Flatten the multi-index back into ordinary columns
countries_reviewed = countries_reviewed.reset_index()
# sort_values() defaults to ascending order (smallest review counts first)
countries_reviewed.sort_values(by='len')
# Descending order puts the most-reviewed provinces first
countries_reviewed.sort_values(by='len', ascending=False)
# sort_index() sorts by the index instead of by a column
countries_reviewed.sort_index()
# You can also sort by more than one column at a time
countries_reviewed.sort_values(by=['country', 'len'])
###Output
_____no_output_____
###Markdown
https://www.kaggle.com/luisrivera/exercise-grouping-and-sorting/edit **Exercise: Grouping and Sorting**
###Code
import pandas as pd
# reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0)
#pd.set_option("display.max_rows", 5)
reviews = pd.read_csv("./winemag-data-130k-v2.csv", index_col=0)
# from learntools.core import binder; binder.bind(globals())
# from learntools.pandas.grouping_and_sorting import *
# print("Setup complete.")
###Output
_____no_output_____
###Markdown
1.Who are the most common wine reviewers in the dataset? Create a `Series` whose index is the `taster_twitter_handle` category from the dataset, and whose values count how many reviews each person wrote.
###Code
reviews_written = reviews.groupby('taster_twitter_handle').size()
#or
reviews_written
reviews_written = reviews.groupby('taster_twitter_handle').taster_twitter_handle.count()
reviews_written
###Output
_____no_output_____
###Markdown
2.What is the best wine I can buy for a given amount of money? Create a `Series` whose index is wine prices and whose values is the maximum number of points a wine costing that much was given in a review. Sort the values by price, ascending (so that `4.0` dollars is at the top and `3300.0` dollars is at the bottom).
###Code
best_rating_per_price = reviews.groupby('price')['points'].max().sort_index()
best_rating_per_price
###Output
_____no_output_____
###Markdown
3.What are the minimum and maximum prices for each `variety` of wine? Create a `DataFrame` whose index is the `variety` category from the dataset and whose values are the `min` and `max` values thereof.
###Code
price_extremes = reviews.groupby('variety').price.agg([min, max])
price_extremes
###Output
_____no_output_____
###Markdown
4.What are the most expensive wine varieties? Create a variable `sorted_varieties` containing a copy of the dataframe from the previous question where varieties are sorted in descending order based on minimum price, then on maximum price (to break ties).
###Code
sorted_varieties = price_extremes.sort_values(by=['min', 'max'], ascending=False)
sorted_varieties
###Output
_____no_output_____
###Markdown
5.Create a `Series` whose index is reviewers and whose values is the average review score given out by that reviewer. Hint: you will need the `taster_name` and `points` columns.
###Code
reviewer_mean_ratings = reviews.groupby('taster_name').points.mean()
reviewer_mean_ratings
reviewer_mean_ratings.describe()
###Output
_____no_output_____
###Markdown
6.What combination of countries and varieties are most common? Create a `Series` whose index is a `MultiIndex`of `{country, variety}` pairs. For example, a pinot noir produced in the US should map to `{"US", "Pinot Noir"}`. Sort the values in the `Series` in descending order based on wine count.
###Code
country_variety_counts = reviews.groupby(['country', 'variety']).size().sort_values(ascending=False)
country_variety_counts
###Output
_____no_output_____
###Markdown
https://www.kaggle.com/residentmario/data-types-and-missing-values **data-types-and-missing-values** **Dtypes**The data type for a column in a DataFrame or a Series is known as the dtype.You can use the dtype property to grab the type of a specific column. For instance, we can get the dtype of the price column in the reviews DataFrame:
###Code
reviews.price.dtype
###Output
_____no_output_____
###Markdown
Alternatively, the dtypes property returns the dtype of every column in the DataFrame:
###Code
reviews.dtypes
###Output
_____no_output_____
###Markdown
Data types tell us something about how pandas is storing the data internally. float64 means that it's using a 64-bit floating point number; int64 means a similarly sized integer instead, and so on.One peculiarity to keep in mind (and on display very clearly here) is that columns consisting entirely of strings do not get their own type; they are instead given the object type.It's possible to convert a column of one type into another wherever such a conversion makes sense by using the astype() function. For example, we may transform the points column from its existing int64 data type into a float64 data type:
###Code
reviews.points.astype('float64')
###Output
_____no_output_____
###Markdown
A DataFrame or Series index has its own dtype, too:
###Code
reviews.index.dtype
###Output
_____no_output_____
###Markdown
Pandas also supports more exotic data types, such as categorical data and timeseries data. Because these data types are more rarely used, we will omit them until a much later section of this tutorial. **Missing data**Entries missing values are given the value NaN, short for "Not a Number". For technical reasons these NaN values are always of the float64 dtype.Pandas provides some methods specific to missing data. To select NaN entries you can use pd.isnull() (or its companion pd.notnull()). This is meant to be used thusly:
###Code
reviews[pd.isnull(reviews.country)]
###Output
_____no_output_____
###Markdown
Replacing missing values is a common operation. Pandas provides a really handy method for this problem: fillna(). fillna() provides a few different strategies for mitigating such data. For example, we can simply replace each NaN with an "Unknown":
###Code
reviews.region_2.fillna("Unknown")
###Output
_____no_output_____
###Markdown
Or we could fill each missing value with the first non-null value that appears sometime after the given record in the database. This is known as the backfill strategy.Alternatively, we may have a non-null value that we would like to replace. For example, suppose that since this dataset was published, reviewer Kerin O'Keefe has changed her Twitter handle from @kerinokeefe to @kerino. One way to reflect this in the dataset is using the replace() method:
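For instance, the backfill strategy mentioned above could look like this (a sketch, shown for illustration rather than run here):

```python
# Fill each missing region_2 with the next non-null value further down the column
reviews.region_2.fillna(method="bfill")
# newer pandas versions prefer the equivalent: reviews.region_2.bfill()
```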
###Code
reviews.taster_twitter_handle.replace("@kerinokeefe", "@kerino")
###Output
_____no_output_____
###Markdown
The replace() method is worth mentioning here because it's handy for replacing missing data which is given some kind of sentinel value in the dataset: things like "Unknown", "Undisclosed", "Invalid", and so on. **Exercise: Data Types and Missing Values**https://www.kaggle.com/luisrivera/exercise-data-types-and-missing-values/edit
###Code
# import pandas as pd
# reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0)
# from learntools.core import binder; binder.bind(globals())
# from learntools.pandas.data_types_and_missing_data import *
# print("Setup complete.")
###Output
_____no_output_____
###Markdown
**Exercises**1.What is the data type of the points column in the dataset?
###Code
# Your code here
dtype = reviews.points.dtype
###Output
_____no_output_____
###Markdown
2. Create a Series from entries in the `points` column, but convert the entries to strings. Hint: strings are `str` in native Python.
###Code
point_strings = reviews.points.astype(str)
point_strings
###Output
_____no_output_____
###Markdown
3.Sometimes the price column is null. How many reviews in the dataset are missing a price?
###Code
missing_price_reviews = reviews[reviews.price.isnull()]
n_missing_prices = len(missing_price_reviews)
n_missing_prices
# Cute alternative solution: if we sum a boolean series, True is treated as 1 and False as 0
n_missing_prices = reviews.price.isnull().sum()
n_missing_prices
# or equivalently:
n_missing_prices = pd.isnull(reviews.price).sum()
n_missing_prices
###Output
_____no_output_____
###Markdown
4.What are the most common wine-producing regions? Create a Series counting the number of times each value occurs in the `region_1` field. This field is often missing data, so replace missing values with `Unknown`. Sort in descending order. Your output should look something like this:```Unknown 21247Napa Valley 4480...Bardolino Superiore 1Primitivo del Tarantino 1Name: region_1, Length: 1230, dtype: int64```
###Code
reviews_per_region = reviews.region_1.fillna('Unknown').value_counts().sort_values(ascending=False)
reviews_per_region
###Output
_____no_output_____
###Markdown
**Renaming-and-combining columns**https://www.kaggle.com/residentmario/renaming-and-combining **Introduction**Oftentimes data will come to us with column names, index names, or other naming conventions that we are not satisfied with. In that case, you'll learn how to use pandas functions to change the names of the offending entries to something better.You'll also explore how to combine data from multiple DataFrames and/or Series. **Renaming**The first function we'll introduce here is rename(), which lets you change index names and/or column names. For example, to change the points column in our dataset to score, we would do:
###Code
import pandas as pd
# reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0)
reviews = pd.read_csv("winemag-data-130k-v2.csv", index_col=0)
# from learntools.core import binder; binder.bind(globals())
# from learntools.pandas.renaming_and_combining import *
# print("Setup complete.")
reviews.head()
reviews.rename(columns={'points': 'score'})
###Output
_____no_output_____
###Markdown
rename() lets you rename index or column values by specifying a index or column keyword parameter, respectively. It supports a variety of input formats, but usually a Python dictionary is the most convenient. Here is an example using it to rename some elements of the index.
###Code
reviews.rename(index={0: 'firstEntry', 1: 'secondEntry'})
###Output
_____no_output_____
###Markdown
You'll probably rename columns very often, but rename index values very rarely. For that, set_index() is usually more convenient.Both the row index and the column index can have their own name attribute. The complimentary rename_axis() method may be used to change these names. For example:
###Code
reviews.rename_axis("wines", axis='rows').rename_axis("fields", axis='columns')
###Output
_____no_output_____
###Markdown
**Combining**When performing operations on a dataset, we will sometimes need to combine different DataFrames and/or Series in non-trivial ways. Pandas has three core methods for doing this. In order of increasing complexity, these are concat(), join(), and merge(). Most of what merge() can do can also be done more simply with join(), so we will omit it and focus on the first two functions here.The simplest combining method is concat(). Given a list of elements, this function will smush those elements together along an axis.This is useful when we have data in different DataFrame or Series objects but having the same fields (columns). One example: the YouTube Videos dataset, which splits the data up based on country of origin (e.g. Canada and the UK, in this example). If we want to study multiple countries simultaneously, we can use concat() to smush them together:https://www.kaggle.com/datasnaek/youtube-new ; datos
###Code
# canadian_youtube = pd.read_csv("../input/youtube-new/CAvideos.csv")
# british_youtube = pd.read_csv("../input/youtube-new/GBvideos.csv")
canadian_youtube = pd.read_csv("CAvideos.csv")
british_youtube = pd.read_csv("GBvideos.csv")
pd.concat([canadian_youtube, british_youtube])
###Output
_____no_output_____
###Markdown
The middlemost combiner in terms of complexity is join(). join() lets you combine different DataFrame objects which have an index in common. For example, to pull down videos that happened to be trending on the same day in both Canada and the UK, we could do the following:
###Code
left = canadian_youtube.set_index(['title', 'trending_date'])
right = british_youtube.set_index(['title', 'trending_date'])
left.join(right, lsuffix='_CAN', rsuffix='_UK')
###Output
_____no_output_____
###Markdown
The lsuffix and rsuffix parameters are necessary here because the data has the same column names in both British and Canadian datasets. If this wasn't true (because, say, we'd renamed them beforehand) we wouldn't need them. **Exercise: Renaming and Combining**https://www.kaggle.com/luisrivera/exercise-renaming-and-combining/edit
###Code
# import pandas as pd
# reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0)
# from learntools.core import binder; binder.bind(globals())
# from learntools.pandas.renaming_and_combining import *
# print("Setup complete.")
###Output
_____no_output_____
###Markdown
ExercisesView the first several lines of your data by running the cell below:
###Code
reviews.head()
###Output
_____no_output_____
###Markdown
1.`region_1` and `region_2` are pretty uninformative names for locale columns in the dataset. Create a copy of `reviews` with these columns renamed to `region` and `locale`, respectively.
###Code
renamed = reviews.rename(columns=dict(region_1='region', region_2='locale'))
# q1.check()
renamed
###Output
_____no_output_____
###Markdown
2.Set the index name in the dataset to `wines`.
###Code
reindexed = reviews.rename_axis('wines', axis='rows')
reindexed
###Output
_____no_output_____
###Markdown
3.The [Things on Reddit](https://www.kaggle.com/residentmario/things-on-reddit/data) dataset includes product links from a selection of top-ranked forums ("subreddits") on reddit.com. Run the cell below to load a dataframe of products mentioned on the */r/gaming* subreddit and another dataframe for products mentioned on the *r//movies* subreddit.
###Code
# gaming_products = pd.read_csv("../input/things-on-reddit/top-things/top-things/reddits/g/gaming.csv")
gaming_products = pd.read_csv("gaming.csv")
gaming_products['subreddit'] = "r/gaming"
# movie_products = pd.read_csv("../input/things-on-reddit/top-things/top-things/reddits/m/movies.csv")
movie_products = pd.read_csv("movies.csv")
movie_products['subreddit'] = "r/movies"
###Output
_____no_output_____
###Markdown
Create a `DataFrame` of products mentioned on *either* subreddit.
###Code
combined_products = pd.concat([gaming_products, movie_products])
# q3.check()
combined_products.head()
###Output
_____no_output_____
###Markdown
4.The [Powerlifting Database](https://www.kaggle.com/open-powerlifting/powerlifting-database) dataset on Kaggle includes one CSV table for powerlifting meets and a separate one for powerlifting competitors. Run the cell below to load these datasets into dataframes:
###Code
# powerlifting_meets = pd.read_csv("../input/powerlifting-database/meets.csv")
# powerlifting_competitors = pd.read_csv("../input/powerlifting-database/openpowerlifting.csv")
powerlifting_meets = pd.read_csv("meets.csv")
powerlifting_meets.head()
powerlifting_competitors = pd.read_csv("openpowerlifting.csv")
powerlifting_competitors.head()
###Output
_____no_output_____
###Markdown
Both tables include references to a `MeetID`, a unique key for each meet (competition) included in the database. Using this, generate a dataset combining the two tables into one.
###Code
powerlifting_combined = powerlifting_meets.set_index("MeetID").join(powerlifting_competitors.set_index("MeetID"))
powerlifting_combined.head()
###Output
_____no_output_____ |
Lab_4_Pandas/4/exercise-grouping-and-sorting.ipynb | ###Markdown
**[Pandas Home Page](https://www.kaggle.com/learn/pandas)**--- IntroductionIn these exercises we'll apply groupwise analysis to our dataset.Run the code cell below to load the data before running the exercises.
###Code
import pandas as pd
reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0)
#pd.set_option("display.max_rows", 5)
from learntools.core import binder; binder.bind(globals())
from learntools.pandas.grouping_and_sorting import *
print("Setup complete.")
###Output
Setup complete.
###Markdown
Exercises 1.Who are the most common wine reviewers in the dataset? Create a `Series` whose index is the `taster_twitter_handle` category from the dataset, and whose values count how many reviews each person wrote.
###Code
# Your code here
reviews_written = reviews.groupby('taster_twitter_handle').taster_twitter_handle.count()
# Check your answer
q1.check()
#q1.hint()
#q1.solution()
###Output
_____no_output_____
###Markdown
2.What is the best wine I can buy for a given amount of money? Create a `Series` whose index is wine prices and whose values is the maximum number of points a wine costing that much was given in a review. Sort the values by price, ascending (so that `4.0` dollars is at the top and `3300.0` dollars is at the bottom).
###Code
best_rating_per_price = reviews.groupby('price').points.max()
# Check your answer
q2.check()
#q2.hint()
#q2.solution()
###Output
_____no_output_____
###Markdown
3.What are the minimum and maximum prices for each `variety` of wine? Create a `DataFrame` whose index is the `variety` category from the dataset and whose values are the `min` and `max` values thereof.
###Code
price_extremes = reviews.groupby(['variety']).price.agg([min, max])
# Check your answer
q3.check()
#q3.hint()
#q3.solution()
###Output
_____no_output_____
###Markdown
4.What are the most expensive wine varieties? Create a variable `sorted_varieties` containing a copy of the dataframe from the previous question where varieties are sorted in descending order based on minimum price, then on maximum price (to break ties).
###Code
sorted_varieties = price_extremes.sort_values(by=['min', 'max'], ascending=False)
# Check your answer
q4.check()
#q4.hint()
#q4.solution()
###Output
_____no_output_____
###Markdown
5.Create a `Series` whose index is reviewers and whose values is the average review score given out by that reviewer. Hint: you will need the `taster_name` and `points` columns.
###Code
reviewer_mean_ratings = reviews.groupby(['taster_name']).points.mean()
# Check your answer
q5.check()
#q5.hint()
#q5.solution()
###Output
_____no_output_____
###Markdown
Are there significant differences in the average scores assigned by the various reviewers? Run the cell below to use the `describe()` method to see a summary of the range of values.
###Code
reviewer_mean_ratings.describe()
###Output
_____no_output_____
###Markdown
6.What combination of countries and varieties are most common? Create a `Series` whose index is a `MultiIndex`of `{country, variety}` pairs. For example, a pinot noir produced in the US should map to `{"US", "Pinot Noir"}`. Sort the values in the `Series` in descending order based on wine count.
###Code
country_variety_counts = reviews.groupby(['country', 'variety']).size().sort_values(ascending=False)
# Check your answer
q6.check()
#q6.hint()
#q6.solution()
###Output
_____no_output_____ |
notebooks/week2_recap.ipynb | ###Markdown
Assignment1. Write python commands using pandas to learn how to output tables as follows: - Read the dataset `metabric_clinical_and_expression_data.csv` and store its summary statistics into a new variable called `metabric_summary`. - Just like the `.read_csv()` method allows reading data from a file, `pandas` provides a `.to_csv()` method to write `DataFrames` to files. Write your summary statistics object into a file called `metabric_summary.csv`. You can use `help(metabric.to_csv)` to get information on how to use this function. - Use the help information to modify the previous step so that you can generate a Tab Separated Value (TSV) file instead - Similarly, explore the method `to_excel()` to output an excel spreadsheet containing summary statistics
###Code
# Load library
import pandas as pd
# Read metabric dataset
metabric = pd.read_csv("../data/metabric_clinical_and_expression_data.csv")
# Store summary statistics
metabric_summary = metabric.describe()
metabric_summary
# Write summary statistics in csv and tsv
#help(metabric.to_csv)
metabric_summary.to_csv("~/Desktop/metabric_summary.csv")
metabric_summary.to_csv("~/Desktop/metabric_summary.tsv", sep = '\t')
#metabric_summary.to_csv("~/Desktop/metabric_summary.csv", columns = ["Cohort", "Age_at_diagnosis"])
#metabric_summary.to_csv("~/Desktop/metabric_summary.csv", header = False)
#metabric_summary.to_csv("~/Desktop/metabric_summary.csv", index = False)
# Write an excel spreadsheet
#help(metabric.to_excel)
metabric_summary.to_excel("~/Desktop/metabric_summary.xlsx")
#If: ModuleNotFoundError: No module named 'openpyxl'
#pip3 install openpyxl OR conda install openpyxl
###Output
_____no_output_____
###Markdown
2. Write python commands to perform basic statistics in the metabric dataset and answer the following questions: - Read the dataset `metabric_clinical_and_expression_data.csv` into a variable e.g. `metabric`. - Calculate mean tumour size of patients grouped by vital status and tumour stage - Find the cohort of patients and tumour stage where the average expression of genes TP53 and FOXA1 is the highest - Do patients with greater tumour size live longer? How about patients with greater tumour stage? How about greater Nottingham_prognostic_index?
###Code
# Calculate the mean tumour size of patients grouped by vital status and tumour stage
import pandas as pd
metabric = pd.read_csv("../data/metabric_clinical_and_expression_data.csv")
#help(metabric.groupby)
#metabric.groupby(['Vital_status', 'Tumour_stage']).mean()
#metabric.groupby(['Vital_status', 'Tumour_stage']).mean()[['Tumour_size', 'Survival_time']]
#metabric.groupby(['Vital_status', 'Tumour_stage']).size()
#metabric.groupby(['Vital_status', 'Tumour_stage']).agg(['mean', 'size'])
metabric.groupby(['Vital_status', 'Tumour_stage']).agg(['mean', 'size'])['Tumour_size']
# Find the cohort of patients and tumour stage where the average expression of genes TP53 and FOXA1 is highest
#metabric[['TP53', 'FOXA1']]
metabric['TP53_FOXA1_mean'] = metabric[['TP53', 'FOXA1']].mean(axis=1)  # column needed by the groupby below
#metabric.groupby(['Cohort', 'Tumour_stage']).agg(['mean', 'size'])['TP53_FOXA1_mean']
#metabric.groupby(['Cohort', 'Tumour_stage']).agg(['mean', 'size'])['TP53_FOXA1_mean'].sort_values('mean', ascending=False)
metabric.groupby(['Cohort', 'Tumour_stage']).agg(['mean', 'size'])['TP53_FOXA1_mean'].sort_values('mean', ascending=False).head(1)
# Do patients with greater tumour size live longer?
metabric_dead = metabric[metabric['Vital_status'] == 'Died of Disease']
#metabric_dead[['Tumour_size', 'Survival_time']]
#metabric_dead['Tumour_size'].corr(metabric_dead['Survival_time'])
help(metabric_dead['Tumour_size'].corr)
# How about patients with greater tumour stage?
#metabric_dead[['Tumour_stage', 'Survival_time']]
#metabric_dead[['Tumour_stage', 'Survival_time']].groupby('Tumour_stage').agg(['mean', 'std', 'size'])
metabric_dead['Tumour_stage'].corr(metabric_dead['Survival_time'])
# How about greater Nottingham_prognostic_index?
# https://en.wikipedia.org/wiki/Nottingham_Prognostic_Index
#metabric_dead[['Nottingham_prognostic_index', 'Survival_time']]
metabric_dead['Nottingham_prognostic_index'].corr(metabric_dead['Survival_time'])
###Output
_____no_output_____
###Markdown
3. Review the section on missing data presented in the lecture. Consulting the [user's guide section dedicated to missing data](https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html) and any other materials as necessary use the functionality provided by pandas to answer the following questions: - Which variables (columns) of the metabric dataset have missing data? - Find the patients ids who have missing tumour size and/or missing mutation count data. Which cohorts do they belong to? - For the patients identified to have missing tumour size data for each cohort, calculate the average tumour size of the patients with tumour size data available within the same cohort to fill in the missing data
###Code
# Which variables (columns) of the metabric dataset have missing data?
metabric.info()
# Find the patients ids who have missing tumour size and/or missing mutation count data. Which cohorts do they belong to?
#metabric[metabric['Tumour_size'].isna()]
metabric[metabric['Tumour_size'].isna()]['Cohort'].unique()
#metabric[metabric['Mutation_count'].isna()]
#metabric[metabric['Mutation_count'].isna()]['Cohort'].unique()
metabric[(metabric['Tumour_size'].isna()) & (metabric['Mutation_count'].isna())]
metabric[(metabric['Tumour_size'].isna()) | (metabric['Mutation_count'].isna())]
# For the patients identified to have missing tumour size data for each cohort, calculate the average tumour size of the patients with tumour size data available within the same cohort to fill in the missing data
# Cohort 1
metabric_c1 = metabric[metabric['Cohort'] == 1]
#metabric_c1
#metabric_c1[metabric_c1['Tumour_size'].isna()]
#metabric_c1[metabric_c1['Tumour_size'].notna()]['Tumour_size'].mean()
mean_c1 = round(metabric_c1[metabric_c1['Tumour_size'].notna()]['Tumour_size'].mean(),1)
#mean_c1
metabric_c1 = metabric_c1.fillna(value={'Tumour_size': mean_c1})
metabric_c1[metabric_c1['Patient_ID'].isin(["MB-0259", "MB-0284", "MB-0522"])]
# Cohort 3
#metabric_c3 = metabric[metabric['Cohort'] == 3]
#metabric_c3[metabric_c3['Tumour_size'].isna()]
#mean_c3 = round(metabric_c3[metabric_c3['Tumour_size'].notna()]['Tumour_size'].mean(),1)
#metabric_c3 = metabric_c3.fillna(value={'Tumour_size': mean_c3})
# Cohort 5
#metabric_c5 = metabric[metabric['Cohort'] == 5]
#metabric_c5[metabric_c5['Tumour_size'].isna()]
#mean_c5 = round(metabric_c5[metabric_c5['Tumour_size'].notna()]['Tumour_size'].mean(),1)
#metabric_c5 = metabric_c5.fillna(value={'Tumour_size': mean_c5})
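# A more compact alternative (optional sketch): groupby + transform fills each
# missing Tumour_size with its own cohort's rounded mean in a single step,
# instead of repeating the steps above for cohorts 1, 3 and 5 separately.
#cohort_means = metabric.groupby('Cohort')['Tumour_size'].transform('mean').round(1)
#metabric['Tumour_size_filled'] = metabric['Tumour_size'].fillna(cohort_means)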
###Output
_____no_output_____ |
python/demo1/scipy-advanced-tutorial-master/Part1/MyFirstExtension.ipynb | ###Markdown
Reminder of what you should do:- open the developer panel- go to the console- enter the following```IPython.notebook.config.update({ "load_extensions": {"hello-scipy":true}})```
###Code
# you will probably need this line to be sure your extension works
import datetime as d
d.datetime.now()
###Output
_____no_output_____ |
WeatherPy/Resources/WeatherPy_Starter.ipynb | ###Markdown
WeatherPy---- Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
# Import API key
from api_keys import api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
###Output
_____no_output_____
###Markdown
Generate Cities List
###Code
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(low=-90.000, high=90.000, size=1500)
lngs = np.random.uniform(low=-180.000, high=180.000, size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
    # If the city is unique, then add it to our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
len(cities)
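# ------------------------------------------------------------------
# Sketch of the next step: query the current-weather endpoint for each
# city. The URL, parameter names and response fields below assume the
# standard OpenWeatherMap v2.5 API; adjust them to your course
# instructions if they differ.
# ------------------------------------------------------------------
base_url = "http://api.openweathermap.org/data/2.5/weather"
city_records = []
for city in cities[:10]:  # limit to a handful of cities while testing
    params = {"q": city, "appid": api_key, "units": "imperial"}
    response = requests.get(base_url, params=params)
    if response.status_code != 200:
        continue  # skip cities the API cannot find
    data = response.json()
    city_records.append({
        "City": city,
        "Lat": data["coord"]["lat"],
        "Lng": data["coord"]["lon"],
        "Max Temp": data["main"]["temp_max"],
        "Humidity": data["main"]["humidity"],
        "Cloudiness": data["clouds"]["all"],
        "Wind Speed": data["wind"]["speed"],
    })
    time.sleep(1)  # stay well under the API rate limit
print(f"Retrieved weather for {len(city_records)} cities")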
###Output
_____no_output_____ |
c02_experimentos_textos.ipynb | ###Markdown
Experiments with the text data
###Code
import datetime
import re
import json
import yaml
import sys
import os
import logging
import logging.config
import time
import multiprocessing
from collections import OrderedDict
import requests
import sqlalchemy
import string
import unicodedata
import yaml
import warnings
warnings.filterwarnings('ignore')
########################################
# external libs
########################################
import joblib
from joblib import delayed, Parallel
########################################
# ml
########################################
from lightgbm import LGBMClassifier
import pandas as pd
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from scipy.sparse import issparse
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.model_selection import train_test_split
from sklearn.metrics import (
make_scorer,
accuracy_score,
balanced_accuracy_score,
average_precision_score,
brier_score_loss,
f1_score,
log_loss,
precision_score,
recall_score,
jaccard_score,
roc_auc_score,
classification_report,
confusion_matrix,
roc_curve,
auc,
precision_recall_curve,
)
from sklearn.utils import resample
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, BaggingClassifier, GradientBoostingClassifier, ExtraTreesClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.naive_bayes import GaussianNB, BernoulliNB, MultinomialNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
from sklearn.svm import SVC, LinearSVC, NuSVC
from sklearn.neural_network import MLPClassifier
from sklearn.feature_selection import SelectPercentile, VarianceThreshold, SelectFromModel
from sklearn.model_selection import GridSearchCV, cross_val_score, cross_validate, RepeatedStratifiedKFold
from sklearn.calibration import CalibratedClassifierCV
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import RobustScaler, StandardScaler, MinMaxScaler, Binarizer
from sklearn.decomposition import LatentDirichletAllocation, TruncatedSVD, PCA
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline, FeatureUnion
from lightgbm import LGBMClassifier
import xgboost as xgb
from xgboost import XGBClassifier
#################################
# GLOBAL VARIABLES
#################################
N_JOBS = -1
BASE_DIR = './'
DEFAULT_RANDOM_STATE = 42
#################################
# LOGS
#################################
with open(os.path.join(BASE_DIR, 'log.conf.yaml'), 'r') as f:
config = yaml.safe_load(f.read())
logging.config.dictConfig(config)
class TextCleaner(BaseEstimator, TransformerMixin):
stop_words = ['de', 'a', 'o', 'que', 'e', 'do', 'da', 'em', 'um', 'para', 'é', 'com', 'não', 'uma', 'os', 'no',
'se', 'na', 'por', 'mais', 'as', 'dos', 'como', 'mas', 'foi', 'ao', 'ele', 'das', 'tem', 'à', 'seu',
'sua', 'ou', 'ser', 'quando', 'muito', 'há', 'nos', 'já', 'está', 'eu', 'também', 'só', 'pelo',
'pela', 'até', 'isso', 'ela', 'entre', 'era', 'depois', 'sem', 'mesmo', 'aos', 'ter', 'seus', 'quem',
'nas', 'me', 'esse', 'eles', 'estão', 'você', 'tinha', 'foram', 'essa', 'num', 'nem', 'suas', 'meu',
'às', 'minha', 'têm', 'numa', 'pelos', 'elas', 'havia', 'seja', 'qual', 'será', 'nós', 'tenho', 'lhe',
'deles', 'essas', 'esses', 'pelas', 'este', 'fosse', 'dele', 'tu', 'te', 'vocês', 'vos', 'lhes',
'meus', 'minhas', 'teu', 'tua', 'teus', 'tuas', 'nosso', 'nossa', 'nossos', 'nossas', 'dela', 'delas',
'esta', 'estes', 'estas', 'aquele', 'aquela', 'aqueles', 'aquelas', 'isto', 'aquilo', 'estou', 'está',
'estamos', 'estão', 'estive', 'esteve', 'estivemos', 'estiveram', 'estava', 'estávamos', 'estavam',
'estivera', 'estivéramos', 'esteja', 'estejamos', 'estejam', 'estivesse', 'estivéssemos',
'estivessem', 'estiver', 'estivermos', 'estiverem', 'hei', 'há', 'havemos', 'hão', 'houve',
'houvemos', 'houveram', 'houvera', 'houvéramos', 'haja', 'hajamos', 'hajam', 'houvesse',
'houvéssemos', 'houvessem', 'houver', 'houvermos', 'houverem', 'houverei', 'houverá', 'houveremos',
'houverão', 'houveria', 'houveríamos', 'houveriam', 'sou', 'somos', 'são', 'era', 'éramos', 'eram',
'fui', 'foi', 'fomos', 'foram', 'fora', 'fôramos', 'seja', 'sejamos', 'sejam', 'fosse', 'fôssemos',
'fossem', 'for', 'formos', 'forem', 'serei', 'será', 'seremos', 'serão', 'seria', 'seríamos',
'seriam', 'tenho', 'tem', 'temos', 'tém', 'tinha', 'tínhamos', 'tinham', 'tive', 'teve', 'tivemos',
'tiveram', 'tivera', 'tivéramos', 'tenha', 'tenhamos', 'tenham', 'tivesse', 'tivéssemos', 'tivessem',
'tiver', 'tivermos', 'tiverem', 'terei', 'terá', 'teremos', 'terão', 'teria', 'teríamos', 'teriam']
def __init__(self, n_jobs=1):
self.n_jobs = n_jobs
def fit(self, X, y=None):
return self
def transform(self, X, y=None):
sX = pd.Series(X)
def tratar_texto(t):
if not t:
return ''
if type(t) != str:
t = str(t)
t = t.replace('\\n', ' ')
t = t.replace('null', ' ')
t = t.lower()
regex = re.compile('[' + re.escape(string.punctuation) + '\\r\\t\\n]')
t = regex.sub(" ", t)
lista = t.split()
            # remove stopwords and punctuation marks
            lista = [palavra for palavra in lista if palavra not in self.stop_words and palavra not in string.punctuation]
            # remove digits
            lista = ' '.join([str(elemento) for elemento in lista if not elemento.isdigit()])
            lista = lista.replace('\n', ' ').replace('\r', ' ')
            lista = lista.replace(' o ', ' ').replace(' a ', ' ').replace(' os ', ' ').replace(' as ', ' ') # remove the articles o, a, os, as that still remained in the text
            lista = re.sub(r" +", ' ', lista) # remove excess whitespace
            nfkd = unicodedata.normalize('NFKD', lista)
            lista = u"".join([c for c in nfkd if not unicodedata.combining(c)]) # remove accents
return lista
def tratar_serie(s):
return s.apply(tratar_texto)
split = np.array_split(sX, self.n_jobs)
r = Parallel(n_jobs=self.n_jobs, verbose=0)(delayed(tratar_serie)(s) for s in split)
return pd.concat(r)
class FeatureSelector(BaseEstimator, TransformerMixin):
def __init__(self, feature_names, default_value=0):
self.feature_names = feature_names
self.default_value = default_value
def fit(self, X, y=None):
return self
def transform(self, X, y=None):
        # TODO: include a check for missing columns
X = X.copy()
for c in self.feature_names:
if c not in X.columns:
X[c] = self.default_value
return X[self.feature_names]
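# ------------------------------------------------------------------
# Minimal usage sketch for the two custom transformers above (the toy
# strings and column names are illustrative, not project data).
# ------------------------------------------------------------------
_demo_text = pd.Series(['Exemplo de manifestação, com acentuação e 123 números!', None])
print(TextCleaner(n_jobs=1).fit_transform(_demo_text).tolist())
_demo_df = pd.DataFrame({'a': [1, 2]})
print(FeatureSelector(feature_names=['a', 'b'], default_value=0).fit_transform(_demo_df))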
###Output
_____no_output_____
###Markdown
Text Dataset
###Code
# # keep only manifestations registered up to May in the dataset
# sdp = sqlalchemy.create_engine('mssql+pyodbc://xxxxxxxxxxx')
# query = """
# SELECT [IdManifestacao]
# FROM [db_denuncias_origem].[eouv].[Manifestacao]
# WHERE [DatRegistro] < convert(datetime, '01/06/2021', 103)
# ORDER BY [DatRegistro] desc
# """
# df_datas = pd.read_sql(query, sdp)
# # non-anonymized dataset
# # not distributed with the source code
# # due to institutional confidentiality restrictions
# df = pd.read_parquet('datasets/df_treinamento_faro.parquet')
# df = df[df['IdManifestacao'].isin(df_datas['IdManifestacao'])].copy()
# df['TextoCompleto'] = df['TxtFatoManifestacao'].str.cat(df['TextoAnexo'], sep=' ').astype(str)
# X_txt, y_txt = df[['TextoCompleto']], df['GrauAptidao'].apply(lambda x: 1 if x > 50 else 0)
# %%time
# # text preprocessing (stopword removal, accent removal, punctuation removal, digit removal, lowercasing of all characters)
# X_txt = TextCleaner(n_jobs=N_JOBS).fit_transform(X_txt['TextoCompleto']).to_frame()
# %%time
# # split the dataset into train and test sets
# X_txt_train, X_txt_test, y_txt_train, y_txt_test = train_test_split(X_txt, y_txt, test_size=.2, random_state=DEFAULT_RANDOM_STATE, stratify=y_txt)
# # use tf-idf to learn the vocabulary and weights from the training data
# tf_idf = TfidfVectorizer(min_df=5, max_df=.5, max_features=2000)
# tf_idf.fit(X_txt_train['TextoCompleto'])
# transform the training and test datasets
# X_txt_train_idf = tf_idf.transform(X_txt_train['TextoCompleto'])
# X_txt_test_idf = tf_idf.transform(X_txt_test['TextoCompleto'])
# X_txt_train.index
# X_txt_train_idf.todense().shape
# df_tmp = pd.DataFrame(X_txt_train_idf.todense(),
# index=X_txt_train.index,
# columns=[f'{i:>04}' for i in range(0, X_txt_train_idf.shape[1])])
# df_tmp['LABEL'] = y_txt_train
# df_tmp.to_parquet('datasets/df_train_txt.parquet')
# df_tmp = pd.DataFrame(X_txt_test_idf.todense(),
# index=X_txt_test.index,
# columns=[f'{i:>04}' for i in range(0, X_txt_test_idf.shape[1])])
# df_tmp['LABEL'] = y_txt_test
# df_tmp.to_parquet('datasets/df_test_txt.parquet')
df_train = pd.read_parquet('datasets/df_train_txt.parquet')
X_txt_train_idf, y_txt_train = df_train.drop(columns=['LABEL']), df_train['LABEL']
df_test = pd.read_parquet('datasets/df_test_txt.parquet')
X_txt_test_idf, y_txt_test = df_test.drop(columns=['LABEL']), df_test['LABEL']
%%time
pipeline = Pipeline(steps=[
('model', RandomForestClassifier())
])
metrics = ['roc_auc','balanced_accuracy', 'average_precision', 'recall', 'accuracy', 'f1_macro','f1_weighted']
results = [
]
model = [
RandomForestClassifier,
LogisticRegression,
XGBClassifier,
KNeighborsClassifier,
BaggingClassifier,
ExtraTreesClassifier,
SGDClassifier,
SVC,
NuSVC,
LinearSVC,
BernoulliNB,
LGBMClassifier,
MLPClassifier,
AdaBoostClassifier,
]
N_ESTIMATORS_ITERATORS = 200
POS_WEIGHT = pd.Series(y_txt_train).value_counts()[0]/pd.Series(y_txt_train).value_counts()[1]
class_weight = {0: 1, 1: POS_WEIGHT}
params = [
{
'model__n_estimators': [N_ESTIMATORS_ITERATORS],
'model__max_depth': [5,7,9],
'model__min_samples_split': [2,3],
'model__min_samples_leaf': [1,2],
'model__class_weight': [class_weight],
'model__random_state': [DEFAULT_RANDOM_STATE],
'model__max_samples': [.8, 1],
},
{
'model__penalty' : ['l2'],
'model__C' : [1],
'model__solver' : ['liblinear'],
'model__random_state': [DEFAULT_RANDOM_STATE],
'model__class_weight': [class_weight],
},
{
'model__learning_rate': [0.01],
'model__n_estimators': [N_ESTIMATORS_ITERATORS],
'model__subsample' : [.8,.45],
'model__min_child_weight': [1],
'model__max_depth': [3,4,7],
'model__random_state': [DEFAULT_RANDOM_STATE],
'model__reg_lambda': [2],
'model__scale_pos_weight': [POS_WEIGHT]
},
{
'model__n_neighbors' : [5,7,9,11],
},
{
'model__n_estimators': [5],
'model__max_samples': [.8],
'model__random_state': [DEFAULT_RANDOM_STATE],
},
{
'model__n_estimators': [N_ESTIMATORS_ITERATORS],
'model__max_samples' : [.8],
'model__max_depth': [6,7],
'model__random_state': [DEFAULT_RANDOM_STATE],
'model__class_weight': [class_weight],
},
{
'model__random_state': [DEFAULT_RANDOM_STATE],
'model__class_weight': [class_weight],
},
{
'model__gamma': ['auto'],
'model__C': [0.5],
'model__random_state': [DEFAULT_RANDOM_STATE],
'model__class_weight': [class_weight],
},
{
'model__gamma': ['auto'],
'model__random_state': [DEFAULT_RANDOM_STATE],
'model__class_weight': [class_weight],
},
{
'model__random_state': [DEFAULT_RANDOM_STATE],
'model__class_weight': [class_weight],
},
{
},
{
'model__n_estimators': [N_ESTIMATORS_ITERATORS],
'model__subsample': [.6,.7,.8,1],
'model__random_state': [DEFAULT_RANDOM_STATE],
'model__class_weight': [class_weight],
},
{
'model__alpha': [1],
'model__max_iter': [50],
},
{
}
]
logging.info('Início')
for m, p in zip(model, params):
logging.info('Modelo: {}'.format(m.__name__))
p['model'] = [m()]
rskfcv = RepeatedStratifiedKFold(n_splits=10, n_repeats=1, random_state=DEFAULT_RANDOM_STATE)
cv = GridSearchCV(estimator=pipeline,param_grid=p, cv=rskfcv, n_jobs=N_JOBS, error_score=0, refit=True, scoring='roc_auc', verbose=1)
cv.fit(X_txt_train_idf, y_txt_train)
model = cv.best_estimator_
best_params = cv.best_params_
valores = cross_validate(m(**{k[7:]: v for k,v in best_params.items() if k.startswith('model__')}), X_txt_train_idf, y_txt_train, scoring=metrics, cv=rskfcv, verbose=1)
cv_scores = {k[5:]: np.mean(v) for k, v in valores.items() if k not in ['fit_time', 'score_time']}
linha = {
'Modelo': m.__name__,
'ScoreTreino': cv.score(X_txt_train_idf, y_txt_train),
'BestParams': best_params,
'RawScores': {k[5:]: v for k, v in valores.items() if k not in ['fit_time', 'score_time']}
}
linha.update(cv_scores)
results.append(linha)
logging.info('Fim')
df_results_txt = pd.DataFrame(results)
df_results_txt = df_results_txt.sort_values('roc_auc', ascending=False)
df_results_txt
metricas = ['roc_auc', 'average_precision', 'balanced_accuracy', 'f1_weighted']
matplotlib.rcParams.update({'font.size': 13})
fig, axis = plt.subplots(2,2, figsize=(14, 10), dpi=80)
axis = np.ravel(axis)
for i, m in enumerate(metricas):
df_score = pd.DataFrame({m: s for m, s in zip(df_results_txt['Modelo'], df_results_txt['RawScores'].apply(lambda x: x[m]))})
df_score = pd.melt(df_score, var_name='Modelo', value_name='Score')
sns.boxplot(x='Modelo', y='Score', data=df_score, color='#45B39D', linewidth=1, ax=axis[i])
axis[i].set_xlabel('Modelo')
axis[i].set_ylabel(f'Score ({m})')
axis[i].set_xticklabels(labels=df_score['Modelo'].drop_duplicates(), rotation=70, ha='right', fontsize=12)
axis[i].grid(which='major',linestyle='--', linewidth=0.5, )
plt.tight_layout()
# plt.savefig('./docs/tcc/fig_00500_comparacao_score_modelos_texto.png')
plt.show()
###Output
_____no_output_____
###Markdown
Hyperparameter Tuning
###Code
df_results_txt['BestParams'].iloc[0]
from skopt import forest_minimize
from sklearn.model_selection import RepeatedStratifiedKFold
def tune_lgbm(params):
logging.info(params)
n_estimators = params[0]
max_depth = params[1]
reg_lambda = params[2]
learning_rate = params[3]
subsample = params[4]
reg_alpha = params[5]
gamma = params[6]
# min_df = params[7]
# max_df = params[8]
# max_features = params[9]
# ngram_range = (1, params[10])
scale_pos_weight = y_txt_train.value_counts()[0]/y_txt_train.value_counts()[1]
model = XGBClassifier(base_score=None, colsample_bylevel=None,
colsample_bynode=None, colsample_bytree=None, gamma=gamma,
importance_type='gain', interaction_constraints=None,
learning_rate=learning_rate, max_delta_step=None, max_depth=max_depth,
n_estimators=n_estimators, n_jobs=None, num_parallel_tree=None,
random_state=DEFAULT_RANDOM_STATE, reg_alpha=reg_alpha, reg_lambda=reg_lambda,
scale_pos_weight=scale_pos_weight, subsample=subsample,
validate_parameters=None, verbosity=None)
rskfcv = RepeatedStratifiedKFold(n_splits=10, n_repeats=1, random_state=DEFAULT_RANDOM_STATE)
score = cross_val_score(model, X_txt_train_idf, y_txt_train, scoring='roc_auc', cv=rskfcv)
return -np.mean(score)
space = [
(100, 1000), # n_estimators
(1, 20), # max_depth
(0.01, 5.0), # reg_lambda
(0.0001, 0.03), # learning_rate
(0.4, 1.), # subsample
(0.01, 5.0), # reg_alpha
(0.01, 5.0), # gamma
# (2, 5), # min_df
# (0.5, 1.0), # max_df
# (100, 5000), # max_features
# (1, 2), # ngram_range
]
# adjust when running on the HPC
res = forest_minimize(tune_lgbm, space, random_state=DEFAULT_RANDOM_STATE, n_random_starts=20, n_calls=50, verbose=1)
res.x
params = [759, 4, 4.038900470867496, 0.022685086880347725, 0.7851344045803921, 0.1308550296924623, 0.022216673300650254, 5, 0.6568820361887584, 4358, 1]
params = [561, 8, 4.224223904903976, 0.02244487129310769, 0.7238152794334479, 2.937888316662603, 4.82662398324805, 5, 0.7713480415791243, 2162, 1]
params = res.x
scale_pos_weight = pd.Series(y_txt_train).value_counts()[0]/pd.Series(y_txt_train).value_counts()[1]
n_estimators = params[0]
max_depth = params[1]
reg_lambda = params[2]
learning_rate = params[3]
subsample = params[4]
reg_alpha = params[5]
gamma = params[6]
# min_df = params[7]
# max_df = params[8]
# max_features = params[9]
# ngram_range = (1, params[10])
# tfidf = TfidfVectorizer(min_df=min_df, max_df=max_df, max_features=max_features, ngram_range=ngram_range)
# tfidf.fit(X_txt_train['TextoCompleto'])
# X_txt_train_idf = tfidf.transform(X_txt_train['TextoCompleto'])
# X_txt_test_idf = tfidf.transform(X_txt_test['TextoCompleto'])
scale_pos_weight = y_txt_train.value_counts()[0]/y_txt_train.value_counts()[1]
model = XGBClassifier(base_score=None, colsample_bylevel=None,
colsample_bynode=None, colsample_bytree=None, gamma=gamma,
gpu_id=None, importance_type='gain', interaction_constraints=None,
learning_rate=learning_rate, max_delta_step=None, max_depth=max_depth,
n_estimators=n_estimators, n_jobs=None, num_parallel_tree=None,
random_state=DEFAULT_RANDOM_STATE, reg_alpha=reg_alpha, reg_lambda=reg_lambda,
scale_pos_weight=scale_pos_weight, subsample=subsample, tree_method=None,
validate_parameters=None, verbosity=None)
model.fit(X_txt_train_idf, y_txt_train)
p = model.predict(X_txt_test_idf)
balanced_accuracy_score(y_txt_test, model.predict(X_txt_test_idf) )
print(classification_report(y_txt_test, model.predict(X_txt_test_idf) ))
print(confusion_matrix(y_txt_test, model.predict(X_txt_test_idf) ))
balanced_accuracy_score(y_txt_test, p)
f1_score(y_txt_test, p)
recall_score(y_txt_test, p)
precision_score(y_txt_test, p)
accuracy_score(y_txt_test, p)
roc_auc_score(y_txt_test, model.predict_proba(X_txt_test_idf)[:, 1])
matplotlib.rcParams.update({'font.size': 12.5})
plt.figure(figsize=(14, 6), dpi=80)
# plt.title(' Curva Característica de Operação do Receptor (ROC)')
lr_fpr, lr_tpr, thresholds = roc_curve(y_txt_test.values, model.predict_proba(X_txt_test_idf)[:,1], drop_intermediate=False, pos_label=1)
plt.plot(lr_fpr, lr_tpr, label='XGBClassifier',color='#45B39D')
plt.plot([0, 1], [0,1], linestyle='--', label='Aleatório/Chute')
plt.xlabel('Taxa de Falsos Positivos (FPR)')
plt.ylabel('Taxa de Verdadeiros Positivos (TPR ou Recall)')
plt.legend()
plt.grid(which='major',linestyle='--', linewidth=0.5)
plt.tight_layout()
# plt.savefig('./docs/tcc/fig_00600_roc_auc_texto.png')
plt.show()
matplotlib.rcParams.update({'font.size': 12.5})
plt.figure(figsize=(14, 6), dpi=80)
plt.title('Curva Precisão / Revocação')
lr_precision, lr_recall, thresholds = precision_recall_curve(y_txt_test.values, model.predict_proba(X_txt_test_idf)[:,1], pos_label=1)
plt.plot(lr_recall, lr_precision, label='XGBClassifier', color='#45B39D')  # label matches the model evaluated above
plt.plot([0, 1], [0.5,0.5], linestyle='--', label='Aleatório/Chute')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.legend()
plt.grid(which='major',linestyle='--', linewidth=0.5)
plt.tight_layout()
# plt.savefig('./docs/tcc/fig_00610_pr_auc_dados_estr.png')
plt.show()
pr_auc_score = auc(lr_recall, lr_precision)
pr_auc_score
df_histograma = pd.Series(model.predict_proba(X_txt_test_idf)[:,1]).to_frame().rename(columns={0:'Score'})
df_histograma['Bins'] = pd.cut(df_histograma['Score'], bins=np.arange(0,1.05,0.05))
df_histograma['Y'] = y_txt_test.values
df_histograma['Acertos Thr 0.5'] = df_histograma.apply(lambda x: 1 if (1 if x['Score']>=.5 else 0)==x['Y'] else 0,axis=1)
df_histograma.head()
df_barplot = df_histograma[['Bins','Acertos Thr 0.5']].groupby(['Bins']).apply(lambda x: x['Acertos Thr 0.5'].sum()/x.shape[0]).fillna(0).to_frame().rename(columns={0: 'Acertos (%)'})
df_barplot['Contagem'] = df_histograma[['Bins','Acertos Thr 0.5']].groupby(['Bins']).count()
df_barplot = df_barplot.reset_index()
df_barplot['left'] = df_barplot['Bins'].apply(lambda x: x.left+0.025)
df_barplot
from matplotlib.colors import ListedColormap
from matplotlib.cm import ScalarMappable
N = 20
vals = np.ones((N, 4))
vals[:, 0] = np.linspace(.5,45/256, N)
vals[:, 1] = np.linspace(0, 179/256, N)
vals[:, 2] = np.linspace(0, 157/256, N)
newcmp = ListedColormap(vals)
matplotlib.rcParams.update({'font.size': 12.5})
plt.figure(figsize=(14, 6), dpi=80)
color='#45B39D'
scalarMappable = ScalarMappable(cmap=newcmp)
plt.bar(df_barplot['left'], df_barplot['Contagem'], width=0.05, color=scalarMappable.cmap(df_barplot['Acertos (%)']), alpha=1, linewidth=1, edgecolor='white')
colorbar = plt.colorbar(scalarMappable)
colorbar.set_label('Índice de Acertos na Faixa')
plt.xlim(0,1)
plt.grid(which='both',linestyle='--', linewidth=0.5)
plt.title('Histograma para os Scores dados pelo modelo')
plt.xlabel('Score')
plt.ylabel('Quantidade de Observações')
plt.tight_layout()
plt.xticks(ticks=np.arange(0,1.05, 0.05), rotation=90)
# plt.savefig('./docs/tcc/fig_00430_pos_prob_dados_estr.png')
plt.show()
###Output
_____no_output_____ |
test_scripts/test_cifar10.ipynb | ###Markdown
--- DkNN
###Code
layers = ['layer2']
with torch.no_grad():
dknn = DKNN(net, x_train, y_train, x_valid, y_valid, layers,
k=75, num_classes=10)
y_pred = dknn.classify(x_test)
(y_pred.argmax(1) == y_test.numpy()).sum() / y_test.size(0)
cred = dknn.credibility(y_pred)
plt.hist(cred)
correct = np.argmax(y_pred, 1) == y_test.numpy()
num_correct_by_cred = np.zeros((10, ))
num_cred = np.zeros((10, ))
for i in np.arange(10):
ind = (cred > i * 0.1) & (cred <= i* 0.1 + 0.1)
num_cred[i] = np.sum(ind)
num_correct_by_cred[i] = np.sum(correct[ind])
fig = plt.figure()
ax = fig.add_subplot(111)
ax.bar(np.arange(10) * 0.1, num_cred, width=0.05)
ax.bar(np.arange(10) * 0.1 + 0.05, num_correct_by_cred, width=0.05)
num_correct_by_cred / num_cred
###Output
/home/user/miniconda/envs/py36/lib/python3.6/site-packages/ipykernel_launcher.py:1: RuntimeWarning: invalid value encountered in true_divide
"""Entry point for launching an IPython kernel.
###Markdown
--- DkNN Attack
###Code
attack = DKNNAttack()
def attack_batch(x, y, batch_size):
x_adv = torch.zeros_like(x)
total_num = x.size(0)
num_batches = total_num // batch_size
for i in range(num_batches):
begin = i * batch_size
end = (i + 1) * batch_size
x_adv[begin:end] = attack(
dknn, x[begin:end], y_test[begin:end],
guide_layer=layers[0], m=100, binary_search_steps=5,
max_iterations=500, learning_rate=1e-1,
initial_const=1e2, abort_early=True)
return x_adv
x_adv = attack_batch(x_test[:1000].cuda(), y_test[:1000], 100)
with torch.no_grad():
y_pred = dknn.classify(x_adv)
print((y_pred.argmax(1) == y_test[:1000].numpy()).sum() / len(y_pred))
with torch.no_grad():
y_clean = dknn.classify(x_test[:1000])
ind = (y_clean.argmax(1) == y_test[:1000].numpy()) & (y_pred.argmax(1) != y_test[:1000].numpy())
dist = np.mean(np.sqrt(np.sum((x_adv.cpu().detach().numpy()[ind] - x_test.numpy()[:1000][ind])**2, (1, 2, 3))))
print(dist)
cred = dknn.credibility(y_pred[ind])
plt.hist(cred)
for i in range(5):
plt.imshow(x_adv[i].cpu().detach().permute(1, 2, 0).numpy(), cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
--- CW L2 on undefended network
###Code
attack = CWL2Attack()
def attack_batch(x, y, batch_size):
x_adv = torch.zeros_like(x)
total_num = x.size(0)
num_batches = total_num // batch_size
for i in range(num_batches):
begin = i * batch_size
end = (i + 1) * batch_size
x_adv[begin:end] = attack(
net, x[begin:end], y[begin:end], targeted=False,
binary_search_steps=5, max_iterations=500,
confidence=0, learning_rate=1e-1,
initial_const=1e-2, abort_early=True)
return x_adv
x_adv = attack_batch(x_test[:1000].cuda(), y_test[:1000].cuda(), 100)
with torch.no_grad():
y_pred = net(x_adv)
print((y_pred.argmax(1).cpu() == y_test[:1000]).numpy().sum() / y_pred.size(0))
with torch.no_grad():
y_clean = net(x_test[:1000].cuda())
ind = (y_clean.argmax(1).cpu() == y_test[:1000]).numpy() & (y_pred.argmax(1).cpu() != y_test[:1000]).numpy()
dist = np.mean(np.sqrt(np.sum((x_adv.cpu().detach().numpy()[ind] - x_test.numpy()[:1000][ind])**2, (1, 2, 3))))
print(dist)
for i in range(5):
plt.imshow(x_adv[i].cpu().detach().permute(1, 2, 0).numpy(), cmap='gray')
plt.show()
with torch.no_grad():
y_pred = dknn.classify(x_adv)
print((y_pred.argmax(1) == y_test[:1000].numpy()).sum() / len(y_pred))  # normalize by the 1000 attacked samples
###Output
0.0143
|
LinkedInLearning/MachineLearningWithScikitlearn/Pipeline.ipynb | ###Markdown
Reading the Data
###Code
df = pd.read_csv('../data/MNISTonly0_1.csv')
df.head()
###Output
_____no_output_____
###Markdown
Separating X and y
###Code
X = df.iloc[:,:-1]
y = df.iloc[:,-1]
###Output
_____no_output_____
###Markdown
Standardizing the data
###Code
X_train, X_test, y_train, y_test = train_test_split(X,y,random_state=42,test_size=0.2)
scaler = StandardScaler()
scaler.fit(X_train)  # fit the scaler on the training data only
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)  # reuse the training statistics to avoid data leakage
###Output
_____no_output_____
###Markdown
Without Pipeline
###Code
pca = PCA(n_components=.90, random_state=42)
pca.fit(X_train)
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform (X_test)
clf = LogisticRegression(random_state=42)
clf.fit(X_train_pca, y_train)
print(classification_report(y_true=y_train, y_pred=clf.predict(X_train_pca)))
pipe = Pipeline([('scaler', StandardScaler()),
('PCA', PCA(n_components=.90, random_state=42)),
('Logistic',LogisticRegression(random_state=42))])
pipe.fit(X_train, y_train)
print(f'Accuracy of pipeline on test set is {pipe.score(X_test, y_test):.3%}')
train_pred_pipe = pipe.predict(X_train)
test_pred_pipe = pipe.predict(X_test)
print('Classification report for train set using pipeline')
print(classification_report(y_true=y_train,y_pred=train_pred_pipe))
print('Classification report for test set using pipeline')
print(classification_report(y_true=y_test, y_pred=test_pred_pipe))
from sklearn import set_config
set_config(display='diagram')
pipe
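# ------------------------------------------------------------------
# Optional sketch: because every step lives inside one Pipeline object,
# its hyper-parameters can be tuned together with GridSearchCV via the
# "<step name>__<parameter>" convention (grid values are illustrative).
# ------------------------------------------------------------------
# from sklearn.model_selection import GridSearchCV
# param_grid = {
#     'PCA__n_components': [0.80, 0.90, 0.95],
#     'Logistic__C': [0.1, 1.0, 10.0],
# }
# search = GridSearchCV(pipe, param_grid, cv=5, n_jobs=-1)
# search.fit(X_train, y_train)
# print(search.best_params_, search.best_score_)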
###Output
_____no_output_____ |
publish/PLOT_QC_ROC.ipynb | ###Markdown
The ROC evaluation of QC experiments
###Code
import sys
from glob import glob
from datetime import datetime, timedelta
import h5py
import numpy as np
import pandas as pd
from scipy import interp
from sklearn.utils import resample
from sklearn.metrics import classification_report, confusion_matrix, roc_curve, auc
save_dir = '/glade/work/ksha/data/Keras/QC_publish/'
eval_dir = '/glade/work/ksha/data/evaluation/'
sys.path.insert(0, '/glade/u/home/ksha/WORKSPACE/utils/')
sys.path.insert(0, '/glade/u/home/ksha/WORKSPACE/QC_OBS/')
from namelist import *
import data_utils as du
import graph_utils as gu
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import matplotlib.gridspec as gridspec
from mpl_toolkits.axes_grid1.inset_locator import inset_axes, zoomed_inset_axes, mark_inset
%matplotlib inline
REDs = []
REDs.append(gu.xcolor('light salmon'))
REDs.append(gu.xcolor('light coral'))
REDs.append(gu.xcolor('indian red'))
REDs.append(gu.xcolor('dark red'))
BLUEs = []
BLUEs.append(gu.xcolor('light blue'))
BLUEs.append(gu.xcolor('sky blue'))
BLUEs.append(gu.xcolor('royal blue'))
BLUEs.append(gu.xcolor('midnight blue'))
JET = []
JET.append(gu.xcolor('indian red'))
JET.append(gu.xcolor('gold'))
JET.append(gu.xcolor('dark sea green'))
JET.append(gu.xcolor('deep sky blue'))
JET.append(gu.xcolor('royal blue'))
JET = JET[::-1]
import warnings
warnings.filterwarnings("ignore")
need_publish = False
# True: publication quality figures
# False: low resolution figures in the notebook
if need_publish:
dpi_ = fig_keys['dpi']
else:
dpi_ = 75
###Output
_____no_output_____
###Markdown
ROC and AUC bootstrapping
###Code
# bootstrapped ROC curve
def ROC_range(FP_boost, TP_boost, N):
FP_base = np.linspace(0, 1, N)
TP_base = np.empty((TP_boost.shape[0], N))
for i in range(TP_boost.shape[0]):
TP_base[i, :] = interp(FP_base, FP_boost[i, :], TP_boost[i, :])
TP_std = np.nanstd(TP_base, axis=0)
TP_mean = np.nanmean(TP_base, axis=0)
TP_upper = np.minimum(TP_mean + 3*TP_std, 1)
TP_lower = np.maximum(TP_mean - 3*TP_std, 0)
return FP_base, TP_mean, TP_lower, TP_upper
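# Usage sketch (the bootstrapped FP/TP arrays are loaded further below and are
# shaped (n_bootstrap, n_points)):
# FP_base, TP_mean, TP_lower, TP_upper = ROC_range(FP_boost, TP_boost, N=1000)
# plt.plot(FP_base, TP_mean)                      # mean bootstrapped ROC
# plt.fill_between(FP_base, TP_lower, TP_upper)   # +/- 3 std envelope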
###Output
_____no_output_____
###Markdown
AUC results (no bootstrap)
###Code
# # Importing results
data_temp = np.load(eval_dir+'EVAL_QC_members.npy', allow_pickle=True)
AUC_elev_eval = data_temp[()]['AUC']
# cate_train = data_temp[()]['cate_train']
# cate_valid = data_temp[()]['cate_valid']
# cate_test = data_temp[()]['cate_test']
# REPORT = data_temp[()]['REPORT']
# TP = data_temp[()]['TP']
# FP = data_temp[()]['FP']
# names = list(REPORT.keys())
data_temp = np.load(eval_dir+'EVAL_QC_noelev_members.npy', allow_pickle=True)
AUC_noelev_eval = data_temp[()]['AUC']
data_temp = np.load(eval_dir+'EVAL_QC_MLP_members.npy', allow_pickle=True)
AUC_mlp_eval = data_temp[()]['AUC']
###Output
_____no_output_____
###Markdown
Bootstrap results
###Code
with h5py.File(BACKUP_dir+'HIGH_CAPA_TEST_pack.hdf', 'r') as h5io:
cate_out = h5io['cate_out'][...]
with h5py.File(eval_dir+'EVAL_QC_MLP_boost.hdf', 'r') as h5io:
cate_boost_mlp = h5io['cate_boost'][...]
FP_mlp = h5io['FP_boost'][...]
TP_mlp = h5io['TP_boost'][...]
AUC_mlp = h5io['AUC_boost'][...]
with h5py.File(eval_dir+'EVAL_QC_noelev_boost.hdf', 'r') as h5io:
FP_noelev = h5io['FP_boost'][...]
TP_noelev = h5io['TP_boost'][...]
AUC_noelev = h5io['AUC_boost'][...]
with h5py.File(eval_dir+'EVAL_QC_boost.hdf', 'r') as h5io:
cate_p = h5io['cate_p'][...]
cate_boost = h5io['cate_boost'][...]
FP_elev = h5io['FP_boost'][...]
TP_elev = h5io['TP_boost'][...]
AUC_elev = h5io['AUC_boost'][...]
temp_data = np.load(eval_dir+'ENS_boost.npy', allow_pickle=True)
TP_ens = temp_data[()]['TP']
FP_ens = temp_data[()]['FP']
AUC_ens = temp_data[()]['AUC']
###Output
_____no_output_____
###Markdown
Extra decision tree results
###Code
data_temp = np.load(eval_dir+'EVAL_QC_TREE_members.npy', allow_pickle=True)
cate_tree = data_temp[()]['cate_test']
auc_tree = data_temp[()]['AUC']
AUC_tree = np.empty(AUC_mlp.shape)
FP_tree = np.empty(FP_mlp.shape); FP_tree[...] = np.nan
TP_tree = np.empty(TP_mlp.shape); TP_tree[...] = np.nan
inds = np.arange(len(cate_out), dtype=np.int)
for i in range(5):
for j in range(200):
inds_ = resample(inds)
fpr_, tpr_, _ = roc_curve(cate_out[inds_], cate_tree[inds_, i])
AUC_tree[i, j] = auc(fpr_, tpr_)
L = len(fpr_)
FP_tree[i, j, :L] = fpr_
TP_tree[i, j, :L] = tpr_
###Output
_____no_output_____
###Markdown
Plot
###Code
ens = 5
labels = ['10 km', '15 km', '22 km', '30 km', '38 km']
#BINS = np.linspace(np.min(AUC_mlp), np.max(AUC_elev), 100)
BINS1 = np.arange(np.min(AUC_mlp), np.max(AUC_mlp)+0.001, 0.001)
BINS2 = np.arange(np.min(AUC_tree), np.max(AUC_tree)+0.001, 0.001)
BINS3 = np.arange(np.min(AUC_noelev), np.max(AUC_noelev)+0.001, 0.001)
BINS4 = np.arange(np.min(AUC_elev), np.max(AUC_ens)+0.001, 0.001)
fig = plt.figure(figsize=(13, 7))
ax1 = plt.subplot2grid((4, 2), (0, 0), rowspan=4)
ax2 = plt.subplot2grid((4, 2), (0, 1))
ax3 = plt.subplot2grid((4, 2), (1, 1))
ax4 = plt.subplot2grid((4, 2), (2, 1))
ax5 = plt.subplot2grid((4, 2), (3, 1))
AX_hist = [ax2, ax3, ax4, ax5]
ax1 = gu.ax_decorate(ax1, True, True)
ax1.spines["bottom"].set_visible(True)
ax1.grid(False)
for ax in AX_hist:
ax = gu.ax_decorate(ax, True, True)
ax.grid(False)
ax.spines["bottom"].set_visible(True)
ax.spines["left"].set_visible(False)
ax.spines["right"].set_visible(True)
ax.tick_params(axis="both", which="both",
bottom=False, top=False, labelbottom=True,
left=False, labelleft=False, right=False, labelright=True)
ax.set_ylim([0, 45])
ax2.text(0.975, 0.95, 'Bin counts', ha='right', va='center', transform=ax2.transAxes, fontsize=14)
ax3.yaxis.set_label_position("right")
ax5.set_xticks([0.89, 0.90, 0.91, 0.92, 0.93])
ax5.set_xlabel('AUC', fontsize=14)
ax1.set_xlabel('False Positive Rate (FPR)', fontsize=14)
ax1.text(0.02, 0.98, 'True Positive Rate (TPR)', ha='left', va='center', transform=ax1.transAxes, fontsize=14)
ax1_sub = zoomed_inset_axes(ax1, 1.5, loc=5)
ax1_sub = gu.ax_decorate(ax1_sub, False, False)
[j.set_linewidth(2.5) for j in ax1_sub.spines.values()]
ax1_sub.spines["left"].set_visible(True)
ax1_sub.spines["right"].set_visible(True)
ax1_sub.spines["bottom"].set_visible(True)
ax1_sub.spines["top"].set_visible(True)
ax1_sub.set_xlim([0.025, 0.4])
ax1_sub.set_ylim([0.6, 0.975])
ax1_sub.grid(False)
ind = 1
FP_base, TP_base, TP_lower, TP_upper = ROC_range(FP_mlp[ind, ...], TP_mlp[ind, ...], 1000)
ax1.fill_between(FP_base, TP_lower, TP_upper, color=JET[0], alpha=0.75)
ax1.plot(FP_base, TP_base, lw=3, color='k')
ax1_sub.fill_between(FP_base, TP_lower, TP_upper, color=JET[0], alpha=0.75)
ax1_sub.plot(FP_base, TP_base, lw=3, color='k')
ind = 1
FP_base, TP_base, TP_lower, TP_upper = ROC_range(FP_tree[ind, ...], TP_tree[ind, ...], 1000)
ax1.fill_between(FP_base, TP_lower, TP_upper, color=JET[1], alpha=0.75)
ax1.plot(FP_base, TP_base, lw=3, ls=':', color='k')
ax1_sub.fill_between(FP_base, TP_lower, TP_upper, color=JET[1], alpha=0.75)
ax1_sub.plot(FP_base, TP_base, lw=3, ls=':', color='k')
ind = 2
FP_base, TP_base, TP_lower, TP_upper = ROC_range(FP_noelev[ind, ...], TP_noelev[ind, ...], 1000)
ax1.fill_between(FP_base, TP_lower, TP_upper, color=JET[2], alpha=0.75)
ax1.plot(FP_base, TP_base, lw=3, color='k')
ax1_sub.fill_between(FP_base, TP_lower, TP_upper, color=JET[2], alpha=0.75)
ax1_sub.plot(FP_base, TP_base, lw=3, color='k')
ind = 0
FP_base, TP_base, TP_lower, TP_upper = ROC_range(FP_elev[ind, ...], TP_elev[ind, ...], 1000)
ax1.fill_between(FP_base, TP_lower, TP_upper, color=JET[1], alpha=0.75)
ax1.plot(FP_base, TP_base, lw=3, color='k')
ax1_sub.fill_between(FP_base, TP_lower, TP_upper, color=JET[1], alpha=0.75)
ax1_sub.plot(FP_base, TP_base, lw=3, color='k')
FP_base, TP_base, TP_lower, TP_upper = ROC_range(FP_ens, TP_ens, 1000)
ax1.fill_between(FP_base, TP_lower, TP_upper, color='0.5', alpha=0.75)
ax1.plot(FP_base, TP_base, lw=3, color='k')
ax1_sub.fill_between(FP_base, TP_lower, TP_upper, color='0.5', alpha=0.75)
ax1_sub.plot(FP_base, TP_base, lw=3, color='k')
mark_inset(ax1, ax1_sub, loc1=1, loc2=3, fc='none', ec='k', lw=2.5, ls='--')
legend_patch = [];
for i in range(ens):
std_auc = []
# MLP baseline
ax2.hist(AUC_mlp[ens-1-i, :], alpha=0.75, histtype='stepfilled', facecolor=JET[ens-1-i], bins=BINS1)
ax2.hist(AUC_mlp[ens-1-i, :], alpha=0.75, histtype='step', linewidth=2.5, edgecolor='k', bins=BINS1)
std_auc.append(np.around(1e3*np.std(AUC_mlp[ens-1-i, :]), 2))
# TREE baseline
ax3.hist(AUC_tree[ens-1-i, :], alpha=0.75, histtype='stepfilled', facecolor=JET[ens-1-i], bins=BINS2)
ax3.hist(AUC_tree[ens-1-i, :], alpha=0.75, histtype='step', linewidth=2.5, edgecolor='k', bins=BINS2)
std_auc.append(np.around(1e3*np.std(AUC_tree[ens-1-i, :]), 2))
# CNN baseline
ax4.hist(AUC_noelev[ens-1-i, :], alpha=0.75, histtype='stepfilled', facecolor=JET[ens-1-i], bins=BINS3)
ax4.hist(AUC_noelev[ens-1-i, :], alpha=0.75, histtype='step', linewidth=2.5, edgecolor='k', bins=BINS3)
std_auc.append(np.around(1e3*np.std(AUC_noelev[ens-1-i, :]), 2))
# CNN ours
ax5.hist(AUC_elev[ens-1-i, :], alpha=0.75, histtype='stepfilled', facecolor=JET[ens-1-i], bins=BINS4)
ax5.hist(AUC_elev[ens-1-i, :], alpha=0.75, histtype='step', linewidth=2.5, edgecolor='k', bins=BINS4)
# std_auc.append(np.around(1e3*np.std(AUC_elev[ens-1-i, :]), 2))
# legend_patch.append(mpatches.Patch(facecolor=JET[ens-1-i], edgecolor='k', linewidth=2.5, alpha=0.75,
# label='Classifier with {} input, std=({},{},{},{}) 1e-3'.format(
# labels[i], std_auc[0], std_auc[1], std_auc[2], std_auc[3])))
legend_patch.append(mpatches.Patch(facecolor=JET[ens-1-i], edgecolor='k', linewidth=2.5, alpha=0.75,
label='Classifier with {} grid spacing input'.format(labels[i])))
# (extra) classifier ensemble
ax5.hist(AUC_ens, alpha=0.75, histtype='stepfilled', facecolor='0.5', bins=BINS4)
ax5.hist(AUC_ens, alpha=0.75, histtype='step', linewidth=2.5, edgecolor='k', bins=BINS4)
# std_ens = np.around(1e3*np.std(AUC_ens), 2)
# legend_patch.append(mpatches.Patch(facecolor='0.5', edgecolor='k', linewidth=2.5, alpha=0.75,
# label='Classifier ensemble, std = {} 1e-3'.format(std_ens)))
legend_patch.append(mpatches.Patch(facecolor='0.5', edgecolor='k', linewidth=2.5, alpha=0.75,
label='Classifier ensemble'))
legend_roc = []
legend_roc.append(
mpatches.Patch(facecolor=JET[0], edgecolor='k', linewidth=2.5, alpha=0.75,
label='The best MLP baseline (35 km; AUC={0:0.3f})'.format(AUC_mlp_eval['ENS0'])))
legend_roc.append(
mpatches.Patch(facecolor=JET[1], hatch='.', edgecolor='k', linewidth=2.5, alpha=0.75,
label='The best decision tree baseline (35 km; AUC={0:0.3f})'.format(AUC_mlp_eval['ENS0'])))
legend_roc.append(
mpatches.Patch(facecolor=JET[2], edgecolor='k', linewidth=2.5, alpha=0.75,
label='The best CNN baseline (22 km; AUC={0:0.3f})'.format(AUC_noelev_eval['ENS2'])))
legend_roc.append(
mpatches.Patch(facecolor=JET[1], edgecolor='k', linewidth=2.5, alpha=0.75,
label='The best main classifier (30 km; AUC={0:0.3f})'.format(AUC_elev_eval['ENS1'])))
legend_roc.append(
mpatches.Patch(facecolor='0.5', edgecolor='k', linewidth=2.5, alpha=0.75,
label='Main classifier ensemble (AUC={0:0.3f})'.format(AUC_elev_eval['ENS'])))
ax_lg = fig.add_axes([0.0675, -0.075, 0.425, 0.1])
ax_lg.set_axis_off()
LG = ax_lg.legend(handles=legend_roc, bbox_to_anchor=(1, 1), ncol=1, prop={'size':14});
LG.get_frame().set_facecolor('white')
LG.get_frame().set_edgecolor('k')
LG.get_frame().set_linewidth(0)
ax_lg = fig.add_axes([0.5, -0.075, 0.375, 0.1])
ax_lg.set_axis_off()
LG = ax_lg.legend(handles=legend_patch, bbox_to_anchor=(1, 1), ncol=1, prop={'size':14});
LG.get_frame().set_facecolor('white')
LG.get_frame().set_edgecolor('k')
LG.get_frame().set_linewidth(0)
ax1.set_title('(a) ROCs of the best classifiers', fontsize=14)
ax2.set_title('(b) MLP baseline classifiers', fontsize=14)
ax3.set_title('(c) Decision tree baseline classifiers', fontsize=14)
ax4.set_title('(d) CNN baseline classifiers', fontsize=14)
ax5.set_title('(e) Main classifiers', fontsize=14)
plt.tight_layout()
if need_publish:
# Save figure
fig.savefig(fig_dir+'QC_ROC_Boostrap.png', format='png', **fig_keys)
###Output
_____no_output_____ |
mwdsbe/Notebooks/By Data/Professional_Services/Professional_Servies.ipynb | ###Markdown
Professional Services ContractsThe entire dataset for Professional Services Contracts by fiscal quarter - from 2013 Q4 to 2019 Q3
###Code
import mwdsbe
import schuylkill as skool
import pandas as pd
import glob
import time
###Output
_____no_output_____
###Markdown
Functions
###Code
def drop_duplicates_by_date(df, date_column):
df.sort_values(by=date_column, ascending=False, inplace=True)
df = df.loc[~df.index.duplicated(keep="first")]
df.sort_index(inplace=True)
return df
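# Usage sketch (mirrors how the helper is applied further below): keep only the
# most recent row per index value, e.g. the latest license record per company.
# matched_OL = drop_duplicates_by_date(matched_OL, date_column="issue_date")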
###Output
_____no_output_____
###Markdown
1. Only read vendor column from Professional ServicesIn order to have a sense of how many matches we get from Professional Services data Data
###Code
registry = mwdsbe.load_registry() # geopandas df
path = r'C:\Users\dabinlee\Documents\GitHub\mwdsbe\mwdsbe\data\professional_services'
ps_vendor = pd.concat([pd.read_csv(file, usecols=['vendor']) for file in glob.glob(path + "/*.csv")], ignore_index = True)
ps_vendor
ps_vendor = ps_vendor.drop_duplicates()
len(ps_vendor)
###Output
_____no_output_____
###Markdown
Clean Data
###Code
ignore_words = ['inc', 'group', 'llc', 'corp', 'pc', 'incorporated', 'ltd', 'co', 'associates', 'services', 'company', 'enterprises', 'enterprise', 'service', 'corporation']
cleaned_registry = skool.clean_strings(registry, ['company_name', 'dba_name'], True, ignore_words)
cleaned_ps_vendor = skool.clean_strings(ps_vendor, ['vendor'], True, ignore_words)
cleaned_registry = cleaned_registry.dropna(subset=['company_name'])
cleaned_ps_vendor = cleaned_ps_vendor.dropna(subset=['vendor'])
cleaned_ps_vendor = cleaned_ps_vendor.drop_duplicates()
len(cleaned_ps_vendor)
###Output
_____no_output_____
###Markdown
TF-IDF Merge Registry and Professional Serviceson company_name and vendor before full merge
###Code
t1 = time.time()
merged = (
skool.tf_idf_merge(cleaned_registry, cleaned_ps_vendor, left_on="company_name", right_on="vendor", score_cutoff=85)
.pipe(skool.tf_idf_merge, cleaned_registry, cleaned_ps_vendor, left_on="dba_name", right_on="vendor", score_cutoff=85)
)
t = time.time() - t1
print('Execution time:', t, 'sec')
len(merged)
matched_PS = merged.dropna(subset=['vendor'])
matched_PS
len(matched_PS)
###Output
_____no_output_____
###Markdown
New matches
###Code
matched_OL = pd.read_excel(r'C:\Users\dabinlee\Desktop\mwdsbe\data\license-opendataphilly\tf-idf\tf-idf-85.xlsx')
matched_OL = matched_OL.set_index('left_index')
matched_OL = drop_duplicates_by_date(matched_OL, "issue_date") # without duplicates
len(matched_OL)
new_matches = matched_PS.index.difference(matched_OL.index).tolist()
len(new_matches)
###Output
_____no_output_____
###Markdown
2. Load useful columns* vendor* tot_payments* department_name* year* fiscal quarter
###Code
all_files = glob.glob(path + "/*.csv")
li = []
for file in all_files:
# get vendor, tot_payments, and department_name from original data
df = pd.read_csv(file, usecols=['vendor', 'tot_payments', 'department_name'])
file_name = file.split('\\')[-1]
year = file_name.split('-')[1]
quarter = file_name.split('-')[2].split('.')[0]
df['fy_year'] = year
df['fy_quarter'] = quarter
li.append(df)
ps = pd.concat(li, ignore_index=False)
# save cleaned professional services
ps.to_excel (r'C:\Users\dabinlee\Desktop\mwdsbe\data\professional_services\cleaned_ps.xlsx', header=True, index=False)
ps = pd.read_excel(r'C:\Users\dabinlee\Desktop\mwdsbe\data\professional_services\cleaned_ps.xlsx')
ps
###Output
_____no_output_____
###Markdown
Full Merge with Registry* TF-IDF 85* on company_name and vendor
###Code
# clean ps vendor column
ignore_words = ['inc', 'group', 'llc', 'corp', 'pc', 'incorporated', 'ltd', 'co', 'associates', 'services', 'company', 'enterprises', 'enterprise', 'service', 'corporation']
cleaned_ps = skool.clean_strings(ps, ['vendor'], True, ignore_words)
cleaned_ps = cleaned_ps.dropna(subset=['vendor'])
###Output
_____no_output_____
###Markdown
keep duplicates: one vendor can have multiple payments
###Code
t1 = time.time()
merged = (
skool.tf_idf_merge(cleaned_registry, cleaned_ps, left_on="company_name", right_on="vendor", score_cutoff=85, max_matches = 100)
.pipe(skool.tf_idf_merge, cleaned_registry, cleaned_ps, left_on="dba_name", right_on="vendor", score_cutoff=85, max_matches = 100)
)
t = time.time() - t1
print('Execution time:', t, 'sec')
len(merged)
matched = merged.dropna(subset=['vendor'])
len(matched)
matched.head()
# save cleaned professional services
matched.to_excel (r'C:\Users\dabinlee\Desktop\mwdsbe\data\professional_services\matched.xlsx', header=True)
matched = pd.read_excel(r'C:\Users\dabinlee\Desktop\mwdsbe\data\professional_services\matched.xlsx')
matched.rename(columns={'Unnamed: 0': 'left_index'}, inplace=True)
matched.set_index('left_index', inplace=True)
matched
###Output
_____no_output_____ |
Assignement.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
CS6421 Final Project: Deep Learning**Implementing Real-World examples**This assignment is the core programming assignment for Deep Learning (CS6421). You will be given a tutorial introduction to the deep autoencoder, and will then need to use this model to solve two real-world problems:- text noise removal- pedestrian safety analysis on footpathsIf you follow the code provided you should be able to score well. To obtain top marks you will need to extend the provided code fragments with high-performing models that show what you have learned in the course.The marks for the project are as follows:- 1: extensions to basic autoencoder [10 marks]- 2: extensions to de-noising autoencoder [10 marks]- 3: text reconstruction model and results [40 marks] Please submit your work ideally in a clear Jupyter notebook, highlighting the code that you have written. Present the comparisons of different model performance results in clear tables. Alternatively, you can create a PDF document that summarises all of this. In any event, I will need a Jupyter notebook that I can run if I have any queries about your work. If I cannot compile any code submitted then you will get 0 for the results obtained for that code. For each assignment, the following text needs to be attached and agreed to: By submitting this exam, I declare(1) that all work of it is my own;(2) that I did not seek whole or partial solutions for any part of my submission from others; and(3) that I did not and will not discuss, exchange, share, or publish complete or partial solutions for this exam or any part of it. Introduction: Basic AutoencoderIn this assignment, we will create a **simple autoencoder** model using the [TensorFlow subclassing API](https://www.tensorflow.org/guide/kerasmodel_subclassing). We start with the popular [MNIST dataset](http://yann.lecun.com/exdb/mnist/) (grayscale images of hand-written digits from 0 to 9)._[This first section is based on a notebook originally contributed by: [afagarap](https://github.com/afagarap)]_"Autoencoding" is a data compression algorithm where the compression and decompression functions are 1) data-specific, 2) lossy, and 3) learned automatically from examples rather than engineered by a human. Additionally, in almost all contexts where the term "autoencoder" is used, the compression and decompression functions are implemented with neural networks. 1) Autoencoders are _data-specific_, which means that they will only be able to compress data similar to what they have been trained on. This is different from, say, the MPEG-2 Audio Layer III (MP3) compression algorithm, which only holds assumptions about "sound" in general, but not about specific types of sounds. An autoencoder trained on pictures of faces would do a rather poor job of compressing pictures of trees, because the features it would learn would be face-specific.2) Autoencoders are _lossy_, which means that the decompressed outputs will be degraded compared to the original inputs (similar to MP3 or JPEG compression). This differs from lossless arithmetic compression.3) Autoencoders are _learned automatically from data examples_, which is a useful property: it means that it is easy to train specialized instances of the algorithm that will perform well on a specific type of input. 
It doesn't require any new engineering, just appropriate training data.To build an autoencoder, you need three things: an encoding function, a decoding function, and a distance function that measures the amount of information loss between the compressed representation of your data and the decompressed representation (i.e. a "loss" function). The encoder and decoder will be chosen to be parametric functions (typically neural networks), and to be differentiable with respect to the distance function, so the parameters of the encoding/decoding functions can be optimized to minimize the reconstruction loss, using Stochastic Gradient Descent. In general, a neural network is a computational model that is used for finding a function describing the relationship between data features $x$ and its values or labels $y$, i.e. $y = f(x)$. An autoencoder is a specific type of neural network, which consists of encoder and decoder components: (1) the **encoder**, which learns a compressed data representation $z$, and (2) the **decoder**, which reconstructs the data $\hat{x}$ based on its idea $z$ of how it is structured:$$ z = f\big(h_{e}(x)\big)$$$$ \hat{x} = f\big(h_{d}(z)\big),$$where $z$ is the learned data representation by encoder $h_{e}$, and $\hat{x}$ is the reconstructed data by decoder $h_{d}$ based on $z$. SetupWe start by importing the libraries and functions that we will need.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
#try:
# The %tensorflow_version magic only works in colab.
# tensorflow_version 2.x
#except Exception:
# pass
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.datasets import mnist
print('TensorFlow version:', tf.__version__)
print('Is Executing Eagerly?', tf.executing_eagerly())
###Output
_____no_output_____
###Markdown
Autoencoder modelThe encoder and decoder are defined as:$$ z = f\big(h_{e}(x)\big)$$$$ \hat{x} = f\big(h_{d}(z)\big),$$where $z$ is the compressed data representation generated by encoder $h_{e}$, and $\hat{x}$ is the reconstructed data generated by decoder $h_{d}$ based on $z$.In this model, we take as input an image, and compress that image before decompressing it using a Dense network. We further define a simple model for this below. Define an encoder layerThe first component, the **encoder**, is similar to a conventional feed-forward network. However, its function is not predicting values (a _regression_ task) or categories (a _classification_ task). Instead, its function is to learn a compressed data structure $z$. We can implement the encoder layer as dense layers, as follows:
###Code
class Encoder(tf.keras.layers.Layer):
def __init__(self, intermediate_dim):
super(Encoder, self).__init__()
self.hidden_layer = tf.keras.layers.Dense(units=intermediate_dim, activation=tf.nn.relu)
self.output_layer = tf.keras.layers.Dense(units=intermediate_dim, activation=tf.nn.relu)
def call(self, input_features):
activation = self.hidden_layer(input_features)
return self.output_layer(activation)
###Output
_____no_output_____
###Markdown
The _encoding_ is done by passing data input $x$ to the encoder's hidden layer $h$ in order to learn the data representation $z = f(h(x))$.We first create an `Encoder` class that inherits the [`tf.keras.layers.Layer`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer) class to define it as a layer. The compressed layer $z$ is a _component_ of the autoencoder model.Analyzing the code, the `Encoder` layer is defined to have a single hidden layer of neurons (`self.hidden_layer`) to learn the input features. Then, we connect the hidden layer to a layer (`self.output_layer`) that encodes the learned activations to the lower dimensional layer for $z$. Define a decoder layerThe second component, the **decoder**, is also similar to a feed-forward network. However, instead of reducing data to lower dimension, it attempts to reverse the process, i.e. reconstruct the data $\hat{x}$ from its lower dimension representation $z$ to its original dimension.The _decoding_ is done by passing the lower dimension representation $z$ to the decoder's hidden layer $h$ in order to reconstruct the data to its original dimension $\hat{x} = f(h(z))$. We can implement the decoder layer as follows,
###Code
class Decoder(tf.keras.layers.Layer):
def __init__(self, intermediate_dim, original_dim):
super(Decoder, self).__init__()
self.hidden_layer = tf.keras.layers.Dense(units=intermediate_dim, activation=tf.nn.relu)
self.output_layer = tf.keras.layers.Dense(units=original_dim, activation=tf.nn.relu)
def call(self, code):
activation = self.hidden_layer(code)
return self.output_layer(activation)
###Output
_____no_output_____
###Markdown
We now create a `Decoder` class that also inherits the `tf.keras.layers.Layer`.The `Decoder` layer is also defined to have a single hidden layer of neurons to reconstruct the input features $\hat{x}$ from the learned representation $z$ by the encoder $f\big(h_{e}(x)\big)$. Then, we connect its hidden layer to a layer that decodes the data representation from lower dimension $z$ to its original dimension $\hat{x}$. Hence, the "output" of the `Decoder` layer is the reconstructed data $\hat{x}$ from the data representation $z$.Ultimately, the output of the decoder is the autoencoder's output.Now that we have defined the components of our autoencoder, we can finally build our model. Build the autoencoder modelWe can now build the autoencoder model by instantiating `Encoder` and `Decoder` layers.
###Code
class Autoencoder(tf.keras.Model):
def __init__(self, intermediate_dim, original_dim):
super(Autoencoder, self).__init__()
self.loss = []
self.encoder = Encoder(intermediate_dim=intermediate_dim)
self.decoder = Decoder(intermediate_dim=intermediate_dim, original_dim=original_dim)
def call(self, input_features):
code = self.encoder(input_features)
reconstructed = self.decoder(code)
return reconstructed
###Output
_____no_output_____
###Markdown
As discussed above, the encoder's output is the input to the decoder, as it is written above (`reconstructed = self.decoder(code)`). Reconstruction errorTo learn the compressed layer $z$, we define a loss function over the difference between the input data $x$ and the reconstruction of $x$, which is $\hat{x}$.We call this comparison the reconstruction error function, a given by the following equation:$$ L = \dfrac{1}{n} \sum_{i=0}^{n-1} \big(\hat{x}_{i} - x_{i}\big)^{2}$$where $\hat{x}$ is the reconstructed data while $x$ is the original data.
###Code
def loss(preds, real):
return tf.reduce_mean(tf.square(tf.subtract(preds, real)))
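# Quick sanity check (illustrative): identical tensors reconstruct perfectly,
# so the reconstruction error should be exactly zero.
# loss(tf.ones((2, 4)), tf.ones((2, 4)))   # -> tf.Tensor(0.0, shape=(), dtype=float32)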
###Output
_____no_output_____
###Markdown
Forward pass and optimizationWe will write a function for computing the forward pass, and applying a chosen optimization function.
###Code
def train(loss, model, opt, original):
with tf.GradientTape() as tape:
preds = model(original)
reconstruction_error = loss(preds, original)
gradients = tape.gradient(reconstruction_error, model.trainable_variables)
gradient_variables = zip(gradients, model.trainable_variables)
opt.apply_gradients(gradient_variables)
return reconstruction_error
###Output
_____no_output_____
###Markdown
The training loopFinally, we will write a function to run the training loop. This function will take arguments for the model, the optimization function, the loss, the dataset, and the training epochs.The training loop itself uses a `GradientTape` context defined in `train` for each batch.
###Code
def train_loop(model, opt, loss, dataset, epochs):
for epoch in range(epochs):
epoch_loss = 0
for step, batch_features in enumerate(dataset):
loss_values = train(loss, model, opt, batch_features)
epoch_loss += loss_values
model.loss.append(epoch_loss)
print('Epoch {}/{}. Loss: {}'.format(epoch + 1, epochs, epoch_loss.numpy()))
###Output
_____no_output_____
###Markdown
Process the datasetNow that we have defined our `Autoencoder` class, the loss function, and the training loop, let's import the dataset. We will normalize the pixel values for each example by dividing by the maximum pixel value (255). We shall flatten the examples from 28 by 28 arrays to 784-dimensional vectors.
###Code
from tensorflow.keras.datasets import mnist
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train / 255.
x_train = x_train.astype(np.float32)
x_test = x_test / 255.               # normalize the test set the same way as the training set
x_test = x_test.astype(np.float32)
x_train = np.reshape(x_train, (x_train.shape[0], 784))
x_test = np.reshape(x_test, (x_test.shape[0], 784))
training_dataset = tf.data.Dataset.from_tensor_slices(x_train).batch(256)
###Output
_____no_output_____
###Markdown
Train the modelNow all we have to do is instantiate the autoencoder model and choose an optimization function, then pass the intermediate dimension and the original dimension of the images.
###Code
model = Autoencoder(intermediate_dim=128, original_dim=784)
opt = tf.keras.optimizers.Adam(learning_rate=1e-2)
train_loop(model, opt, loss, training_dataset, 20)
###Output
_____no_output_____
###Markdown
Plot the in-training performanceLet's take a look at how the model performed during training in a couple of plots.
###Code
plt.plot(range(20), model.loss)
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.show()
###Output
_____no_output_____
###Markdown
PredictionsFinally, we will look at some of the reconstructions: the original test digits are shown in the top row and the autoencoder's reconstructions in the bottom row.
###Code
number = 10 # how many digits we will display
plt.figure(figsize=(20, 4))
for index in range(number):
# display original
ax = plt.subplot(2, number, index + 1)
plt.imshow(x_test[index].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, number, index + 1 + number)
plt.imshow(model(x_test)[index].numpy().reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
1. Tasks for Basic Autoencoder AssignmentAs you may see after training this model, the reconstructed images are quite blurry. A number of things could be done to move forward from this point, e.g. adding more layers, using a convolutional neural network architecture as the basis of the autoencoder, or using a different kind of autoencoder.- generate results for 3 different Dense architectures and summarise the impact of architecture on performance (a minimal Dense baseline is sketched after the CNN code cell below)- define 2 CNN architectures (similar to the one described below) and compare their performance to that of the Dense models Since our inputs are images, it makes sense to use convolutional neural networks (CNNs) as encoders and decoders. In practical settings, autoencoders applied to images are almost always convolutional autoencoders -- they simply perform much better.- implement a CNN model, where the encoder will consist of a stack of Conv2D and MaxPooling2D layers (max pooling being used for spatial down-sampling), while the decoder will consist of a stack of Conv2D and UpSampling2D layers. To improve the quality of the reconstructed image, we use more filters per layer. The model details are:
###Code
input_img = tf.keras.layers.Input(shape=(28, 28, 1)) # adapt this if using `channels_first` image data format
x = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
x = tf.keras.layers.MaxPooling2D((2, 2), padding='same')(x)
x = tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = tf.keras.layers.MaxPooling2D((2, 2), padding='same')(x)
x = tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same')(x)
encoded = tf.keras.layers.MaxPooling2D((2, 2), padding='same')(x)
# at this point the representation is (4, 4, 8) i.e. 128-dimensional
x = tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = tf.keras.layers.UpSampling2D((2, 2))(x)
x = tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = tf.keras.layers.UpSampling2D((2, 2))(x)
x = tf.keras.layers.Conv2D(16, (3, 3), activation='relu')(x)
x = tf.keras.layers.UpSampling2D((2, 2))(x)
decoded = tf.keras.layers.Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
autoencoder = tf.keras.models.Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
# to train this model we will use the original MNIST digits reshaped to (samples, 28, 28, 1) and we will just normalize pixel values between 0 and 1
# (x_train, _), (x_test, _) = load_data('../input/mnist.npz')
from tensorflow.keras.datasets import mnist
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 28, 28, 1))
x_test = np.reshape(x_test, (len(x_test), 28, 28, 1))
autoencoder.fit(x_train, x_train, epochs=50, batch_size=128,
shuffle=True, validation_data=(x_test, x_test),
callbacks=[tf.keras.callbacks.TensorBoard(log_dir='./tmp/autoencoder')])
###Output
_____no_output_____
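###Markdown
 As a starting point for the "3 different Dense architectures" task, here is one possible fully-connected baseline. It is only a sketch: the layer widths (128/64/32), optimizer and loss are illustrative assumptions rather than required values, and it assumes `tf` and the MNIST arrays from the earlier cells are available.
###Code
# A minimal Dense (fully-connected) autoencoder baseline for the MNIST comparison.
dense_input = tf.keras.layers.Input(shape=(784,))
h = tf.keras.layers.Dense(128, activation='relu')(dense_input)
h = tf.keras.layers.Dense(64, activation='relu')(h)
encoded_dense = tf.keras.layers.Dense(32, activation='relu')(h)
h = tf.keras.layers.Dense(64, activation='relu')(encoded_dense)
h = tf.keras.layers.Dense(128, activation='relu')(h)
decoded_dense = tf.keras.layers.Dense(784, activation='sigmoid')(h)
dense_autoencoder = tf.keras.models.Model(dense_input, decoded_dense)
dense_autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
dense_autoencoder.summary()
# Training mirrors the convolutional model, but on flattened 784-dimensional inputs, e.g.:
# dense_autoencoder.fit(x_train.reshape(-1, 784), x_train.reshape(-1, 784), epochs=50, batch_size=128,
#                       validation_data=(x_test.reshape(-1, 784), x_test.reshape(-1, 784)))
###Output
_____no_output_____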
###Markdown
- train the model for 100 epochs and compare the results to the model where you use a dense encoding rather than convolutions.**Scoring**: 10 marks total- models [5 marks]: - Dense, multi-layer model [given]; - CNN basic model [given]; - CNN complex model [5 marks].- results and discussion [5 marks]: present the results in a clear fashion and explain why the results were as obtained. Good experimental design and statistical significance testing will be rewarded. 2. Denoising autoencoderFor this real-world application, we will use an autoencoder to remove noise from an image. To do this, we- learn a more robust representation by forcing the autoencoder to reconstruct the input from a corrupted version of itselfThe first step: generate synthetic noisy digits as follows: add a Gaussian noise matrix and clip the images between 0 and 1.
###Code
from keras.datasets import mnist
import numpy as np
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 28, 28, 1))
x_test = np.reshape(x_test, (len(x_test), 28, 28, 1))
# Corrupt the digits with additive Gaussian noise, scaled by a noise factor of 0.5
noise_factor = 0.5
x_train_noisy = x_train + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_train.shape)
x_test_noisy = x_test + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_test.shape)
x_train_noisy = np.clip(x_train_noisy, 0., 1.)
x_test_noisy = np.clip(x_test_noisy, 0., 1.)
###Output
_____no_output_____
###Markdown
Next, plot some figures to see what the digits look like with noise added.
###Code
# Plot figures to show what the noisy digits look like
n = 10
plt.figure(figsize=(20, 2))
for i in range(n):
ax = plt.subplot(1, n, i + 1)
plt.imshow(x_test_noisy[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
- Train the model for 100 epochs and compare the results to the model where you use a dense encoding rather than convolutions.
###Code
# This will train for 100 epochs
autoencoder.fit(x_train_noisy, x_train, epochs=100, batch_size=128,
shuffle=True, validation_data=(x_test_noisy, x_test),
callbacks=[tf.keras.callbacks.TensorBoard(log_dir='./tmp/tb', histogram_freq=0, write_graph=False)])
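# Quick qualitative check (a sketch, not part of the graded tasks): run the trained denoiser
# on a few noisy test digits and plot noisy inputs (top) against denoised outputs (bottom).
# Assumes matplotlib.pyplot is available as `plt`, as in the earlier plotting cells.
decoded_imgs = autoencoder.predict(x_test_noisy[:10])
plt.figure(figsize=(20, 4))
for i in range(10):
    ax = plt.subplot(2, 10, i + 1)
    plt.imshow(x_test_noisy[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    ax = plt.subplot(2, 10, i + 1 + 10)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()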
###Output
_____no_output_____
###Markdown
**Scoring**: 10 marks total- models [5 marks]: - Dense, multi-layer model [given]; - CNN basic model [given]; - CNN complex model [5 marks].- results and discussion [5 marks]: present the results in a clear fashion and explain why the results were as obtained. Good experimental design and statistical significance testing will be rewarded. **3. Text Reconstruction Application**You will now use the approach just described to reconstruct corrupted text. We will use a small dataset with grey-scale images of size $420 \times 540$. The steps required are as follows:- Apply this autoencoder approach (as just described) to the text data provided as noted below.- The data has two sets of images, train (https://github.com/gmprovan/CS6421-Assignment1/blob/master/train.zip) and test (https://github.com/gmprovan/CS6421-Assignment1/blob/master/test.zip). These images contain various styles of text, to which synthetic noise has been added to simulate real-world, messy artifacts. The training set also includes the same images without the noise (train_cleaned: https://github.com/gmprovan/CS6421-Assignment1/blob/master/train_cleaned.zip). - You must create an algorithm to clean the images in the test set, and report the error as RMSE (root-mean-square error).**Scoring**: 40 marks total- models [25 marks]: - Dense, multi-layer model [5 marks]; - CNN basic model [5 marks]; - CNN complex models (at least 2) [15 marks].- results and discussion [15 marks]: present the results in a clear fashion and explain why the results were as obtained. Good experimental design and statistical significance testing will be rewarded. You will get full marks if you achieve RMSE < 0.005. Deductions are as follows:- -1: 0.005 $\leq$ RMSE $\leq$ 0.01- -5: 0.01 $\leq$ RMSE $\leq$ 0.05- -10: RMSE $>$ 0.05 The data should have been properly pre-processed already. If you want to pre-process the image data further, use the code environments below (e.g., skimage, keras.preprocessing.image), and then plot some samples of the data.
###Code
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import os
from pathlib import Path
import glob
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
from skimage.io import imread, imshow, imsave
from keras.preprocessing.image import load_img, array_to_img, img_to_array
from keras.models import Sequential, Model
from keras.layers import Dense, Conv2D, MaxPooling2D, UpSampling2D, Flatten, Input
from keras.optimizers import SGD, Adam, Adadelta, Adagrad
from keras import backend as K
from sklearn.model_selection import train_test_split
np.random.seed(111)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
###Output
_____no_output_____
###Markdown
Next you must build the model. I provide the code framework, with the model details left up to you.
###Code
# Lets' define our autoencoder now
def build_autoencoder():
    input_img = Input(shape=(420, 540, 1), name='image_input')
    # encoder
    # enter your encoder layers here
    # decoder
    # enter your decoder layers here; assign the output of the final layer to `x`
    # model
    autoencoder = Model(inputs=input_img, outputs=x)
    autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
    return autoencoder
autoencoder = build_autoencoder()
autoencoder.summary()
###Output
_____no_output_____
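###Markdown
 For reference, here is one illustrative way the encoder/decoder stacks in the framework above could be filled in. The filter counts and depth are assumptions for you to tune, not the required architecture; it reuses the Keras layers imported earlier (`Input`, `Conv2D`, `MaxPooling2D`, `UpSampling2D`, `Model`).
###Code
def build_autoencoder_example():
    input_img = Input(shape=(420, 540, 1), name='image_input')
    # encoder: two Conv2D + MaxPooling2D blocks (420x540 -> 210x270 -> 105x135)
    x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
    x = MaxPooling2D((2, 2), padding='same')(x)
    x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
    encoded = MaxPooling2D((2, 2), padding='same')(x)
    # decoder: mirror the encoder with UpSampling2D (105x135 -> 210x270 -> 420x540)
    x = Conv2D(64, (3, 3), activation='relu', padding='same')(encoded)
    x = UpSampling2D((2, 2))(x)
    x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
    x = UpSampling2D((2, 2))(x)
    x = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
    autoencoder = Model(inputs=input_img, outputs=x)
    autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
    return autoencoder
###Output
_____no_output_____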
###Markdown
Next we have code to compile and run the model. Please modify the code below to fit your purposes.
###Code
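# NOTE (assumption, not part of the original framework): the variables `train`, `train_cleaned`,
# `train_images` and `train_labels` are expected to have been defined earlier, e.g. something like:
#   train = Path('../input/train'); train_cleaned = Path('../input/train_cleaned')
#   train_images = sorted(os.listdir(train)); train_labels = sorted(os.listdir(train_cleaned))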
X = []
Y = []
for img in train_images:
img = load_img(train / img, grayscale=True,target_size=(420,540))
img = img_to_array(img).astype('float32')/255.
X.append(img)
for img in train_labels:
img = load_img(train_cleaned / img, grayscale=True,target_size=(420,540))
img = img_to_array(img).astype('float32')/255.
Y.append(img)
X = np.array(X)
Y = np.array(Y)
print("Size of X : ", X.shape)
print("Size of Y : ", Y.shape)
# Split the dataset into training and validation. Always set the random state!!
X_train, X_valid, y_train, y_valid = train_test_split(X, Y, test_size=0.1, random_state=111)
print("Total number of training samples: ", X_train.shape)
print("Total number of validation samples: ", X_valid.shape)
# Train your model
autoencoder.fit(X_train, y_train, epochs=10, batch_size=8, validation_data=(X_valid, y_valid))
###Output
_____no_output_____
###Markdown
Next we compute the predictions from the trained model. Again, modify the code structure below as necessary.
###Code
# Compute the prediction
# NOTE: `sample_test_img` (shape (1, 420, 540, 1)) and `sample_test` are assumed to be a
# preprocessed test image and its original, prepared the same way as the training data above.
predicted_label = np.squeeze(autoencoder.predict(sample_test_img))
f, ax = plt.subplots(1, 2, figsize=(10, 8))
ax[0].imshow(np.squeeze(sample_test), cmap='gray')
ax[1].imshow(predicted_label, cmap='gray')  # keep the float values in [0, 1]; casting to int8 would blank the image
plt.show()
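# Reporting RMSE (a sketch, using names defined above): here it is computed on the validation
# split; for the assignment, apply the same formula to your cleaned test images instead.
preds_valid = autoencoder.predict(X_valid)
rmse = np.sqrt(np.mean((preds_valid - y_valid) ** 2))
print("Validation RMSE: {:.5f}".format(rmse))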
###Output
_____no_output_____ |
notebooks/doc-001-quickstart.ipynb | ###Markdown
Quickstart
###Code
from daskpeeker import Peeker, Metric
import dask.dataframe as dd
import pandas as pd
class MyPeeker(Peeker):
def get_shared_figures(self):
return []
def get_report_elems(self, filtered_ddf):
return [Metric(filtered_ddf.loc[:, "n1"].mean().compute(), "Average N1")]
df = pd.DataFrame({"n1": [1, 2, 3, 4], "c1": list("ABCD")})
ddf = dd.from_pandas(df, npartitions=4).persist()
###Output
_____no_output_____
###Markdown
Quickstart
###Code
from erudition import __version__
###Output
_____no_output_____
###Markdown
Quickstart Assign Columns
###Code
import pandas as pd
from colassigner import ColAssigner
class Cols(ColAssigner):
def col1(self, df):
return df.iloc[:, 0] * 2
def col2(self, df):
return "added-another"
df = pd.DataFrame({"a": [1, 2, 3]}).pipe(Cols())
df
df.loc[:, Cols.col2]
###Output
_____no_output_____
###Markdown
Access Columnswhile also documenting datatypes
###Code
from colassigner import ColAccessor
class Cols(ColAccessor):
x = int
y = float
df = pd.DataFrame({Cols.x: [1, 2, 3], Cols.y: [0.3, 0.1, 0.9]})
df
df.loc[:, Cols.y]
###Output
_____no_output_____
###Markdown
Quickstart
###Code
from aswan import __version__
###Output
_____no_output_____
###Markdown
Quickstart
###Code
from parquetranger import __version__
###Output
_____no_output_____
###Markdown
Quickstart
###Code
from atqo import __version__
###Output
_____no_output_____
###Markdown
Quickstart
###Code
from sscutils import __version__
###Output
_____no_output_____
###Markdown
Quickstart
###Code
from sscutils import __version__
###Output
_____no_output_____
###Markdown
Quickstart
###Code
from encoref import __version__
###Output
_____no_output_____ |
Linear_Regression/Univariate_Linear_Regression_Using_Scikit_Learn.ipynb | ###Markdown
In this tutorial we are going to use the Linear Models from the Sklearn library. We are also going to use the same test data used in the [Univariate Linear Regression From Scratch With Python](http://satishgunjal.github.io/univariate_lr/) tutorial **Introduction** Scikit-learn is one of the most popular open source machine learning libraries for Python. It provides a range of machine learning models; here we are going to use the linear models. Sklearn linear models are used when the target value is some kind of linear combination of the input values. The Sklearn library has multiple types of linear models to choose from. Just as we implemented the 'Batch Gradient Descent' algorithm in the [Univariate Linear Regression From Scratch With Python](http://satishgunjal.github.io/univariate_lr/) tutorial, every Sklearn linear model also uses a specific mathematical model to find the best fit line. **Hypothesis Function Comparison** The hypothesis function used by Linear Models of the Sklearn library is as below $\hat{y}$(w, x) = w_0 + w_1 * x_1 Where,* $\hat{y}$(w, x) = Target/output value* x_1 = Independent/Input value* w_0 = intercept_* w_1 = coef_ You may have noticed that the above hypothesis function does not match the hypothesis function used in the [Univariate Linear Regression From Scratch With Python](http://satishgunjal.github.io/univariate_lr/) tutorial. Actually both are the same, just different notations are used h(θ, x) = θ_0 + θ_1 * x_1 Where, * Both hypothesis functions use 'x' to represent input values or features* $\hat{y}$(w, x) = h(θ, x) = Target or output value* w_0 = θ_0 = intercept_ or Y intercept* w_1 = θ_1 = coef_ or slope/gradient **Python Code** Yes, we are jumping to coding right after the hypothesis function, because we are going to use the Sklearn library which has multiple algorithms to choose from. **Import the required libraries*** numpy : Numpy is the core library for scientific computing in Python. It is used for working with arrays and matrices.* pandas: Used for data manipulation and analysis* matplotlib : It's a plotting library, and we are going to use it for data visualization* linear_model: Sklearn linear regression model *In case you don't have any experience using these libraries, don't worry, I will explain every bit of code for better understanding*
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import linear_model
###Output
_____no_output_____
###Markdown
**Load the data*** We are going to use ‘profits_and_populations_from_the_cities.csv’ CSV file* File contains two columns, the first column is the population of a city and the second column is the profit of a food truck in that city. A negative value for profit indicates a loss.
###Code
df =pd.read_csv('https://raw.githubusercontent.com/satishgunjal/datasets/master/univariate_profits_and_populations_from_the_cities.csv')
df.head(5) # Show first 5 rows from datset
X = df.values[:,0] # Get input values from first column
y = df.values[:,1] # Get output values from second column
m = len(X) # Total number training examples
print('X = ', X[: 5]) # Show first 5 records
print('y = ', y[: 5]) # Show first 5 records
print('m = ', m)
###Output
X = [6.1101 5.5277 8.5186 7.0032 5.8598]
y = [17.592 9.1302 13.662 11.854 6.8233]
m = 97
###Markdown
**Understand The Data*** Population of City in 10,000s and Profit in $10,000s, i.e. 10K is the multiplier for each data point* There are 97 training examples in total (m = 97, i.e. 97 rows)* There is only one feature (one column of features and one of label/target/y) **Data Visualization**Let's assign the feature (independent variable) values to variable X and the target (dependent variable) values to variable yFor this dataset, we can use a scatter plot to visualize the data, since it has only two properties to plot (profit and population).Many other problems that you will encounter in real life are multi-dimensional and can't be plotted on a 2D plot
###Code
plt.scatter(X,y, color='red',marker= '+')
plt.grid()
plt.rcParams["figure.figsize"] = (10,6)
plt.xlabel('Population of City in 10,000s')
plt.ylabel('Profit in $10,000s')
plt.title('Scatter Plot Of Training Data')
###Output
_____no_output_____
###Markdown
**Which Sklearn Linear Regression Algorithm To Choose** * The Sklearn library has multiple linear regression algorithms* Note: Just as we implemented the cost function and gradient descent algorithm ourselves, every Sklearn algorithm also has some kind of mathematical model behind it.* Different algorithms are better suited for different types of data and problems* The scikit-learn algorithm cheat-sheet flow chart (not reproduced here) gives a brief idea of how to choose the right algorithm **Ordinary Least Squares Algorithm*** This is one of the most basic linear regression algorithms. * The mathematical objective minimized by the ordinary least squares algorithm is $$\min_{w} ||Xw - y||_2^2$$* The objective of the Ordinary Least Squares Algorithm is to minimize the residual sum of squares. Here the term residual means 'deviation of the predicted value (Xw) from the actual value (y)'* A problem with the ordinary least squares model is that the size of the coefficients can grow rapidly with increasing model complexity
###Code
model_ols = linear_model.LinearRegression()
model_ols.fit(X.reshape(m, 1),y)
# fit() method is used for training the model
# Note the first parameter (features) must be a 2D array (feature matrix). reshape converts 'X' from a 1D array to a 2D array of dimension 97x1
# Remember we don't have to add a column of 1s to the X matrix; sklearn handles the intercept for us, so we can avoid all that work
###Output
_____no_output_____
###Markdown
**Understanding Training Results*** Note: If training is successful then we get a result like the one above, where all the default values used by the LinearRegression() model are displayed. If required, we can also pass these values when creating the model. We are not going to change any of these values for now.* As per our hypothesis function, the 'model' object contains the coef_ and intercept_ values
###Code
coef = model_ols.coef_
intercept = model_ols.intercept_
print('coef= ', coef)
print('intercept= ', intercept)
###Output
coef= [1.19303364]
intercept= -3.89578087831185
###Markdown
You can compare the above values with the values from the [Univariate Linear Regression From Scratch With Python](http://satishgunjal.github.io/univariate_lr/) tutorial.Remember the notation difference...* coef(1.19303364) = θ_1 (1.16636235)* intercept(-3.89578087831185) = θ_0(-3.63029144) The values from our earlier model and the Ordinary Least Squares model do not match exactly, which is fine: the two models use different algorithms. Remember you have to choose the algorithm based on your data and problem type. Besides, this is just a simple example with only 97 rows of data. Let's visualize the results. **Visualization*** The model.predict() method will give us the predicted values for our input values* Let's plot the line using the predicted values.
###Code
plt.scatter(X, y, color='red', marker= '+', label= 'Training Data')
plt.plot(X, model_ols.predict(X.reshape(m, 1)), color='green', label='Linear Regression')
plt.rcParams["figure.figsize"] = (10,6)
plt.grid()
plt.xlabel('Population of City in 10,000s')
plt.ylabel('Profit in $10,000s')
plt.title('Linear Regression Fit')
plt.legend()
###Output
_____no_output_____
###Markdown
**Testing the model*** **Question: Predict the profit for population 35,000** **Manual Calculations*** Hypothesis function is $\hat{y}$(w, x) = w_0 + w_1 * x_1 * The values learned by the model are, * θ_1(coef) = 1.19303364 * θ_0(intercept) = -3.89578087831185* x_1 = 3.5 (remember all our values are in multiples of 10,000)* $\hat{y}$(w, x) = (-3.89578087831185) + (1.19303364 * 3.5)* $\hat{y}$(w, x) = 0.27983686168815* Since all our values are in multiples of 10,000 * $\hat{y}$(w, x) = 0.27983686168815 * 10000 * $\hat{y}$(w, x) = 2798.3686168815* For population = 35,000, we predict a profit of 2798.3686168815We can predict the result using our model as below
###Code
predict1 = model_ols.predict([[3.5]])
print("For population = 35,000, our prediction of profit is", predict1 * 10000)
###Output
For population = 35,000, our prediction of profit is [2798.36876352]
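###Markdown
 As an optional extra check (not part of the original walkthrough), `LinearRegression.score()` reports the R² of the fit, which gives a rough sense of how well the line explains the training data.
###Code
print('R^2 on the training data: ', model_ols.score(X.reshape(m, 1), y))
###Output
_____no_output_____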
###Markdown
So using the sklearn library, we can train our model and predict the results with only a few lines of code. Let's test our data with a few other algorithms **Ridge Regression Algorithm*** Ridge regression addresses some problems of Ordinary Least Squares by imposing a penalty on the size of the coefficients* The Ridge model uses the complexity parameter alpha to control the size of the coefficients* Note: alpha should be greater than 0; otherwise it will perform the same as the ordinary least squares model* The mathematical objective minimized by the Ridge Regression algorithm is $$\min_{w} ||Xw - y||_2^2 + \alpha ||w||_2^2$$
###Code
model_r = linear_model.Ridge(alpha=35)
model_r.fit(X.reshape(m, 1),y)
coef = model_r.coef_
intercept = model_r.intercept_
print('coef= ' , coef)
print('intercept= ' , intercept)
predict1 = model_r.predict([[3.5]])
print("For population = 35,000, our prediction of profit is", predict1 * 10000)
###Output
For population = 35,000, our prediction of profit is [4119.58817955]
###Markdown
**LASSO Regression Algorithm*** Similar to Ridge regression, LASSO also uses a regularization parameter alpha, but it estimates sparse coefficients, i.e. it drives more coefficients to exactly 0* That's why it's best suited when the dataset contains only a few important features* The LASSO model uses the regularization parameter alpha to control the size of the coefficients* Note: alpha should be greater than 0; otherwise it will perform the same as the ordinary least squares model* The mathematical objective minimized by the LASSO Regression algorithm is $$\min_{w} \frac{1}{2 n_{samples}} ||Xw - y||_2^2 + \alpha ||w||_1$$
###Code
model_l = linear_model.Lasso(alpha=0.55)
model_l.fit(X.reshape(m, 1),y)
coef = model_l.coef_
intercept = model_l.intercept_
print('coef= ' , coef)
print('intercept= ' , intercept)
predict1 = model_l.predict([[3.5]])
print("For population = 35,000, our prediction of profit is", predict1 * 10000)
###Output
For population = 35,000, our prediction of profit is [4527.52676756]
|
FeatureCollection/minimum_bounding_geometry.ipynb | ###Markdown
Pydeck Earth Engine IntroductionThis is an introduction to using [Pydeck](https://pydeck.gl) and [Deck.gl](https://deck.gl) with [Google Earth Engine](https://earthengine.google.com/) in Jupyter Notebooks. If you wish to run this locally, you'll need to install some dependencies. Installing into a new Conda environment is recommended. To create and enter the environment, run:
```
conda create -n pydeck-ee -c conda-forge python jupyter notebook pydeck earthengine-api requests -y
source activate pydeck-ee
jupyter nbextension install --sys-prefix --symlink --overwrite --py pydeck
jupyter nbextension enable --sys-prefix --py pydeck
```
then open Jupyter Notebook with `jupyter notebook`. Now in a Python Jupyter Notebook, let's first import required packages:
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import requests
import ee
###Output
_____no_output_____
###Markdown
AuthenticationUsing Earth Engine requires authentication. If you don't have a Google account approved for use with Earth Engine, you'll need to request access. For more information and to sign up, go to https://signup.earthengine.google.com/. If you haven't used Earth Engine in Python before, you'll need to run the following authentication command. If you've previously authenticated in Python or the command line, you can skip the next line.Note that this creates a prompt which waits for user input. If you don't see a prompt, you may need to authenticate on the command line with `earthengine authenticate` and then return here, skipping the Python authentication.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create MapNext it's time to create a map. Here we create an `ee.Image` object
###Code
# Initialize objects
ee_layers = []
view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45)
# %%
# Add Earth Engine dataset
HUC10 = ee.FeatureCollection("USGS/WBD/2017/HUC10")
HUC08 = ee.FeatureCollection('USGS/WBD/2017/HUC08')
roi = HUC08.filter(ee.Filter.eq('name', 'Pipestem'))
ee_layers.append(EarthEngineLayer(ee_object=ee.Image().paint(roi,0,1), vis_params={}))
bound = ee.Geometry(roi.geometry()).bounds()
ee_layers.append(EarthEngineLayer(ee_object=ee.Image().paint(bound,0,1), vis_params={'palette':'red'}))
###Output
_____no_output_____
###Markdown
Then just pass these layers to a `pydeck.Deck` instance, and call `.show()` to create a map:
###Code
r = pdk.Deck(layers=ee_layers, initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
HUC10 = ee.FeatureCollection("USGS/WBD/2017/HUC10")
HUC08 = ee.FeatureCollection('USGS/WBD/2017/HUC08')
roi = HUC08.filter(ee.Filter.eq('name', 'Pipestem'))
Map.centerObject(roi, 10)
Map.addLayer(ee.Image().paint(roi, 0, 1), {}, 'HUC8')
bound = ee.Geometry(roi.geometry()).bounds()
Map.addLayer(ee.Image().paint(bound, 0, 1), {'palette': 'red'}, "Minimum bounding geometry")
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
HUC10 = ee.FeatureCollection("USGS/WBD/2017/HUC10")
HUC08 = ee.FeatureCollection('USGS/WBD/2017/HUC08')
roi = HUC08.filter(ee.Filter.eq('name', 'Pipestem'))
Map.centerObject(roi, 10)
Map.addLayer(ee.Image().paint(roi, 0, 1), {}, 'HUC8')
bound = ee.Geometry(roi.geometry()).bounds()
Map.addLayer(ee.Image().paint(bound, 0, 1), {'palette': 'red'}, "Minimum bounding geometry")
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically install its dependencies, including earthengine-api and folium.
###Code
import subprocess
try:
import geehydro
except ImportError:
print('geehydro package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro'])
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
HUC10 = ee.FeatureCollection("USGS/WBD/2017/HUC10")
HUC08 = ee.FeatureCollection('USGS/WBD/2017/HUC08')
roi = HUC08.filter(ee.Filter.eq('name', 'Pipestem'))
Map.centerObject(roi, 10)
Map.addLayer(ee.Image().paint(roi, 0, 1), {}, 'HUC8')
bound = ee.Geometry(roi.geometry()).bounds()
Map.addLayer(ee.Image().paint(bound, 0, 1), {'palette': 'red'}, "Minimum bounding geometry")
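# Optional sanity check (not in the original script): fetch the bounding rectangle's GeoJSON
# client-side. getInfo() makes a blocking request to Earth Engine, so use it sparingly.
print(bound.getInfo())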
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
HUC10 = ee.FeatureCollection("USGS/WBD/2017/HUC10")
HUC08 = ee.FeatureCollection('USGS/WBD/2017/HUC08')
roi = HUC08.filter(ee.Filter.eq('name', 'Pipestem'))
Map.centerObject(roi, 10)
Map.addLayer(ee.Image().paint(roi, 0, 1), {}, 'HUC8')
bound = ee.Geometry(roi.geometry()).bounds()
Map.addLayer(ee.Image().paint(bound, 0, 1), {'palette': 'red'}, "Minimum bounding geometry")
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for this first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
HUC10 = ee.FeatureCollection("USGS/WBD/2017/HUC10")
HUC08 = ee.FeatureCollection('USGS/WBD/2017/HUC08')
roi = HUC08.filter(ee.Filter.eq('name', 'Pipestem'))
Map.centerObject(roi, 10)
Map.addLayer(ee.Image().paint(roi, 0, 1), {}, 'HUC8')
bound = ee.Geometry(roi.geometry()).bounds()
Map.addLayer(ee.Image().paint(bound, 0, 1), {'palette': 'red'}, "Minimum bounding geometry")
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as geemap
except:
import geemap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
HUC10 = ee.FeatureCollection("USGS/WBD/2017/HUC10")
HUC08 = ee.FeatureCollection('USGS/WBD/2017/HUC08')
roi = HUC08.filter(ee.Filter.eq('name', 'Pipestem'))
Map.centerObject(roi, 10)
Map.addLayer(ee.Image().paint(roi, 0, 1), {}, 'HUC8')
bound = ee.Geometry(roi.geometry()).bounds()
Map.addLayer(ee.Image().paint(bound, 0, 1), {'palette': 'red'}, "Minimum bounding geometry")
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('Installing geemap ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
import ee
import geemap
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
HUC10 = ee.FeatureCollection("USGS/WBD/2017/HUC10")
HUC08 = ee.FeatureCollection('USGS/WBD/2017/HUC08')
roi = HUC08.filter(ee.Filter.eq('name', 'Pipestem'))
Map.centerObject(roi, 10)
Map.addLayer(ee.Image().paint(roi, 0, 1), {}, 'HUC8')
bound = ee.Geometry(roi.geometry()).bounds()
Map.addLayer(ee.Image().paint(bound, 0, 1), {'palette': 'red'}, "Minimum bounding geometry")
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____ |
01.Python for Programmers/01.Python Basics/handson_02.ipynb | ###Markdown
Programming Assignment 002Below are the problem sets for basics python.
###Code
# checking python version
!python --version
###Output
Python 3.7.12
###Markdown
Question 1Write a Python program to convert kilometers to miles? Solution
###Code
# user input (kilometers)
kms = float(input("Enter kilometers value\t"))
# formula
miles = kms * 0.62137
print(f"{kms}Kms is equivalent to {miles:.3f} miles")
###Output
Enter kilometers value 20
20.0Kms is equivalent to 12.427 miles
###Markdown
Question 2Write a Python program to convert Celsius to Fahrenheit? Solution
###Code
# user input (celsius)
c = float(input("Enter Celcius degrees\t"))
# formula
f = c * (9/5) + 32
print(f"{c}C is equivalent to {f:.3f}F")
###Output
Enter Celcius degrees 100
100.0C is equivalent to 212.000F
###Markdown
Question 3Write a Python program to display calendar? Solution
###Code
# using built-in module
import calendar
# display month
year = 2021
month = 12
print(calendar.month(year, month))
# display full year
year = 2022
print(calendar.calendar(year))
###Output
2022
January February March
Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su
1 2 1 2 3 4 5 6 1 2 3 4 5 6
3 4 5 6 7 8 9 7 8 9 10 11 12 13 7 8 9 10 11 12 13
10 11 12 13 14 15 16 14 15 16 17 18 19 20 14 15 16 17 18 19 20
17 18 19 20 21 22 23 21 22 23 24 25 26 27 21 22 23 24 25 26 27
24 25 26 27 28 29 30 28 28 29 30 31
31
April May June
Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su
1 2 3 1 1 2 3 4 5
4 5 6 7 8 9 10 2 3 4 5 6 7 8 6 7 8 9 10 11 12
11 12 13 14 15 16 17 9 10 11 12 13 14 15 13 14 15 16 17 18 19
18 19 20 21 22 23 24 16 17 18 19 20 21 22 20 21 22 23 24 25 26
25 26 27 28 29 30 23 24 25 26 27 28 29 27 28 29 30
30 31
July August September
Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su
1 2 3 1 2 3 4 5 6 7 1 2 3 4
4 5 6 7 8 9 10 8 9 10 11 12 13 14 5 6 7 8 9 10 11
11 12 13 14 15 16 17 15 16 17 18 19 20 21 12 13 14 15 16 17 18
18 19 20 21 22 23 24 22 23 24 25 26 27 28 19 20 21 22 23 24 25
25 26 27 28 29 30 31 29 30 31 26 27 28 29 30
October November December
Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su
1 2 1 2 3 4 5 6 1 2 3 4
3 4 5 6 7 8 9 7 8 9 10 11 12 13 5 6 7 8 9 10 11
10 11 12 13 14 15 16 14 15 16 17 18 19 20 12 13 14 15 16 17 18
17 18 19 20 21 22 23 21 22 23 24 25 26 27 19 20 21 22 23 24 25
24 25 26 27 28 29 30 28 29 30 26 27 28 29 30 31
31
###Markdown
Question 4Write a Python program to solve a quadratic equation? SolutionThe main aim is to find the roots of the equation$$ax^2 + bx + c = 0$$where a, b, and c are real coefficients and $a \neq 0$.Using the formula below we find the roots of the equation,$$ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} $$
###Code
# user input two variables
a = int(input("Input first number, (0 not allowed)\t"))
b = int(input("Input second number\t"))
c = int(input("Input third number\t"))
# calculate discriminant
dis = (b**2) - (4*a*c)
# finding 2 roots
import cmath
root1 = (-b + cmath.sqrt(dis))/(2*a)
root2 = (-b - cmath.sqrt(dis))/(2*a)
print(f"Roots of equation {a}x^2 + {b}x + {c} are,\n{root1}\n{root2}")
###Output
Input first number, (0 not allowed) 1
Input second number 1
Input third number 1
Roots of equation 1x^2 + 1x + 1 are,
(-0.5+0.8660254037844386j)
(-0.5-0.8660254037844386j)
###Markdown
Question 5Write a Python program to swap two variables without temp variable? Solution
###Code
# user input two variables
a = int(input("Input first number\t"))
b = int(input("Input second number\t"))
print("--------------------")
print(f"Value of a = {a}, value of b = {b}")
# using tuple swap
a, b = b, a
print(f"After Swapped Operation\nValue of a = {a}, value of b = {b}")
# user input two variables
a = int(input("Input first number\t"))
b = int(input("Input second number\t"))
print("--------------------")
print(f"Value of a = {a}, value of b = {b}")
# using arthematic operation
a = a+b
b = a-b
a = a-b
print(f"After Swapped Operation\nValue of a = {a}, value of b = {b}")
###Output
_____no_output_____ |
5-Sequence-Modelling/week3/Trigger word detection/Trigger+Word+Detection+-+Final+-+learners.ipynb | ###Markdown
Trigger Word DetectionWelcome to the final programming assignment of this specialization! In this week's videos, you learned about applying deep learning to speech recognition. In this assignment, you will construct a speech dataset and implement an algorithm for trigger word detection (sometimes also called keyword detection, or wakeword detection). Trigger word detection is the technology that allows devices like Amazon Alexa, Google Home, Apple Siri, and Baidu DuerOS to wake up upon hearing a certain word. For this exercise, our trigger word will be "Activate." Every time it hears you say "activate," it will make a "chiming" sound. By the end of this assignment, you will be able to record a clip of yourself talking, and have the algorithm trigger a chime when it detects you saying "activate." After completing this assignment, perhaps you can also extend it to run on your laptop so that every time you say "activate" it starts up your favorite app, or turns on a network connected lamp in your house, or triggers some other event? In this assignment you will learn to: - Structure a speech recognition project- Synthesize and process audio recordings to create train/dev datasets- Train a trigger word detection model and make predictionsLets get started! Run the following cell to load the package you are going to use.
###Code
!pip install pydub
import numpy as np
from pydub import AudioSegment
import random
import sys
import io
import os
import glob
import IPython
from td_utils import *
%matplotlib inline
###Output
_____no_output_____
###Markdown
1 - Data synthesis: Creating a speech dataset Let's start by building a dataset for your trigger word detection algorithm. A speech dataset should ideally be as close as possible to the application you will want to run it on. In this case, you'd like to detect the word "activate" in working environments (library, home, offices, open-spaces ...). You thus need to create recordings with a mix of positive words ("activate") and negative words (random words other than activate) on different background sounds. Let's see how you can create such a dataset. 1.1 - Listening to the data One of your friends is helping you out on this project, and they've gone to libraries, cafes, restaurants, homes and offices all around the region to record background noises, as well as snippets of audio of people saying positive/negative words. This dataset includes people speaking in a variety of accents. In the raw_data directory, you can find a subset of the raw audio files of the positive words, negative words, and background noise. You will use these audio files to synthesize a dataset to train the model. The "activate" directory contains positive examples of people saying the word "activate". The "negatives" directory contains negative examples of people saying random words other than "activate". There is one word per audio recording. The "backgrounds" directory contains 10 second clips of background noise in different environments.Run the cells below to listen to some examples.
###Code
IPython.display.Audio("./raw_data/activates/1.wav")
IPython.display.Audio("./raw_data/negatives/4.wav")
IPython.display.Audio("./raw_data/backgrounds/1.wav")
###Output
_____no_output_____
###Markdown
You will use these three types of recordings (positives/negatives/backgrounds) to create a labelled dataset. 1.2 - From audio recordings to spectrogramsWhat really is an audio recording? A microphone records little variations in air pressure over time, and it is these little variations in air pressure that your ear also perceives as sound. You can think of an audio recording as a long list of numbers measuring the little air pressure changes detected by the microphone. We will use audio sampled at 44100 Hz (or 44100 Hertz). This means the microphone gives us 44100 numbers per second. Thus, a 10 second audio clip is represented by 441000 numbers (= $10 \times 44100$). It is quite difficult to figure out from this "raw" representation of audio whether the word "activate" was said. In order to help your sequence model more easily learn to detect trigger words, we will compute a *spectrogram* of the audio. The spectrogram tells us how much different frequencies are present in an audio clip at a moment in time. (If you've ever taken an advanced class on signal processing or on Fourier transforms, a spectrogram is computed by sliding a window over the raw audio signal, and calculating the most active frequencies in each window using a Fourier transform. If you don't understand the previous sentence, don't worry about it.) Let's see an example.
###Code
IPython.display.Audio("audio_examples/example_train.wav")
x = graph_spectrogram("audio_examples/example_train.wav")
###Output
_____no_output_____
###Markdown
The graph above represents how active each frequency is (y axis) over a number of time-steps (x axis). **Figure 1**: Spectrogram of an audio recording, where the color shows the degree to which different frequencies are present (loud) in the audio at different points in time. Green squares means a certain frequency is more active or more present in the audio clip (louder); blue squares denote less active frequencies. The dimension of the output spectrogram depends upon the hyperparameters of the spectrogram software and the length of the input. In this notebook, we will be working with 10 second audio clips as the "standard length" for our training examples. The number of timesteps of the spectrogram will be 5511. You'll see later that the spectrogram will be the input $x$ into the network, and so $T_x = 5511$.
###Code
_, data = wavfile.read("audio_examples/example_train.wav")
print("Time steps in audio recording before spectrogram", data[:,0].shape)
print("Time steps in input after spectrogram", x.shape)
###Output
_____no_output_____
###Markdown
Now, you can define:
###Code
Tx = 5511 # The number of time steps input to the model from the spectrogram
n_freq = 101 # Number of frequencies input to the model at each time step of the spectrogram
###Output
_____no_output_____
###Markdown
Note that even with 10 seconds being our default training example length, 10 seconds of time can be discretized to different numbers of values. You've seen 441000 (raw audio) and 5511 (spectrogram). In the former case, each step represents $10/441000 \approx 0.000023$ seconds. In the second case, each step represents $10/5511 \approx 0.0018$ seconds. For the 10sec of audio, the key values you will see in this assignment are:- $441000$ (raw audio)- $5511 = T_x$ (spectrogram output, and dimension of input to the neural network). - $10000$ (used by the `pydub` module to synthesize audio) - $1375 = T_y$ (the number of steps in the output of the GRU you'll build). Note that each of these representations corresponds to exactly 10 seconds of time. It's just that they are discretizing them to different degrees. All of these are hyperparameters and can be changed (except the 441000, which is a function of the microphone). We have chosen values that are within the standard ranges used for speech systems. Consider the $T_y = 1375$ number above. This means that for the output of the model, we discretize the 10s into 1375 time-intervals (each one of length $10/1375 \approx 0.0072$s) and try to predict for each of these intervals whether someone recently finished saying "activate." Consider also the 10000 number above. This corresponds to discretizing the 10sec clip into 10/10000 = 0.001 second intervals. 0.001 seconds is also called 1 millisecond, or 1ms. So when we say we are discretizing according to 1ms intervals, it means we are using 10,000 steps.
###Code
Ty = 1375 # The number of time steps in the output of our model
###Output
_____no_output_____
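###Markdown
 As a quick sanity check on the discretizations above (a sketch, not part of the original assignment), the helper below converts a position on pydub's 10,000-step (1ms) scale into the nearest spectrogram and label indices.
###Code
def ms_to_indices(position_ms, clip_ms=10000):
    """Map a position on pydub's 1ms scale to the corresponding Tx (spectrogram) and Ty (label) steps."""
    x_step = int(Tx * position_ms / clip_ms)
    y_step = int(Ty * position_ms / clip_ms)
    return x_step, y_step

print(ms_to_indices(5000))  # halfway through the 10s clip -> (2755, 687)
###Output
_____no_output_____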
###Markdown
1.3 - Generating a single training exampleBecause speech data is hard to acquire and label, you will synthesize your training data using the audio clips of activates, negatives, and backgrounds. It is quite slow to record lots of 10 second audio clips with random "activates" in it. Instead, it is easier to record lots of positives and negative words, and record background noise separately (or download background noise from free online sources). To synthesize a single training example, you will:- Pick a random 10 second background audio clip- Randomly insert 0-4 audio clips of "activate" into this 10sec clip- Randomly insert 0-2 audio clips of negative words into this 10sec clipBecause you had synthesized the word "activate" into the background clip, you know exactly when in the 10sec clip the "activate" makes its appearance. You'll see later that this makes it easier to generate the labels $y^{\langle t \rangle}$ as well. You will use the pydub package to manipulate audio. Pydub converts raw audio files into lists of Pydub data structures (it is not important to know the details here). Pydub uses 1ms as the discretization interval (1ms is 1 millisecond = 1/1000 seconds) which is why a 10sec clip is always represented using 10,000 steps.
###Code
# Load audio segments using pydub
activates, negatives, backgrounds = load_raw_audio()
print("background len: " + str(len(backgrounds[0]))) # Should be 10,000, since it is a 10 sec clip
print("activate[0] len: " + str(len(activates[0]))) # Maybe around 1000, since an "activate" audio clip is usually around 1 sec (but varies a lot)
print("activate[1] len: " + str(len(activates[1]))) # Different "activate" clips can have different lengths
###Output
_____no_output_____
###Markdown
**Overlaying positive/negative words on the background**:Given a 10sec background clip and a short audio clip (positive or negative word), you need to be able to "add" or "insert" the word's short audio clip onto the background. To ensure audio segments inserted onto the background do not overlap, you will keep track of the times of previously inserted audio clips. You will be inserting multiple clips of positive/negative words onto the background, and you don't want to insert an "activate" or a random word somewhere that overlaps with another clip you had previously added. For clarity, when you insert a 1sec "activate" onto a 10sec clip of cafe noise, you end up with a 10sec clip that sounds like someone saying "activate" in a cafe, with "activate" superimposed on the background cafe noise. You do *not* end up with an 11 sec clip. You'll see later how pydub allows you to do this. **Creating the labels at the same time you overlay**:Recall also that the labels $y^{\langle t \rangle}$ represent whether or not someone has just finished saying "activate." Given a background clip, we can initialize $y^{\langle t \rangle}=0$ for all $t$, since the clip doesn't contain any "activates." When you insert or overlay an "activate" clip, you will also update labels for $y^{\langle t \rangle}$, so that 50 steps of the output now have target label 1. You will train a GRU to detect when someone has *finished* saying "activate". For example, suppose the synthesized "activate" clip ends at the 5sec mark in the 10sec audio---exactly halfway into the clip. Recall that $T_y = 1375$, so timestep $687 = $ `int(1375*0.5)` corresponds to the moment at 5sec into the audio. So, you will set $y^{\langle 688 \rangle} = 1$. Further, you would be quite satisfied if the GRU detects "activate" anywhere within a short time-interval after this moment, so we actually set 50 consecutive values of the label $y^{\langle t \rangle}$ to 1. Specifically, we have $y^{\langle 688 \rangle} = y^{\langle 689 \rangle} = \cdots = y^{\langle 737 \rangle} = 1$. This is another reason for synthesizing the training data: It's relatively straightforward to generate these labels $y^{\langle t \rangle}$ as described above. In contrast, if you have 10sec of audio recorded on a microphone, it's quite time consuming for a person to listen to it and mark manually exactly when "activate" finished. Here's a figure illustrating the labels $y^{\langle t \rangle}$, for a clip into which we have inserted "activate", "innocent", "activate", "baby." Note that the positive labels "1" are associated only with the positive words. **Figure 2** To implement the training set synthesis process, you will use the following helper functions. All of these functions will use a 1ms discretization interval, so the 10sec of audio is always discretized into 10,000 steps. 1. `get_random_time_segment(segment_ms)` gets a random time segment in our background audio2. `is_overlapping(segment_time, existing_segments)` checks if a time segment overlaps with existing segments3. `insert_audio_clip(background, audio_clip, existing_times)` inserts an audio segment at a random time in our background audio using `get_random_time_segment` and `is_overlapping`4. `insert_ones(y, segment_end_ms)` inserts 1's into our label vector y after the word "activate" The function `get_random_time_segment(segment_ms)` returns a random time segment onto which we can insert an audio clip of duration `segment_ms`. Read through the code to make sure you understand what it is doing.
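As a small worked example of the label arithmetic above (illustrative only; `Ty` is the value defined earlier), this is how a 5-second "activate" end time maps to output indices 688 through 737:

```python
# Illustrative: map an "activate" end time (in seconds) to the 50 output labels set to 1.
Ty = 1375                                                   # as defined above
segment_end_seconds = 5.0                                   # "activate" ends at the 5 s mark
segment_end_y = int(segment_end_seconds * Ty / 10.0)        # 687
label_indices = range(segment_end_y + 1, segment_end_y + 51)
print(segment_end_y, min(label_indices), max(label_indices))   # 687 688 737
```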
###Code
def get_random_time_segment(segment_ms):
"""
Gets a random time segment of duration segment_ms in a 10,000 ms audio clip.
Arguments:
segment_ms -- the duration of the audio clip in ms ("ms" stands for "milliseconds")
Returns:
segment_time -- a tuple of (segment_start, segment_end) in ms
"""
segment_start = np.random.randint(low=0, high=10000-segment_ms) # Make sure segment doesn't run past the 10sec background
segment_end = segment_start + segment_ms - 1
return (segment_start, segment_end)
###Output
_____no_output_____
###Markdown
Next, suppose you have inserted audio clips at segments (1000,1800) and (3400,4500). I.e., the first segment starts at step 1000, and ends at step 1800. Now, if we are considering inserting a new audio clip at (3000,3600) does this overlap with one of the previously inserted segments? In this case, (3000,3600) and (3400,4500) overlap, so we should decide against inserting a clip here. For the purpose of this function, define (100,200) and (200,250) to be overlapping, since they overlap at timestep 200. However, (100,199) and (200,250) are non-overlapping. **Exercise**: Implement `is_overlapping(segment_time, existing_segments)` to check if a new time segment overlaps with any of the previous segments. You will need to carry out 2 steps:1. Create a "False" flag, that you will later set to "True" if you find that there is an overlap.2. Loop over the previous_segments' start and end times. Compare these times to the segment's start and end times. If there is an overlap, set the flag defined in (1) as True. You can use:```pythonfor ....: if ... <= ... and ... >= ...: ...```Hint: There is overlap if the segment starts before the previous segment ends, and the segment ends after the previous segment starts.
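Before filling in the graded cell, it may help to see one possible way of expressing the overlap test as a standalone helper. The name `segments_overlap` is illustrative; this is a sketch of the idea, not the graded function itself.

```python
def segments_overlap(segment_time, previous_segments):
    """Illustrative sketch: True if (start, end) overlaps any segment in previous_segments.
    Endpoints are inclusive, so (100, 200) and (200, 250) count as overlapping."""
    segment_start, segment_end = segment_time
    for previous_start, previous_end in previous_segments:
        if segment_start <= previous_end and segment_end >= previous_start:
            return True
    return False

print(segments_overlap((950, 1430), [(2000, 2550), (260, 949)]))                  # False
print(segments_overlap((2305, 2950), [(824, 1532), (1900, 2305), (3424, 3656)]))  # True
```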
###Code
# GRADED FUNCTION: is_overlapping
def is_overlapping(segment_time, previous_segments):
"""
Checks if the time of a segment overlaps with the times of existing segments.
Arguments:
segment_time -- a tuple of (segment_start, segment_end) for the new segment
previous_segments -- a list of tuples of (segment_start, segment_end) for the existing segments
Returns:
True if the time segment overlaps with any of the existing segments, False otherwise
"""
segment_start, segment_end = segment_time
### START CODE HERE ### (≈ 4 line)
# Step 1: Initialize overlap as a "False" flag. (≈ 1 line)
overlap = None
# Step 2: loop over the previous_segments start and end times.
# Compare start/end times and set the flag to True if there is an overlap (≈ 3 lines)
for previous_start, previous_end in previous_segments:
if None:
None
### END CODE HERE ###
return overlap
overlap1 = is_overlapping((950, 1430), [(2000, 2550), (260, 949)])
overlap2 = is_overlapping((2305, 2950), [(824, 1532), (1900, 2305), (3424, 3656)])
print("Overlap 1 = ", overlap1)
print("Overlap 2 = ", overlap2)
###Output
_____no_output_____
###Markdown
**Expected Output**: **Overlap 1** False **Overlap 2** True Now, let's use the previous helper functions to insert a new audio clip onto the 10sec background at a random time, but making sure that any newly inserted segment doesn't overlap with the previous segments. **Exercise**: Implement `insert_audio_clip()` to overlay an audio clip onto the background 10sec clip. You will need to carry out 4 steps:1. Get a random time segment of the right duration in ms.2. Make sure that the time segment does not overlap with any of the previous time segments. If it is overlapping, then go back to step 1 and pick a new time segment.3. Add the new time segment to the list of existing time segments, so as to keep track of all the segments you've inserted. 4. Overlay the audio clip over the background using pydub. We have implemented this for you.
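Here is a sketch of steps 1-3 only (pick, retry while overlapping, record), assuming a helper like the illustrative `segments_overlap` above and the `get_random_time_segment` function already defined; the graded cell below additionally performs the pydub overlay in step 4.

```python
# Illustrative sketch only (names are for explanation, not the graded code).
def pick_non_overlapping_segment(segment_ms, previous_segments):
    segment_time = get_random_time_segment(segment_ms)        # Step 1: random candidate
    while segments_overlap(segment_time, previous_segments):  # Step 2: retry until it fits
        segment_time = get_random_time_segment(segment_ms)
    previous_segments.append(segment_time)                    # Step 3: remember it
    return segment_time
```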
###Code
# GRADED FUNCTION: insert_audio_clip
def insert_audio_clip(background, audio_clip, previous_segments):
"""
Insert a new audio segment over the background noise at a random time step, ensuring that the
audio segment does not overlap with existing segments.
Arguments:
background -- a 10 second background audio recording.
audio_clip -- the audio clip to be inserted/overlaid.
previous_segments -- times where audio segments have already been placed
Returns:
new_background -- the updated background audio
"""
# Get the duration of the audio clip in ms
segment_ms = len(audio_clip)
### START CODE HERE ###
# Step 1: Use one of the helper functions to pick a random time segment onto which to insert
# the new audio clip. (≈ 1 line)
segment_time = None
# Step 2: Check if the new segment_time overlaps with one of the previous_segments. If so, keep
# picking new segment_time at random until it doesn't overlap. (≈ 2 lines)
while None:
segment_time = None
# Step 3: Add the new segment_time to the list of previous_segments (≈ 1 line)
None
### END CODE HERE ###
# Step 4: Superpose audio segment and background
new_background = background.overlay(audio_clip, position = segment_time[0])
return new_background, segment_time
np.random.seed(5)
audio_clip, segment_time = insert_audio_clip(backgrounds[0], activates[0], [(3790, 4400)])
audio_clip.export("insert_test.wav", format="wav")
print("Segment Time: ", segment_time)
IPython.display.Audio("insert_test.wav")
###Output
_____no_output_____
###Markdown
**Expected Output** **Segment Time** (2254, 3169)
###Code
# Expected audio
IPython.display.Audio("audio_examples/insert_reference.wav")
###Output
_____no_output_____
###Markdown
Finally, implement code to update the labels $y^{\langle t \rangle}$, assuming you just inserted an "activate." In the code below, `y` is a `(1,1375)` dimensional vector, since $T_y = 1375$. If the "activate" ended at time step $t$, then set $y^{\langle t+1 \rangle} = 1$ as well as for up to 49 additional consecutive values. However, make sure you don't run off the end of the array and try to update `y[0][1375]`, since the valid indices are `y[0][0]` through `y[0][1374]` because $T_y = 1375$. So if "activate" ends at step 1370, you would get only `y[0][1371] = y[0][1372] = y[0][1373] = y[0][1374] = 1`**Exercise**: Implement `insert_ones()`. You can use a for loop. (If you are an expert in python's slice operations, feel free also to use slicing to vectorize this.) If a segment ends at `segment_end_ms` (using a 10000 step discretization), to convert it to the indexing for the outputs $y$ (using a $1375$ step discretization), we will use this formula: ``` segment_end_y = int(segment_end_ms * Ty / 10000.0)```
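If you prefer the slicing route mentioned above, a vectorized alternative could look like the sketch below (illustrative, not the graded function); it clips the window at `Ty` so it never writes past index 1374.

```python
import numpy as np

def insert_ones_vectorized(y, segment_end_ms, Ty=1375):
    """Illustrative sketch: set the 50 labels after segment_end_ms to 1 using slicing."""
    segment_end_y = int(segment_end_ms * Ty / 10000.0)
    start = segment_end_y + 1
    end = min(start + 50, Ty)            # never index beyond y[0][Ty - 1]
    y[0, start:end] = 1.0
    return y

arr = insert_ones_vectorized(np.zeros((1, 1375)), 9700)
print(arr[0][1333], arr[0][1334], arr[0][1374])   # 0.0 1.0 1.0 (window clipped at the end)
```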
###Code
# GRADED FUNCTION: insert_ones
def insert_ones(y, segment_end_ms):
"""
Update the label vector y. The labels of the 50 output steps after the end of the segment should be set to 1.
Arguments:
y -- numpy array of shape (1, Ty), the labels of the training example
segment_end_ms -- the end time of the segment in ms
Returns:
y -- updated labels
"""
# duration of the background (in terms of spectrogram time-steps)
segment_end_y = int(segment_end_ms * Ty / 10000.0)
# Add 1 to the correct index in the background label (y)
### START CODE HERE ### (≈ 3 lines)
for i in None:
if None:
None
### END CODE HERE ###
return y
arr1 = insert_ones(np.zeros((1, Ty)), 9700)
plt.plot(insert_ones(arr1, 4251)[0,:])
print("sanity checks:", arr1[0][1333], arr1[0][634], arr1[0][635])
###Output
_____no_output_____
###Markdown
**Expected Output** **sanity checks**: 0.0 1.0 0.0 Finally, you can use `insert_audio_clip` and `insert_ones` to create a new training example.**Exercise**: Implement `create_training_example()`. You will need to carry out the following steps:1. Initialize the label vector $y$ as a numpy array of zeros and shape $(1, T_y)$.2. Initialize the set of existing segments to an empty list.3. Randomly select 0 to 4 "activate" audio clips, and insert them onto the 10sec clip. Also insert labels at the correct position in the label vector $y$.4. Randomly select 0 to 2 negative audio clips, and insert them into the 10sec clip.
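One possible arrangement of the four steps is sketched below; it assumes the helpers above are completed and that `background`, `random_activates` and `random_negatives` are defined as in the graded cell, which also attenuates the background, exports the wav file and computes the spectrogram.

```python
# Illustrative sketch of steps 1-4 only (not the full graded function).
y = np.zeros((1, Ty))              # Step 1: every label starts at 0
previous_segments = []             # Step 2: nothing inserted yet

for random_activate in random_activates:                    # Step 3: positives + labels
    background, segment_time = insert_audio_clip(background, random_activate, previous_segments)
    segment_start, segment_end = segment_time
    y = insert_ones(y, segment_end_ms=segment_end)

for random_negative in random_negatives:                    # Step 4: negatives, no labels
    background, _ = insert_audio_clip(background, random_negative, previous_segments)
```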
###Code
# GRADED FUNCTION: create_training_example
def create_training_example(background, activates, negatives):
"""
Creates a training example with a given background, activates, and negatives.
Arguments:
background -- a 10 second background audio recording
activates -- a list of audio segments of the word "activate"
negatives -- a list of audio segments of random words that are not "activate"
Returns:
x -- the spectrogram of the training example
y -- the label at each time step of the spectrogram
"""
# Set the random seed
np.random.seed(18)
# Make background quieter
background = background - 20
### START CODE HERE ###
# Step 1: Initialize y (label vector) of zeros (≈ 1 line)
y = None
# Step 2: Initialize segment times as empty list (≈ 1 line)
previous_segments = None
### END CODE HERE ###
# Select 0-4 random "activate" audio clips from the entire list of "activates" recordings
number_of_activates = np.random.randint(0, 4)
random_indices = np.random.randint(len(activates), size=number_of_activates)
random_activates = [activates[i] for i in random_indices]
### START CODE HERE ### (≈ 3 lines)
# Step 3: Loop over randomly selected "activate" clips and insert in background
for random_activate in random_activates:
# Insert the audio clip on the background
background, segment_time = None
# Retrieve segment_start and segment_end from segment_time
segment_start, segment_end = None
# Insert labels in "y"
y = None
### END CODE HERE ###
# Select 0-2 random negatives audio recordings from the entire list of "negatives" recordings
number_of_negatives = np.random.randint(0, 2)
random_indices = np.random.randint(len(negatives), size=number_of_negatives)
random_negatives = [negatives[i] for i in random_indices]
### START CODE HERE ### (≈ 2 lines)
# Step 4: Loop over randomly selected negative clips and insert in background
for random_negative in random_negatives:
# Insert the audio clip on the background
background, _ = None
### END CODE HERE ###
# Standardize the volume of the audio clip
background = match_target_amplitude(background, -20.0)
# Export new training example
file_handle = background.export("train" + ".wav", format="wav")
print("File (train.wav) was saved in your directory.")
# Get and plot spectrogram of the new recording (background with superposition of positive and negatives)
x = graph_spectrogram("train.wav")
return x, y
x, y = create_training_example(backgrounds[0], activates, negatives)
###Output
_____no_output_____
###Markdown
**Expected Output** Now you can listen to the training example you created and compare it to the spectrogram generated above.
###Code
IPython.display.Audio("train.wav")
###Output
_____no_output_____
###Markdown
**Expected Output**
###Code
IPython.display.Audio("audio_examples/train_reference.wav")
###Output
_____no_output_____
###Markdown
Finally, you can plot the associated labels for the generated training example.
###Code
plt.plot(y[0])
###Output
_____no_output_____
###Markdown
**Expected Output** 1.4 - Full training setYou've now implemented the code needed to generate a single training example. We used this process to generate a large training set. To save time, we've already generated a set of training examples.
###Code
# Load preprocessed training examples
X = np.load("./XY_train/X.npy")
Y = np.load("./XY_train/Y.npy")
###Output
_____no_output_____
###Markdown
1.5 - Development setTo test our model, we recorded a development set of 25 examples. While our training data is synthesized, we want to create a development set using the same distribution as the real inputs. Thus, we recorded 25 10-second audio clips of people saying "activate" and other random words, and labeled them by hand. This follows the principle described in Course 3 that we should create the dev set to be as similar as possible to the test set distribution; that's why our dev set uses real rather than synthesized audio.
###Code
# Load preprocessed dev set examples
X_dev = np.load("./XY_dev/X_dev.npy")
Y_dev = np.load("./XY_dev/Y_dev.npy")
###Output
_____no_output_____
###Markdown
2 - ModelNow that you've built a dataset, let's write and train a trigger word detection model! The model will use 1-D convolutional layers, GRU layers, and dense layers. Let's load the packages that will allow you to use these layers in Keras. This might take a minute to load.
###Code
from keras.callbacks import ModelCheckpoint
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking, TimeDistributed, LSTM, Conv1D
from keras.layers import GRU, Bidirectional, BatchNormalization, Reshape
from keras.optimizers import Adam
###Output
_____no_output_____
###Markdown
2.1 - Build the modelHere is the architecture we will use. Take some time to look over the model and see if it makes sense. **Figure 3** One key step of this model is the 1D convolutional step (near the bottom of Figure 3). It inputs the 5511 step spectrogram, and outputs a 1375 step output, which is then further processed by multiple layers to get the final $T_y = 1375$ step output. This layer plays a role similar to the 2D convolutions you saw in Course 4, of extracting low-level features and then possibly generating an output of a smaller dimension. Computationally, the 1-D conv layer also helps speed up the model because now the GRU has to process only 1375 timesteps rather than 5511 timesteps. The two GRU layers read the sequence of inputs from left to right, then ultimately use a dense+sigmoid layer to make a prediction for $y^{\langle t \rangle}$. Because $y$ is binary valued (0 or 1), we use a sigmoid output at the last layer to estimate the chance of the output being 1, corresponding to the user having just said "activate."Note that we use a uni-directional RNN rather than a bi-directional RNN. This is really important for trigger word detection, since we want to be able to detect the trigger word almost immediately after it is said. If we used a bi-directional RNN, we would have to wait for the whole 10sec of audio to be recorded before we could tell if "activate" was said in the first second of the audio clip. Implementing the model can be done in four steps: **Step 1**: CONV layer. Use `Conv1D()` to implement this, with 196 filters, a filter size of 15 (`kernel_size=15`), and stride of 4. [[See documentation.](https://keras.io/layers/convolutional/conv1d)]**Step 2**: First GRU layer. To generate the GRU layer, use:```X = GRU(units = 128, return_sequences = True)(X)```Setting `return_sequences=True` ensures that all the GRU's hidden states are fed to the next layer. Remember to follow this with Dropout and BatchNorm layers. **Step 3**: Second GRU layer. This is similar to the previous GRU layer (remember to use `return_sequences=True`), but has an extra dropout layer. **Step 4**: Create a time-distributed dense layer as follows: ```X = TimeDistributed(Dense(1, activation = "sigmoid"))(X)```This creates a dense layer followed by a sigmoid, so that the parameters used for the dense layer are the same for every time step. [[See documentation](https://keras.io/layers/wrappers/).]**Exercise**: Implement `model()`; the architecture is presented in Figure 3.
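For reference, here is a sketch of how those four steps could be wired together with the hyperparameters stated above (196 filters, kernel size 15, stride 4, two 128-unit GRUs, dropout 0.8). It is an illustrative sketch using the layers imported earlier, not the graded cell itself.

```python
# Illustrative sketch of the architecture described above.
def model_sketch(input_shape):
    X_input = Input(shape=input_shape)

    # Step 1: CONV layer (5511 steps in, 1375 steps out)
    X = Conv1D(196, kernel_size=15, strides=4)(X_input)
    X = BatchNormalization()(X)
    X = Activation('relu')(X)
    X = Dropout(0.8)(X)

    # Step 2: first GRU layer
    X = GRU(units=128, return_sequences=True)(X)
    X = Dropout(0.8)(X)
    X = BatchNormalization()(X)

    # Step 3: second GRU layer (with an extra dropout at the end)
    X = GRU(units=128, return_sequences=True)(X)
    X = Dropout(0.8)(X)
    X = BatchNormalization()(X)
    X = Dropout(0.8)(X)

    # Step 4: time-distributed dense layer with a sigmoid
    X = TimeDistributed(Dense(1, activation="sigmoid"))(X)

    return Model(inputs=X_input, outputs=X)
```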
###Code
# GRADED FUNCTION: model
def model(input_shape):
"""
Function creating the model's graph in Keras.
Argument:
input_shape -- shape of the model's input data (using Keras conventions)
Returns:
model -- Keras model instance
"""
X_input = Input(shape = input_shape)
### START CODE HERE ###
# Step 1: CONV layer (≈4 lines)
X = None # CONV1D
X = None # Batch normalization
X = None # ReLu activation
X = None # dropout (use 0.8)
# Step 2: First GRU Layer (≈4 lines)
X = None # GRU (use 128 units and return the sequences)
X = None # dropout (use 0.8)
X = None # Batch normalization
# Step 3: Second GRU Layer (≈4 lines)
X = None # GRU (use 128 units and return the sequences)
X = None # dropout (use 0.8)
X = None # Batch normalization
X = None # dropout (use 0.8)
# Step 4: Time-distributed dense layer (≈1 line)
X = None # time distributed (sigmoid)
### END CODE HERE ###
model = Model(inputs = X_input, outputs = X)
return model
model = model(input_shape = (Tx, n_freq))
###Output
_____no_output_____
###Markdown
Let's print the model summary to keep track of the shapes.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
**Expected Output**: **Total params** 522,561 **Trainable params** 521,657 **Non-trainable params** 904 The output of the network is of shape (None, 1375, 1) while the input is (None, 5511, 101). The Conv1D has reduced the number of steps from 5511 (the spectrogram length) to 1375. 2.2 - Fit the model Trigger word detection takes a long time to train. To save time, we've already trained a model for about 3 hours on a GPU using the architecture you built above, and a large training set of about 4000 examples. Let's load the model.
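The 5511 to 1375 reduction follows directly from the Conv1D hyperparameters; as a quick check of the "valid" convolution length formula:

```python
# Output length of a 'valid' 1-D convolution: floor((n - kernel_size) / stride) + 1
n, kernel_size, stride = 5511, 15, 4
print((n - kernel_size) // stride + 1)   # 1375
```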
###Code
model = load_model('./models/tr_model.h5')
###Output
_____no_output_____
###Markdown
You can train the model further, using the Adam optimizer and binary cross entropy loss, as follows. This will run quickly because we are training just for one epoch and with a small training set of 26 examples.
###Code
opt = Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, decay=0.01)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=["accuracy"])
model.fit(X, Y, batch_size = 5, epochs=1)
###Output
_____no_output_____
###Markdown
2.3 - Test the modelFinally, let's see how your model performs on the dev set.
###Code
loss, acc = model.evaluate(X_dev, Y_dev)
print("Dev set accuracy = ", acc)
###Output
_____no_output_____
###Markdown
This looks pretty good! However, accuracy isn't a great metric for this task, since the labels are heavily skewed to 0's, so a neural network that just outputs 0's would get slightly over 90% accuracy. We could define more useful metrics such as F1 score or Precision/Recall. But let's not bother with that here, and instead just empirically see how the model does. 3 - Making PredictionsNow that you have built a working model for trigger word detection, let's use it to make predictions. This code snippet runs audio (saved in a wav file) through the network. <!--can use your model to make predictions on new audio clips.You will first need to compute the predictions for an input audio clip.**Exercise**: Implement predict_activates(). You will need to do the following:1. Compute the spectrogram for the audio file2. Use `np.swap` and `np.expand_dims` to reshape your input to size (1, Tx, n_freqs)5. Use forward propagation on your model to compute the prediction at each output step!-->
###Code
def detect_triggerword(filename):
plt.subplot(2, 1, 1)
x = graph_spectrogram(filename)
# the spectrogram outputs (freqs, Tx) and we want (Tx, freqs) to input into the model
x = x.swapaxes(0,1)
x = np.expand_dims(x, axis=0)
predictions = model.predict(x)
plt.subplot(2, 1, 2)
plt.plot(predictions[0,:,0])
plt.ylabel('probability')
plt.show()
return predictions
###Output
_____no_output_____
###Markdown
Once you've estimated the probability of having detected the word "activate" at each output step, you can trigger a "chiming" sound to play when the probability is above a certain threshold. Further, $y^{\langle t \rangle}$ might be near 1 for many values in a row after "activate" is said, yet we want to chime only once. So we will insert a chime sound at most once every 75 output steps. This will help prevent us from inserting two chimes for a single instance of "activate". (This plays a role similar to non-max suppression from computer vision.) <!-- **Exercise**: Implement chime_on_activate(). You will need to do the following:1. Loop over the predicted probabilities at each output step2. When the prediction is larger than the threshold and more than 75 consecutive time steps have passed, insert a "chime" sound onto the original audio clipUse this code to convert from the 1,375 step discretization to the 10,000 step discretization and insert a "chime" using pydub:` audio_clip = audio_clip.overlay(chime, position = ((i / Ty) * audio.duration_seconds)*1000)`!-->
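As a small worked example of the step-to-milliseconds conversion used below (illustrative numbers only):

```python
# Illustrative: pydub overlay position (in ms) for output step i of a 10 s clip.
Ty, duration_seconds = 1375, 10.0
i = 688                                        # roughly the 5 second mark
print((i / Ty) * duration_seconds * 1000)      # ~5003.6 ms
```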
###Code
chime_file = "audio_examples/chime.wav"
def chime_on_activate(filename, predictions, threshold):
audio_clip = AudioSegment.from_wav(filename)
chime = AudioSegment.from_wav(chime_file)
Ty = predictions.shape[1]
# Step 1: Initialize the number of consecutive output steps to 0
consecutive_timesteps = 0
# Step 2: Loop over the output steps in the y
for i in range(Ty):
# Step 3: Increment consecutive output steps
consecutive_timesteps += 1
# Step 4: If prediction is higher than the threshold and more than 75 consecutive output steps have passed
if predictions[0,i,0] > threshold and consecutive_timesteps > 75:
# Step 5: Superpose audio and background using pydub
audio_clip = audio_clip.overlay(chime, position = ((i / Ty) * audio_clip.duration_seconds)*1000)
# Step 6: Reset consecutive output steps to 0
consecutive_timesteps = 0
audio_clip.export("chime_output.wav", format='wav')
###Output
_____no_output_____
###Markdown
3.3 - Test on dev examples Let's explore how our model performs on two unseen audio clips from the development set. Lets first listen to the two dev set clips.
###Code
IPython.display.Audio("./raw_data/dev/1.wav")
IPython.display.Audio("./raw_data/dev/2.wav")
###Output
_____no_output_____
###Markdown
Now lets run the model on these audio clips and see if it adds a chime after "activate"!
###Code
filename = "./raw_data/dev/1.wav"
prediction = detect_triggerword(filename)
chime_on_activate(filename, prediction, 0.5)
IPython.display.Audio("./chime_output.wav")
filename = "./raw_data/dev/2.wav"
prediction = detect_triggerword(filename)
chime_on_activate(filename, prediction, 0.5)
IPython.display.Audio("./chime_output.wav")
###Output
_____no_output_____
###Markdown
Congratulations You've come to the end of this assignment! Here's what you should remember:- Data synthesis is an effective way to create a large training set for speech problems, specifically trigger word detection. - Using a spectrogram and optionally a 1D conv layer is a common pre-processing step prior to passing audio data to an RNN, GRU or LSTM.- An end-to-end deep learning approach can be used to build a very effective trigger word detection system. *Congratulations* on finishing the final assignment! Thank you for sticking with us through the end and for all the hard work you've put into learning deep learning. We hope you have enjoyed the course! 4 - Try your own example! (OPTIONAL/UNGRADED)In this optional and ungraded portion of this notebook, you can try your model on your own audio clips! Record a 10 second audio clip of yourself saying the word "activate" and other random words, and upload it to the Coursera hub as `myaudio.wav`. Be sure to upload the audio as a wav file. If your audio is recorded in a different format (such as mp3) there is free software that you can find online for converting it to wav. If your audio recording is not 10 seconds, the code below will either trim or pad it as needed to make it 10 seconds.
###Code
# Preprocess the audio to the correct format
def preprocess_audio(filename):
# Trim or pad audio segment to 10000ms
padding = AudioSegment.silent(duration=10000)
segment = AudioSegment.from_wav(filename)[:10000]
segment = padding.overlay(segment)
# Set frame rate to 44100
segment = segment.set_frame_rate(44100)
# Export as wav
segment.export(filename, format='wav')
###Output
_____no_output_____
###Markdown
Once you've uploaded your audio file to Coursera, put the path to your file in the variable below.
###Code
your_filename = "myaudio.wav"
preprocess_audio(your_filename)
IPython.display.Audio(your_filename) # listen to the audio you uploaded
###Output
_____no_output_____
###Markdown
Finally, use the model to predict when you say activate in the 10 second audio clip, and trigger a chime.
###Code
prediction = detect_triggerword(your_filename)
chime_on_activate(your_filename, prediction, 0.5)
IPython.display.Audio("./chime_output.wav")
###Output
_____no_output_____ |
course_4_Convolutional_Neural_Networks/Car detection for Autonomous Driving/Autonomous+driving+application+-+Car+detection+-+v1.ipynb | ###Markdown
Autonomous driving - Car detectionWelcome to your week 3 programming assignment. You will learn about object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers: Redmon et al., 2016 (https://arxiv.org/abs/1506.02640) and Redmon and Farhadi, 2016 (https://arxiv.org/abs/1612.08242). **You will learn to**:- Use object detection on a car detection dataset- Deal with bounding boxesRun the following cell to load the packages and dependencies that are going to be useful for your journey!
###Code
import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model
from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body
%matplotlib inline
###Output
_____no_output_____
###Markdown
**Important Note**: As you can see, we import Keras's backend as K. This means that to use a Keras function in this notebook, you will need to write: `K.function(...)`. 1 - Problem StatementYou are working on a self-driving car. As a critical component of this project, you'd like to first build a car detection system. To collect data, you've mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds while you drive around. Pictures taken from a car-mounted camera while driving around Silicon Valley. We would like to especially thank [drive.ai](https://www.drive.ai/) for providing this dataset! Drive.ai is a company building the brains of self-driving vehicles.You've gathered all these images into a folder and have labelled them by drawing bounding boxes around every car you found. Here's an example of what your bounding boxes look like. **Figure 1** : **Definition of a box** If you have 80 classes that you want YOLO to recognize, you can represent the class label $c$ either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers) one component of which is 1 and the rest of which are 0. The video lectures had used the latter representation; in this notebook, we will use both representations, depending on which is more convenient for a particular step. In this exercise, you will learn how YOLO works, then apply it to car detection. Because the YOLO model is very computationally expensive to train, we will load pre-trained weights for you to use. 2 - YOLO YOLO ("you only look once") is a popular algorithm because it achieves high accuracy while also being able to run in real-time. This algorithm "only looks once" at the image in the sense that it requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs recognized objects together with the bounding boxes. 2.1 - Model detailsFirst things to know:- The **input** is a batch of images of shape (m, 608, 608, 3)- The **output** is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers $(p_c, b_x, b_y, b_h, b_w, c)$ as explained above. If you expand $c$ into an 80-dimensional vector, each bounding box is then represented by 85 numbers. We will use 5 anchor boxes. So you can think of the YOLO architecture as the following: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85).Let's look in greater detail at what this encoding represents. **Figure 2** : **Encoding architecture for YOLO** If the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object. Since we are using 5 anchor boxes, each of the 19x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height.For simplicity, we will flatten the last two dimensions of the shape (19, 19, 5, 85) encoding. So the output of the Deep CNN is (19, 19, 425). **Figure 3** : **Flattening the last two dimensions** Now, for each box (of each cell) we will compute the following elementwise product and extract a probability that the box contains a certain class. **Figure 4** : **Find the class detected by each box** Here's one way to visualize what YOLO is predicting on an image:- For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across both the 5 anchor boxes and across different classes).
- Color that grid cell according to what object that grid cell considers the most likely.Doing this results in this picture: **Figure 5** : Each of the 19x19 grid cells colored according to which class has the largest predicted probability in that cell. Note that this visualization isn't a core part of the YOLO algorithm itself for making predictions; it's just a nice way of visualizing an intermediate result of the algorithm. Another way to visualize YOLO's output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this: **Figure 6** : Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes. In the figure above, we plotted only boxes that the model had assigned a high probability to, but this is still too many boxes. You'd like to filter the algorithm's output down to a much smaller number of detected objects. To do so, you'll use non-max suppression. Specifically, you'll carry out these steps: - Get rid of boxes with a low score (meaning, the box is not very confident about detecting a class)- Select only one box when several boxes overlap with each other and detect the same object. 2.2 - Filtering with a threshold on class scoresYou are going to apply a first filter by thresholding. You would like to get rid of any box for which the class "score" is less than a chosen threshold. The model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. It'll be convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables: - `box_confidence`: tensor of shape $(19 \times 19, 5, 1)$ containing $p_c$ (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells.- `boxes`: tensor of shape $(19 \times 19, 5, 4)$ containing $(b_x, b_y, b_h, b_w)$ for each of the 5 boxes per cell.- `box_class_probs`: tensor of shape $(19 \times 19, 5, 80)$ containing the detection probabilities $(c_1, c_2, ... c_{80})$ for each of the 80 classes for each of the 5 boxes per cell.**Exercise**: Implement `yolo_filter_boxes()`.1. Compute box scores by doing the elementwise product as described in Figure 4. The following code may help you choose the right operator: ```pythona = np.random.randn(19*19, 5, 1)b = np.random.randn(19*19, 5, 80)c = a * b shape of c will be (19*19, 5, 80)```2. For each box, find: - the index of the class with the maximum box score ([Hint](https://keras.io/backend/argmax)) (Be careful with what axis you choose; consider using axis=-1) - the corresponding box score ([Hint](https://keras.io/backend/max)) (Be careful with what axis you choose; consider using axis=-1)3. Create a mask by using a threshold. As a reminder: `([0.9, 0.3, 0.4, 0.5, 0.1] < 0.4)` returns: `[False, True, False, False, True]`. The mask should be True for the boxes you want to keep. 4. Use TensorFlow to apply the mask to box_class_scores, boxes and box_classes to filter out the boxes we don't want. You should be left with just the subset of boxes you want to keep. ([Hint](https://www.tensorflow.org/api_docs/python/tf/boolean_mask))Reminder: to call a Keras function, you should use `K.function(...)`.
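As a quick shape check with random numbers (illustrative only), the flat (19, 19, 425) encoding can be viewed as (19, 19, 5, 85), and the elementwise product in step 1 broadcasts to one score per class per anchor box:

```python
import numpy as np

flat = np.random.randn(19, 19, 425)              # CNN output after flattening
encoded = flat.reshape(19, 19, 5, 85)            # 5 boxes x (p_c, b_x, b_y, b_h, b_w, 80 classes)

box_confidence = np.random.randn(19, 19, 5, 1)
box_class_probs = np.random.randn(19, 19, 5, 80)
box_scores = box_confidence * box_class_probs    # broadcasts to (19, 19, 5, 80)
print(encoded.shape, box_scores.shape)
```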
###Code
# GRADED FUNCTION: yolo_filter_boxes
def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):
"""Filters YOLO boxes by thresholding on object and class confidence.
Arguments:
box_confidence -- tensor of shape (19, 19, 5, 1)
boxes -- tensor of shape (19, 19, 5, 4)
box_class_probs -- tensor of shape (19, 19, 5, 80)
threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
Returns:
scores -- tensor of shape (None,), containing the class probability score for selected boxes
boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes
classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes
Note: "None" is here because you don't know the exact number of selected boxes, as it depends on the threshold.
For example, the actual output size of scores would be (10,) if there are 10 boxes.
"""
# Step 1: Compute box scores
### START CODE HERE ### (≈ 1 line)
box_scores = box_confidence * box_class_probs
### END CODE HERE ###
# Step 2: Find the box_classes thanks to the max box_scores,
# keep track of the corresponding score
### START CODE HERE ### (≈ 2 lines)
box_classes = K.argmax(box_scores, axis = -1)
box_class_scores = K.max(box_scores, axis = -1, keepdims = False)
### END CODE HERE ###
# Step 3: Create a filtering mask based on "box_class_scores" by using "threshold".
# The mask should have the same dimension as box_class_scores, and be True for the
# boxes you want to keep (with probability >= threshold)
### START CODE HERE ### (≈ 1 line)
filtering_mask = (box_class_scores >= threshold)
### END CODE HERE ###
# Step 4: Apply the mask to scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = tf.boolean_mask(box_class_scores, filtering_mask)
boxes = tf.boolean_mask(boxes, filtering_mask)
classes = tf.boolean_mask(box_classes, filtering_mask)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_a:
box_confidence = tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = 0.5)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.shape))
print("boxes.shape = " + str(boxes.shape))
print("classes.shape = " + str(classes.shape))
###Output
scores[2] = 10.7506
boxes[2] = [ 8.42653275 3.27136683 -0.5313437 -4.94137383]
classes[2] = 7
scores.shape = (?,)
boxes.shape = (?, 4)
classes.shape = (?,)
###Markdown
**Expected Output**: **scores[2]** 10.7506 **boxes[2]** [ 8.42653275 3.27136683 -0.5313437 -4.94137383] **classes[2]** 7 **scores.shape** (?,) **boxes.shape** (?, 4) **classes.shape** (?,) 2.3 - Non-max suppression Even after filtering by thresholding over the class scores, you still end up with a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS). **Figure 7** : In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probability) one of the 3 boxes. Non-max suppression uses the very important function called **"Intersection over Union"**, or IoU. **Figure 8** : Definition of "Intersection over Union". **Exercise**: Implement iou(). Some hints:- In this exercise only, we define a box using its two corners (upper left and lower right): (x1, y1, x2, y2) rather than the midpoint and height/width.- To calculate the area of a rectangle you need to multiply its height (y2 - y1) by its width (x2 - x1)- You'll also need to find the coordinates (xi1, yi1, xi2, yi2) of the intersection of two boxes. Remember that: - xi1 = maximum of the x1 coordinates of the two boxes - yi1 = maximum of the y1 coordinates of the two boxes - xi2 = minimum of the x2 coordinates of the two boxes - yi2 = minimum of the y2 coordinates of the two boxes In this code, we use the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) the lower-right corner.
###Code
# GRADED FUNCTION: iou
def iou(box1, box2):
"""Implement the intersection over union (IoU) between box1 and box2
Arguments:
box1 -- first box, list object with coordinates (x1, y1, x2, y2)
box2 -- second box, list object with coordinates (x1, y1, x2, y2)
"""
# Calculate the (y1, x1, y2, x2) coordinates of the intersection of box1 and box2. Calculate its Area.
### START CODE HERE ### (≈ 5 lines)
xi1 = max(box1[0], box2[0])
yi1 = max(box1[1], box2[1])
xi2 = min(box1[2], box2[2])
yi2 = min(box1[3], box2[3])
inter_area = (yi2 - yi1) * (xi2 - xi1)
### END CODE HERE ###
# Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)
### START CODE HERE ### (≈ 3 lines)
box1_area = (box1[3] - box1[1]) * (box1[2] - box1[0])
box2_area = (box2[3] - box2[1]) * (box2[2] - box2[0])
union_area = box1_area + box2_area - inter_area
### END CODE HERE ###
# compute the IoU
### START CODE HERE ### (≈ 1 line)
iou = inter_area / union_area
### END CODE HERE ###
return iou
box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou = " + str(iou(box1, box2)))
###Output
iou = 0.14285714285714285
###Markdown
**Expected Output**: **iou = ** 0.14285714285714285 You are now ready to implement non-max suppression. The key steps are: 1. Select the box that has the highest score.2. Compute its overlap with all other boxes, and remove boxes that overlap it more than `iou_threshold`.3. Go back to step 1 and iterate until there are no more boxes with a lower score than the currently selected box.This will remove all boxes that have a large overlap with the selected boxes. Only the "best" boxes remain.**Exercise**: Implement yolo_non_max_suppression() using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so you don't actually need to use your `iou()` implementation):- [tf.image.non_max_suppression()](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression)- [K.gather()](https://www.tensorflow.org/api_docs/python/tf/gather)
###Code
# GRADED FUNCTION: yolo_non_max_suppression
def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
"""
Applies Non-max suppression (NMS) to set of boxes
Arguments:
scores -- tensor of shape (None,), output of yolo_filter_boxes()
boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)
classes -- tensor of shape (None,), output of yolo_filter_boxes()
max_boxes -- integer, maximum number of predicted boxes you'd like
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (, None), predicted score for each box
boxes -- tensor of shape (4, None), predicted box coordinates
classes -- tensor of shape (, None), predicted class for each box
Note: The "None" dimension of the output tensors obviously has to be less than max_boxes. Note also that this
function will transpose the shapes of scores, boxes, classes. This is done for convenience.
"""
max_boxes_tensor = K.variable(max_boxes, dtype='int32') # tensor to be used in tf.image.non_max_suppression()
K.get_session().run(tf.variables_initializer([max_boxes_tensor])) # initialize variable max_boxes_tensor
# Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep
### START CODE HERE ### (≈ 1 line)
nms_indices = tf.image.non_max_suppression( boxes = boxes,
scores = scores,
max_output_size = max_boxes_tensor,
iou_threshold = iou_threshold )
### END CODE HERE ###
# Use K.gather() to select only nms_indices from scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = K.gather(scores, nms_indices)
boxes = K.gather(boxes, nms_indices)
classes = K.gather(classes, nms_indices)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)
classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
###Output
scores[2] = 6.9384
boxes[2] = [-5.299932 3.13798141 4.45036697 0.95942086]
classes[2] = -2.24527
scores.shape = (10,)
boxes.shape = (10, 4)
classes.shape = (10,)
###Markdown
**Expected Output**: **scores[2]** 6.9384 **boxes[2]** [-5.299932 3.13798141 4.45036697 0.95942086] **classes[2]** -2.24527 **scores.shape** (10,) **boxes.shape** (10, 4) **classes.shape** (10,) 2.4 Wrapping up the filteringIt's time to implement a function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions you've just implemented. **Exercise**: Implement `yolo_eval()` which takes the output of the YOLO encoding and filters the boxes using score threshold and NMS. There's just one last implementational detail you have to know. There're a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which we have provided): ```pythonboxes = yolo_boxes_to_corners(box_xy, box_wh) ```which converts the yolo box coordinates (x,y,w,h) to box corners' coordinates (x1, y1, x2, y2) to fit the input of `yolo_filter_boxes````pythonboxes = scale_boxes(boxes, image_shape)```YOLO's network was trained to run on 608x608 images. If you are testing this data on a different size image--for example, the car detection dataset had 720x1280 images--this step rescales the boxes so that they can be plotted on top of the original 720x1280 image. Don't worry about these two functions; we'll show you where they need to be called.
###Code
# GRADED FUNCTION: yolo_eval
def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):
"""
Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.
Arguments:
yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:
box_confidence: tensor of shape (None, 19, 19, 5, 1)
box_xy: tensor of shape (None, 19, 19, 5, 2)
box_wh: tensor of shape (None, 19, 19, 5, 2)
box_class_probs: tensor of shape (None, 19, 19, 5, 80)
image_shape -- tensor of shape (2,) containing the input shape, in this notebook we use (608., 608.) (has to be float32 dtype)
max_boxes -- integer, maximum number of predicted boxes you'd like
score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (None, ), predicted score for each box
boxes -- tensor of shape (None, 4), predicted box coordinates
classes -- tensor of shape (None,), predicted class for each box
"""
### START CODE HERE ###
# Retrieve outputs of the YOLO model (≈1 line)
box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs
# Convert boxes to be ready for filtering functions
boxes = yolo_boxes_to_corners(box_xy, box_wh)
# Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (≈1 line)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, score_threshold)
# Scale boxes back to original image shape.
boxes = scale_boxes(boxes, image_shape)
# Use one of the functions you've implemented to perform Non-max suppression with a threshold of iou_threshold (≈1 line)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes, max_boxes, iou_threshold)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
scores, boxes, classes = yolo_eval(yolo_outputs)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
###Output
scores[2] = 138.791
boxes[2] = [ 1292.32971191 -278.52166748 3876.98925781 -835.56494141]
classes[2] = 54
scores.shape = (10,)
boxes.shape = (10, 4)
classes.shape = (10,)
###Markdown
**Expected Output**: **scores[2]** 138.791 **boxes[2]** [ 1292.32971191 -278.52166748 3876.98925781 -835.56494141] **classes[2]** 54 **scores.shape** (10,) **boxes.shape** (10, 4) **classes.shape** (10,) **Summary for YOLO**:- Input image (608, 608, 3)- The input image goes through a CNN, resulting in a (19,19,5,85) dimensional output. - After flattening the last two dimensions, the output is a volume of shape (19, 19, 425): - Each cell in a 19x19 grid over the input image gives 425 numbers. - 425 = 5 x 85 because each cell contains predictions for 5 boxes, corresponding to 5 anchor boxes, as seen in lecture. - 85 = 5 + 80 where 5 is because $(p_c, b_x, b_y, b_h, b_w)$ has 5 numbers, and 80 is the number of classes we'd like to detect- You then select only a few boxes based on: - Score-thresholding: throw away boxes that have detected a class with a score less than the threshold - Non-max suppression: Compute the Intersection over Union and avoid selecting overlapping boxes- This gives you YOLO's final output. 3 - Test YOLO pretrained model on images In this part, you are going to use a pretrained model and test it on the car detection dataset. As usual, you start by **creating a session to start your graph**. Run the following cell.
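Once the class names, anchors, image shape and pretrained model are loaded in the sections below, the pieces can be chained together. The sketch below assumes the yad2k signature `yolo_head(feats, anchors, num_classes)` imported earlier, and that it returns the four tensors in the order `yolo_eval` expects, as used later in this notebook.

```python
# Hedged sketch: convert the raw model output into the four tensors, then filter them.
# yolo_model, anchors, class_names and image_shape are defined in sections 3.1-3.2 below.
yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)
```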
###Code
sess = K.get_session()
###Output
_____no_output_____
###Markdown
3.1 - Defining classes, anchors and image shape. Recall that we are trying to detect 80 classes, and are using 5 anchor boxes. We have gathered the information about the 80 classes and 5 boxes in two files "coco_classes.txt" and "yolo_anchors.txt". Let's load these quantities into the model by running the next cell. The car detection dataset has 720x1280 images, which we've pre-processed into 608x608 images.
###Code
class_names = read_classes("model_data/coco_classes.txt")
anchors = read_anchors("model_data/yolo_anchors.txt")
image_shape = (720., 1280.)
###Output
_____no_output_____
###Markdown
3.2 - Loading a pretrained modelTraining a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes. You are going to load an existing pretrained Keras YOLO model stored in "yolo.h5". (These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the "YOLOv2" model, but we will more simply refer to it as "YOLO" in this notebook.) Run the cell below to load the model from this file.
###Code
yolo_model = load_model("model_data/yolo.h5")
###Output
/opt/conda/lib/python3.6/site-packages/keras/models.py:251: UserWarning: No training configuration found in save file: the model was *not* compiled. Compile it manually.
warnings.warn('No training configuration found in save file: '
###Markdown
This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains.
###Code
yolo_model.summary()
###Output
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 608, 608, 3) 0
____________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 608, 608, 32) 864 input_1[0][0]
____________________________________________________________________________________________________
batch_normalization_1 (BatchNorm (None, 608, 608, 32) 128 conv2d_1[0][0]
____________________________________________________________________________________________________
leaky_re_lu_1 (LeakyReLU) (None, 608, 608, 32) 0 batch_normalization_1[0][0]
____________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 304, 304, 32) 0 leaky_re_lu_1[0][0]
____________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 304, 304, 64) 18432 max_pooling2d_1[0][0]
____________________________________________________________________________________________________
batch_normalization_2 (BatchNorm (None, 304, 304, 64) 256 conv2d_2[0][0]
____________________________________________________________________________________________________
leaky_re_lu_2 (LeakyReLU) (None, 304, 304, 64) 0 batch_normalization_2[0][0]
____________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, 152, 152, 64) 0 leaky_re_lu_2[0][0]
____________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 152, 152, 128) 73728 max_pooling2d_2[0][0]
____________________________________________________________________________________________________
batch_normalization_3 (BatchNorm (None, 152, 152, 128) 512 conv2d_3[0][0]
____________________________________________________________________________________________________
leaky_re_lu_3 (LeakyReLU) (None, 152, 152, 128) 0 batch_normalization_3[0][0]
____________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 152, 152, 64) 8192 leaky_re_lu_3[0][0]
____________________________________________________________________________________________________
batch_normalization_4 (BatchNorm (None, 152, 152, 64) 256 conv2d_4[0][0]
____________________________________________________________________________________________________
leaky_re_lu_4 (LeakyReLU) (None, 152, 152, 64) 0 batch_normalization_4[0][0]
____________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, 152, 152, 128) 73728 leaky_re_lu_4[0][0]
____________________________________________________________________________________________________
batch_normalization_5 (BatchNorm (None, 152, 152, 128) 512 conv2d_5[0][0]
____________________________________________________________________________________________________
leaky_re_lu_5 (LeakyReLU) (None, 152, 152, 128) 0 batch_normalization_5[0][0]
____________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D) (None, 76, 76, 128) 0 leaky_re_lu_5[0][0]
____________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 76, 76, 256) 294912 max_pooling2d_3[0][0]
____________________________________________________________________________________________________
batch_normalization_6 (BatchNorm (None, 76, 76, 256) 1024 conv2d_6[0][0]
____________________________________________________________________________________________________
leaky_re_lu_6 (LeakyReLU) (None, 76, 76, 256) 0 batch_normalization_6[0][0]
____________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, 76, 76, 128) 32768 leaky_re_lu_6[0][0]
____________________________________________________________________________________________________
batch_normalization_7 (BatchNorm (None, 76, 76, 128) 512 conv2d_7[0][0]
____________________________________________________________________________________________________
leaky_re_lu_7 (LeakyReLU) (None, 76, 76, 128) 0 batch_normalization_7[0][0]
____________________________________________________________________________________________________
conv2d_8 (Conv2D) (None, 76, 76, 256) 294912 leaky_re_lu_7[0][0]
____________________________________________________________________________________________________
batch_normalization_8 (BatchNorm (None, 76, 76, 256) 1024 conv2d_8[0][0]
____________________________________________________________________________________________________
leaky_re_lu_8 (LeakyReLU) (None, 76, 76, 256) 0 batch_normalization_8[0][0]
____________________________________________________________________________________________________
max_pooling2d_4 (MaxPooling2D) (None, 38, 38, 256) 0 leaky_re_lu_8[0][0]
____________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, 38, 38, 512) 1179648 max_pooling2d_4[0][0]
____________________________________________________________________________________________________
batch_normalization_9 (BatchNorm (None, 38, 38, 512) 2048 conv2d_9[0][0]
____________________________________________________________________________________________________
leaky_re_lu_9 (LeakyReLU) (None, 38, 38, 512) 0 batch_normalization_9[0][0]
____________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, 38, 38, 256) 131072 leaky_re_lu_9[0][0]
____________________________________________________________________________________________________
batch_normalization_10 (BatchNor (None, 38, 38, 256) 1024 conv2d_10[0][0]
____________________________________________________________________________________________________
leaky_re_lu_10 (LeakyReLU) (None, 38, 38, 256) 0 batch_normalization_10[0][0]
____________________________________________________________________________________________________
conv2d_11 (Conv2D) (None, 38, 38, 512) 1179648 leaky_re_lu_10[0][0]
____________________________________________________________________________________________________
batch_normalization_11 (BatchNor (None, 38, 38, 512) 2048 conv2d_11[0][0]
____________________________________________________________________________________________________
leaky_re_lu_11 (LeakyReLU) (None, 38, 38, 512) 0 batch_normalization_11[0][0]
____________________________________________________________________________________________________
conv2d_12 (Conv2D) (None, 38, 38, 256) 131072 leaky_re_lu_11[0][0]
____________________________________________________________________________________________________
batch_normalization_12 (BatchNor (None, 38, 38, 256) 1024 conv2d_12[0][0]
____________________________________________________________________________________________________
leaky_re_lu_12 (LeakyReLU) (None, 38, 38, 256) 0 batch_normalization_12[0][0]
____________________________________________________________________________________________________
conv2d_13 (Conv2D) (None, 38, 38, 512) 1179648 leaky_re_lu_12[0][0]
____________________________________________________________________________________________________
batch_normalization_13 (BatchNor (None, 38, 38, 512) 2048 conv2d_13[0][0]
____________________________________________________________________________________________________
leaky_re_lu_13 (LeakyReLU) (None, 38, 38, 512) 0 batch_normalization_13[0][0]
____________________________________________________________________________________________________
max_pooling2d_5 (MaxPooling2D) (None, 19, 19, 512) 0 leaky_re_lu_13[0][0]
____________________________________________________________________________________________________
conv2d_14 (Conv2D) (None, 19, 19, 1024) 4718592 max_pooling2d_5[0][0]
____________________________________________________________________________________________________
batch_normalization_14 (BatchNor (None, 19, 19, 1024) 4096 conv2d_14[0][0]
____________________________________________________________________________________________________
leaky_re_lu_14 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_14[0][0]
____________________________________________________________________________________________________
conv2d_15 (Conv2D) (None, 19, 19, 512) 524288 leaky_re_lu_14[0][0]
____________________________________________________________________________________________________
batch_normalization_15 (BatchNor (None, 19, 19, 512) 2048 conv2d_15[0][0]
____________________________________________________________________________________________________
leaky_re_lu_15 (LeakyReLU) (None, 19, 19, 512) 0 batch_normalization_15[0][0]
____________________________________________________________________________________________________
conv2d_16 (Conv2D) (None, 19, 19, 1024) 4718592 leaky_re_lu_15[0][0]
____________________________________________________________________________________________________
batch_normalization_16 (BatchNor (None, 19, 19, 1024) 4096 conv2d_16[0][0]
____________________________________________________________________________________________________
leaky_re_lu_16 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_16[0][0]
____________________________________________________________________________________________________
conv2d_17 (Conv2D) (None, 19, 19, 512) 524288 leaky_re_lu_16[0][0]
____________________________________________________________________________________________________
batch_normalization_17 (BatchNor (None, 19, 19, 512) 2048 conv2d_17[0][0]
____________________________________________________________________________________________________
leaky_re_lu_17 (LeakyReLU) (None, 19, 19, 512) 0 batch_normalization_17[0][0]
____________________________________________________________________________________________________
conv2d_18 (Conv2D) (None, 19, 19, 1024) 4718592 leaky_re_lu_17[0][0]
____________________________________________________________________________________________________
batch_normalization_18 (BatchNor (None, 19, 19, 1024) 4096 conv2d_18[0][0]
____________________________________________________________________________________________________
leaky_re_lu_18 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_18[0][0]
____________________________________________________________________________________________________
conv2d_19 (Conv2D) (None, 19, 19, 1024) 9437184 leaky_re_lu_18[0][0]
____________________________________________________________________________________________________
batch_normalization_19 (BatchNor (None, 19, 19, 1024) 4096 conv2d_19[0][0]
____________________________________________________________________________________________________
conv2d_21 (Conv2D) (None, 38, 38, 64) 32768 leaky_re_lu_13[0][0]
____________________________________________________________________________________________________
leaky_re_lu_19 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_19[0][0]
____________________________________________________________________________________________________
batch_normalization_21 (BatchNor (None, 38, 38, 64) 256 conv2d_21[0][0]
____________________________________________________________________________________________________
conv2d_20 (Conv2D) (None, 19, 19, 1024) 9437184 leaky_re_lu_19[0][0]
____________________________________________________________________________________________________
leaky_re_lu_21 (LeakyReLU) (None, 38, 38, 64) 0 batch_normalization_21[0][0]
____________________________________________________________________________________________________
batch_normalization_20 (BatchNor (None, 19, 19, 1024) 4096 conv2d_20[0][0]
____________________________________________________________________________________________________
space_to_depth_x2 (Lambda) (None, 19, 19, 256) 0 leaky_re_lu_21[0][0]
____________________________________________________________________________________________________
leaky_re_lu_20 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_20[0][0]
____________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 19, 19, 1280) 0 space_to_depth_x2[0][0]
leaky_re_lu_20[0][0]
____________________________________________________________________________________________________
conv2d_22 (Conv2D) (None, 19, 19, 1024) 11796480 concatenate_1[0][0]
____________________________________________________________________________________________________
batch_normalization_22 (BatchNor (None, 19, 19, 1024) 4096 conv2d_22[0][0]
____________________________________________________________________________________________________
leaky_re_lu_22 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_22[0][0]
____________________________________________________________________________________________________
conv2d_23 (Conv2D) (None, 19, 19, 425) 435625 leaky_re_lu_22[0][0]
====================================================================================================
Total params: 50,983,561
Trainable params: 50,962,889
Non-trainable params: 20,672
____________________________________________________________________________________________________
###Markdown
**Note**: On some computers, you may see a warning message from Keras. Don't worry about it if you do--it is fine.**Reminder**: this model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85) as explained in Figure (2). 3.3 - Convert output of the model to usable bounding box tensorsThe output of `yolo_model` is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. The following cell does that for you.
###Code
yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
###Output
_____no_output_____
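###Markdown
To make this shape more concrete, here is a small NumPy-only sketch (an added illustration, not part of the original assignment) of how a (m, 19, 19, 5, 85) tensor splits into box coordinates, objectness score and class probabilities, assuming the usual YOLO layout of 5 anchor boxes and 80 COCO classes (which matches the 425 = 5 x 85 filters of the last convolutional layer in the summary above).
###Code
import numpy as np
# Dummy network output for a batch of m=1 image: 19x19 grid cells, 5 anchor boxes,
# 85 = 2 (box centre) + 2 (box size) + 1 (objectness) + 80 (class probabilities)
dummy = np.random.rand(1, 19, 19, 5, 85)
box_xy = dummy[..., 0:2]          # (1, 19, 19, 5, 2) box centre
box_wh = dummy[..., 2:4]          # (1, 19, 19, 5, 2) box width/height
box_confidence = dummy[..., 4:5]  # (1, 19, 19, 5, 1) objectness score
box_class_probs = dummy[..., 5:]  # (1, 19, 19, 5, 80) class probabilities
print(box_xy.shape, box_wh.shape, box_confidence.shape, box_class_probs.shape)
###Output
_____no_output_____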
###Markdown
You added `yolo_outputs` to your graph. This set of 4 tensors is ready to be used as input by your `yolo_eval` function. 3.4 - Filtering boxes`yolo_outputs` gave you all the predicted boxes of `yolo_model` in the correct format. You're now ready to perform filtering and select only the best boxes. Let's now call `yolo_eval`, which you had previously implemented, to do this.
###Code
scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)
###Output
_____no_output_____
###Markdown
3.5 - Run the graph on an imageLet the fun begin. You have created a (`sess`) graph that can be summarized as follows:1. yolo_model.input is given to `yolo_model`. The model is used to compute the output yolo_model.output 2. yolo_model.output is processed by `yolo_head`. It gives you yolo_outputs 3. yolo_outputs goes through a filtering function, `yolo_eval`. It outputs your predictions: scores, boxes, classes **Exercise**: Implement predict() which runs the graph to test YOLO on an image.You will need to run a TensorFlow session, to have it compute `scores, boxes, classes`.The code below also uses the following function:```pythonimage, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))```which outputs:- image: a python (PIL) representation of your image used for drawing boxes. You won't need to use it.- image_data: a numpy-array representing the image. This will be the input to the CNN.**Important note**: when a model uses BatchNorm (as is the case in YOLO), you will need to pass an additional placeholder in the feed_dict {K.learning_phase(): 0}.
###Code
def predict(sess, image_file):
"""
Runs the graph stored in "sess" to predict boxes for "image_file". Prints and plots the predictions.
Arguments:
sess -- your tensorflow/Keras session containing the YOLO graph
image_file -- name of an image stored in the "images" folder.
Returns:
out_scores -- tensor of shape (None, ), scores of the predicted boxes
out_boxes -- tensor of shape (None, 4), coordinates of the predicted boxes
out_classes -- tensor of shape (None, ), class index of the predicted boxes
Note: "None" actually represents the number of predicted boxes, it varies between 0 and max_boxes.
"""
# Preprocess your image
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
# Run the session with the correct tensors and choose the correct placeholders in the feed_dict.
# You'll need to use feed_dict={yolo_model.input: ... , K.learning_phase(): 0})
### START CODE HERE ### (≈ 1 line)
out_scores, out_boxes, out_classes = sess.run([scores, boxes, classes], feed_dict={yolo_model.input: image_data, K.learning_phase(): 0})
### END CODE HERE ###
# Print predictions info
print('Found {} boxes for {}'.format(len(out_boxes), image_file))
# Generate colors for drawing bounding boxes.
colors = generate_colors(class_names)
# Draw bounding boxes on the image file
draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)
# Save the predicted bounding box on the image
image.save(os.path.join("out", image_file), quality=90)
# Display the results in the notebook
output_image = scipy.misc.imread(os.path.join("out", image_file))
imshow(output_image)
return out_scores, out_boxes, out_classes
###Output
_____no_output_____
###Markdown
Run the following cell on the "test.jpg" image to verify that your function is correct.
###Code
out_scores, out_boxes, out_classes = predict(sess, "test.jpg")
###Output
_____no_output_____ |
Lectures/08-FeaturePoints.ipynb | ###Markdown
ETHZ: 227-0966-00L Quantitative Big Imaging April 11, 2019 Dynamic Experiments: Feature Points Anders Kaestner
###Code
import matplotlib.pyplot as plt
import seaborn as sns
%load_ext autoreload
%autoreload 2
plt.rcParams["figure.figsize"] = (8, 8)
plt.rcParams["figure.dpi"] = 150
plt.rcParams["font.size"] = 14
plt.rcParams['font.family'] = ['sans-serif']
plt.rcParams['font.sans-serif'] = ['DejaVu Sans']
plt.style.use('ggplot')
sns.set_style("whitegrid", {'axes.grid': False})
###Output
_____no_output_____
###Markdown
Papers / Sites- Keypoint and Corner Detection - Distinctive Image Features from Scale-Invariant Keypoints - https://www.cs.ubc.ca/~lowe/papers/ijcv04.pdf - https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_feature2d/py_sift_intro/py_sift_intro.html Key Points (or feature points)- Registration using the full data set is time demanding.- We can detect feature points in an image and use them to make a registration. Identifying key pointsWe first focus on the detection of points. A [Harris corner detector](https://en.wikipedia.org/wiki/Harris_Corner_Detector) helps us here:
###Code
from skimage.feature import corner_peaks, corner_harris, BRIEF
from skimage.transform import warp, AffineTransform
from skimage import data
from skimage.io import imread
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
%matplotlib inline
tform = AffineTransform(scale=(1.3, 1.1), rotation=0, shear=0.1,
translation=(0, 0))
image = warp(data.checkerboard(), tform.inverse, output_shape=(200, 200))
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(15, 5))
ax1.imshow(image); ax1.set_title('Raw Image')
ax2.imshow(corner_harris(image)); ax2.set_title('Corner Features')
peak_coords = corner_peaks(corner_harris(image))
ax3.imshow(image); ax3.set_title('Raw Image')
ax3.plot(peak_coords[:, 1], peak_coords[:, 0], 'rs');
###Output
_____no_output_____
###Markdown
Let's try the corner detection on real data
###Code
full_img = imread("ext-figures/bonegfiltslice.png")
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(15, 5))
ax1.imshow(full_img)
ax1.set_title('Raw Image')
ax2.imshow(corner_harris(full_img)), ax2.set_title('Corner Features')
peak_coords = corner_peaks(corner_harris(full_img))
ax3.imshow(full_img), ax3.set_title('Raw Image')
ax3.plot(peak_coords[:, 1], peak_coords[:, 0], 'rs');
###Output
_____no_output_____
###Markdown
Tracking with Points__Goal:__ To reduce the tracking effortWe can use the corner points to track features between multiple frames. In this sample, we see that they are - quite stable - and fixed on the features. We need data - a series of transformed images
###Code
from matplotlib.animation import FuncAnimation
from IPython.display import HTML
fig, c_ax = plt.subplots(1, 1, figsize=(5, 5), dpi=100)
def update_frame(i):
c_ax.cla()
tform = AffineTransform(scale=(1.3+i/20, 1.1-i/20), rotation=-i/10, shear=i/20,
translation=(0, 0))
image = warp(data.checkerboard(), tform.inverse, output_shape=(200, 200))
c_ax.imshow(image)
peak_coords = corner_peaks(corner_harris(image))
c_ax.plot(peak_coords[:, 1], peak_coords[:, 0], 'rs')
# write animation frames
anim_code = FuncAnimation(fig,
update_frame,
frames=np.linspace(0, 5, 10),
interval=1000,
repeat_delay=2000).to_html5_video()
plt.close('all')
HTML(anim_code)
###Output
_____no_output_____
###Markdown
Features and DescriptorsWe can move beyond just key points to keypoints and feature vectors (called descriptors) at those points. A descriptor is a vector that describes a given keypoint uniquely. This will be demonstrated using two methods in the following notebook cells...
###Code
from skimage.feature import ORB
full_img = imread("ext-figures/bonegfiltslice.png")
orb_det = ORB(n_keypoints=10)
det_obj = orb_det.detect_and_extract(full_img)
fig, (ax3, ax4, ax5) = plt.subplots(1, 3, figsize=(15, 5))
ax3.imshow(full_img, cmap='gray')
ax3.set_title('Raw Image')
for i in range(orb_det.keypoints.shape[0]):
ax3.plot(orb_det.keypoints[i, 1], orb_det.keypoints[i,
0], 's', label='Keypoint {}'.format(i))
ax4.bar(np.arange(10)+i/10.0, orb_det.descriptors[i][:10]+1e-2, width=1/10.0,
alpha=0.5, label='Keypoint {}'.format(i))
ax5.imshow(np.stack([x[:20] for x in orb_det.descriptors], 0))
ax5.set_title('Descriptor')
ax3.legend(facecolor='white', framealpha=0.5)
ax4.legend();
###Output
_____no_output_____
###Markdown
Defining a supporting function to show the matches
###Code
from skimage.feature import match_descriptors, plot_matches
import matplotlib.pyplot as plt
def show_matches(img1, img2, feat1, feat2):
matches12 = match_descriptors(
feat1['descriptors'], feat2['descriptors'], cross_check=True)
fig, (ax3, ax2) = plt.subplots(1, 2, figsize=(15, 5))
c_matches = match_descriptors(feat1['descriptors'],
feat2['descriptors'], cross_check=True)
plot_matches(ax3,
img1, img2,
feat1['keypoints'], feat1['keypoints'],
matches12)
ax2.plot(feat1['keypoints'][:, 1],
feat1['keypoints'][:, 0],
'.',
label='Before')
ax2.plot(feat2['keypoints'][:, 1],
feat2['keypoints'][:, 0],
'.', label='After')
for i, (c_idx, n_idx) in enumerate(c_matches):
x_vec = [feat1['keypoints'][c_idx, 0], feat2['keypoints'][n_idx, 0]]
y_vec = [feat1['keypoints'][c_idx, 1], feat2['keypoints'][n_idx, 1]]
dist = np.sqrt(np.square(np.diff(x_vec))+np.square(np.diff(y_vec)))
alpha = np.clip(50/dist, 0, 1)
ax2.plot(
y_vec,
x_vec,
'k-',
alpha=alpha,
label='Match' if i == 0 else ''
)
ax2.legend()
ax3.set_title(r'{} $\rightarrow$ {}'.format('Before', 'After'));
###Output
_____no_output_____
###Markdown
Let's create some data
###Code
from skimage.filters import median
full_img = imread("ext-figures/bonegfiltslice.png")
full_shift_img = median(
np.roll(np.roll(full_img, -15, axis=0), 15, axis=1), np.ones((1, 3)))
bw_img = full_img
shift_img = full_shift_img
###Output
_____no_output_____
###Markdown
Features found by the BRIEF descriptor
###Code
from skimage.feature import corner_peaks, corner_harris, BRIEF
def calc_corners(*imgs):
b = BRIEF()
for c_img in imgs:
corner_img = corner_harris(c_img)
coords = corner_peaks(corner_img, min_distance=5)
b.extract(c_img, coords)
yield {'keypoints': coords,
'descriptors': b.descriptors}
feat1, feat2 = calc_corners(bw_img, shift_img)
show_matches(bw_img, shift_img, feat1, feat2)
###Output
_____no_output_____
###Markdown
Features found by the ORB descriptor
###Code
from skimage.feature import ORB, BRIEF, CENSURE
def calc_orb(*imgs):
descriptor_extractor = ORB(n_keypoints=100)
for c_img in imgs:
descriptor_extractor.detect_and_extract(c_img)
yield {'keypoints': descriptor_extractor.keypoints,
'descriptors': descriptor_extractor.descriptors}
feat1, feat2 = calc_orb(bw_img, shift_img)
show_matches(bw_img, shift_img, feat1, feat2)
###Output
_____no_output_____
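###Markdown
The matched keypoints can be fed into a robust estimator to recover the transform between the two frames, which is the actual registration step. Below is a minimal sketch (an added illustration, not part of the original lecture) that reuses the ORB features `feat1`/`feat2` from the previous cell and estimates an affine model with RANSAC.
###Code
from skimage.feature import match_descriptors
from skimage.measure import ransac
from skimage.transform import AffineTransform

matches = match_descriptors(feat1['descriptors'], feat2['descriptors'], cross_check=True)
# keypoints are stored as (row, col); geometric transforms expect (x, y) = (col, row)
src = feat1['keypoints'][matches[:, 0]][:, ::-1]
dst = feat2['keypoints'][matches[:, 1]][:, ::-1]
model, inliers = ransac((src, dst), AffineTransform,
                        min_samples=3, residual_threshold=2, max_trials=1000)
print('estimated translation:', model.translation)
print('inliers: %i / %i' % (inliers.sum(), len(matches)))
###Output
_____no_output_____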
###Markdown
ETHZ: 227-0966-00L Quantitative Big Imaging April 11, 2019 Dynamic Experiments: Feature Points
###Code
import matplotlib.pyplot as plt
import seaborn as sns
%load_ext autoreload
%autoreload 2
plt.rcParams["figure.figsize"] = (8, 8)
plt.rcParams["figure.dpi"] = 150
plt.rcParams["font.size"] = 14
plt.rcParams['font.family'] = ['sans-serif']
plt.rcParams['font.sans-serif'] = ['DejaVu Sans']
plt.style.use('ggplot')
sns.set_style("whitegrid", {'axes.grid': False})
###Output
_____no_output_____
###Markdown
Papers / Sites- Keypoint and Corner Detection - Distinctive Image Features from Scale-Invariant Keypoints - https://www.cs.ubc.ca/~lowe/papers/ijcv04.pdf - https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_feature2d/py_sift_intro/py_sift_intro.html Key Points (or feature points)We can detect feature points in an image and use them to make a registration. We first focus on detection of points
###Code
from skimage.feature import corner_peaks, corner_harris, BRIEF
from skimage.transform import warp, AffineTransform
from skimage import data
from skimage.io import imread
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
%matplotlib inline
tform = AffineTransform(scale=(1.3, 1.1), rotation=0, shear=0.1,
translation=(0, 0))
image = warp(data.checkerboard(), tform.inverse, output_shape=(200, 200))
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(10, 5))
ax1.imshow(image)
ax1.set_title('Raw Image')
ax2.imshow(corner_harris(image))
ax2.set_title('Corner Features')
peak_coords = corner_peaks(corner_harris(image))
ax3.imshow(image)
ax3.set_title('Raw Image')
ax3.plot(peak_coords[:, 1], peak_coords[:, 0], 'rs')
full_img = imread("ext-figures/bonegfiltslice.png")
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(15, 5))
ax1.imshow(full_img)
ax1.set_title('Raw Image')
ax2.imshow(corner_harris(full_img))
ax2.set_title('Corner Features')
peak_coords = corner_peaks(corner_harris(full_img))
ax3.imshow(full_img)
ax3.set_title('Raw Image')
ax3.plot(peak_coords[:, 1], peak_coords[:, 0], 'rs')
###Output
_____no_output_____
###Markdown
Tracking with PointsWe can also use these points to track between multiple frames. We see that they are quite stable and fixed on the features.
###Code
from matplotlib.animation import FuncAnimation
from IPython.display import HTML
fig, c_ax = plt.subplots(1, 1, figsize=(5, 5), dpi=150)
def update_frame(i):
c_ax.cla()
tform = AffineTransform(scale=(1.3+i/20, 1.1-i/20), rotation=-i/10, shear=i/20,
translation=(0, 0))
image = warp(data.checkerboard(), tform.inverse, output_shape=(200, 200))
c_ax.imshow(image)
peak_coords = corner_peaks(corner_harris(image))
c_ax.plot(peak_coords[:, 1], peak_coords[:, 0], 'rs')
# write animation frames
anim_code = FuncAnimation(fig,
update_frame,
frames=np.linspace(0, 5, 10),
interval=1000,
repeat_delay=2000).to_html5_video()
plt.close('all')
HTML(anim_code)
###Output
_____no_output_____
###Markdown
Features and DescriptorsWe can move beyond just key points to keypoints and feature vectors (called descriptors) at those points. A descriptor is a vector that describes a given keypoint uniquely.
###Code
from skimage.feature import ORB
full_img = imread("ext-figures/bonegfiltslice.png")
orb_det = ORB(n_keypoints=10)
det_obj = orb_det.detect_and_extract(full_img)
fig, (ax3, ax4, ax5) = plt.subplots(1, 3, figsize=(15, 5))
ax3.imshow(full_img, cmap='gray')
ax3.set_title('Raw Image')
for i in range(orb_det.keypoints.shape[0]):
ax3.plot(orb_det.keypoints[i, 1], orb_det.keypoints[i,
0], 's', label='Keypoint {}'.format(i))
ax4.bar(np.arange(10)+i/10.0, orb_det.descriptors[i][:10]+1e-2, width=1/10.0,
alpha=0.5, label='Keypoint {}'.format(i))
ax5.imshow(np.stack([x[:20] for x in orb_det.descriptors], 0))
ax5.set_title('Descriptor')
ax3.legend()
ax4.legend()
from skimage.feature import match_descriptors, plot_matches
import matplotlib.pyplot as plt
def show_matches(img1, img2, feat1, feat2):
matches12 = match_descriptors(
feat1['descriptors'], feat2['descriptors'], cross_check=True)
fig, (ax3, ax2) = plt.subplots(1, 2, figsize=(15, 5))
c_matches = match_descriptors(feat1['descriptors'],
feat2['descriptors'], cross_check=True)
plot_matches(ax3,
img1, img2,
feat1['keypoints'], feat1['keypoints'],
matches12)
ax2.plot(feat1['keypoints'][:, 1],
feat1['keypoints'][:, 0],
'.',
label='Before')
ax2.plot(feat2['keypoints'][:, 1],
feat2['keypoints'][:, 0],
'.', label='After')
for i, (c_idx, n_idx) in enumerate(c_matches):
x_vec = [feat1['keypoints'][c_idx, 0], feat2['keypoints'][n_idx, 0]]
y_vec = [feat1['keypoints'][c_idx, 1], feat2['keypoints'][n_idx, 1]]
dist = np.sqrt(np.square(np.diff(x_vec))+np.square(np.diff(y_vec)))
alpha = np.clip(50/dist, 0, 1)
ax2.plot(
y_vec,
x_vec,
'k-',
alpha=alpha,
label='Match' if i == 0 else ''
)
ax2.legend()
ax3.set_title(r'{} $\rightarrow$ {}'.format('Before', 'After'))
from skimage.filters import median
full_img = imread("ext-figures/bonegfiltslice.png")
full_shift_img = median(
np.roll(np.roll(full_img, -15, axis=0), 15, axis=1), np.ones((1, 3)))
def g_roi(x): return x
bw_img = g_roi(full_img)
shift_img = g_roi(full_shift_img)
from skimage.feature import corner_peaks, corner_harris, BRIEF
def calc_corners(*imgs):
b = BRIEF()
for c_img in imgs:
corner_img = corner_harris(c_img)
coords = corner_peaks(corner_img, min_distance=5)
b.extract(c_img, coords)
yield {'keypoints': coords,
'descriptors': b.descriptors}
feat1, feat2 = calc_corners(bw_img, shift_img)
show_matches(bw_img, shift_img, feat1, feat2)
from skimage.feature import ORB, BRIEF, CENSURE
def calc_orb(*imgs):
descriptor_extractor = ORB(n_keypoints=100)
for c_img in imgs:
descriptor_extractor.detect_and_extract(c_img)
yield {'keypoints': descriptor_extractor.keypoints,
'descriptors': descriptor_extractor.descriptors}
feat1, feat2 = calc_orb(bw_img, shift_img)
show_matches(bw_img, shift_img, feat1, feat2)
###Output
_____no_output_____ |
src/06_Modulos_Analise_de_Dados/09_Pandas_Dataframes_e_NumPy.ipynb | ###Markdown
Remember: NumPy isn't an analysis tool on its own; it works together with Pandas, Matplotlib, etc.
###Code
# Import Pandas and NumPy
import pandas as pd
import numpy as np
# Create a dictionary
data = {'State': ['Santa Catarina','Paraná','Goiás','Bahia','Minas Gerais'],
'Year': [2002,2003,2004,2005,2006],
'Population': [1.5,1.7,3.6,2.4,2.9]}
# Transform the above dictionary into a DataFrame
frame = pd.DataFrame(data)
# Show DataFrame
frame
# Create another DataFrame, adding a custom index, defining the column names and adding a new column
frame2 = pd.DataFrame(data, columns=['Year','State','Population','Debit'],
index=['one','two','three','four','five'])
frame2
# Fill Debit column with a Numpy array
# Note that the number 5 is exclusive
frame2['Debit'] = np.arange(5.)
frame2
# Show values
frame2.values
# Summary with statistical measures
frame2.describe()
# Slicing by index name
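# Extra note: unlike positional slicing, label-based slicing is inclusive,
# so rows 'two', 'three' and 'four' are all returned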
frame2['two':'four']
frame2 < 3
###Output
_____no_output_____
###Markdown
Locating records in a DataFrame
###Code
# loc locates a record by its index label
frame2.loc['four']
# iloc (index location), locate by the index number
frame2.iloc[2]
###Output
_____no_output_____
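###Markdown
As an extra illustration (not part of the original lesson), loc also accepts boolean conditions and (row, column) label pairs, which is how records are usually located in practice:
###Code
# Rows where Population is greater than 2 (boolean mask)
print(frame2.loc[frame2['Population'] > 2])
# A single value: row 'two', column 'State'
print(frame2.loc['two', 'State'])
###Output
_____no_output_____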
###Markdown
Inverting columns and indexes
###Code
# Create a dictionary
web_stats = {'Days': [1,2,3,4,5,6,7],
'Visitors':[45,23,67,78,23,12,14],
'rate':[11,22,33,44,55,66,77]}
df = pd.DataFrame(web_stats)
df
# As we can see, the Days column is a simple sequence
# Therefore, we can turn this column into the index
df.set_index('Days')
# The instruction above doesn't change the DataFrame structure
# As we can see below, the DataFrame remains unchanged
df.head()
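# Extra illustration (not in the original lesson): to keep the new index,
# reassign the result or pass inplace=True
df_indexed = df.set_index('Days')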
# Slicing by Visitors column
print(df['Visitors'])
# Now, slicing by Visitors and rate
# Note the double brackets
print(df[['Visitors','rate']])
###Output
Visitors rate
0 45 11
1 23 22
2 67 33
3 78 44
4 23 55
5 12 66
6 14 77
|
docs/source/notebooks/SMC_samplers_tutorial.ipynb | ###Markdown
SMC samplersThis tutorial gives a basic introduction to SMC samplers, and explains how to run the SMC samplers already implemented in ``particles``. For a more advanced tutorial on how to design new SMC samplers, see the next tutorial. For more background on SMC samplers, check Chapter 17 of the book. SMC samplers: what for? A SMC sampler is a SMC algorithm that samples from a sequence of probability distributions $\pi_t$, $t=0,\ldots,T$ (and compute their normalising constants). Sometimes one is genuinely interested in each $\pi_t$; more often one is interested only in the final distribution $\pi_T$. In the latter case, the sequence is purely instrumental.Examples of SMC sequences are: 1. $\pi_t(\theta) = p(\theta|y_{0:t})$, the Bayesian posterior distribution of parameter $\theta$ given data $y_{0:t}$, for a certain model. 2. A tempering sequence, $\pi_t(\theta) \propto \nu(\theta) L(\theta)^{\gamma_t}$ ,where the $\gamma_t$'s form an increasing sequence of exponents: $0=\gamma_0 < \ldots < \gamma_T=1$. You can think of $\nu$ being the prior, $L$ the likelihood function, and $\pi_T$ the posterior. However, more generally, tempering is a way to interpolate between any two distributions, $\nu$ and $\pi$, with $\pi(\theta) \propto \nu(\theta) L(\theta)$. We discuss first how to specify a sequence of the first type. Defining a Bayesian modelTo define a particular Bayesian model, you must subclass `StaticModel`, and define method `logpyt`, which evaluates the log-likelihood of datapoint $Y_t$ given parameter $\theta$ and past datapoints $Y_{0:t-1}$. Here is a simple example:
###Code
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sb
import numpy as np
from scipy import stats
import particles
from particles import smc_samplers as ssp
from particles import distributions as dists
class ToyModel(ssp.StaticModel):
def logpyt(self, theta, t): # density of Y_t given theta and Y_{0:t-1}
return stats.norm.logpdf(self.data[t], loc=theta['mu'],
scale = theta['sigma'])
###Output
_____no_output_____
###Markdown
In words, we are considering a model where the observations are $Y_t\sim N(\mu, \sigma^2)$ (independently). The parameter is $\theta=(\mu, \sigma)$. Note the fields notation; more about this later. Class `ToyModel` implicitely defines the likelihood of the considered model for any sample size (since the likelihood at time $t$ is $p^\theta(y_{0:t})=\prod_{s=0}^t p^\theta(y_s|y_{0:s-1})$, and method `logpyt` defines each factor in this product; note that $y_s$ does not depend on the past values in our particular example). We now define the data and the prior:
###Code
T = 30
my_data = stats.norm.rvs(loc=3.14, size=T) # simulated data
my_prior = dists.StructDist({'mu': dists.Normal(scale=10.),
'sigma': dists.Gamma()})
###Output
_____no_output_____
###Markdown
For more details about how to define prior distributions, see the documentation of module `distributions`, or the previous [tutorial on Bayesian estimation of state-space models](Bayes_estimation_ssm.ipynb). Now that we have everything, let's specify our static model:
###Code
my_static_model = ToyModel(data=my_data, prior=my_prior)
###Output
_____no_output_____
###Markdown
This time, object `my_static_model` entirely defines the posterior.
###Code
thetas = my_prior.rvs(size=5)
my_static_model.logpost(thetas, t=2)
# if t is omitted, gives the full posterior
###Output
_____no_output_____
###Markdown
The input of `logpost` and output of `myprior.rvs()` are [structured arrays](https://docs.scipy.org/doc/numpy/user/basics.rec.html), that is, arrays with fields:
###Code
thetas['mu'][0]
###Output
_____no_output_____
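###Markdown
As a quick added check (this is standard NumPy behaviour for structured arrays), we can list the fields and inspect one of them:
###Code
print(thetas.dtype.names)     # the parameter names, e.g. ('mu', 'sigma')
print(thetas['sigma'].shape)  # (5,): one value per sampled parameter vector
###Output
_____no_output_____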
###Markdown
Typically, you won't need to call `logpost` yourself, this will be done by the SMC sampler for you. IBISIBIS (iterated batch importance sampling) is the standard name for a SMC sampler that tracks a sequence of partial posterior distributions; i.e. $\pi_t$ is $p(\theta|y_{0:t})$, for $t=0,1,\ldots$. Module `smc_samplers` defines `IBIS` as a subclass of `FeynmanKac`.
###Code
my_ibis = ssp.IBIS(my_static_model, len_chain=50)
my_alg = particles.SMC(fk=my_ibis, N=20,
store_history=True, verbose=True)
my_alg.run()
###Output
t=0, ESS=82.04
t=1, Metropolis acc. rate (over 49 steps): 0.254, ESS=464.72
t=2, Metropolis acc. rate (over 49 steps): 0.171, ESS=690.07
t=3, ESS=438.62
t=4, Metropolis acc. rate (over 49 steps): 0.211, ESS=849.38
t=5, ESS=606.35
t=6, ESS=412.14
t=7, Metropolis acc. rate (over 49 steps): 0.289, ESS=732.81
t=8, ESS=680.56
t=9, ESS=411.45
t=10, Metropolis acc. rate (over 49 steps): 0.288, ESS=282.34
t=11, Metropolis acc. rate (over 49 steps): 0.317, ESS=850.79
t=12, ESS=926.82
t=13, ESS=936.24
t=14, ESS=906.07
t=15, ESS=650.82
t=16, ESS=514.22
t=17, ESS=426.00
t=18, Metropolis acc. rate (over 49 steps): 0.325, ESS=878.73
t=19, ESS=842.13
t=20, ESS=826.67
t=21, ESS=769.72
t=22, ESS=865.55
t=23, ESS=773.36
t=24, ESS=690.59
t=25, ESS=619.32
t=26, ESS=645.08
t=27, ESS=594.02
t=28, ESS=510.84
t=29, ESS=695.80
###Markdown
**Note**: we use option `verbose=True` in `SMC` in order to print some information on the intermediate distributions. Since we set `store_history` to `True`, the particles and their weights have been saved at every time (in attribute `hist`, see previous tutorials on smoothing). Let's plot the posterior distributions of $\mu$ and $\sigma$ at various times.
###Code
plt.style.use('ggplot')
for i, p in enumerate(['mu', 'sigma']):
plt.subplot(1, 2, i + 1)
for t in [1, 29]:
plt.hist(my_alg.hist.X[t].theta[p], weights=my_alg.hist.wgts[t].W, label="t=%i" % t,
alpha=0.5, density=True)
plt.xlabel(p)
plt.legend();
###Output
_____no_output_____
###Markdown
As expected, the posterior distribution concentrates progressively around the true values. As always, once the algorithm is run, `my_alg.X` contains the final particles. However, object `my_alg.X` is no longer a simple numpy array. It is a `ThetaParticles` object, with attributes:* `theta`: a structured array (an array with fields); i.e. `my_alg.X.theta['mu']` is a (N,) array that contains the $\mu-$component of the $N$ particles; * `lpost`: a 1D numpy array that contains the target (posterior) log-density of each of the particles;* `shared`: a dictionary that contains "meta-data" on the particles; for instance `shared['acc_rates']` is a list of the acceptance rates of the successive Metropolis steps.
###Code
print(["%2.f%%" % (100 * np.mean(r)) for r in my_alg.X.shared['acc_rates']])
plt.hist(my_alg.X.lpost, 30);
###Output
['25%', '17%', '21%', '29%', '29%', '32%', '33%']
###Markdown
You do not need to know much more about class `ThetaParticles` in practice (if you're curious, however, see the next tutorial on SMC samplers or the documentation of module `smc_samplers`). Waste-free versus standard SMC samplersThe library now implements by default waste-free SMC ([Dau & Chopin, 2020](https://arxiv.org/abs/2011.02328)), a variant of SMC samplers that keeps all the intermediate Markov steps (rather than "wasting" them). In practice, this means that, in the piece of code above:* at each time $t$, $N=20$ particles are resampled, and used as the starting points of the MCMC chains; * the MCMC chains are run for 49 iterations, hence the chain length is 50 (parameter ``len_chain=50``)* and since we keep all the intermediate steps, we get 50*20 = 1000 particles at each iteration. In particular, we do O(1000) operations at each step. (At time 0, we also generate 1000 particles.) Thus, the number of particles is actually `N * len_chain`; given this number of particles, the performance typically does not depend too much on `N` and `len_chain`, provided the latter is "big enough" (relative to the mixing of the MCMC kernels). See Dau & Chopin (2020) for more details on waste-free SMC. If you wish to run a standard SMC sampler instead, you may set `wastefree=False`, like this:
###Code
my_ibis = ssp.IBIS(my_static_model, wastefree=False, len_chain=11)
my_alg = particles.SMC(fk=my_ibis, N=100, store_history=True)
my_alg.run()
###Output
_____no_output_____
###Markdown
This runs a standard SMC sampler which tracks $N=100$ particles; these particles are resampled from time to time, and then moved through 10 MCMC steps. (As explained in Dau & Chopin, 2020, you typically get a better performance vs CPU time trade-off with wastefree SMC.) Regarding the MCMC stepsThe default MCMC kernel used to move the particles is a Gaussian random walk Metropolis kernel, whose covariance matrix is calibrated automatically to $\gamma$ times the empirical covariance matrix of the particle sample, where $\gamma=2.38 / \sqrt{d}$ (standard choice in the literature). It is possible to specify a different value for $\gamma$, or more generally other types of MCMC moves; for instance the following uses Metropolis kernels based on independent Gaussian proposals:
###Code
mcmc = ssp.ArrayIndependentMetropolis(scale=1.1)
# Independent Gaussian proposal, with mean and variance determined by
# the particle sample (variance inflated by factor scale=1.1)
alt_move = ssp.MCMCSequenceWF(mcmc=mcmc)
# This object represents a particular way to apply several MCMC steps
# in a row. WF = WasteFree
alt_ibis = ssp.IBIS(my_static_model, move=alt_move)
alt_alg = particles.SMC(fk=alt_ibis, N=100,ESSrmin=0.2,
verbose=True)
alt_alg.run()
###Output
t=0, ESS=56.14
t=1, Metropolis acc. rate (over 9 steps): 0.377, ESS=447.47
t=2, ESS=270.35
t=3, ESS=117.34
t=4, Metropolis acc. rate (over 9 steps): 0.491, ESS=848.91
t=5, ESS=595.25
t=6, ESS=391.41
t=7, ESS=276.38
t=8, ESS=199.79
t=9, Metropolis acc. rate (over 9 steps): 0.660, ESS=765.88
t=10, ESS=314.50
t=11, ESS=278.97
t=12, ESS=315.64
t=13, ESS=270.67
t=14, ESS=313.92
t=15, ESS=179.27
t=16, Metropolis acc. rate (over 9 steps): 0.759, ESS=937.78
t=17, ESS=820.94
t=18, ESS=951.60
t=19, ESS=962.98
t=20, ESS=938.37
t=21, ESS=883.72
t=22, ESS=843.66
t=23, ESS=811.68
t=24, ESS=736.35
t=25, ESS=650.76
t=26, ESS=597.71
t=27, ESS=515.31
t=28, ESS=456.33
t=29, ESS=494.19
###Markdown
In the future, the package may also implement other types of MCMC kernels such as MALA. It is also possible to define your own MCMC kernels, as explained in the next tutorial. For now, note the following practical detail: the algorithm resamples whenever the ESS gets below a certain threshold $\alpha * N$; the default value is $\alpha=0.5$, but here we changed it (to $\alpha=0.2$) by setting `ESSrmin=0.2`.
###Code
plt.plot(alt_alg.summaries.ESSs)
plt.xlabel('t')
plt.ylabel('ESS');
###Output
_____no_output_____
###Markdown
As expected, the algorithm waits until the ESS is below 200 to trigger a resample-move step. SMC temperingSMC tempering is a SMC sampler that samples iteratively from the following sequence of distributions:\begin{equation}\pi_t(\theta) \propto \pi(\theta) L(\theta)^{\gamma_t}\end{equation}with $0=\gamma_0 < \ldots < \gamma_T = 1$. In words, this sequence is a **geometric bridge**, which interpolates between the prior and the posterior. SMC tempering is implemented in the same way as IBIS: as a sub-class of `FeynmanKac`, whose `__init__` function takes as argument a `StaticModel` object.
###Code
fk_tempering = ssp.AdaptiveTempering(my_static_model)
my_temp_alg = particles.SMC(fk=fk_tempering, N=1000, ESSrmin=1.,
verbose=True)
my_temp_alg.run()
###Output
t=0, ESS=5000.00, tempering exponent=0.000938
t=1, Metropolis acc. rate (over 9 steps): 0.262, ESS=5000.00, tempering exponent=0.0121
t=2, Metropolis acc. rate (over 9 steps): 0.241, ESS=5000.00, tempering exponent=0.0627
t=3, Metropolis acc. rate (over 9 steps): 0.237, ESS=5000.00, tempering exponent=0.197
t=4, Metropolis acc. rate (over 9 steps): 0.266, ESS=5000.00, tempering exponent=0.629
t=5, Metropolis acc. rate (over 9 steps): 0.343, ESS=8583.28, tempering exponent=1
###Markdown
**Note**: Recall that `SMC` resamples every time the ESS drops below value N times option `ESSrmin`; here we set it to 1, since we want to resample at every time step. This makes sense: Adaptive SMC chooses adaptively the successive values of $\gamma_t$ so that the ESS equals a certain value ($N/2$ by default). We have not saved the intermediate results this time (option `store_history` was not set) since they are not particularly interesting. Let's look at the final results:
###Code
for i, p in enumerate(['mu', 'sigma']):
plt.subplot(1, 2, i + 1)
sb.histplot(my_temp_alg.X.theta[p], stat='density')
plt.xlabel(p)
###Output
_____no_output_____
###Markdown
This looks reasonable!You can see from the output that the algorithm automatically chooses the tempering exponents $\gamma_1, \gamma_2,\ldots$. In fact, at iteration $t$, the next value for $\gamma$ is set so that the ESS drops at most to $N/2$. You can change this particular threshold by passing argument ESSrmin to `AdaptiveTempering`. (Warning: do not mistake this for the `ESSrmin` argument of class `SMC`):
###Code
lazy_tempering = ssp.AdaptiveTempering(my_static_model, ESSrmin = 0.1)
lazy_alg = particles.SMC(fk=lazy_tempering, N=1000, verbose=True)
lazy_alg.run()
###Output
t=0, ESS=1000.00, tempering exponent=0.0372
t=1, Metropolis acc. rate (over 9 steps): 0.223, ESS=1000.00, tempering exponent=0.699
t=2, Metropolis acc. rate (over 9 steps): 0.341, ESS=9125.17, tempering exponent=1
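###Markdown
A small added illustration: SMC samplers also return an estimate of the normalising constant (here, the marginal likelihood of the data). The sketch below relies on the `logLt` attribute of `particles.SMC` objects (the running log normalising constant estimate); if your version of the package exposes it under a different name, adapt accordingly.
###Code
print("log marginal likelihood estimate: %.2f" % my_temp_alg.logLt)
print("same quantity from the lazier run: %.2f" % lazy_alg.logLt)
###Output
_____no_output_____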
###Markdown
SMC samplersSMC samplers are SMC algorithms that sample from a sequence of target distributions. In this tutorial, these target distributions will be Bayesian posterior distributions of static models. SMC samplers are covered in Chapter 17 of the book. Defining a static modelA static model is a Python object that represents a Bayesian model with static parameter $\theta$. One may define a static model by subclassing base class `StaticModel`, and defining method `logpyt`, which evaluates the log-likelihood of datapoint $Y_t$ (given $\theta$ and past datapoints $Y_{0:t-1}$). Here is a simple example:
###Code
%matplotlib inline
import warnings; warnings.simplefilter('ignore') # hide warnings
from matplotlib import pyplot as plt
import seaborn as sb
from scipy import stats
import particles
from particles import smc_samplers as ssp
from particles import distributions as dists
class ToyModel(ssp.StaticModel):
def logpyt(self, theta, t): # density of Y_t given theta and Y_{0:t-1}
return stats.norm.logpdf(self.data[t], loc=theta['mu'],
scale = theta['sigma'])
###Output
_____no_output_____
###Markdown
In words, we are considering a model where the observations are $Y_t\sim N(\mu, \sigma^2)$. The parameter is $\theta=(\mu, \sigma)$.Class `ToyModel` contains information about the likelihood of the considered model, but not about its prior, or the considered data. First, let's define those:
###Code
T = 1000
my_data = stats.norm.rvs(loc=3.14, size=T) # simulated data
my_prior = dists.StructDist({'mu': dists.Normal(scale=10.),
'sigma': dists.Gamma()})
###Output
_____no_output_____
###Markdown
For more details about how to define prior distributions, see the documentation of module `distributions`, or the previous [tutorial on Bayesian estimation of state-space models](Bayes_estimation_ssm.ipynb). Now that we have everything, let's specify our static model:
###Code
my_static_model = ToyModel(data=my_data, prior=my_prior)
###Output
_____no_output_____
###Markdown
This time, object `my_static_model` has enough information to define the posterior distribution(s) of the model (given all data, or part of the data). In fact, it inherits from `StaticModel` method `logpost`, which evaluates (for a collection of $\theta$ values) the posterior log-density at any time $t$ (meaning given data $y_{0:t}$).
###Code
thetas = my_prior.rvs(size=5)
my_static_model.logpost(thetas, t=2) # if t is omitted, gives the full posterior
###Output
_____no_output_____
###Markdown
The input of `logpost` (and output of `myprior.rvs()`) is a [structured array](https://docs.scipy.org/doc/numpy/user/basics.rec.html), with the same keys as the prior distribution:
###Code
thetas['mu'][0]
###Output
_____no_output_____
###Markdown
Typically, you won't need to call `logpost` yourself, this will be done by the SMC sampler for you. IBISThe IBIS (iterated batch importance sampling) algorithm is a SMC sampler that samples iteratively from a sequence of posterior distributions, $p(\theta|y_{0:t})$, for $t=0,1,\ldots$. Module `smc_samplers` defines `IBIS` as a subclass of `FeynmanKac`.
###Code
my_ibis = ssp.IBIS(my_static_model)
my_alg = particles.SMC(fk=my_ibis, N=1000, store_history=True)
my_alg.run()
###Output
_____no_output_____
###Markdown
Since we set `store_history` to `True`, the particles and their weights have been saved at every time (in attribute `hist`, see previous tutorials on smoothing). Let's plot the posterior distributions of $\mu$ and $\sigma$ at various times.
###Code
plt.style.use('ggplot')
for i, p in enumerate(['mu', 'sigma']):
plt.subplot(1, 2, i + 1)
for t in [100, 300, 900]:
plt.hist(my_alg.hist.X[t].theta[p], weights=my_alg.hist.wgt[t].W, label="t=%i" % t, alpha=0.5, density=True)
plt.xlabel(p)
plt.legend();
###Output
_____no_output_____
###Markdown
As expected, the posterior distribution concentrates progressively around the true values. As before, once the algorithm is run, `my_alg.X` contains the N final particles. However, object `my_alg.X` is no longer a simple (N,) or (N,d) numpy array. It is a `ThetaParticles` object, with attributes:* theta: a structured array: as mentioned above, this is an array with fields; i.e. `my_alg.X.theta['mu']` is a (N,) array that contains the $\mu-$component of the $N$ particles; * `lpost`: a (N,) numpy array that contains the target (posterior) log-density of each of the N particles;* `acc_rates`: a list of the acceptance rates of the resample-move steps.
###Code
print(["%2.2f%%" % (100 * np.mean(r)) for r in my_alg.X.acc_rates])
plt.hist(my_alg.X.lpost, 30);
###Output
['23.33%', '12.70%', '26.07%', '31.82%', '32.75%', '31.90%', '35.50%', '35.98%', '33.75%', '35.28%']
###Markdown
You do not need to know much more about class `ThetaParticles` for most practical purposes (see however the documentation of module `smc_samplers` if you do want to know more, e.g. in order to implement other classes of SMC samplers). Regarding the Metropolis stepsAs the text output of `my_alg.run()` suggests, the algorithm "resample-moves" whenever the ESS is below a certain threshold ($N/2$ by default). When this occurs, particles are resampled, and then moved through a certain number of Metropolis-Hastings steps. By default, the proposal is a Gaussian random walk, and both the number of steps and the covariance matrix of the random walk are chosen automatically as follows: * the covariance matrix of the random walk is set to `scale` times the empirical (weighted) covariance matrix of the particles. The default value for `scale` is $2.38 / \sqrt{d}$, where $d$ is the dimension of $\theta$. * the algorithm performs Metropolis steps until the relative increase of the average distance between the starting point and the end point is below a certain threshold $\delta$. Class `IBIS` takes as an optional argument `mh_options`, a dictionary which may contain the following (key, values) pairs: * `'type_prop'`: either `'random walk'` or `'independent'`; in the latter case, an independent Gaussian proposal is used. The mean of the Gaussian is set to the weighted mean of the particles. The variance is set to `scale` times the weighted variance of the particles. * `'scale'`: the scale of the proposal (as explained above). * `'nsteps'`: number of steps. If set to `0`, the adaptive strategy described above is used. Let's illustrate all this by calling IBIS again:
###Code
alt_ibis = ssp.IBIS(my_static_model, mh_options={'type_prop': 'independent',
'nsteps': 10})
alt_alg = particles.SMC(fk=alt_ibis, N=1000, ESSrmin=0.2)
alt_alg.run()
###Output
_____no_output_____
###Markdown
Well, apparently the algorithm did what we asked. We have also changed the resampling threshold: with `ESSrmin=0.2` and $N=1000$, a resample-move step is now triggered only when the ESS drops below 200. Let's see how the ESS evolved:
###Code
plt.plot(alt_alg.summaries.ESSs)
plt.xlabel('t')
plt.ylabel('ESS')
###Output
_____no_output_____
###Markdown
As expected, the algorithm waits until the ESS is below 200 to trigger a resample-move step. SMC temperingSMC tempering is a SMC sampler that samples iteratively from the following sequence of distributions:\begin{equation}\pi_t(\theta) \propto \pi(\theta) L(\theta)^{\gamma_t}\end{equation}with $0=\gamma_0 < \ldots < \gamma_T = 1$. In words, this sequence is a **geometric bridge**, which interpolates between the prior and the posterior. SMC tempering is implemented in the same way as IBIS: as a sub-class of `FeynmanKac`, whose `__init__` function takes as argument a `StaticModel` object.
###Code
fk_tempering = ssp.AdaptiveTempering(my_static_model)
my_temp_alg = particles.SMC(fk=fk_tempering, N=1000, ESSrmin=1., verbose=True)
my_temp_alg.run()
###Output
t=0, ESS=500.00, tempering exponent=2.94e-05
t=1, Metropolis acc. rate (over 6 steps): 0.275, ESS=500.00, tempering exponent=0.000325
t=2, Metropolis acc. rate (over 6 steps): 0.261, ESS=500.00, tempering exponent=0.0018
t=3, Metropolis acc. rate (over 6 steps): 0.253, ESS=500.00, tempering exponent=0.0061
t=4, Metropolis acc. rate (over 6 steps): 0.287, ESS=500.00, tempering exponent=0.0193
t=5, Metropolis acc. rate (over 6 steps): 0.338, ESS=500.00, tempering exponent=0.0636
t=6, Metropolis acc. rate (over 6 steps): 0.347, ESS=500.00, tempering exponent=0.218
t=7, Metropolis acc. rate (over 5 steps): 0.358, ESS=500.00, tempering exponent=0.765
t=8, Metropolis acc. rate (over 5 steps): 0.366, ESS=941.43, tempering exponent=1
###Markdown
**Note**: Recall that `SMC` resamples every time the ESS drops below value N times option `ESSrmin`; here we set it to 1, since we want to resample at every time step. This makes sense: Adaptive SMC chooses adaptively the successive values of $\gamma_t$ so that the ESS drops to $N/2$ (by default). **Note**: we use option `verbose=True` in `SMC` in order to print some information on the intermediate distributions. We have not saved the intermediate results this time (option `store_history` was not set) since they are not particularly interesting. Let's look at the final results:
###Code
for i, p in enumerate(['mu', 'sigma']):
plt.subplot(1, 2, i + 1)
sb.distplot(my_temp_alg.X.theta[p])
plt.xlabel(p)
###Output
_____no_output_____
###Markdown
This looks reasonable!You can see from the output that the algorithm automatically chooses the tempering exponents $\gamma_1, \gamma_2,\ldots$. In fact, at iteration $t$, the next value for $\gamma$ is set so that the ESS drops at most to $N/2$. You can change this particular threshold by passing argument ESSrmin to `AdaptiveTempering`. (Warning: do not mistake this for the `ESSrmin` argument of class `SMC`):
###Code
lazy_tempering = ssp.AdaptiveTempering(my_static_model, ESSrmin = 0.1)
lazy_alg = particles.SMC(fk=lazy_tempering, N=1000, verbose=True)
lazy_alg.run()
###Output
t=0, ESS=100.00, tempering exponent=0.00097
t=1, Metropolis acc. rate (over 5 steps): 0.233, ESS=100.00, tempering exponent=0.0217
t=2, Metropolis acc. rate (over 6 steps): 0.323, ESS=100.00, tempering exponent=0.315
t=3, Metropolis acc. rate (over 5 steps): 0.338, ESS=520.51, tempering exponent=1
###Markdown
SMC samplersSMC samplers are SMC algorithms that sample from a sequence of target distributions. In this tutorial, these target distributions will be Bayesian posterior distributions of static models. SMC samplers are covered in Chapter 17 of the book. Defining a static modelA static model is a Python object that represents a Bayesian model with static parameter $\theta$. One may define a static model by subclassing base class `StaticModel`, and defining method `logpyt`, which evaluates the log-likelihood of datapoint $Y_t$ (given $\theta$ and past datapoints $Y_{0:t-1}$). Here is a simple example:
###Code
%matplotlib inline
import warnings; warnings.simplefilter('ignore') # hide warnings
from matplotlib import pyplot as plt
import seaborn as sb
from scipy import stats
import particles
from particles import smc_samplers as ssp
from particles import distributions as dists
class ToyModel(ssp.StaticModel):
def logpyt(self, theta, t): # density of Y_t given theta and Y_{0:t-1}
return stats.norm.logpdf(self.data[t], loc=theta['mu'],
scale = theta['sigma'])
###Output
_____no_output_____
###Markdown
In words, we are considering a model where the observations are $Y_t\sim N(\mu, \sigma^2)$. The parameter is $\theta=(\mu, \sigma)$.Class `ToyModel` contains information about the likelihood of the considered model, but not about its prior, or the considered data. First, let's define those:
###Code
T = 1000
my_data = stats.norm.rvs(loc=3.14, size=T) # simulated data
my_prior = dists.StructDist({'mu': dists.Normal(scale=10.),
'sigma': dists.Gamma()})
###Output
_____no_output_____
###Markdown
For more details about how to define prior distributions, see the documentation of module `distributions`, or the previous [tutorial on Bayesian estimation of state-space models](Bayes_estimation_ssm.ipynb). Now that we have everything, let's specify our static model:
###Code
my_static_model = ToyModel(data=my_data, prior=my_prior)
###Output
_____no_output_____
###Markdown
This time, object `my_static_model` has enough information to define the posterior distribution(s) of the model (given all data, or part of the data). In fact, it inherits from `StaticModel` method `logpost`, which evaluates (for a collection of $\theta$ values) the posterior log-density at any time $t$ (meaning given data $y_{0:t}$).
###Code
thetas = my_prior.rvs(size=5)
my_static_model.logpost(thetas, t=2) # if t is omitted, gives the full posterior
###Output
_____no_output_____
###Markdown
The input of `logpost` (and output of `myprior.rvs()`) is a [structured array](https://docs.scipy.org/doc/numpy/user/basics.rec.html), with the same keys as the prior distribution:
###Code
thetas['mu'][0]
###Output
_____no_output_____
###Markdown
Typically, you won't need to call `logpost` yourself, this will be done by the SMC sampler for you. IBISThe IBIS (iterated batch importance sampling) algorithm is a SMC sampler that samples iteratively from a sequence of posterior distributions, $p(\theta|y_{0:t})$, for $t=0,1,\ldots$. Module `smc_samplers` defines `IBIS` as a subclass of `FeynmanKac`.
###Code
my_ibis = ssp.IBIS(my_static_model)
my_alg = particles.SMC(fk=my_ibis, N=1000, store_history=True)
my_alg.run()
###Output
_____no_output_____
###Markdown
Since we set `store_history` to `True`, the particles and their weights have been saved at every time (in attribute `hist`, see previous tutorials on smoothing). Let's plot the posterior distributions of $\mu$ and $\sigma$ at various times.
###Code
plt.style.use('ggplot')
for i, p in enumerate(['mu', 'sigma']):
plt.subplot(1, 2, i + 1)
for t in [100, 300, 900]:
plt.hist(my_alg.hist.X[t].theta[p], weights=my_alg.hist.wgts[t].W, label="t=%i" % t,
alpha=0.5, density=True)
plt.xlabel(p)
plt.legend();
###Output
_____no_output_____
###Markdown
As expected, the posterior distribution concentrates progressively around the true values. As before, once the algorithm is run, `my_alg.X` contains the N final particles. However, object `my_alg.X` is no longer a simple (N,) or (N,d) numpy array. It is a `ThetaParticles` object, with attributes:* theta: a structured array: as mentioned above, this is an array with fields; i.e. `my_alg.X.theta['mu']` is a (N,) array that contains the $\mu-$component of the $N$ particles; * `lpost`: a (N,) numpy array that contains the target (posterior) log-density of each of the N particles;* `acc_rates`: a list of the acceptance rates of the resample-move steps.
###Code
print(["%2.2f%%" % (100 * np.mean(r)) for r in my_alg.X.acc_rates])
plt.hist(my_alg.X.lpost, 30);
###Output
['25.22%', '23.92%', '23.83%', '28.42%', '34.06%', '33.80%', '34.38%', '34.80%', '35.77%', '35.95%']
###Markdown
You do not need to know much more about class `ThetaParticles` for most practical purposes (see however the documentation of module `smc_samplers` if you do want to know more, e.g. in order to implement other classes of SMC samplers). Regarding the Metropolis stepsAs the text output of `my_alg.run()` suggests, the algorithm "resample-moves" whenever the ESS is below a certain threshold ($N/2$ by default). When this occurs, particles are resampled, and then moved through a certain number of Metropolis-Hastings steps. By default, the proposal is a Gaussian random walk, and both the number of steps and the covariance matrix of the random walk are chosen automatically as follows: * the covariance matrix of the random walk is set to `scale` times the empirical (weighted) covariance matrix of the particles. The default value for `scale` is $2.38 / \sqrt{d}$, where $d$ is the dimension of $\theta$. * the algorithm performs Metropolis steps until the relative increase of the average distance between the starting point and the end point is below a certain threshold $\delta$. Class `IBIS` takes as an optional argument `mh_options`, a dictionary which may contain the following (key, values) pairs: * `'type_prop'`: either `'random walk'` or `'independent'`; in the latter case, an independent Gaussian proposal is used. The mean of the Gaussian is set to the weighted mean of the particles. The variance is set to `scale` times the weighted variance of the particles. * `'scale'`: the scale of the proposal (as explained above). * `'nsteps'`: number of steps. If set to `0`, the adaptive strategy described above is used. Let's illustrate all this by calling IBIS again:
###Code
alt_ibis = ssp.IBIS(my_static_model, mh_options={'type_prop': 'independent',
'nsteps': 10})
alt_alg = particles.SMC(fk=alt_ibis, N=1000, ESSrmin=0.2)
alt_alg.run()
###Output
_____no_output_____
###Markdown
Well, apparently the algorithm did what we asked. We have also changed the resampling threshold: with `ESSrmin=0.2` and $N=1000$, a resample-move step is now triggered whenever the ESS drops below 200. Let's see how the ESS evolved:
###Code
plt.plot(alt_alg.summaries.ESSs)
plt.xlabel('t')
plt.ylabel('ESS')
###Output
_____no_output_____
###Markdown
As expected, the algorithm waits until the ESS is below 200 to trigger a resample-move step. SMC temperingSMC tempering is an SMC sampler that samples iteratively from the following sequence of distributions:\begin{equation}\pi_t(\theta) \propto \pi(\theta) L(\theta)^{\gamma_t}\end{equation}with $0=\gamma_0 < \ldots < \gamma_T = 1$. In words, this sequence is a **geometric bridge**, which interpolates between the prior and the posterior. SMC tempering is implemented in the same way as IBIS: as a sub-class of `FeynmanKac`, whose `__init__` function takes as argument a `StaticModel` object.
###Code
fk_tempering = ssp.AdaptiveTempering(my_static_model)
my_temp_alg = particles.SMC(fk=fk_tempering, N=1000, ESSrmin=1., verbose=True)
my_temp_alg.run()
###Output
t=0, ESS=500.00, tempering exponent=3.02e-05
t=1, Metropolis acc. rate (over 5 steps): 0.243, ESS=500.00, tempering exponent=0.000328
t=2, Metropolis acc. rate (over 7 steps): 0.245, ESS=500.00, tempering exponent=0.00177
t=3, Metropolis acc. rate (over 7 steps): 0.241, ESS=500.00, tempering exponent=0.00601
t=4, Metropolis acc. rate (over 7 steps): 0.288, ESS=500.00, tempering exponent=0.0193
t=5, Metropolis acc. rate (over 6 steps): 0.333, ESS=500.00, tempering exponent=0.0637
t=6, Metropolis acc. rate (over 5 steps): 0.366, ESS=500.00, tempering exponent=0.231
t=7, Metropolis acc. rate (over 6 steps): 0.357, ESS=500.00, tempering exponent=0.77
t=8, Metropolis acc. rate (over 5 steps): 0.358, ESS=943.23, tempering exponent=1
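###Markdown
A brief side note on how the adaptive choice visible in the output works (this is a general property of adaptive tempering, stated here for clarity, not something specific to this run): when the exponent moves from $\gamma_{t-1}$ to $\gamma_t$, the incremental importance weight of a particle $\theta$ is $L(\theta)^{\gamma_t - \gamma_{t-1}}$, and the next exponent is found numerically so that the ESS of these incremental weights hits its target ($N/2$ by default). This is why the printed exponents increase by larger and larger steps until they reach 1.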
###Markdown
**Note**: Recall that `SMC` resamples every time the ESS drops below N times option `ESSrmin`; here we set it to 1, since we want to resample at every time step. This makes sense: adaptive SMC adaptively chooses the successive values of $\gamma_t$ so that the ESS drops to $N/2$ (by default). **Note**: we use option `verbose=True` in `SMC` in order to print some information on the intermediate distributions. We have not saved the intermediate results this time (option `store_history` was not set) since they are not particularly interesting. Let's look at the final results:
###Code
for i, p in enumerate(['mu', 'sigma']):
plt.subplot(1, 2, i + 1)
sb.distplot(my_temp_alg.X.theta[p])
plt.xlabel(p)
###Output
_____no_output_____
###Markdown
This looks reasonable!You can see from the output that the algorithm automatically chooses the tempering exponents $\gamma_1, \gamma_2,\ldots$. In fact, at iteration $t$, the next value of $\gamma$ is set so that the ESS drops at most to $N/2$. You can change this particular threshold by passing argument `ESSrmin` to `AdaptiveTempering`. (Warning: do not confuse this with the `ESSrmin` argument of class `SMC`):
###Code
lazy_tempering = ssp.AdaptiveTempering(my_static_model, ESSrmin = 0.1)
lazy_alg = particles.SMC(fk=lazy_tempering, N=1000, verbose=True)
lazy_alg.run()
###Output
t=0, ESS=100.00, tempering exponent=0.00104
t=1, Metropolis acc. rate (over 6 steps): 0.247, ESS=100.00, tempering exponent=0.0208
t=2, Metropolis acc. rate (over 6 steps): 0.295, ESS=100.00, tempering exponent=0.514
t=3, Metropolis acc. rate (over 6 steps): 0.370, ESS=760.26, tempering exponent=1
|
examples/Python/Advanced/global_registration.ipynb | ###Markdown
Global registrationBoth [ICP registration](../Basic/icp_registration.ipynb) and [Colored point cloud registration](colored_point_cloud_registration.ipynb) are known as local registration methods because they rely on a rough alignment as initialization. This tutorial shows another class of registration methods, known as **global** registration. This family of algorithms does not require an alignment for initialization. These methods usually produce less tight alignment results and are used as initialization of the local methods. VisualizationThis helper function visualizes the transformed source point cloud together with the target point cloud:
###Code
def draw_registration_result(source, target, transformation):
source_temp = copy.deepcopy(source)
target_temp = copy.deepcopy(target)
source_temp.paint_uniform_color([1, 0.706, 0])
target_temp.paint_uniform_color([0, 0.651, 0.929])
source_temp.transform(transformation)
o3d.visualization.draw_geometries([source_temp, target_temp],
zoom=0.4559,
front=[0.6452, -0.3036, -0.7011],
lookat=[1.9892, 2.0208, 1.8945],
up=[-0.2779, -0.9482 ,0.1556])
###Output
_____no_output_____
###Markdown
Extract geometric featureWe downsample the point cloud, estimate normals, then compute an FPFH feature for each point. The FPFH feature is a 33-dimensional vector that describes the local geometric property of a point. A nearest neighbor query in the 33-dimensional space can return points with similar local geometric structures. See [\[Rasu2009\]](../reference.html#rasu2009) for details.
###Code
def preprocess_point_cloud(pcd, voxel_size):
print(":: Downsample with a voxel size %.3f." % voxel_size)
pcd_down = pcd.voxel_down_sample(voxel_size)
radius_normal = voxel_size * 2
print(":: Estimate normal with search radius %.3f." % radius_normal)
pcd_down.estimate_normals(
o3d.geometry.KDTreeSearchParamHybrid(radius=radius_normal, max_nn=30))
radius_feature = voxel_size * 5
print(":: Compute FPFH feature with search radius %.3f." % radius_feature)
pcd_fpfh = o3d.registration.compute_fpfh_feature(
pcd_down,
o3d.geometry.KDTreeSearchParamHybrid(radius=radius_feature, max_nn=100))
return pcd_down, pcd_fpfh
###Output
_____no_output_____
###Markdown
InputThe code below reads a source point cloud and a target point cloud from two files. They are misaligned with an identity matrix as transformation.
###Code
def prepare_dataset(voxel_size):
print(":: Load two point clouds and disturb initial pose.")
source = o3d.io.read_point_cloud("../../TestData/ICP/cloud_bin_0.pcd")
target = o3d.io.read_point_cloud("../../TestData/ICP/cloud_bin_1.pcd")
trans_init = np.asarray([[0.0, 0.0, 1.0, 0.0], [1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]])
source.transform(trans_init)
draw_registration_result(source, target, np.identity(4))
source_down, source_fpfh = preprocess_point_cloud(source, voxel_size)
target_down, target_fpfh = preprocess_point_cloud(target, voxel_size)
return source, target, source_down, target_down, source_fpfh, target_fpfh
voxel_size = 0.05 # means 5cm for this dataset
source, target, source_down, target_down, source_fpfh, target_fpfh = prepare_dataset(voxel_size)
###Output
_____no_output_____
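###Markdown
As mentioned in the feature-extraction section above, a nearest neighbor query in the 33-dimensional FPFH space returns points with similar local geometry. The following is a minimal, purely illustrative sketch (not part of the original tutorial) of such a query, using the features computed by `prepare_dataset`:
###Code
# Illustrative sketch: nearest-neighbor lookup in FPFH feature space.
# Build a KD-tree over the target features and query it with the FPFH vector
# of the first (downsampled) source point.
feature_tree = o3d.geometry.KDTreeFlann(target_fpfh)
query = source_fpfh.data[:, 0]  # 33-dimensional feature of source point 0
k, idx, dist2 = feature_tree.search_knn_vector_xd(query, 1)
print("Most similar target point index:", idx[0])
###Output
_____no_output_____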
###Markdown
RANSACWe use RANSAC for global registration. In each RANSAC iteration, `ransac_n` random points are picked from the source point cloud. Their corresponding points in the target point cloud are detected by querying the nearest neighbor in the 33-dimensional FPFH feature space. A pruning step uses fast pruning algorithms to quickly reject false matches early.Open3D provides the following pruning algorithms:- `CorrespondenceCheckerBasedOnDistance` checks if aligned point clouds are close (less than the specified threshold).- `CorrespondenceCheckerBasedOnEdgeLength` checks if the lengths of any two arbitrary edges (lines formed by two vertices) individually drawn from the source and target correspondences are similar. This tutorial checks that $||edge_{source}|| > 0.9 \cdot ||edge_{target}||$ and $||edge_{target}|| > 0.9 \cdot ||edge_{source}||$ are true.- `CorrespondenceCheckerBasedOnNormal` considers the vertex normal affinity of any correspondences. It computes the dot product of two normal vectors. It takes a radian value for the threshold.Only matches that pass the pruning step are used to compute a transformation, which is validated on the entire point cloud. The core function is `registration_ransac_based_on_feature_matching`. The most important hyperparameter of this function is `RANSACConvergenceCriteria`. It defines the maximum number of RANSAC iterations and the maximum number of validation steps. The larger these two numbers are, the more accurate the result is, but also the more time the algorithm takes.We set the RANSAC parameters based on the empirical value provided by [\[Choi2015\]](../reference.html#choi2015).
###Code
def execute_global_registration(source_down, target_down, source_fpfh,
target_fpfh, voxel_size):
distance_threshold = voxel_size * 1.5
print(":: RANSAC registration on downsampled point clouds.")
print(" Since the downsampling voxel size is %.3f," % voxel_size)
print(" we use a liberal distance threshold %.3f." % distance_threshold)
result = o3d.registration.registration_ransac_based_on_feature_matching(
source_down, target_down, source_fpfh, target_fpfh, distance_threshold,
o3d.registration.TransformationEstimationPointToPoint(False), 4, [
o3d.registration.CorrespondenceCheckerBasedOnEdgeLength(0.9),
o3d.registration.CorrespondenceCheckerBasedOnDistance(
distance_threshold)
], o3d.registration.RANSACConvergenceCriteria(4000000, 500))
return result
result_ransac = execute_global_registration(source_down, target_down,
source_fpfh, target_fpfh,
voxel_size)
print(result_ransac)
draw_registration_result(source_down, target_down, result_ransac.transformation)
###Output
_____no_output_____
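###Markdown
The third pruning algorithm listed above, `CorrespondenceCheckerBasedOnNormal`, is not used in this tutorial's call. As a purely illustrative sketch (not part of the original tutorial), it could be appended to the list of checkers; the 30-degree angle below is a hypothetical choice, and the checker expects its threshold in radians.
###Code
# Hypothetical addition: a normal-affinity pruning check, which could be passed to
# registration_ransac_based_on_feature_matching alongside the other two checkers.
normal_checker = o3d.registration.CorrespondenceCheckerBasedOnNormal(np.deg2rad(30.0))
checkers = [
    o3d.registration.CorrespondenceCheckerBasedOnEdgeLength(0.9),
    o3d.registration.CorrespondenceCheckerBasedOnDistance(voxel_size * 1.5),
    normal_checker,
]
###Output
_____no_output_____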
###Markdown
**Note:**Open3D provides a faster implementation for global registration. Please refer to [Fast global registration](#fast-global-registration). Local refinementFor performance reasons, the global registration is only performed on a heavily down-sampled point cloud. The result is also not tight. We use [Point-to-plane ICP](../icp_registration.ipynb#point-to-plane-ICP) to further refine the alignment.
###Code
def refine_registration(source, target, source_fpfh, target_fpfh, voxel_size):
distance_threshold = voxel_size * 0.4
print(":: Point-to-plane ICP registration is applied on original point")
print(" clouds to refine the alignment. This time we use a strict")
print(" distance threshold %.3f." % distance_threshold)
result = o3d.registration.registration_icp(
source, target, distance_threshold, result_ransac.transformation,
o3d.registration.TransformationEstimationPointToPlane())
return result
result_icp = refine_registration(source, target, source_fpfh, target_fpfh,
voxel_size)
print(result_icp)
draw_registration_result(source, target, result_icp.transformation)
###Output
_____no_output_____
###Markdown
Fast global registrationThe RANSAC based global registration solution may take a long time due to countless model proposals and evaluations. [\[Zhou2016\]](../reference.html#zhou2016) introduced a faster approach that quickly optimizes line process weights of a few correspondences. As there is no model proposal and evaluation involved for each iteration, the approach proposed in [\[Zhou2016\]](../reference.html#zhou2016) can save a lot of computational time.This tutorial compares the running time of the RANSAC based global registration to the implementation of [\[Zhou2016\]](../reference.html#zhou2016). InputWe use the same input as in the global registration example above.
###Code
voxel_size = 0.05 # means 5cm for the dataset
source, target, source_down, target_down, source_fpfh, target_fpfh = \
prepare_dataset(voxel_size)
###Output
_____no_output_____
###Markdown
BaselineIn the code below we time the global registration approach.
###Code
start = time.time()
result_ransac = execute_global_registration(source_down, target_down,
source_fpfh, target_fpfh,
voxel_size)
print("Global registration took %.3f sec.\n" % (time.time() - start))
print(result_ransac)
draw_registration_result(source_down, target_down,
result_ransac.transformation)
###Output
_____no_output_____
###Markdown
Fast global registrationWith the same input used for the baseline, the code below calls the implementation of [\[Zhou2016\]](../reference.html#zhou2016).
###Code
def execute_fast_global_registration(source_down, target_down, source_fpfh,
target_fpfh, voxel_size):
distance_threshold = voxel_size * 0.5
print(":: Apply fast global registration with distance threshold %.3f" \
% distance_threshold)
result = o3d.registration.registration_fast_based_on_feature_matching(
source_down, target_down, source_fpfh, target_fpfh,
o3d.registration.FastGlobalRegistrationOption(
maximum_correspondence_distance=distance_threshold))
return result
start = time.time()
result_fast = execute_fast_global_registration(source_down, target_down,
source_fpfh, target_fpfh,
voxel_size)
print("Fast global registration took %.3f sec.\n" % (time.time() - start))
print(result_fast)
draw_registration_result(source_down, target_down,
result_fast.transformation)
###Output
_____no_output_____
###Markdown
Global registrationBoth [ICP registration](../Basic/icp_registration.ipynb) and [Colored point cloud registration](colored_point_cloud_registration.ipynb) are known as local registration methods because they rely on a rough alignment as initialization. This tutorial shows another class of registration methods, known as **global** registration. This family of algorithms does not require an alignment for initialization. These methods usually produce less tight alignment results and are used as initialization of the local methods. VisualizationThe helper function visualizes the transformed source point cloud together with the target point cloud.
###Code
def draw_registration_result(source, target, transformation):
source_temp = copy.deepcopy(source)
target_temp = copy.deepcopy(target)
source_temp.paint_uniform_color([1, 0.706, 0])
target_temp.paint_uniform_color([0, 0.651, 0.929])
source_temp.transform(transformation)
o3d.visualization.draw_geometries([source_temp, target_temp],
zoom=0.4559,
front=[0.6452, -0.3036, -0.7011],
lookat=[1.9892, 2.0208, 1.8945],
up=[-0.2779, -0.9482 ,0.1556])
###Output
_____no_output_____
###Markdown
Extract geometric featureWe downsample the point cloud, estimate normals, then compute an FPFH feature for each point. The FPFH feature is a 33-dimensional vector that describes the local geometric property of a point. A nearest neighbor query in the 33-dimensional space can return points with similar local geometric structures. See [\[Rasu2009\]](../reference.html#rasu2009) for details.
###Code
def preprocess_point_cloud(pcd, voxel_size):
print(":: Downsample with a voxel size %.3f." % voxel_size)
pcd_down = pcd.voxel_down_sample(voxel_size)
radius_normal = voxel_size * 2
print(":: Estimate normal with search radius %.3f." % radius_normal)
pcd_down.estimate_normals(
o3d.geometry.KDTreeSearchParamHybrid(radius=radius_normal, max_nn=30))
radius_feature = voxel_size * 5
print(":: Compute FPFH feature with search radius %.3f." % radius_feature)
pcd_fpfh = o3d.registration.compute_fpfh_feature(
pcd_down,
o3d.geometry.KDTreeSearchParamHybrid(radius=radius_feature, max_nn=100))
return pcd_down, pcd_fpfh
###Output
_____no_output_____
###Markdown
InputThe code below reads a source point cloud and a target point cloud from two files. They are misaligned with an identity matrix as transformation.
###Code
def prepare_dataset(voxel_size):
print(":: Load two point clouds and disturb initial pose.")
source = o3d.io.read_point_cloud("../../TestData/ICP/cloud_bin_0.pcd")
target = o3d.io.read_point_cloud("../../TestData/ICP/cloud_bin_1.pcd")
trans_init = np.asarray([[0.0, 0.0, 1.0, 0.0], [1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]])
source.transform(trans_init)
draw_registration_result(source, target, np.identity(4))
source_down, source_fpfh = preprocess_point_cloud(source, voxel_size)
target_down, target_fpfh = preprocess_point_cloud(target, voxel_size)
return source, target, source_down, target_down, source_fpfh, target_fpfh
voxel_size = 0.05 # means 5cm for this dataset
source, target, source_down, target_down, source_fpfh, target_fpfh = prepare_dataset(voxel_size)
###Output
_____no_output_____
###Markdown
RANSACWe use RANSAC for global registration. In each RANSAC iteration, `ransac_n` random points are picked from the source point cloud. Their corresponding points in the target point cloud are detected by querying the nearest neighbor in the 33-dimensional FPFH feature space. A pruning step uses fast pruning algorithms to quickly reject false matches early.Open3D provides the following pruning algorithms:- `CorrespondenceCheckerBasedOnDistance` checks if aligned point clouds are close (less than the specified threshold).- `CorrespondenceCheckerBasedOnEdgeLength` checks if the lengths of any two arbitrary edges (lines formed by two vertices) individually drawn from the source and target correspondences are similar. This tutorial checks that $||edge_{source}|| > 0.9 \times ||edge_{target}||$ and $||edge_{target}|| > 0.9 \times ||edge_{source}||$ are true.- `CorrespondenceCheckerBasedOnNormal` considers the vertex normal affinity of any correspondences. It computes the dot product of two normal vectors. It takes a radian value for the threshold.Only matches that pass the pruning step are used to compute a transformation, which is validated on the entire point cloud. The core function is `registration_ransac_based_on_feature_matching`. The most important hyperparameter of this function is `RANSACConvergenceCriteria`. It defines the maximum number of RANSAC iterations and the maximum number of validation steps. The larger these two numbers are, the more accurate the result is, but also the more time the algorithm takes.We set the RANSAC parameters based on the empirical value provided by [\[Choi2015\]](../reference.html#choi2015).
###Code
def execute_global_registration(source_down, target_down, source_fpfh,
target_fpfh, voxel_size):
distance_threshold = voxel_size * 1.5
print(":: RANSAC registration on downsampled point clouds.")
print(" Since the downsampling voxel size is %.3f," % voxel_size)
print(" we use a liberal distance threshold %.3f." % distance_threshold)
result = o3d.registration.registration_ransac_based_on_feature_matching(
source_down, target_down, source_fpfh, target_fpfh, distance_threshold,
o3d.registration.TransformationEstimationPointToPoint(False), 4, [
o3d.registration.CorrespondenceCheckerBasedOnEdgeLength(0.9),
o3d.registration.CorrespondenceCheckerBasedOnDistance(
distance_threshold)
], o3d.registration.RANSACConvergenceCriteria(4000000, 500))
return result
result_ransac = execute_global_registration(source_down, target_down,
source_fpfh, target_fpfh,
voxel_size)
print(result_ransac)
draw_registration_result(source_down, target_down, result_ransac.transformation)
###Output
_____no_output_____
###Markdown
**Note:**Open3D provides a faster implementation for global registration. Please refer to [Fast global registration](#fast-global-registration). Local refinementFor performance reasons, the global registration is only performed on a heavily down-sampled point cloud. The result is also not tight. We use [Point-to-plane ICP](../icp_registration.ipynb#point-to-plane-ICP) to further refine the alignment.
###Code
def refine_registration(source, target, source_fpfh, target_fpfh, voxel_size):
distance_threshold = voxel_size * 0.4
print(":: Point-to-plane ICP registration is applied on original point")
print(" clouds to refine the alignment. This time we use a strict")
print(" distance threshold %.3f." % distance_threshold)
result = o3d.registration.registration_icp(
source, target, distance_threshold, result_ransac.transformation,
o3d.registration.TransformationEstimationPointToPlane())
return result
result_icp = refine_registration(source, target, source_fpfh, target_fpfh,
voxel_size)
print(result_icp)
draw_registration_result(source, target, result_icp.transformation)
###Output
_____no_output_____
###Markdown
Fast global registrationThe RANSAC based global registration solution may take a long time due to countless model proposals and evaluations. [\[Zhou2016\]](../reference.html#zhou2016) introduced a faster approach that quickly optimizes line process weights of a few correspondences. As there is no model proposal and evaluation involved for each iteration, the approach proposed in [\[Zhou2016\]](../reference.html#zhou2016) can save a lot of computational time.This tutorial compares the running time of the RANSAC based global registration to the implementation of [\[Zhou2016\]](../reference.html#zhou2016). InputWe use the same input as in the global registration example above.
###Code
voxel_size = 0.05 # means 5cm for the dataset
source, target, source_down, target_down, source_fpfh, target_fpfh = \
prepare_dataset(voxel_size)
###Output
_____no_output_____
###Markdown
BaselineIn the code below we time the global registration approach.
###Code
start = time.time()
result_ransac = execute_global_registration(source_down, target_down,
source_fpfh, target_fpfh,
voxel_size)
print("Global registration took %.3f sec.\n" % (time.time() - start))
print(result_ransac)
draw_registration_result(source_down, target_down,
result_ransac.transformation)
###Output
_____no_output_____
###Markdown
Fast global registrationWith the same input used for the baseline, the code below calls the implementation of [\[Zhou2016\]](../reference.html#zhou2016).
###Code
def execute_fast_global_registration(source_down, target_down, source_fpfh,
target_fpfh, voxel_size):
distance_threshold = voxel_size * 0.5
print(":: Apply fast global registration with distance threshold %.3f" \
% distance_threshold)
result = o3d.registration.registration_fast_based_on_feature_matching(
source_down, target_down, source_fpfh, target_fpfh,
o3d.registration.FastGlobalRegistrationOption(
maximum_correspondence_distance=distance_threshold))
return result
start = time.time()
result_fast = execute_fast_global_registration(source_down, target_down,
source_fpfh, target_fpfh,
voxel_size)
print("Fast global registration took %.3f sec.\n" % (time.time() - start))
print(result_fast)
draw_registration_result(source_down, target_down,
result_fast.transformation)
###Output
_____no_output_____ |
Particle Filter mod.ipynb | ###Markdown
Particle Filter {-}
###Code
from numpy import arange, array, sqrt, random, sin, cos, zeros, pi, exp, cumsum, mean, var
import matplotlib.pyplot as plt
# System parameters
samples = 100 # Number of samples
dt = 1 # Time interval [second]
n = 100 # Number of particles
# Process noise
Q = 1
# Measurement noise
R = 1
# Initial state
x = 0.1
xt = x
# Initial particle variance
sigmap = 2
# Generate initial n particles
xn = zeros(n)
for i in range(0, n):
xn[i] = x + random.normal(0, sqrt(sigmap))
# Plot initial particles and corresponding histogram
plt.subplot(1, 2, 1)
plt.title('Initial particle distribution')
plt.xlabel('Time step')
plt.ylabel('Position')
plt.plot(zeros(n), xn, color='orange', marker='o', linestyle='None')
plt.grid()
plt.subplot(1, 2, 2)
plt.title('Histogram')
plt.xlabel('Count')
plt.hist(xn, align='mid', alpha=0.75)
plt.grid()
plt.show()
# Initialize plot vectors
xt_all = []; x_all = []; z_all = []
def proc(x, Q):
return x + random.normal(0, sqrt(Q))
def meas(x, R):
return x + random.normal(0, sqrt(R))
# Main loop
for k in range(0, samples):
# Generate dynamic process
xt = proc(xt, Q)
# Generate measurement
z = meas(xt, R)
# Time update (loop over n particles)
xnp = zeros(n);zp = zeros(n);w = zeros(n)
for i in range(0, n):
        # Predicted state: propagate each particle from its own previous value
        xnp[i] = proc(xn[i], Q)
# Compute measurement
zp[i] = meas(xnp[i], R)
# Compute weights (normal distributed measurements)
w[i] = (1/sqrt(2*pi*R))*exp(-(z - zp[i])**2/(2*R))
# Normalize to form probability distribution
w = w/sum(w)
    # Resampling: draw new particles from the cumulative probability distribution
    cdf = cumsum(w)
    for i in range(0, n):
        u = random.uniform(0, 1)
        # Index of the first particle whose cumulative weight reaches u
        ind = next((j for j, c in enumerate(cdf) if c >= u), n - 1)
        xn[i] = xnp[ind]
# State estimation
x = mean(xn)
var_est = var(xn)
# Accumulate plot vectors
x_all.append(x)
xt_all.append(xt)
z_all.append(z)
# Plot distributions
plt.subplot(1, 2, 1)
plt.title('Predicted particle positions')
plt.plot(0, x, color='green', marker='o')
plt.plot(w, xnp, color='orange', marker='.', linestyle='None')
plt.xlabel('Weight')
plt.ylabel('Position')
plt.grid()
plt.subplot(1, 2, 2)
plt.title('Predicted measurements')
plt.plot(0, z, color='green', marker='o')
plt.plot(w, zp, color='red', marker='.', linestyle='None')
plt.xlabel('Weight')
plt.grid()
plt.show()
# Time
time = arange(0, samples)*dt
# Plot estimates
plt.title('State estimation')
plt.plot(time, xt_all, color='blue')
plt.plot(time, x_all, 'g.')
plt.grid()
plt.show()
###Output
_____no_output_____ |
examples/nlp.ipynb | ###Markdown
Natural Language Processing---------------------------------------------------This example shows how to use ATOM to quickly go from raw text data to model predictions.Import the 20 newsgroups text dataset from [sklearn.datasets](https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html). The dataset comprises around 18000 articles on 20 topics. The goal is to predict the topic of every article. Load the data
###Code
import numpy as np
from atom import ATOMClassifier
from sklearn.datasets import fetch_20newsgroups
# Use only a subset of the available topics for faster processing
X_text, y_text = fetch_20newsgroups(
return_X_y=True,
categories=[
'alt.atheism',
'sci.med',
'comp.windows.x',
'misc.forsale',
'rec.autos',
],
shuffle=True,
random_state=1,
)
X_text = np.array(X_text).reshape(-1, 1)
###Output
_____no_output_____
###Markdown
Run the pipeline
###Code
atom = ATOMClassifier(X_text, y_text, index=True, test_size=0.3, verbose=2, warnings=False)
atom.dataset # Note that the feature is automatically named 'corpus'
# Let's have a look at the first document
atom.corpus[0]
# Clean the documents from noise (emails, numbers, etc...)
atom.textclean()
# Have a look at the removed items
atom.drops
# Check how the first document changed
atom.corpus[0]
atom.corpus[atom.corpus.str.contains("mips")]
# Convert the strings to a sequence of words
atom.tokenize()
# Print the first few words of the first document
atom.corpus[0][:7]
# Normalize the text to a predefined standard
atom.normalize(stopwords="english", lemmatize=True)
atom.corpus[0][:7] # Check changes...
# Visualize the most common words with a wordcloud
atom.plot_wordcloud()
# Have a look at the most frequent bigrams
atom.plot_ngrams(2)
# Create the bigrams using the tokenizer
atom.tokenize(bigram_freq=215)
atom.bigrams
# As a last step before modelling, convert the words to vectors
atom.vectorize(strategy="tfidf")
# The dimensionality of the dataset has increased a lot!
atom.shape
# Note that the data is sparse and the columns are named
# after the words they are embedding
atom.dtypes
# When the dataset is sparse, stats() shows the density
atom.stats()
# Check which models have support for sparse matrices
atom.available_models()[["acronym", "fullname", "accepts_sparse"]]
# Train the model
atom.run(models="RF", metric="f1_weighted")
###Output
Training ========================= >>
Models: RF
Metric: f1_weighted
Results for Random Forest:
Fit ---------------------------------------------
Train evaluation --> f1_weighted: 1.0
Test evaluation --> f1_weighted: 0.9296
Time elapsed: 32.115s
-------------------------------------------------
Total time: 32.115s
Final results ==================== >>
Duration: 32.115s
-------------------------------------
Random Forest --> f1_weighted: 0.9296
###Markdown
Analyze results
###Code
atom.evaluate()
atom.plot_confusion_matrix(figsize=(10, 10))
atom.decision_plot(index=0, target=atom.predict(0), show=15)
atom.beeswarm_plot(target=0, show=15)
###Output
100%|===================| 4252/4265 [04:35<00:00]
###Markdown
Interactive NLPThe `seaqube` package provides a simple toolkit for the easy use of pre-trained NLP models or self-trained models, e.g. from `gensim`. _Whatever NLP model is used: if the model training, saving and loading process is implemented in a class which inherits from `SeaQuBeWordEmbeddingsModel`, the seaqube toolkit can be used for interactive NLP usage._
###Code
from seaqube.nlp.types import SeaQuBeWordEmbeddingsModel
# Lets have a look at a contexted based NLP model, called Context2Vec
from seaqube.nlp.context2vec.context2vec import Context2Vec
# Import some seaqube tools:
from seaqube.nlp.tools import word_count_list
from seaqube.nlp.types import RawModelTinCan
from seaqube.nlp.seaqube_model import SeaQuBeNLPLoader, SeaQuBeCompressLoader
from seaqube.nlp.tools import tokenize_corpus
###Output
_____no_output_____
###Markdown
To use the seaqube word embedding evaluation, or just to make NLP usage easier, it is necessary to wrap such a model in a `SeaQuBeWordEmbeddingsModel`, as we can see in the following:
###Code
class SeaQuBeWordEmbeddingsModelC2V(SeaQuBeWordEmbeddingsModel):
def __init__(self, c2v: Context2Vec):
self.c2v = c2v
def vocabs(self):
return self.c2v.wv.vocabs
@property
def wv(self):
return self.c2v.wv
def word_vector(self, word):
return self.c2v.wv[word]
def matrix(self):
return self.c2v.wv.matrix
###Output
_____no_output_____
###Markdown
We load a corpus which will then be used for model training
###Code
star_wars_cites = ["How you get so big eating food of this kind?", "'Spring the trap!'", "Same as always…", "You came in that thing? You’re braver than I thought!", "Who’s scruffy looking?", "Let the Wookiee win.", "The Emperor is not as forgiving as I am", "I don’t know where you get your delusions, laserbrain.", "Shutting up, sir,", "Boring conversation anyway…", ]
corpus = tokenize_corpus(star_wars_cites)
###Output
_____no_output_____
###Markdown
Training a Context2Vec instance
###Code
c2v = Context2Vec(epoch=3)
c2v.train(corpus)
###Output
_____no_output_____
###Markdown
This Context2Vec model can be completely saved with:
###Code
c2v.save("starwars_c2v")
###Output
_____no_output_____
###Markdown
Now, it is time to wrap the model into a seaqube-understandable format
###Code
seaC2V = SeaQuBeWordEmbeddingsModelC2V(c2v)
tin_can = RawModelTinCan(seaC2V, word_count_list(corpus))
SeaQuBeCompressLoader.save_model_compressed(tin_can, "c2v_small")
###Output
_____no_output_____
###Markdown
The next step transforms an NLP model with extra information into an NLP object, which provides interactive usage
###Code
nlp = SeaQuBeNLPLoader.load_model_from_tin_can(tin_can, "c2v")
###Output
_____no_output_____
###Markdown
This line transforms a document into a SeaQuBeNLPDoc object, which provides some features for similarity and word embeddings
###Code
nlp("This is a test")
###Output
_____no_output_____
###Markdown
`doc` is a list of tokens
###Code
doc = list(nlp("This is a test")); print(doc); type(doc[0])
###Output
_____no_output_____
###Markdown
For every token, an embedding vector can be obtained. Here just for the first one:
###Code
nlp("This is a test")[0].vector
###Output
_____no_output_____
###Markdown
The token vectors can be merged into a single document vector, using the mean or the SIF algorithm, if a vector for the whole document context is needed.
###Code
nlp("This is a test").vector
nlp("This is a test").sif_vector
###Output
_____no_output_____
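###Markdown
A quick, purely illustrative check of the relation described above (this assumes that `.vector` is the plain mean of the token vectors, which is an assumption and not guaranteed by the text):
###Code
import numpy as np
# Hypothetical sanity check: compare the document vector with the mean of its token vectors.
doc = nlp("This is a test")
token_mean = np.mean([token.vector for token in doc], axis=0)
print(np.allclose(doc.vector, token_mean))
###Output
_____no_output_____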
###Markdown
Also, the similarity between words or documents can be calculated; for documents, the `sif` method gives a better semantic result.
###Code
nlp("Is the Emperor a laserbrain?").similarity("Boring conversation anyway…")
nlp("Is the Emperor a laserbrain?").similarity("Boring conversation anyway…", vector="sif")
###Output
_____no_output_____
###Markdown
Similarity for words
###Code
word = nlp("Is the Emperor a laserbrain?")[2]
word
word.similarity("Wookiee")
###Output
_____no_output_____
###Markdown
Get the vocab of the model
###Code
nlp.vocab()
###Output
_____no_output_____
###Markdown
Explaining Natural Language Processing (NLP) Models with CXPlainFirst, we load a number of reviews from the Internet Movie Database (IMDB) dataset which we will use as a training dataset to attempt to recognise the sentiment expressed in a given movie review.
###Code
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
from cxplain.util.test_util import TestUtil
num_words = 1024
num_samples = 500
(x_train, y_train), (x_test, y_test) = TestUtil.get_imdb(word_dictionary_size=num_words,
num_subsamples=num_samples)
###Output
_____no_output_____
###Markdown
Next, we fit a review classification pipeline that first transforms the reviews into their term frequency–inverse document frequency (tf-idf) vector representation, and then fits a Random Forest classifier to these vector representations of the training data.
###Code
from sklearn.pipeline import Pipeline
from cxplain.util.count_vectoriser import CountVectoriser
from sklearn.ensemble.forest import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfTransformer
explained_model = RandomForestClassifier(n_estimators=64, max_depth=5, random_state=1)
counter = CountVectoriser(num_words)
tfidf_transformer = TfidfTransformer()
explained_model = Pipeline([('counts', counter),
('tfidf', tfidf_transformer),
('model', explained_model)])
explained_model.fit(x_train, y_train);
###Output
_____no_output_____
###Markdown
After fitting the review classification pipeline, we wish to explain its decisions, i.e. what input features were most relevant for a given pipeline prediction. To do so, we train a causal explanation (CXPlain) model that can learn to explain any machine-learning model using the same training data. In practice, we have to define:- `model_builder`: The type of model we want to use as our CXPlain model. In this case we are using a neural explanation model using a recurrent neural network (RNN) structure. - `masking_operation`: The masking operation used to remove a certain input feature from the set of available input features. In this case we are using word drop masking, i.e. removing a word from the input sequence entirely.- `loss`: The loss function that we wish to use to measure the impact of removing a certain input feature from the set of available features. In most common use cases, this will be the mean squared error (MSE) for regression problems and the cross-entropy for classification problems.
###Code
from tensorflow.python.keras.losses import binary_crossentropy
from cxplain import RNNModelBuilder, WordDropMasking, CXPlain
model_builder = RNNModelBuilder(embedding_size=num_words, with_embedding=True,
num_layers=2, num_units=32, activation="relu", p_dropout=0.2, verbose=0,
batch_size=32, learning_rate=0.001, num_epochs=2, early_stopping_patience=128)
masking_operation = WordDropMasking()
loss = binary_crossentropy
###Output
_____no_output_____
###Markdown
Using this configuration, we now instantiate a CXPlain model and fit it to the same IMDB data that we used to fit the review classification pipeline model that we wish to explain. We also pad the movie reviews to the same length prior to fitting the CXPlain model since variable length inputs are currently not supported in CXPlain.
###Code
from tensorflow.python.keras.preprocessing.sequence import pad_sequences
explainer = CXPlain(explained_model, model_builder, masking_operation, loss)
prior_test_lengths = map(len, x_test)
x_train = pad_sequences(x_train, padding="post", truncating="post", dtype=int)
x_test = pad_sequences(x_test, padding="post", truncating="post", dtype=int, maxlen=x_train.shape[1])
explainer.fit(x_train, y_train);
###Output
WARNING:tensorflow:From /Users/schwabp3/Documents/projects/venv/lib/python2.7/site-packages/tensorflow/python/keras/initializers.py:119: calling __init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
WARNING:tensorflow:From /Users/schwabp3/Documents/projects/venv/lib/python2.7/site-packages/tensorflow/python/ops/init_ops.py:1251: calling __init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
WARNING:tensorflow:From /Users/schwabp3/Documents/projects/cxplain/cxplain/backend/model_builders/base_model_builder.py:152: The name tf.train.AdamOptimizer is deprecated. Please use tf.compat.v1.train.AdamOptimizer instead.
WARNING:tensorflow:From /Users/schwabp3/Documents/projects/venv/lib/python2.7/site-packages/tensorflow/python/ops/math_grad.py:1250: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
###Markdown
We can then use this fitted CXPlain model to explain the predictions of the explained model on the held-out test samples. Note that the importance scores are normalised to sum to a value of 1 and each score therefore represents the relative importance of each respective input word.(Although it would be possible, we do not request confidence intervals for the provided attributions in this example.)
###Code
attributions = explainer.explain(x_test)
###Output
_____no_output_____
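###Markdown
As a quick, illustrative sanity check of the normalisation mentioned above (assuming `attributions` holds one row of per-word scores per test sample; the exact array shape is an assumption here):
###Code
import numpy as np
# Hypothetical check: the per-sample attribution scores should sum to (approximately) 1.
scores = np.squeeze(np.asarray(attributions))
print(scores.shape)
print(np.allclose(scores.sum(axis=-1), 1.0, atol=1e-3))
###Output
_____no_output_____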
###Markdown
We can now visualise the per-word attributions for a specific sample review from the test set using the `Plot` toolset available as part of CXPlain. Note that we first have to convert our input data from word indices back to actual word strings using `TestUtil.imdb_dictionary_indidces_to_words()`.
###Code
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
from cxplain.visualisation.plot import Plot
plt.rcdefaults()
np.random.seed(909)
selected_index = np.random.randint(len(x_test))
selected_sample = x_test[selected_index]
importances = attributions[selected_index]
prior_length = prior_test_lengths[selected_index]
# Truncate to original review length prior to padding.
selected_sample = selected_sample[:prior_length]
importances = importances[:prior_length]
words = TestUtil.imdb_dictionary_indidces_to_words(selected_sample)
print(Plot.plot_attribution_nlp(words, importances))
###Output
<START> {0.000869052892085} <UNK> {0.00101536838338} watched {0.00123076280579} 8 {0.00117500138003} <UNK> {0.000994433648884} <UNK> {0.000954008253757} <UNK> {0.00104214868043} <UNK> {0.00127266219351} <UNK> {0.00109629391227} very {0.000929234724026} thought {0.000997570343316} <UNK> {0.00123121822253} and {0.00102737860288} very {0.0010709624039} well {0.000990054919384} done {0.00147655024193} movie {0.000871025433298} on {0.0014516452793} the {0.00144975376315} subject {0.00096174213104} of {0.00105090788566} the {0.00113586091902} death {0.00114432512783} <UNK> {0.0010641978588} <UNK> {0.000870429503266} more {0.00124163366854} <UNK> {0.000883863598574} and {0.00128600851167} <UNK> {0.00138137256727} than {0.000877433281858} it {0.00111598963849} <UNK> {0.00113100279123}
###Markdown
Natural Language Processing---------------------------------------------------This example shows how to use ATOM to quickly go from raw text data to model predictions.Import the 20 newsgroups text dataset from [sklearn.datasets](https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html). The dataset comprises around 18000 articles on 20 topics. The goal is to predict the topic of every article. Load the data
###Code
import numpy as np
from atom import ATOMClassifier
from sklearn.datasets import fetch_20newsgroups
# Use only a subset of the available topics for faster processing
X_text, y_text = fetch_20newsgroups(
return_X_y=True,
categories=[
'alt.atheism',
'sci.med',
'comp.windows.x',
'misc.forsale',
'rec.autos',
],
shuffle=True,
random_state=1,
)
X_text = np.array(X_text).reshape(-1, 1)
###Output
_____no_output_____
###Markdown
Run the pipeline
###Code
atom = ATOMClassifier(X_text, y_text, test_size=0.3, verbose=2, warnings=False)
atom.dataset # Note that the feature is automatically named 'Corpus'
# Let's have a look at the first document
atom.Corpus[0]
# Clean the documents from noise (emails, numbers, etc...)
atom.textclean()
# Have a look at the removed items
atom.drops
# Check how the first document changed
atom.Corpus[0]
# Convert the strings to a sequence of words
atom.tokenize()
# Print the first few words of the first document
atom.Corpus[0][:7]
# Normalize the text to a predefined standard
atom.normalize(stopwords="english", lemmatize=True)
atom.Corpus[0][:7] # Check changes...
# Visualize the most common words with a wordcloud
atom.plot_wordcloud()
# Have a look at the most frequent bigrams
atom.plot_ngrams(2)
# Create the bigrams using the tokenizer
atom.tokenize(bigram_freq=215)
atom.bigrams
# As a last step before modelling, convert the words to vectors
atom.vectorize(strategy="tf-idf")
# The dimensionality of the dataset has increased a lot!
atom.shape
# Train the model
atom.run(models="MLP", metric="f1_weighted")
###Output
Training ========================= >>
Models: MLP
Metric: f1_weighted
Results for Multi-layer Perceptron:
Fit ---------------------------------------------
Train evaluation --> f1_weighted: 1.0
Test evaluation --> f1_weighted: 0.8701
Time elapsed: 1m:02s
-------------------------------------------------
Total time: 1m:02s
Final results ==================== >>
Duration: 1m:02s
-------------------------------------
Multi-layer Perceptron --> f1_weighted: 0.8701
###Markdown
Analyze results
###Code
atom.evaluate()
atom.plot_confusion_matrix(figsize=(10, 10))
###Output
_____no_output_____ |
docs/source/examples/Natural and artificial perturbations.ipynb | ###Markdown
Natural and artificial perturbations
###Code
import numpy as np
import matplotlib.pyplot as plt
plt.ion()
from astropy import units as u
from astropy.time import Time, TimeDelta
from astropy.coordinates import solar_system_ephemeris
from poliastro.twobody.propagation import propagate, cowell
from poliastro.ephem import build_ephem_interpolant
from poliastro.core.elements import rv2coe
from poliastro.core.util import norm
from poliastro.core.perturbations import atmospheric_drag, third_body, J2_perturbation
from poliastro.bodies import Earth, Moon
from poliastro.twobody import Orbit
from poliastro.plotting import OrbitPlotter2D, OrbitPlotter3D
###Output
_____no_output_____
###Markdown
Atmospheric drag The poliastro package now has several commonly used natural perturbations. One of them is atmospheric drag! See how one can monitor the decay of a near-Earth orbit over time using our new module poliastro.twobody.perturbations!
###Code
R = Earth.R.to(u.km).value
k = Earth.k.to(u.km ** 3 / u.s ** 2).value
orbit = Orbit.circular(Earth, 250 * u.km, epoch=Time(0.0, format="jd", scale="tdb"))
# parameters of a body
C_D = 2.2 # dimensionless (any value would do)
A = ((np.pi / 4.0) * (u.m ** 2)).to(u.km ** 2).value # km^2
m = 100 # kg
B = C_D * A / m
# parameters of the atmosphere
rho0 = Earth.rho0.to(u.kg / u.km ** 3).value # kg/km^3
H0 = Earth.H0.to(u.km).value
tofs = TimeDelta(np.linspace(0 * u.h, 100000 * u.s, num=2000))
rr = propagate(
orbit,
tofs,
method=cowell,
ad=atmospheric_drag,
R=R,
C_D=C_D,
A=A,
m=m,
H0=H0,
rho0=rho0,
)
plt.ylabel("h(t)")
plt.xlabel("t, days")
plt.plot(tofs.value, rr.data.norm() - Earth.R);
###Output
_____no_output_____
###Markdown
Evolution of RAAN due to the J2 perturbation We can also see how the J2 perturbation changes RAAN over time!
###Code
r0 = np.array([-2384.46, 5729.01, 3050.46]) * u.km
v0 = np.array([-7.36138, -2.98997, 1.64354]) * u.km / u.s
orbit = Orbit.from_vectors(Earth, r0, v0)
tofs = TimeDelta(np.linspace(0, 48.0 * u.h, num=2000))
coords = propagate(
orbit, tofs, method=cowell,
ad=J2_perturbation, J2=Earth.J2.value, R=Earth.R.to(u.km).value
)
rr = coords.data.xyz.T.to(u.km).value
vv = coords.data.differentials["s"].d_xyz.T.to(u.km / u.s).value
# This will be easier to compute when this is solved:
# https://github.com/poliastro/poliastro/issues/257
raans = [rv2coe(k, r, v)[3] for r, v in zip(rr, vv)]
plt.ylabel("RAAN(t)")
plt.xlabel("t, h")
plt.plot(tofs.value, raans);
###Output
_____no_output_____
###Markdown
3rd body Apart from time-independent perturbations such as atmospheric drag and J2/J3, we have time-dependent perturbations. Let's see how the Moon changes the orbit of a GEO satellite over time!
###Code
# database keeping positions of bodies in Solar system over time
solar_system_ephemeris.set("de432s")
epoch = Time(2454283.0, format="jd", scale="tdb") # setting the exact event date is important
# create interpolant of 3rd body coordinates (calling in on every iteration will be just too slow)
body_r = build_ephem_interpolant(
Moon, 28 * u.day, (epoch.value * u.day, epoch.value * u.day + 60 * u.day), rtol=1e-2
)
initial = Orbit.from_classical(
Earth,
42164.0 * u.km,
0.0001 * u.one,
1 * u.deg,
0.0 * u.deg,
0.0 * u.deg,
0.0 * u.rad,
epoch=epoch,
)
tofs = TimeDelta(np.linspace(0, 60 * u.day, num=1000))
# multiply Moon gravity by 400 so that effect is visible :)
rr = propagate(
initial,
tofs,
method=cowell,
rtol=1e-6,
ad=third_body,
k_third=400 * Moon.k.to(u.km ** 3 / u.s ** 2).value,
third_body=body_r,
)
frame = OrbitPlotter3D()
frame.set_attractor(Earth)
frame.plot_trajectory(rr, label="orbit influenced by Moon")
###Output
_____no_output_____
###Markdown
Thrusts Apart from natural perturbations, there are artificial thrusts aimed at an intentional change of orbit parameters. One such change is a simultaneous change of eccentricity and inclination.
###Code
from poliastro.twobody.thrust import change_inc_ecc
ecc_0, ecc_f = 0.4, 0.0
a = 42164 # km
inc_0 = 0.0 # rad, baseline
inc_f = (20.0 * u.deg).to(u.rad).value # rad
argp = 0.0 # rad, the method is efficient for 0 and 180
f = 2.4e-6 # km / s2
k = Earth.k.to(u.km ** 3 / u.s ** 2).value
s0 = Orbit.from_classical(
Earth,
a * u.km,
ecc_0 * u.one,
inc_0 * u.deg,
0 * u.deg,
argp * u.deg,
0 * u.deg,
epoch=Time(0, format="jd", scale="tdb"),
)
a_d, _, _, t_f = change_inc_ecc(s0, ecc_f, inc_f, f)
tofs = TimeDelta(np.linspace(0, t_f * u.s, num=1000))
rr2 = propagate(s0, tofs, method=cowell, rtol=1e-6, ad=a_d)
frame = OrbitPlotter3D()
frame.set_attractor(Earth)
frame.plot_trajectory(rr2, label='orbit with artificial thrust')
###Output
_____no_output_____
###Markdown
Natural and artificial perturbations
###Code
import numpy as np
import matplotlib.pyplot as plt
from astropy import units as u
from astropy.time import Time, TimeDelta
from astropy.coordinates import solar_system_ephemeris
from poliastro.twobody.propagation import propagate, cowell
from poliastro.ephem import build_ephem_interpolant
from poliastro.core.elements import rv2coe
from poliastro.core.util import norm
from poliastro.core.perturbations import atmospheric_drag, third_body, J2_perturbation
from poliastro.bodies import Earth, Moon
from poliastro.twobody import Orbit
from poliastro.plotting import OrbitPlotter3D
###Output
_____no_output_____
###Markdown
Atmospheric drag The poliastro package now has several commonly used natural perturbations. One of them is atmospheric drag! See how one can monitor the decay of a near-Earth orbit over time using our new module poliastro.twobody.perturbations!
###Code
R = Earth.R.to(u.km).value
k = Earth.k.to(u.km ** 3 / u.s ** 2).value
orbit = Orbit.circular(Earth, 250 * u.km, epoch=Time(0.0, format="jd", scale="tdb"))
# parameters of a body
C_D = 2.2 # dimensionless (any value would do)
A = ((np.pi / 4.0) * (u.m ** 2)).to(u.km ** 2).value # km^2
m = 100 # kg
B = C_D * A / m
# parameters of the atmosphere
rho0 = Earth.rho0.to(u.kg / u.km ** 3).value # kg/km^3
H0 = Earth.H0.to(u.km).value
tofs = TimeDelta(np.linspace(0 * u.h, 100000 * u.s, num=2000))
rr = propagate(
orbit,
tofs,
method=cowell,
ad=atmospheric_drag,
R=R,
C_D=C_D,
A=A,
m=m,
H0=H0,
rho0=rho0,
)
plt.ylabel("h(t)")
plt.xlabel("t, days")
plt.plot(tofs.value, rr.norm() - Earth.R)
###Output
_____no_output_____
###Markdown
Evolution of RAAN due to the J2 perturbation We can also see how the J2 perturbation changes RAAN over time!
###Code
r0 = np.array([-2384.46, 5729.01, 3050.46]) * u.km
v0 = np.array([-7.36138, -2.98997, 1.64354]) * u.km / u.s
orbit = Orbit.from_vectors(Earth, r0, v0)
tofs = TimeDelta(np.linspace(0, 48.0 * u.h, num=2000))
coords = propagate(
orbit, tofs, method=cowell,
ad=J2_perturbation, J2=Earth.J2.value, R=Earth.R.to(u.km).value
)
rr = coords.xyz.T.to(u.km).value
vv = coords.differentials["s"].d_xyz.T.to(u.km / u.s).value
# This will be easier to compute when this is solved:
# https://github.com/poliastro/poliastro/issues/380
raans = [rv2coe(k, r, v)[3] for r, v in zip(rr, vv)]
plt.ylabel("RAAN(t)")
plt.xlabel("t, h")
plt.plot(tofs.value, raans)
###Output
_____no_output_____
###Markdown
3rd body Apart from time-independent perturbations such as atmospheric drag and J2/J3, we have time-dependent perturbations. Let's see how the Moon changes the orbit of a GEO satellite over time!
###Code
# database keeping positions of bodies in Solar system over time
solar_system_ephemeris.set("de432s")
epoch = Time(
2454283.0, format="jd", scale="tdb"
) # setting the exact event date is important
# create interpolant of 3rd body coordinates (calling it on every iteration would be just too slow)
body_r = build_ephem_interpolant(
Moon, 28 * u.day, (epoch.value * u.day, epoch.value * u.day + 60 * u.day), rtol=1e-2
)
initial = Orbit.from_classical(
Earth,
42164.0 * u.km,
0.0001 * u.one,
1 * u.deg,
0.0 * u.deg,
0.0 * u.deg,
0.0 * u.rad,
epoch=epoch,
)
tofs = TimeDelta(np.linspace(0, 60 * u.day, num=1000))
# multiply Moon gravity by 400 so that effect is visible :)
rr = propagate(
initial,
tofs,
method=cowell,
rtol=1e-6,
ad=third_body,
k_third=400 * Moon.k.to(u.km ** 3 / u.s ** 2).value,
third_body=body_r,
)
frame = OrbitPlotter3D()
frame.set_attractor(Earth)
frame.plot_trajectory(rr, label="orbit influenced by Moon")
###Output
_____no_output_____
###Markdown
Applying thrust Apart from natural perturbations, there are artificial thrusts aimed at intentionally changing orbit parameters. One such change is a simultaneous change of eccentricity and inclination.
###Code
from poliastro.twobody.thrust import change_inc_ecc
ecc_0, ecc_f = 0.4, 0.0
a = 42164 # km
inc_0 = 0.0 # rad, baseline
inc_f = (20.0 * u.deg).to(u.rad).value # rad
argp = 0.0 # rad, the method is efficient for 0 and 180
f = 2.4e-6 # km / s2
k = Earth.k.to(u.km ** 3 / u.s ** 2).value
s0 = Orbit.from_classical(
Earth,
a * u.km,
ecc_0 * u.one,
inc_0 * u.deg,
0 * u.deg,
argp * u.deg,
0 * u.deg,
epoch=Time(0, format="jd", scale="tdb"),
)
a_d, _, _, t_f = change_inc_ecc(s0, ecc_f, inc_f, f)
tofs = TimeDelta(np.linspace(0, t_f * u.s, num=1000))
rr2 = propagate(s0, tofs, method=cowell, rtol=1e-6, ad=a_d)
frame = OrbitPlotter3D()
frame.set_attractor(Earth)
frame.plot_trajectory(rr2, label="orbit with artificial thrust")
###Output
_____no_output_____
###Markdown
Combining multiple perturbations It might be of interest to determine what effect multiple perturbations have on a single object. In order to add multiple perturbations, we can create a custom function that adds them up:
###Code
from poliastro.core.util import jit
# Add @jit for speed!
@jit
def a_d(t0, state, k, J2, R, C_D, A, m, H0, rho0):
return J2_perturbation(t0, state, k, J2, R) + atmospheric_drag(
t0, state, k, R, C_D, A, m, H0, rho0
)
# propagation times of flight and orbit
tofs = TimeDelta(np.linspace(0, 10 * u.day, num=10 * 500))
orbit = Orbit.circular(Earth, 250 * u.km) # recall orbit from drag example
# propagate with J2 and atmospheric drag
rr3 = propagate(
orbit,
tofs,
method=cowell,
ad=a_d,
R=R,
C_D=C_D,
A=A,
m=m,
H0=H0,
rho0=rho0,
J2=Earth.J2.value,
)
# propagate with only atmospheric drag
rr4 = propagate(
orbit,
tofs,
method=cowell,
ad=atmospheric_drag,
R=R,
C_D=C_D,
A=A,
m=m,
H0=H0,
rho0=rho0,
)
fig, (axes1, axes2) = plt.subplots(nrows=2, sharex=True, figsize=(15, 6))
axes1.plot(tofs.value, rr3.norm() - Earth.R)
axes1.set_ylabel("h(t)")
axes1.set_xlabel("t, days")
axes1.set_ylim([225, 251])
axes2.plot(tofs.value, rr4.norm() - Earth.R)
axes2.set_ylabel("h(t)")
axes2.set_xlabel("t, days")
axes2.set_ylim([225, 251])
###Output
_____no_output_____
###Markdown
Natural and artificial perturbations
###Code
import functools
import numpy as np
import matplotlib.pyplot as plt
plt.ion()
from astropy import units as u
from astropy.time import Time
from astropy.coordinates import solar_system_ephemeris
from poliastro.twobody.propagation import cowell
from poliastro.ephem import build_ephem_interpolant
from poliastro.core.elements import rv2coe
from poliastro.core.util import norm
from poliastro.util import time_range
from poliastro.core.perturbations import (
atmospheric_drag, third_body, J2_perturbation
)
from poliastro.bodies import Earth, Moon
from poliastro.twobody import Orbit
from poliastro.plotting import OrbitPlotter2D, OrbitPlotter3D
###Output
_____no_output_____
###Markdown
Atmospheric drag The poliastro package now has several commonly used natural perturbations. One of them is atmospheric drag! See how one can monitor the decay of a near-Earth orbit over time using the new poliastro.twobody.perturbations module!
###Code
R = Earth.R.to(u.km).value
k = Earth.k.to(u.km**3 / u.s**2).value
orbit = Orbit.circular(Earth, 250 * u.km, epoch=Time(0.0, format='jd', scale='tdb'))
# parameters of a body
C_D = 2.2  # dimensionless (any value would do)
A = ((np.pi / 4.0) * (u.m**2)).to(u.km**2).value # km^2
m = 100 # kg
B = C_D * A / m
# parameters of the atmosphere
rho0 = Earth.rho0.to(u.kg / u.km**3).value # kg/km^3
H0 = Earth.H0.to(u.km).value
tof = (100000 * u.s).to(u.day).value
tr = time_range(0.0, periods=2000, end=tof, format='jd', scale='tdb')
cowell_with_ad = functools.partial(cowell, ad=atmospheric_drag,
R=R, C_D=C_D, A=A, m=m, H0=H0, rho0=rho0)
rr = orbit.sample(tr, method=cowell_with_ad)
plt.ylabel('h(t)')
plt.xlabel('t, days')
plt.plot(tr.value, rr.data.norm() - Earth.R);
###Output
_____no_output_____
###Markdown
Evolution of RAAN due to the J2 perturbation We can also see how the J2 perturbation changes RAAN over time!
###Code
r0 = np.array([-2384.46, 5729.01, 3050.46]) # km
v0 = np.array([-7.36138, -2.98997, 1.64354]) # km/s
k = Earth.k.to(u.km**3 / u.s**2).value
orbit = Orbit.from_vectors(Earth, r0 * u.km, v0 * u.km / u.s)
tof = (48.0 * u.h).to(u.s).value
rr, vv = cowell(orbit, np.linspace(0, tof, 2000), ad=J2_perturbation, J2=Earth.J2.value, R=Earth.R.to(u.km).value)
raans = [rv2coe(k, r, v)[3] for r, v in zip(rr, vv)]
plt.ylabel('RAAN(t)')
plt.xlabel('t, s')
plt.plot(np.linspace(0, tof, 2000), raans);
###Output
_____no_output_____
###Markdown
3rd body Apart from time-independent perturbations such as atmospheric drag and J2/J3, we have time-dependent perturbations. Let's see how the Moon changes the orbit of a GEO satellite over time!
###Code
# database keeping positions of bodies in Solar system over time
solar_system_ephemeris.set('de432s')
j_date = 2454283.0 * u.day # setting the exact event date is important
tof = (60 * u.day).to(u.s).value
# create interpolant of 3rd body coordinates (calling it on every iteration would be just too slow)
body_r = build_ephem_interpolant(Moon, 28 * u.day, (j_date, j_date + 60 * u.day), rtol=1e-2)
epoch = Time(j_date, format='jd', scale='tdb')
initial = Orbit.from_classical(Earth, 42164.0 * u.km, 0.0001 * u.one, 1 * u.deg,
0.0 * u.deg, 0.0 * u.deg, 0.0 * u.rad, epoch=epoch)
# multiply Moon gravity by 400 so that effect is visible :)
cowell_with_3rdbody = functools.partial(cowell, rtol=1e-6, ad=third_body,
k_third=400 * Moon.k.to(u.km**3 / u.s**2).value,
third_body=body_r)
tr = time_range(j_date.value, periods=1000, end=j_date.value + 60, format='jd', scale='tdb')
rr = initial.sample(tr, method=cowell_with_3rdbody)
frame = OrbitPlotter3D()
frame.set_attractor(Earth)
frame.plot_trajectory(rr, label='orbit influenced by Moon')
###Output
_____no_output_____
###Markdown
Thrusts Apart from natural perturbations, there are artificial thrusts aimed at intentionally changing orbit parameters. One such change is a simultaneous change of eccentricity and inclination.
###Code
from poliastro.twobody.thrust import change_inc_ecc
ecc_0, ecc_f = 0.4, 0.0
a = 42164 # km
inc_0 = 0.0 # rad, baseline
inc_f = (20.0 * u.deg).to(u.rad).value # rad
argp = 0.0 # rad, the method is efficient for 0 and 180
f = 2.4e-6 # km / s2
k = Earth.k.to(u.km**3 / u.s**2).value
s0 = Orbit.from_classical(
Earth,
a * u.km, ecc_0 * u.one, inc_0 * u.deg,
0 * u.deg, argp * u.deg, 0 * u.deg,
epoch=Time(0, format='jd', scale='tdb')
)
a_d, _, _, t_f = change_inc_ecc(s0, ecc_f, inc_f, f)
cowell_with_ad = functools.partial(cowell, rtol=1e-6, ad=a_d)
tr = time_range(0.0, periods=1000, end=(t_f * u.s).to(u.day).value, format='jd', scale='tdb')
rr2 = s0.sample(tr, method=cowell_with_ad)
frame = OrbitPlotter3D()
frame.set_attractor(Earth)
frame.plot_trajectory(rr2, label='orbit with artificial thrust')
###Output
_____no_output_____
###Markdown
Natural and artificial perturbations
###Code
import numpy as np
import matplotlib.pyplot as plt
plt.ion()
from astropy import units as u
from astropy.time import Time, TimeDelta
from astropy.coordinates import solar_system_ephemeris
from poliastro.twobody.propagation import propagate, cowell
from poliastro.ephem import build_ephem_interpolant
from poliastro.core.elements import rv2coe
from poliastro.core.util import norm
from poliastro.core.perturbations import atmospheric_drag, third_body, J2_perturbation
from poliastro.bodies import Earth, Moon
from poliastro.twobody import Orbit
from poliastro.plotting import OrbitPlotter2D, OrbitPlotter3D
###Output
_____no_output_____
###Markdown
Atmospheric drag The poliastro package now has several commonly used natural perturbations. One of them is atmospheric drag! See how one can monitor the decay of a near-Earth orbit over time using the new poliastro.twobody.perturbations module!
###Code
R = Earth.R.to(u.km).value
k = Earth.k.to(u.km ** 3 / u.s ** 2).value
orbit = Orbit.circular(Earth, 250 * u.km, epoch=Time(0.0, format="jd", scale="tdb"))
# parameters of a body
C_D = 2.2  # dimensionless (any value would do)
A = ((np.pi / 4.0) * (u.m ** 2)).to(u.km ** 2).value # km^2
m = 100 # kg
B = C_D * A / m
# parameters of the atmosphere
rho0 = Earth.rho0.to(u.kg / u.km ** 3).value # kg/km^3
H0 = Earth.H0.to(u.km).value
tofs = TimeDelta(np.linspace(0 * u.h, 100000 * u.s, num=2000))
rr = propagate(
orbit,
tofs,
method=cowell,
ad=atmospheric_drag,
R=R,
C_D=C_D,
A=A,
m=m,
H0=H0,
rho0=rho0,
)
plt.ylabel("h(t)")
plt.xlabel("t, days")
plt.plot(tofs.value, rr.norm() - Earth.R);
###Output
_____no_output_____
###Markdown
Evolution of RAAN due to the J2 perturbation We can also see how the J2 perturbation changes RAAN over time!
###Code
r0 = np.array([-2384.46, 5729.01, 3050.46]) * u.km
v0 = np.array([-7.36138, -2.98997, 1.64354]) * u.km / u.s
orbit = Orbit.from_vectors(Earth, r0, v0)
tofs = TimeDelta(np.linspace(0, 48.0 * u.h, num=2000))
coords = propagate(
orbit, tofs, method=cowell,
ad=J2_perturbation, J2=Earth.J2.value, R=Earth.R.to(u.km).value
)
rr = coords.xyz.T.to(u.km).value
vv = coords.differentials["s"].d_xyz.T.to(u.km / u.s).value
# This will be easier to compute when this is solved:
# https://github.com/poliastro/poliastro/issues/380
raans = [rv2coe(k, r, v)[3] for r, v in zip(rr, vv)]
plt.ylabel("RAAN(t)")
plt.xlabel("t, h")
plt.plot(tofs.value, raans);
###Output
_____no_output_____
###Markdown
3rd body Apart from time-independent perturbations such as atmospheric drag and J2/J3, we have time-dependent perturbations. Let's see how the Moon changes the orbit of a GEO satellite over time!
###Code
# database keeping positions of bodies in Solar system over time
solar_system_ephemeris.set("de432s")
epoch = Time(2454283.0, format="jd", scale="tdb") # setting the exact event date is important
# create interpolant of 3rd body coordinates (calling it on every iteration would be just too slow)
body_r = build_ephem_interpolant(
Moon, 28 * u.day, (epoch.value * u.day, epoch.value * u.day + 60 * u.day), rtol=1e-2
)
initial = Orbit.from_classical(
Earth,
42164.0 * u.km,
0.0001 * u.one,
1 * u.deg,
0.0 * u.deg,
0.0 * u.deg,
0.0 * u.rad,
epoch=epoch,
)
tofs = TimeDelta(np.linspace(0, 60 * u.day, num=1000))
# multiply Moon gravity by 400 so that effect is visible :)
rr = propagate(
initial,
tofs,
method=cowell,
rtol=1e-6,
ad=third_body,
k_third=400 * Moon.k.to(u.km ** 3 / u.s ** 2).value,
third_body=body_r,
)
frame = OrbitPlotter3D()
frame.set_attractor(Earth)
frame.plot_trajectory(rr, label="orbit influenced by Moon")
###Output
_____no_output_____
###Markdown
Thrusts Apart from natural perturbations, there are artificial thrusts aimed at intentionally changing orbit parameters. One such change is a simultaneous change of eccentricity and inclination.
###Code
from poliastro.twobody.thrust import change_inc_ecc
ecc_0, ecc_f = 0.4, 0.0
a = 42164 # km
inc_0 = 0.0 # rad, baseline
inc_f = (20.0 * u.deg).to(u.rad).value # rad
argp = 0.0 # rad, the method is efficient for 0 and 180
f = 2.4e-6 # km / s2
k = Earth.k.to(u.km ** 3 / u.s ** 2).value
s0 = Orbit.from_classical(
Earth,
a * u.km,
ecc_0 * u.one,
inc_0 * u.deg,
0 * u.deg,
argp * u.deg,
0 * u.deg,
epoch=Time(0, format="jd", scale="tdb"),
)
a_d, _, _, t_f = change_inc_ecc(s0, ecc_f, inc_f, f)
tofs = TimeDelta(np.linspace(0, t_f * u.s, num=1000))
rr2 = propagate(s0, tofs, method=cowell, rtol=1e-6, ad=a_d)
frame = OrbitPlotter3D()
frame.set_attractor(Earth)
frame.plot_trajectory(rr2, label='orbit with artificial thrust')
###Output
_____no_output_____
###Markdown
Natural and artificial perturbations
###Code
import functools
import numpy as np
import matplotlib.pyplot as plt
plt.ion()
from astropy import units as u
from astropy.time import Time
from astropy.coordinates import solar_system_ephemeris
from poliastro.twobody.propagation import propagate, cowell
from poliastro.ephem import build_ephem_interpolant
from poliastro.core.elements import rv2coe
from poliastro.core.util import norm
from poliastro.util import time_range
from poliastro.core.perturbations import (
atmospheric_drag, third_body, J2_perturbation
)
from poliastro.bodies import Earth, Moon
from poliastro.twobody import Orbit
from poliastro.plotting import OrbitPlotter2D, OrbitPlotter3D
###Output
_____no_output_____
###Markdown
Atmospheric drag The poliastro package now has several commonly used natural perturbations. One of them is atmospheric drag! See how one can monitor the decay of a near-Earth orbit over time using the new poliastro.twobody.perturbations module!
###Code
R = Earth.R.to(u.km).value
k = Earth.k.to(u.km**3 / u.s**2).value
orbit = Orbit.circular(Earth, 250 * u.km, epoch=Time(0.0, format='jd', scale='tdb'))
# parameters of a body
C_D = 2.2  # dimensionless (any value would do)
A = ((np.pi / 4.0) * (u.m**2)).to(u.km**2).value # km^2
m = 100 # kg
B = C_D * A / m
# parameters of the atmosphere
rho0 = Earth.rho0.to(u.kg / u.km**3).value # kg/km^3
H0 = Earth.H0.to(u.km).value
tof = (100000 * u.s).to(u.day).value
tr = time_range(0.0, periods=2000, end=tof, format='jd', scale='tdb')
cowell_with_ad = functools.partial(cowell, ad=atmospheric_drag,
R=R, C_D=C_D, A=A, m=m, H0=H0, rho0=rho0)
rr = propagate(
orbit, (tr - orbit.epoch).to(u.s), method=cowell_with_ad
)
plt.ylabel('h(t)')
plt.xlabel('t, days')
plt.plot(tr.value, rr.data.norm() - Earth.R);
###Output
_____no_output_____
###Markdown
Evolution of RAAN due to the J2 perturbation We can also see how the J2 perturbation changes RAAN over time!
###Code
r0 = np.array([-2384.46, 5729.01, 3050.46]) * u.km
v0 = np.array([-7.36138, -2.98997, 1.64354]) * u.km / u.s
orbit = Orbit.from_vectors(Earth, r0, v0)
tof = 48.0 * u.h
# This will be easier with propagate
# when this is solved:
# https://github.com/poliastro/poliastro/issues/257
rr, vv = cowell(
Earth.k,
orbit.r,
orbit.v,
np.linspace(0, tof, 2000),
ad=J2_perturbation,
J2=Earth.J2.value,
R=Earth.R.to(u.km).value
)
k = Earth.k.to(u.km**3 / u.s**2).value
rr = rr.to(u.km).value
vv = vv.to(u.km / u.s).value
raans = [rv2coe(k, r, v)[3] for r, v in zip(rr, vv)]
plt.ylabel('RAAN(t)')
plt.xlabel('t, h')
plt.plot(np.linspace(0, tof, 2000), raans);
###Output
_____no_output_____
###Markdown
3rd body Apart from time-independent perturbations such as atmospheric drag and J2/J3, we have time-dependent perturbations. Let's see how the Moon changes the orbit of a GEO satellite over time!
###Code
# database keeping positions of bodies in Solar system over time
solar_system_ephemeris.set('de432s')
j_date = 2454283.0 * u.day # setting the exact event date is important
tof = (60 * u.day).to(u.s).value
# create interpolant of 3rd body coordinates (calling it on every iteration would be just too slow)
body_r = build_ephem_interpolant(Moon, 28 * u.day, (j_date, j_date + 60 * u.day), rtol=1e-2)
epoch = Time(j_date, format='jd', scale='tdb')
initial = Orbit.from_classical(Earth, 42164.0 * u.km, 0.0001 * u.one, 1 * u.deg,
0.0 * u.deg, 0.0 * u.deg, 0.0 * u.rad, epoch=epoch)
# multiply Moon gravity by 400 so that effect is visible :)
cowell_with_3rdbody = functools.partial(cowell, rtol=1e-6, ad=third_body,
k_third=400 * Moon.k.to(u.km**3 / u.s**2).value,
third_body=body_r)
tr = time_range(j_date.value, periods=1000, end=j_date.value + 60, format='jd', scale='tdb')
rr = propagate(
initial, (tr - initial.epoch).to(u.s), method=cowell_with_3rdbody
)
frame = OrbitPlotter3D()
frame.set_attractor(Earth)
frame.plot_trajectory(rr, label='orbit influenced by Moon')
###Output
_____no_output_____
###Markdown
Thrusts Apart from natural perturbations, there are artificial thrusts aimed at intentionally changing orbit parameters. One such change is a simultaneous change of eccentricity and inclination.
###Code
from poliastro.twobody.thrust import change_inc_ecc
ecc_0, ecc_f = 0.4, 0.0
a = 42164 # km
inc_0 = 0.0 # rad, baseline
inc_f = (20.0 * u.deg).to(u.rad).value # rad
argp = 0.0 # rad, the method is efficient for 0 and 180
f = 2.4e-6 # km / s2
k = Earth.k.to(u.km**3 / u.s**2).value
s0 = Orbit.from_classical(
Earth,
a * u.km, ecc_0 * u.one, inc_0 * u.deg,
0 * u.deg, argp * u.deg, 0 * u.deg,
epoch=Time(0, format='jd', scale='tdb')
)
a_d, _, _, t_f = change_inc_ecc(s0, ecc_f, inc_f, f)
cowell_with_ad = functools.partial(cowell, rtol=1e-6, ad=a_d)
tr = time_range(0.0, periods=1000, end=(t_f * u.s).to(u.day).value, format='jd', scale='tdb')
rr2 = propagate(
s0, (tr - s0.epoch).to(u.s), method=cowell_with_ad
)
frame = OrbitPlotter3D()
frame.set_attractor(Earth)
frame.plot_trajectory(rr2, label='orbit with artificial thrust')
###Output
_____no_output_____
###Markdown
Natural and artificial perturbations
###Code
# Temporary hack, see https://github.com/poliastro/poliastro/issues/281
from IPython.display import HTML
HTML('<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.1.10/require.min.js"></script>')
import numpy as np
from plotly.offline import init_notebook_mode
init_notebook_mode(connected=True)
%matplotlib inline
import matplotlib.pyplot as plt
import functools
import numpy as np
from astropy import units as u
from astropy.time import Time
from astropy.coordinates import solar_system_ephemeris
from poliastro.twobody.propagation import cowell
from poliastro.ephem import build_ephem_interpolant
from poliastro.core.elements import rv2coe
from poliastro.core.util import norm
from poliastro.util import time_range
from poliastro.core.perturbations import (
atmospheric_drag, third_body, J2_perturbation
)
from poliastro.bodies import Earth, Moon
from poliastro.twobody import Orbit
from poliastro.plotting import OrbitPlotter, plot, OrbitPlotter3D
###Output
_____no_output_____
###Markdown
Atmospheric drag The poliastro package now has several commonly used natural perturbations. One of them is atmospheric drag! See how one can monitor the decay of a near-Earth orbit over time using the new poliastro.twobody.perturbations module!
###Code
R = Earth.R.to(u.km).value
k = Earth.k.to(u.km**3 / u.s**2).value
orbit = Orbit.circular(Earth, 250 * u.km, epoch=Time(0.0, format='jd', scale='tdb'))
# parameters of a body
C_D = 2.2  # dimensionless (any value would do)
A = ((np.pi / 4.0) * (u.m**2)).to(u.km**2).value # km^2
m = 100 # kg
B = C_D * A / m
# parameters of the atmosphere
rho0 = Earth.rho0.to(u.kg / u.km**3).value # kg/km^3
H0 = Earth.H0.to(u.km).value
tof = (100000 * u.s).to(u.day).value
tr = time_range(0.0, periods=2000, end=tof, format='jd', scale='tdb')
cowell_with_ad = functools.partial(cowell, ad=atmospheric_drag,
R=R, C_D=C_D, A=A, m=m, H0=H0, rho0=rho0)
rr = orbit.sample(tr, method=cowell_with_ad)
plt.ylabel('h(t)')
plt.xlabel('t, days')
plt.plot(tr.value, rr.data.norm() - Earth.R)
###Output
_____no_output_____
###Markdown
Evolution of RAAN due to the J2 perturbation We can also see how the J2 perturbation changes RAAN over time!
###Code
r0 = np.array([-2384.46, 5729.01, 3050.46]) # km
v0 = np.array([-7.36138, -2.98997, 1.64354]) # km/s
k = Earth.k.to(u.km**3 / u.s**2).value
orbit = Orbit.from_vectors(Earth, r0 * u.km, v0 * u.km / u.s)
tof = (48.0 * u.h).to(u.s).value
rr, vv = cowell(orbit, np.linspace(0, tof, 2000), ad=J2_perturbation, J2=Earth.J2.value, R=Earth.R.to(u.km).value)
raans = [rv2coe(k, r, v)[3] for r, v in zip(rr, vv)]
plt.ylabel('RAAN(t)')
plt.xlabel('t, s')
plt.plot(np.linspace(0, tof, 2000), raans)
###Output
_____no_output_____
###Markdown
3rd body Apart from time-independent perturbations such as atmospheric drag and J2/J3, we have time-dependent perturbations. Let's see how the Moon changes the orbit of a GEO satellite over time!
###Code
# database keeping positions of bodies in Solar system over time
solar_system_ephemeris.set('de432s')
j_date = 2454283.0 * u.day # setting the exact event date is important
tof = (60 * u.day).to(u.s).value
# create interpolant of 3rd body coordinates (calling it on every iteration would be just too slow)
body_r = build_ephem_interpolant(Moon, 28 * u.day, (j_date, j_date + 60 * u.day), rtol=1e-2)
epoch = Time(j_date, format='jd', scale='tdb')
initial = Orbit.from_classical(Earth, 42164.0 * u.km, 0.0001 * u.one, 1 * u.deg,
0.0 * u.deg, 0.0 * u.deg, 0.0 * u.rad, epoch=epoch)
# multiply Moon gravity by 400 so that effect is visible :)
cowell_with_3rdbody = functools.partial(cowell, rtol=1e-6, ad=third_body,
k_third=400 * Moon.k.to(u.km**3 / u.s**2).value,
third_body=body_r)
tr = time_range(j_date.value, periods=1000, end=j_date.value + 60, format='jd', scale='tdb')
rr = initial.sample(tr, method=cowell_with_3rdbody)
frame = OrbitPlotter3D()
frame.set_attractor(Earth)
frame.plot_trajectory(rr, label='orbit influenced by Moon')
frame.show()
###Output
_____no_output_____
###Markdown
Thrusts Apart from natural perturbations, there are artificial thrusts aimed at intentionally changing orbit parameters. One such change is a simultaneous change of eccentricity and inclination.
###Code
from poliastro.twobody.thrust import change_inc_ecc
ecc_0, ecc_f = 0.4, 0.0
a = 42164 # km
inc_0 = 0.0 # rad, baseline
inc_f = (20.0 * u.deg).to(u.rad).value # rad
argp = 0.0 # rad, the method is efficient for 0 and 180
f = 2.4e-6 # km / s2
k = Earth.k.to(u.km**3 / u.s**2).value
s0 = Orbit.from_classical(
Earth,
a * u.km, ecc_0 * u.one, inc_0 * u.deg,
0 * u.deg, argp * u.deg, 0 * u.deg,
epoch=Time(0, format='jd', scale='tdb')
)
a_d, _, _, t_f = change_inc_ecc(s0, ecc_f, inc_f, f)
cowell_with_ad = functools.partial(cowell, rtol=1e-6, ad=a_d)
tr = time_range(0.0, periods=1000, end=(t_f * u.s).to(u.day).value, format='jd', scale='tdb')
rr = s0.sample(tr, method=cowell_with_ad)
frame = OrbitPlotter3D()
frame.set_attractor(Earth)
frame.plot_trajectory(rr, label='orbit with artificial thrust')
frame.show()
###Output
_____no_output_____ |
cv-basics/5_opencv_overview.ipynb | ###Markdown
**Introduction to OpenCV** *OpenCV* is a cross-platform library with which we can develop real-time computer vision applications. It mainly focuses on image processing, video capture and analysis, including features like face detection and object detection. 1. OpenCV is an open source, C++ based API (Application Programming Interface) library for computer vision. 2. It includes several computer vision algorithms. 3. It is a library written in C++, but it also has Python bindings; this means that C++ code is executed behind the scenes in Python. 4. Why Python? It is easier for beginners to get started with. 5. Explore more of OpenCV [here](https://opencv.org/) - **Computer Vision** : Computer Vision can be defined as a discipline that explains how to reconstruct, interpret, and understand a 3D scene from its 2D images, in terms of the properties of the structures present in the scene. It deals with modeling and replicating human vision using computer software and hardware. - Computer Vision overlaps significantly with the following fields − 1. **Image Processing** − It focuses on image manipulation. 2. **Pattern Recognition** − It explains various techniques to classify patterns. 3. **Photogrammetry** − It is concerned with obtaining accurate measurements from images. **Computer Vision Vs Image Processing :** 1. Image processing deals with image-to-image transformation. The input and output of image processing are both images. 2. Computer vision is the construction of explicit, meaningful descriptions of physical objects from their images. The output of computer vision is a description or an interpretation of the structures in a 3D scene. **Applications of Computer Vision :** 1. **Medical Applications** : - Classification and detection (e.g. lesion or cell classification and tumor detection) - 3D human organ reconstruction (MRI or ultrasound) - Vision-guided robotic surgery_____________________ 2. **Industrial Automation Applications** - Defect detection - Assembly - Barcode and package label reading - Object sorting_____________________ 3. **Security Applications** - Biometrics (iris, fingerprint, face recognition)_____________________ 4. **Surveillance** - Detecting certain suspicious activities or behaviors_____________________ 5. **Transportation Applications** - Autonomous vehicles - Safety, e.g., driver vigilance monitoring Function Descriptions Refer to these on **google** and in the [OpenCV Docs](https://docs.opencv.org/master/index.html)| Sr. No. 
| FunctionName | Parameters | Description ||:------------|:-----------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|| 1 | cv2.imread(filename, [flags]) | filename: Name of file to be loaded.flags: Flag that can take values ofcv::ImreadModes | The function imread loads an image from the specified file and returns it. || 2 | cv2.imshow(window_name, image) | window_name: A string representing the name of the window in which image to be displayed.image: It is the image that is to be displayed. | cv2.imshow() method is used to display an image in a window. The window automatically fits to the image size.It doesn’t return anything. || 3 | cv2.cvtColor(src, code[, dst[, dstCn]]) | src: It is the image whose color space is to be changed.code: It is the color space conversion code.dst: It is the output image of the same size and depth as src image. It is an optional parameter.dstCn: It is the number of channels in the destination image. If the parameter is 0 then the number of the channels is derived automatically from src and code. It is an optional parameter. | cv2.cvtColor() method is used to convert an image from one color space to another. There are more than 150 color-space conversion methods available in OpenCV.It returns an image. || 4 | cv2.resize(src, dsize[, dst[, fx[, fy[, interpolation]]]]) | src: source/input imagedsize: desired size for the output imagefx: scale factor along the horizontal axisfy: scale factor along the vertical axis | To resize an image, OpenCV provides cv2.resize() function. || 5 | cv2.addWeighted(src1, alpha, src2, beta, gamma[,dst[, dtype]]) | src1: first input arrayalpha: weight of the first array elementssrc2: second input array of the same size and channel number as src1beta: weight of the second array elementsdst: output array that has the same size and number of channels as the input arraysgamma: scalar added to each sumdtype: optional depth of the output array | This function is used to output the weighted sum of overlapping/blending of two images. || 6 | cv2.threshold(source, thresholdValue, maxVal, thresholdingTechnique) | source: Input Image array (must be in Grayscale). thresholdValue: Value of Threshold below and above which pixel values will change accordingly.maxVal: Maximum value that can be assigned to a pixel.thresholdingTechnique: The type of thresholding to be applied. eg. 
simple, binary etc | This function is used for thresholding. The basic Thresholding technique is Binary Thresholding. For every pixel, the same threshold value is applied. If the pixel value is smaller than the threshold, it is set to 0, otherwise, it is set to a maximum value. || 7 | cv2.bitwise_and(src1, src2[, dst[, mask]]) | src1: first input array or a scalar.src2: second input array or a scalar.dst: output array that has the same size and type as the input arrays.mask: optional operation mask, 8-bit single channel array, that specifies elements of the output array to be changed. | This function computes bitwise conjunction/AND of the two arrays (dst = src1 & src2) Calculates the per-element bit-wise conjunction of two arrays or an array and a scalar. || 8 | cv2.bitwise_not(src[, dst[, mask]]) | src: input array.dst: output array that has the same size and type as the input array.mask: optional operation mask, 8-bit single channel array, that specifies elements of the output array to be changed. | This function Inverts every bit of an array . || 9 | cv2.bitwise_or(src1, src2[, dst[, mask]]) | src1: first input array or a scalar.src2: second input array or a scalar.dst: output array that has the same size and type as the input arrays.mask: optional operation mask, 8-bit single channel array, that specifies elements of the output array to be changed. | This function Calculates the per-element bit-wise disjunction/OR of two arrays or an array and a scalar. || 10 | bitwise_xor(src1, src2[, dst[, mask]]) | src1: first input array or a scalar.src2: second input array or a scalar.dst: output array that has the same size and type as the input arrays.mask: optional operation mask, 8-bit single channel array, that specifies elements of the output array to be changed. | This function Calculates the per-element bit-wise "exclusive or" operation on two arrays or an array and a scalar. || 11 | cv2.inRange(src, lowerb, upperb[, dst]) | src: first input array.lowerb: inclusive lower boundary array or a scalar.upperb: inclusive upper boundary array or a scalar.dst: output array of the same size as src and CV_8U type. | This function Checks if array elements lie between the elements of two other arrays. || 12 | cv2.Canny(image, threshold1, threshold2[, edges[, apertureSize[, L2gradient]]]) | image: 8-bit input image.edges: output edge map; single channels 8-bit image, which has the same size as image .threshold1: first threshold for the hysteresis procedure.threshold2: second threshold for the hysteresis procedure.apertureSize: aperture size for the Sobel operator.L2gradient: a flag, indicating whether a more accurate L2 norm. | This function Finds edges in an image using the Canny algorithm || 13 | cv2.findContours(image, mode, method[, contours[, hierarchy[, offset]]]) | image: Source, an 8-bit single-channel image. Non-zero pixels are treated as 1's. Zero pixels remain 0's, so the image is treated as binary. contours: Detected contours. Each contour is stored as a vector of pointshierarchy: Optional output vectormode: Contour retrieval modemethod: Contour approximation methodoffset: Optional offset by which every contour point is shifted. | This function Finds contours in a binary image. The contours are a useful tool for shape analysis and object detection and recognition. || 14 | cv2.drawContours(image, contours, contourIdx, color[, thickness[, lineType[, hierarchy[, maxLevel[, offset]]]]]) | image: Destination image.contours: All the input contours. 
Each contour is stored as a point vector.contourIdx: Parameter indicating a contour to draw. If it is negative, all the contours are drawn.color: Color of the contours.thickness: Thickness of lines the contours are drawn with. If it is negative (for example, thickness=FILLED), the contour interiors are drawn.lineType: Line connectivity. hierarchy: Optional information about hierarchy. It is only needed if you want to draw only some of the contours (see maxLevel ).maxLevel: Maximal level for drawn contours. If it is 0, only the specified contour is drawn. If it is 1, the function draws the contour(s) and all the nested contours. If it is 2, the function draws the contours, all the nested contours, all the nested-to-nested contours, and so on. . | This function Draws contours outlines or filled contours.The function draws contour outlines in the image if thickness ≥0or fills the area bounded by the contours if thickness<0 | Reading and Ploting(Writing) Images using OpenCV- By default OpenCV reads and acquires the image's channels in BGR format order(Blue Green Red).
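The `flags` parameter of `cv2.imread()` listed in the table above controls how the file is decoded. The following cell is an added sketch (not part of the original material); it assumes the same sample image `images/DK.jpeg` that is used later in this notebook.
###Code
import cv2

# cv2.IMREAD_COLOR (the default): always decode to a 3-channel BGR image
img_color = cv2.imread('images/DK.jpeg', cv2.IMREAD_COLOR)

# cv2.IMREAD_GRAYSCALE: decode to a single-channel grayscale image
img_gray = cv2.imread('images/DK.jpeg', cv2.IMREAD_GRAYSCALE)

# cv2.IMREAD_UNCHANGED: keep the file as-is, including any alpha channel
img_raw = cv2.imread('images/DK.jpeg', cv2.IMREAD_UNCHANGED)

# imread() returns None instead of raising an error when the file cannot be read
if img_color is not None and img_gray is not None:
    print(img_color.shape)  # (height, width, 3)
    print(img_gray.shape)   # (height, width) -> no channel axis
###Output
_____no_output_____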
###Code
# Importing the OpenCV library
import cv2
# Reading the image using imread() function
img = cv2.imread('images/DK.jpeg') # Here the path of the image to be read is written
# Extracting the height and width of an image
h, w = img.shape[:2]
# or
# h, w, c = img.shape[:]
# c indicates number of channels
# Displaying the height and width of the image (pixels)
print("Height = {}, Width = {}".format(h, w))
# imshow - arguments : window_name, image_name
cv2.imshow('1', img)
# wait to execute next operation until a key is pressed, keep showing the image windows
cv2.waitKey(0)
cv2.destroyAllWindows()
###Output
_____no_output_____
###Markdown
Matplotlib: 1. Matplotlib is a cross-platform data visualization and graphical plotting library for Python. 2. It builds on NumPy, its numerical extension; as such, it offers a viable open source alternative to MATLAB (a mathematics library). 3. Developers can also use Matplotlib's APIs (Application Programming Interfaces) to embed plots in GUI applications, i.e. it can be used inside other applications as well. 4. OpenCV understands, processes and reads an image by default in BGR channel format (Blue - Green - Red). 5. Matplotlib, in contrast, understands, processes and reads an image by default in RGB channel format (Red - Green - Blue). 6. Hence, while using OpenCV and Matplotlib together, one needs to take care of converting images between the BGR and RGB formats. Plotting the OpenCV-read image with Matplotlib
###Code
# import matplotlib for using plotting functionalities
import matplotlib.pyplot as plt
# plotting image using matplotlib
plt.imshow(img)
###Output
_____no_output_____
###Markdown
Plotting and reading image using Matplotlib
###Code
# Both Reading and Plotting Image using Matplotlib
img2 = plt.imread('images/DK.jpeg')
plt.imshow(img2)
plt.show()
###Output
_____no_output_____
###Markdown
Conversion from BGR to RGB format using cvtColor() function of OpenCV
###Code
# Converting OpenCV read image (BGR format) to RGB format and ploting using matplotlib
# Here cvtColor function accepts the image and the colour conversion code
img3 = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img3)
# Any image is just an array of numbers -> each pixel's color is stored as numeric intensity values
img3
# images -> these are NUMPY arrays
print(type(img))
print(type(img2))
print(type(img3))
###Output
<class 'numpy.ndarray'>
<class 'numpy.ndarray'>
<class 'numpy.ndarray'>
###Markdown
1. Here each image object, i.e. each image, is a NumPy array. 2. ndarray means an n-dimensional array
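Because the image is just an ndarray, individual pixels can be inspected with plain NumPy indexing. The cell below is an added illustration and assumes the `img` variable loaded earlier in this notebook.
###Code
# Shape of the BGR image loaded earlier: (rows, columns, channels)
print(img.shape)

# Pixel at row 100, column 50: a length-3 array of [Blue, Green, Red] intensities
print(img[100, 50])

# A single channel of that pixel (index 0 is Blue in OpenCV's BGR order)
print(img[100, 50, 0])
###Output
_____no_output_____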
###Code
# Data type of elements in numpy array which represents an image
print(img.dtype)
print(img2.dtype)
###Output
uint8
uint8
###Markdown
Combining two images e.g. **Image1** = Windows Logo; **Image2** = Linux Logo; **Image3 = Image1 + Image2** Illustrations
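One caveat worth keeping in mind (added note): the images are `uint8` arrays, so NumPy's `+` operator wraps around on overflow (modulo 256), whereas `cv2.add()` saturates at 255. A minimal sketch with dummy values:
###Code
import numpy as np
import cv2

a = np.uint8([250])
b = np.uint8([10])

print(a + b)          # [4]     -> NumPy addition wraps around: (250 + 10) % 256
print(cv2.add(a, b))  # [[255]] -> OpenCV addition saturates at 255
###Output
_____no_output_____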
###Code
img4 = cv2.imread('images/purple_night.jpg')
img4 = cv2.resize(img4, (img.shape[1],img.shape[0]))
img_res = img + img4
cv2.imshow("purple_night",img4)
cv2.imshow("result",img_res)
cv2.waitKey(0)
cv2.destroyAllWindows()
plt.imshow(img_res)
###Output
_____no_output_____
###Markdown
Converting Image to Grayscale Illustrations
###Code
# Converting image to grayscale (black and white/ monocolor)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.imshow("Grayscale",gray)
cv2.waitKey(0)
cv2.destroyAllWindows()
###Output
_____no_output_____
###Markdown
Resizing an Image Illustrations
###Code
# Initializing Height and Width
h = 200
w = 200
# Resizing using resize() function
resizeImg = cv2.resize(img, (h,w))
cv2.imshow("resized", resizeImg)
cv2.waitKey(0)
cv2.destroyAllWindows()
# Checking the shape
h, w, c = resizeImg.shape
print(h,w,c)
###Output
200 200 3
###Markdown
Other Properties of Image
###Code
# Total number of pixels using size
pix = img.size
print("Number of Pixels: {}".format(pix))
# Image Datatype
dType = img.dtype
print("Datatype: {}".format(dType))
###Output
Number of Pixels: 2580480
Datatype: uint8
###Markdown
Creating a Black Background using zeros() function in Numpy Library
###Code
import cv2
import numpy as np
blImg = np.zeros((800,800,3))
print(blImg.shape)
cv2.imshow("black",blImg)
cv2.waitKey(0)
cv2.destroyAllWindows()
###Output
(800, 800, 3)
###Markdown
Changing the value of an ROI (Region of Interest) - An ROI is a part/subset of the image, i.e. a set of pixels - Example of ROI extraction Illustrations
###Code
# bgr format
im2 = blImg.copy()
# green
im2[45:400,35:300] = (0,255,0)
cv2.imshow("im2",im2)
# blue
im2[450:500,350:590] = (255,0,0)
cv2.imshow("im2_1",im2)
# red
im2[700:840,650:] = (0,0,255)
cv2.imshow("im2_2",im2)
# pink = red + blue
im2[500:570,550:600] = (71,0,170)
cv2.imshow("im2_3",im2)
# yellow = green + red
im2[:170,550:] = (0,255,255)
cv2.imshow("im2_4",im2)
roi = im2[100:700, 340:780]
cv2.imshow("roi", roi)
cv2.waitKey(0)
cv2.destroyAllWindows()
###Output
_____no_output_____
###Markdown
Image Blending This is also image addition, but different weights are given to the images in order to give a feeling of blending or transparency. Images are added as per the equation below: g(x)=(1−α)f0(x)+αf1(x) By varying α from 0→1, you can perform a smooth transition from one image to another. Here we take two images to blend together; the first image is given a weight of 0.4 and the second a weight of 0.6. cv2.addWeighted() applies the following equation to the image: dst=α⋅img1+β⋅img2+γ, where γ is taken as zero here. **Note** : Image combination is a subset of image blending. In image blending we can specify the amount/percentage of effect that we want from either of the input images. Illustrations
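Before the full example, here is a small added sketch with dummy arrays that evaluates the weighted-sum formula by hand and compares it against cv2.addWeighted(); the pixel values used are arbitrary.
###Code
import numpy as np
import cv2

# two dummy same-sized "images"
img_a = np.full((2, 2, 3), 200, dtype=np.uint8)
img_b = np.full((2, 2, 3), 50, dtype=np.uint8)

alpha, beta, gamma = 0.4, 0.6, 0.0

# manual evaluation of dst = alpha*img_a + beta*img_b + gamma
manual = np.clip(alpha * img_a.astype(np.float64)
                 + beta * img_b.astype(np.float64) + gamma, 0, 255).astype(np.uint8)

# OpenCV's built-in weighted blend
blended = cv2.addWeighted(img_a, alpha, img_b, beta, gamma)

print(manual[0, 0], blended[0, 0])  # both give [110 110 110] here
###Output
_____no_output_____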
###Code
import cv2
img1 = cv2.imread('images/dummy1.jpg')
# img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2RGB)
cv2.imshow("1mg1",img1)
print(img1.shape)
img2 = cv2.imread('images/purple_night.jpg')
# img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2RGB)
cv2.imshow("img2",img2)
print(img2.shape)
alpha = 0.4
beta = 1 - alpha
# Blending using addWeighted() function which accepts:
# g(x)=(1−α)f0(x)+αf1(x)
# Source of 1st image
# Alpha
# Source of 2nd image
# Beta
# Gamma
dst = cv2.addWeighted(img1,alpha,img2,beta,0.0)
cv2.imshow("weighted",dst)
cv2.waitKey(0)
cv2.destroyAllWindows()
###Output
(1080, 1920, 3)
(1080, 1920, 3)
###Markdown
Bitwise Operations 1. Lower pixel values are close to BLACK. 2. Higher pixel values are close to WHITE. 3. These operations are done bitwise, i.e. pixel-wise. 4. The logic of each operation is the same as in ordinary logical/bitwise operations. Illustrations Input Images Bitwise AND Bitwise OR Bitwise XOR Bitwise NOT 1. Image1 - Bitwise NOT/inversion of Input Image 1 2. Image2 - Bitwise NOT/inversion of Input Image 2
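Before the logo example below, a tiny added sketch shows the bitwise functions acting pixel-wise on two small binary masks (0 = black, 255 = white):
###Code
import numpy as np
import cv2

m1 = np.array([[0, 255], [255, 255]], dtype=np.uint8)
m2 = np.array([[0, 0], [255, 0]], dtype=np.uint8)

print(cv2.bitwise_and(m1, m2))  # white only where both masks are white
print(cv2.bitwise_or(m1, m2))   # white where at least one mask is white
print(cv2.bitwise_xor(m1, m2))  # white where exactly one mask is white
print(cv2.bitwise_not(m1))      # inverts every bit: 0 -> 255, 255 -> 0
###Output
_____no_output_____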
###Code
# Load two images
img1 = cv2.imread('images/DK.jpeg')
img2 = cv2.imread('images/opencv.png')
# I want to put logo on top-left corner, So I create a ROI
rows,cols,channels = img2.shape
print(img2.shape)
roi = img1[0:rows, 0:cols]
# Now create a mask of logo and create its inverse mask also
img2gray = cv2.cvtColor(img2,cv2.COLOR_BGR2GRAY)
# each pixel having value more than 10 will be set to white(255) for a mask
ret, mask = cv2.threshold(img2gray, 10, 255, cv2.THRESH_BINARY)
# inverting colors
mask_inv = cv2.bitwise_not(mask)
# Now black-out the area of logo in ROI
img1_bg = cv2.bitwise_and(roi,roi,mask = mask_inv)
# # Take only region of logo from logo image.
img2_fg = cv2.bitwise_and(img2,img2,mask = mask)
# # Put logo in ROI and modify the main image
dst = cv2.add(img1_bg,img2_fg)
img1[0:rows, 0:cols ] = dst
cv2.imshow('img1',img1)
cv2.imshow('img2',img2)
cv2.imshow('roi',roi)
cv2.imshow('img2gray',img2gray)
cv2.imshow('mask',mask)
cv2.imshow('mask_inv',mask_inv)
cv2.imshow('img1_bg',img1_bg)
cv2.imshow('img2_fg',img2_fg)
cv2.imshow('dst',dst)
cv2.waitKey(0)
cv2.destroyAllWindows()
###Output
(249, 202, 3)
###Markdown
Detecting an object based on the range of pixel values in the HSV colorspace using inRange() HSV colorspace 1. The HSV (hue, saturation, value) colorspace is a model for representing color, similar to the RGB color model. 2. Since the hue channel models the color type, it is very useful in image processing tasks that need to segment objects based on their color. 3. The saturation channel varies from unsaturated (shades of gray) to fully saturated (no white component). 4. The value channel describes the brightness or intensity of the color. Object Detection Illustration
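A handy trick for choosing the inRange() bounds is to convert a single known color to HSV and inspect its channels. The cell below is an added sketch using a pure-blue pixel as an example.
###Code
import numpy as np
import cv2

# a single pure-blue pixel, in OpenCV's BGR order
blue_bgr = np.uint8([[[255, 0, 0]]])

blue_hsv = cv2.cvtColor(blue_bgr, cv2.COLOR_BGR2HSV)
print(blue_hsv)  # [[[120 255 255]]] -> hue = 120, fully saturated, full value
###Output
_____no_output_____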
###Code
import cv2
## Read
img = cv2.imread("images/DK.jpeg")
## convert to hsv
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
cv2.imshow("hsv", hsv)
## mask of red-to-yellow hues (0, 33, 40) ~ (38, 150, 255)
mask1 = cv2.inRange(hsv, (0, 33, 40), (38,150 ,255))
cv2.imshow("mask1", mask1)
## mask of yellow (15,0,0) ~ (36, 255, 255)
# mask2 = cv2.inRange(hsv, (15,0,0), (36, 255, 255))
## mask of blue
mask2 = cv2.inRange(hsv, (33,52,80), (150, 200, 255))
cv2.imshow("mask2", mask2)
## final mask and masked
mask = cv2.bitwise_or(mask1, mask2)
cv2.imshow("mask", mask)
target = cv2.bitwise_and(img,img, mask=mask)
cv2.imshow("target",target)
cv2.waitKey(0)
cv2.destroyAllWindows()
###Output
_____no_output_____
###Markdown
Contours Contours can be explained simply as a curve joining all the continuous points (along a boundary) having the same color or intensity. Contours are a useful tool for shape analysis and object detection and recognition. Illustration:
###Code
image = cv2.imread('images/DK.jpeg')
# Grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cv2.imshow("gray", gray)
# Find Canny edges
edged = cv2.Canny(gray, 30, 250)
# Finding Contours
# Use a copy of the image e.g. edged.copy()
# since findContours alters the image
contours, hierarchy = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cv2.imshow('Canny Edges After Contouring', edged)
print("Number of Contours found = " + str(len(contours)))
# Draw all contours
# -1 signifies drawing all contours
cv2.drawContours(image, contours, -1, (0, 255, 0), 2)
cv2.imshow('Contours', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
###Output
_____no_output_____ |
docs/source/tutorials/1-getting-started/using-target-pixel-file-products.ipynb | ###Markdown
Using Target Pixel Files with Lightkurve Learning GoalsBy the end of this tutorial, you will:- Be able to download and plot target pixel files from the data archive using [Lightkurve](https://docs.lightkurve.org).- Be able to access target pixel file metadata.- Understand where to find more details about *Kepler* target pixel files. Introduction The [*Kepler*](https://www.nasa.gov/mission_pages/kepler/main/index.html), [*K2*](https://www.nasa.gov/mission_pages/kepler/main/index.html), and [*TESS*](https://tess.mit.edu/) telescopes observe stars for long periods of time, from just under a month to four years. By doing so they observe how the brightnesses of stars change over time.*Kepler* selected certain pixels around targeted stars to be downloaded from the spacecraft. These were stored as *target pixel files* that contain data for each observed cadence. In this tutorial, we will learn how to use Lightkurve to download these raw data, plot them, and understand their properties and units.It is recommended that you first read the tutorial on how to use *Kepler* light curve products with Lightkurve. That tutorial will introduce you to some specifics of how *Kepler*, *K2*, and *TESS* make observations, and how these are displayed as light curves. It also introduces some important terms and concepts that are referred to in this tutorial.*Kepler* observed a single field in the sky, although not all stars in this field were recorded. Instead, pixels were selected around certain targeted stars. This series of cutouts were downloaded and stored as an array of images in target pixel files, or TPFs. By summing up the amount of light (the *flux*) captured by the pixels in which the star appears, you can make a measurement of the brightness of a star over time.TPFs are an important resource when studying an astronomical object with *Kepler*, *K2*, or *TESS*. The files allow us to understand the original images that were collected, and identify potential sources of noise or instrument-induced trends which may be less obvious in derived light curves. In this tutorial, we will use the *Kepler* mission as the main example, but these tools equally work for *TESS* and *K2*. ImportsThis tutorial requires **[Lightkurve](https://docs.lightkurve.org)**, which in turn uses `matplotlib` for plotting.
###Code
import lightkurve as lk
%matplotlib inline
###Output
_____no_output_____
###Markdown
1. What is a Target Pixel File?The target pixel file (TPF) of a star contains an image for each observing cadence, either a 30-minute Long Cadence or one-minute Short Cadence exposure in the case of *Kepler*. The files also include metadata detailing how the observation was made, as well as post-processing information such as the estimated intensity of the astronomical background in each image. (Read the [*Kepler* Archive Manual](https://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/kepler/_documents/archive_manual.pdf), Section 2.3.2 for more information.)TPFs are stored in a [FITS file format](https://fits.gsfc.nasa.gov/fits_primer.html). The Lightkurve package allows us to work with these binary files without having to worry about the details of the file structure. For examples on how to work with FITS files directly, read this tutorial on [Plotting Images from *Kepler* Target Pixel Files](https://github.com/spacetelescope/notebooks/blob/master/notebooks/MAST/Kepler/Kepler_TPF/kepler_tpf.ipynb). 2. Downloading a Target Pixel File The TPFs of stars observed by the *Kepler* mission are stored on the [Mikulksi Archive for Space Telescopes](https://archive.stsci.edu/kepler/) (MAST) archive, along with metadata about the observations, such as which charge-coupled device (CCD) was used at each time. Lightkurve's built-in tools allow us to search and download TPFs from the archive. As we did in the accompanying tutorial on light curves, we will start by downloading one quarter (a *Kepler* observing period approximately 90 days in duration) of *Kepler* data for the star named [Kepler-8](http://www.openexoplanetcatalogue.com/planet/Kepler-8%20b/), a star somewhat larger than the Sun, and the host of a [hot Jupiter planet](https://en.wikipedia.org/wiki/Hot_Jupiter). Using the [`search_targetpixelfile`](https://docs.lightkurve.org/api/lightkurve.search.search_targetpixelfile.html) function, we can find an itemized list of different TPFs available for Kepler-8.
###Code
search_result = lk.search_targetpixelfile("Kepler-8", mission="Kepler")
search_result
###Output
_____no_output_____
###Markdown
In this list, each row represents a different observing period. We find that *Kepler* recorded 18 quarters of data for this target across four years. The `search_targetpixelfile()` function takes several additional arguments, such as the `quarter` number or the `mission` name. You can find examples of its use in the [online documentation](http://docs.lightkurve.org/api/lightkurve.search.search_targetpixelfile.html) for this function.The search function returns a [`SearchResult`](https://docs.lightkurve.org/api/lightkurve.search.SearchResult.html) object which has several convenient operations. For example, we can select the fourth data product in the list as follows:
###Code
search_result[4]
###Output
_____no_output_____
###Markdown
We can download this data product using the `download()` method.
###Code
tpf = search_result[4].download()
###Output
_____no_output_____
###Markdown
This instruction is identical to the following line:
###Code
tpf = lk.search_targetpixelfile("Kepler-8", mission="Kepler", quarter=4).download()
###Output
_____no_output_____
###Markdown
The `tpf_file` variable we have obtained in this way is a [`KeplerTargetPixelFile`](https://docs.lightkurve.org/api/lightkurve.targetpixelfile.KeplerTargetPixelFile.html) object.
###Code
tpf
###Output
_____no_output_____
###Markdown
This file object provides a convenient way to interact with the data file that has been returned by the archive, which contains both the TPF as well as metadata about the observations.Before diving into the properties of the `KeplerTargetPixelFile`, we can plot the data, also using Lightkurve.
###Code
%matplotlib inline
tpf.plot();
###Output
_____no_output_____
###Markdown
What you are seeing in this figure are pixels on the CCD camera, with which Kepler-8 was observed. The color indicates the amount of flux in each pixel, in electrons per second. The y-axis shows the pixel row, and the x-axis shows the pixel column. The title tells us the *Kepler* Input Catalogue (KIC) identification number of the target, and the observing cadence of this image. By default, `plot()` shows the first observation cadence in the quarter, but this can be changed by passing optional keyword arguments. You can type `help(tpf.plot)` to see a full list of those options. NoteYou can also download TPF FITS files from the archive by hand, store them on your local disk, and open them using the `lk.read()` function. This function will return a [`KeplerTargetPixelFile`](https://docs.lightkurve.org/api/lightkurve.targetpixelfile.KeplerTargetPixelFile.html) object just as in the above example. You can find out where Lightkurve stored a given TPF by typing `tpf.path`:
###Code
tpf.path
###Output
_____no_output_____
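###Markdown
As a quick added illustration (not part of the original tutorial), the cell below plots a different cadence by passing the `frame` keyword to `plot()`, and re-opens the downloaded file from its on-disk location using `lk.read()`; both calls assume the `tpf` object and path obtained above.
###Code
# Plot a different cadence of the same TPF (frames are 0-indexed)
tpf.plot(frame=100);

# Re-open the same FITS file from local disk; this returns a KeplerTargetPixelFile again
tpf_from_disk = lk.read(tpf.path)
tpf_from_disk
###Output
_____no_output_____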
###Markdown
3. Accessing the Metadata Our [`KeplerTargetPixelFile`](https://docs.lightkurve.org/api/lightkurve.targetpixelfile.KeplerTargetPixelFile.html) includes the observation's metadata, loaded from the header of the TPF files downloaded from MAST. Many of these are similar to the metadata stored in the [`KeplerLightCurve`](http://docs.lightkurve.org/api/lightkurve.lightcurve.KeplerLightCurve.html), which are discussed in the accompanying tutorial. The headers containing the metadata can be accessed from the [`KeplerTargetPixelFile`](https://docs.lightkurve.org/api/lightkurve.targetpixelfile.KeplerTargetPixelFile.html) through the `get_header()` method.For example, the first extension ("extension 0") of the file provides metadata related to the star, such as its magnitude in different passbands, its movement and position on the sky, and its location on *Kepler*'s CCD detector:
###Code
tpf.get_header(ext=0)
###Output
_____no_output_____
###Markdown
This is an Astropy [`astropy.io.fits.Header`](https://docs.astropy.org/en/stable/io/fits/api/headers.html) object, which has many convenient features. For example, you can retrieve the value of an individual keyword as follows:
###Code
tpf.get_header(ext=0).get('QUARTER')
###Output
_____no_output_____
###Markdown
When constructing a [`KeplerTargetPixelFile`](https://docs.lightkurve.org/api/lightkurve.targetpixelfile.KeplerTargetPixelFile.html) from a FITS file, Lightkurve carries a subset of the metadata through into user-friendly object properties for convenience, which are available through shorthands (for example, `tpf.quarter`). You can view these properties with the the [`show_properties()`](https://docs.lightkurve.org/api/lightkurve.targetpixelfile.KeplerTargetPixelFile.htmllightkurve.targetpixelfile.KeplerTargetPixelFile.show_properties) method:
###Code
tpf.show_properties()
###Output
_____no_output_____
###Markdown
A new piece of metadata not included in the [`KeplerLightCurve`](http://docs.lightkurve.org/api/lightkurve.lightcurvefile.KeplerLightCurve.html) objects is the [World Coordinate System](https://fits.gsfc.nasa.gov/fits_wcs.html) (WCS). The WCS contains information about how pixel positions map to celestial sky coordinates. This is important when comparing a TPF from a *Kepler*, *K2*, or *TESS* observation to an observation of the same star with a different telescope.You can access the WCS using `tpf.wcs`, which is an Astropy WCS object:
###Code
type(tpf.wcs)
###Output
_____no_output_____
###Markdown
For example, you can obtain the sky coordinates for the bottom left corner of the TPF as follows:
###Code
tpf.wcs.pixel_to_world(0, 0)
###Output
_____no_output_____
###Markdown
Altogether, the metadata contains a lot of information, and you will rarely use it all, but it is important to know that it is available if you need it. For more details and a better overview of all of the metadata stored in a TPF, read the [*Kepler* Archive Manual](http://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/kepler/_documents/archive_manual.pdf), specifically: - Section 2.3.2 Target Pixel Data - Appendix A.1: Target Pixel File Headers 4. Time, Flux, and Background Finally, we have the most important properties of the TPF: the time and flux information. Just like a `KeplerLightCurve` object, we can access the time information as an Astropy `Time` object as follows:
###Code
tpf.time
###Output
_____no_output_____
###Markdown
The pixel brightness data is available as an Astropy `Quantity` object named `tpf.flux`:
###Code
tpf.flux
###Output
_____no_output_____
###Markdown
This object is a three-dimensional array, where each entry in the array represents one observing cadence. In our example, the flux array is composed of 4116 images, which are 5x5 pixels in size each:
###Code
tpf.flux.shape
###Output
_____no_output_____
###Markdown
We can access the values of the first 5x5 pixel image as a NumPy array as follows:
###Code
tpf.flux[0].value
###Output
_____no_output_____
###Markdown
At each cadence the TPF has four different flux-related data properties:- `tpf.flux`: the stellar brightness after the background is removed.- `tpf.flux_err`: the statistical uncertainty on the stellar flux after background removal.- `tpf.flux_bkg`: the astronomical background brightness of the image.- `tpf.flux_bkg_err`: the statistical uncertainty on the background flux.All four of these data arrays are in units of electrons per second.**Note**: for *Kepler*, the flux background isn't a measurement made using the local TPF data. Instead, at each cadence, the *Kepler* pipeline fits a model to thousands of empty pixels across each CCD in order to estimate a continuum background across the the CCD. For more details read the [*Kepler* Instrument Handbook](https://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/kepler/_documents/KSCI-19033-002-instrument-hb.pdf), Section 2.6.2.4. In the case of *TESS*, local background pixels contained within a TPF are used instead. **Note**: The `tpf.flux` values seen above have been quality-masked. This means that cadences of observations that violated the `quality_bitmask` parameter are removed, and so `tpf.flux` represents the data that you probably want to use to do your science. The `quality_bitmask` can also be accessed as a property of a [`KeplerTargetPixelFile`](https://docs.lightkurve.org/api/lightkurve.targetpixelfile.KeplerTargetPixelFile.html). For specific details on the `quality` flags, read the [*Kepler* Archive Manual](https://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/k2/_documents/MAST_Kepler_Archive_Manual_2020.pdf), Section 2.3.1.1.If you want to access flux and background flux measurements that *have not* been quality masked, you can pass a custom `quality_bitmask` parameter to the `download()` or `read()` method as follows:
###Code
search = lk.search_targetpixelfile("Kepler-8", mission="Kepler", quarter=4)
tpf = search.download(quality_bitmask=0)
###Output
_____no_output_____
###Markdown
You can see that the flux array of this object now has more cadences (4397) than the original one above (4116):
###Code
tpf.flux.shape
###Output
_____no_output_____
###Markdown
Alternatively, we can access the unmasked contents of the original TPF FITS file at any time using the `hdu` property:
###Code
tpf.hdu[1].data['FLUX'].shape
###Output
_____no_output_____
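###Markdown
The object returned by `tpf.hdu` behaves like a standard `astropy.io.fits.HDUList`, so, for example, you can list every extension in the file and inspect the columns of the pixel data table (a quick sketch using the `tpf` from above):
###Code
tpf.hdu.info()                # summary of all FITS extensions
tpf.hdu[1].columns.names[:5]  # first few column names of the pixel data table
###Output
_____no_output_____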
###Markdown
About this Notebook **Authors:** Oliver Hall ([email protected]), Geert Barentsen**Updated On**: 2020-09-15 Citing Lightkurve and AstropyIf you use `lightkurve` or `astropy` for published research, please cite the authors. Click the buttons below to copy BibTeX entries to your clipboard.
###Code
lk.show_citation_instructions()
###Output
_____no_output_____
###Markdown
Using Target Pixel Files with Lightkurve Learning GoalsBy the end of this tutorial, you will:- Be able to download and plot target pixel files from the data archive using [Lightkurve](https://docs.lightkurve.org).- Be able to access target pixel file metadata.- Understand where to find more details about *Kepler* target pixel files. Introduction The [*Kepler*](https://www.nasa.gov/mission_pages/kepler/main/index.html), [*K2*](https://www.nasa.gov/mission_pages/kepler/main/index.html), and [*TESS*](https://tess.mit.edu/) telescopes observe stars for long periods of time, from just under a month to four years. By doing so they observe how the brightnesses of stars change over time.*Kepler* selected certain pixels around targeted stars to be downloaded from the spacecraft. These were stored as *target pixel files* that contain data for each observed cadence. In this tutorial, we will learn how to use Lightkurve to download these raw data, plot them, and understand their properties and units.It is recommended that you first read the tutorial on how to use *Kepler* light curve products with Lightkurve. That tutorial will introduce you to some specifics of how *Kepler*, *K2*, and *TESS* make observations, and how these are displayed as light curves. It also introduces some important terms and concepts that are referred to in this tutorial.*Kepler* observed a single field in the sky, although not all stars in this field were recorded. Instead, pixels were selected around certain targeted stars. This series of cutouts were downloaded and stored as an array of images in target pixel files, or TPFs. By summing up the amount of light (the *flux*) captured by the pixels in which the star appears, you can make a measurement of the brightness of a star over time.TPFs are an important resource when studying an astronomical object with *Kepler*, *K2*, or *TESS*. The files allow us to understand the original images that were collected, and identify potential sources of noise or instrument-induced trends which may be less obvious in derived light curves. In this tutorial, we will use the *Kepler* mission as the main example, but these tools equally work for *TESS* and *K2*. ImportsThis tutorial requires **[Lightkurve](https://docs.lightkurve.org)**, which in turn uses `matplotlib` for plotting.
###Code
import lightkurve as lk
%matplotlib inline
###Output
_____no_output_____
###Markdown
1. What is a Target Pixel File?The target pixel file (TPF) of a star contains an image for each observing cadence, either a 30-minute Long Cadence or one-minute Short Cadence exposure in the case of *Kepler*. The files also include metadata detailing how the observation was made, as well as post-processing information such as the estimated intensity of the astronomical background in each image. (Read the [*Kepler* Archive Manual](https://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/kepler/_documents/archive_manual.pdf), Section 2.3.2 for more information.)TPFs are stored in a [FITS file format](https://fits.gsfc.nasa.gov/fits_primer.html). The Lightkurve package allows us to work with these binary files without having to worry about the details of the file structure. For examples on how to work with FITS files directly, read this tutorial on [Plotting Images from *Kepler* Target Pixel Files](https://github.com/spacetelescope/notebooks/blob/master/notebooks/MAST/Kepler/Kepler_TPF/kepler_tpf.ipynb). 2. Downloading a Target Pixel File The TPFs of stars observed by the *Kepler* mission are stored on the [Mikulski Archive for Space Telescopes](https://archive.stsci.edu/kepler/) (MAST) archive, along with metadata about the observations, such as which charge-coupled device (CCD) was used at each time. Lightkurve's built-in tools allow us to search and download TPFs from the archive. As we did in the accompanying tutorial on light curves, we will start by downloading one quarter (a *Kepler* observing period approximately 90 days in duration) of *Kepler* data for the star named [Kepler-8](http://www.openexoplanetcatalogue.com/planet/Kepler-8%20b/), a star somewhat larger than the Sun, and the host of a [hot Jupiter planet](https://en.wikipedia.org/wiki/Hot_Jupiter). Using the [`search_targetpixelfile`](https://docs.lightkurve.org/api/lightkurve.search.search_targetpixelfile.html) function, we can find an itemized list of different TPFs available for Kepler-8.
###Code
search_result = lk.search_targetpixelfile("Kepler-8", author="Kepler", cadence="long")
search_result
###Output
_____no_output_____
###Markdown
In this list, each row represents a different observing period. We find that *Kepler* recorded 18 quarters of data for this target across four years. The `search_targetpixelfile()` function takes several additional arguments, such as the `quarter` number or the `mission` name. You can find examples of its use in the [online documentation](http://docs.lightkurve.org/api/lightkurve.search.search_targetpixelfile.html) for this function.The search function returns a [`SearchResult`](https://docs.lightkurve.org/api/lightkurve.search.SearchResult.html) object which has several convenient operations. For example, we can select the fourth data product in the list as follows:
###Code
search_result[4]
###Output
_____no_output_____
###Markdown
We can download this data product using the `download()` method.
###Code
tpf = search_result[4].download()
###Output
_____no_output_____
###Markdown
This instruction is identical to the following line:
###Code
tpf = lk.search_targetpixelfile("Kepler-8", author="Kepler", cadence="long", quarter=4).download()
###Output
_____no_output_____
###Markdown
The `tpf_file` variable we have obtained in this way is a [`KeplerTargetPixelFile`](https://docs.lightkurve.org/api/lightkurve.targetpixelfile.KeplerTargetPixelFile.html) object.
###Code
tpf
###Output
_____no_output_____
###Markdown
This file object provides a convenient way to interact with the data file that has been returned by the archive, which contains both the TPF as well as metadata about the observations.Before diving into the properties of the `KeplerTargetPixelFile`, we can plot the data, also using Lightkurve.
###Code
%matplotlib inline
tpf.plot();
###Output
_____no_output_____
###Markdown
What you are seeing in this figure are pixels on the CCD camera, with which Kepler-8 was observed. The color indicates the amount of flux in each pixel, in electrons per second. The y-axis shows the pixel row, and the x-axis shows the pixel column. The title tells us the *Kepler* Input Catalogue (KIC) identification number of the target, and the observing cadence of this image. By default, `plot()` shows the first observation cadence in the quarter, but this can be changed by passing optional keyword arguments. You can type `help(tpf.plot)` to see a full list of those options. NoteYou can also download TPF FITS files from the archive by hand, store them on your local disk, and open them using the `lk.read()` function. This function will return a [`KeplerTargetPixelFile`](https://docs.lightkurve.org/api/lightkurve.targetpixelfile.KeplerTargetPixelFile.html) object just as in the above example. You can find out where Lightkurve stored a given TPF by typing `tpf.path`:
###Code
tpf.path
###Output
_____no_output_____
###Markdown
3. Accessing the Metadata Our [`KeplerTargetPixelFile`](https://docs.lightkurve.org/api/lightkurve.targetpixelfile.KeplerTargetPixelFile.html) includes the observation's metadata, loaded from the header of the TPF files downloaded from MAST. Many of these are similar to the metadata stored in the [`KeplerLightCurve`](http://docs.lightkurve.org/api/lightkurve.lightcurve.KeplerLightCurve.html), which are discussed in the accompanying tutorial. The headers containing the metadata can be accessed from the [`KeplerTargetPixelFile`](https://docs.lightkurve.org/api/lightkurve.targetpixelfile.KeplerTargetPixelFile.html) through the `get_header()` method.For example, the first extension ("extension 0") of the file provides metadata related to the star, such as its magnitude in different passbands, its movement and position on the sky, and its location on *Kepler*'s CCD detector:
###Code
tpf.get_header(ext=0)
###Output
_____no_output_____
###Markdown
This is an Astropy [`astropy.io.fits.Header`](https://docs.astropy.org/en/stable/io/fits/api/headers.html) object, which has many convenient features. For example, you can retrieve the value of an individual keyword as follows:
###Code
tpf.get_header(ext=0).get('QUARTER')
###Output
_____no_output_____
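###Markdown
Any other header keyword can be retrieved in the same way. For example (a sketch only; the exact set of keywords available can vary between missions and data releases, so the names below are assumptions worth checking against the Archive Manual):
###Code
header = tpf.get_header(ext=0)
header.get('OBJECT'), header.get('CHANNEL'), header.get('KEPMAG')
###Output
_____no_output_____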
###Markdown
When constructing a [`KeplerTargetPixelFile`](https://docs.lightkurve.org/api/lightkurve.targetpixelfile.KeplerTargetPixelFile.html) from a FITS file, Lightkurve carries a subset of the metadata through into user-friendly object properties for convenience, which are available through shorthands (for example, `tpf.quarter`). You can view these properties with the [`show_properties()`](https://docs.lightkurve.org/api/lightkurve.targetpixelfile.KeplerTargetPixelFile.html#lightkurve.targetpixelfile.KeplerTargetPixelFile.show_properties) method:
###Code
tpf.show_properties()
###Output
_____no_output_____
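###Markdown
For instance, a few of these shorthand properties can be read directly (a quick sketch, assuming the `tpf` downloaded above):
###Code
tpf.quarter, tpf.mission, tpf.ra, tpf.dec
###Output
_____no_output_____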
###Markdown
A new piece of metadata not included in the [`KeplerLightCurve`](http://docs.lightkurve.org/api/lightkurve.lightcurvefile.KeplerLightCurve.html) objects is the [World Coordinate System](https://fits.gsfc.nasa.gov/fits_wcs.html) (WCS). The WCS contains information about how pixel positions map to celestial sky coordinates. This is important when comparing a TPF from a *Kepler*, *K2*, or *TESS* observation to an observation of the same star with a different telescope.You can access the WCS using `tpf.wcs`, which is an Astropy WCS object:
###Code
type(tpf.wcs)
###Output
_____no_output_____
###Markdown
For example, you can obtain the sky coordinates for the bottom left corner of the TPF as follows:
###Code
tpf.wcs.pixel_to_world(0, 0)
###Output
_____no_output_____
###Markdown
Altogether, the metadata contains a lot of information, and you will rarely use it all, but it is important to know that it is available if you need it. For more details and a better overview of all of the metadata stored in a TPF, read the [*Kepler* Archive Manual](http://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/kepler/_documents/archive_manual.pdf), specifically: - Section 2.3.2 Target Pixel Data - Appendix A.1: Target Pixel File Headers 4. Time, Flux, and Background Finally, we have the most important properties of the TPF: the time and flux information. Just like a `KeplerLightCurve` object, we can access the time information as an Astropy `Time` object as follows:
###Code
tpf.time
###Output
_____no_output_____
###Markdown
The pixel brightness data is available as an Astropy `Quantity` object named `tpf.flux`:
###Code
tpf.flux
###Output
_____no_output_____
###Markdown
This object is a three-dimensional array, where each entry in the array represents one observing cadence. In our example, the flux array is composed of 4116 images, which are 5x5 pixels in size each:
###Code
tpf.flux.shape
###Output
_____no_output_____
###Markdown
We can access the values of the first 5x5 pixel image as a NumPy array as follows:
###Code
tpf.flux[0].value
###Output
_____no_output_____
###Markdown
At each cadence the TPF has four different flux-related data properties:- `tpf.flux`: the stellar brightness after the background is removed.- `tpf.flux_err`: the statistical uncertainty on the stellar flux after background removal.- `tpf.flux_bkg`: the astronomical background brightness of the image.- `tpf.flux_bkg_err`: the statistical uncertainty on the background flux.All four of these data arrays are in units of electrons per second.**Note**: for *Kepler*, the flux background isn't a measurement made using the local TPF data. Instead, at each cadence, the *Kepler* pipeline fits a model to thousands of empty pixels across each CCD in order to estimate a continuum background across the CCD. For more details read the [*Kepler* Instrument Handbook](https://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/kepler/_documents/KSCI-19033-002-instrument-hb.pdf), Section 2.6.2.4. In the case of *TESS*, local background pixels contained within a TPF are used instead. **Note**: The `tpf.flux` values seen above have been quality-masked. This means that cadences of observations that violated the `quality_bitmask` parameter are removed, and so `tpf.flux` represents the data that you probably want to use to do your science. The `quality_bitmask` can also be accessed as a property of a [`KeplerTargetPixelFile`](https://docs.lightkurve.org/api/lightkurve.targetpixelfile.KeplerTargetPixelFile.html). For specific details on the `quality` flags, read the [*Kepler* Archive Manual](https://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/k2/_documents/MAST_Kepler_Archive_Manual_2020.pdf), Section 2.3.1.1. If you want to access flux and background flux measurements that *have not* been quality masked, you can pass a custom `quality_bitmask` parameter to the `download()` or `read()` method as follows:
###Code
search = lk.search_targetpixelfile("Kepler-8", author="Kepler", cadence="long", quarter=4)
tpf = search.download(quality_bitmask=0)
###Output
_____no_output_____
###Markdown
You can see that the flux array of this object now has more cadences (4397) than the original one above (4116):
###Code
tpf.flux.shape
###Output
_____no_output_____
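###Markdown
The background estimates described earlier are stored alongside the flux and share its dimensions. As a sketch (using `np.nansum` because the unmasked cadences of this re-downloaded `tpf` may contain NaN values):
###Code
import numpy as np

print(tpf.flux_bkg.shape)                                     # same shape as tpf.flux
bkg_per_cadence = np.nansum(tpf.flux_bkg.value, axis=(1, 2))  # total background per cadence, e-/s
###Output
_____no_output_____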
###Markdown
Alternatively, we can access the unmasked contents of the original TPF FITS file at any time using the `hdu` property:
###Code
tpf.hdu[1].data['FLUX'].shape
###Output
_____no_output_____
###Markdown
About this Notebook **Authors:** Oliver Hall ([email protected]), Geert Barentsen**Updated On**: 2020-09-15 Citing Lightkurve and AstropyIf you use `lightkurve` or `astropy` for published research, please cite the authors. Click the buttons below to copy BibTeX entries to your clipboard.
###Code
lk.show_citation_instructions()
###Output
_____no_output_____
###Markdown
Using Target Pixel Files with Lightkurve Learning GoalsBy the end of this tutorial, you will:- Be able to download and plot target pixel files from the data archive using [Lightkurve](https://docs.lightkurve.org).- Be able to access target pixel file metadata.- Understand where to find more details about *Kepler* target pixel files. Introduction The [*Kepler*](https://www.nasa.gov/mission_pages/kepler/main/index.html), [*K2*](https://www.nasa.gov/mission_pages/kepler/main/index.html), and [*TESS*](https://tess.mit.edu/) telescopes observe stars for long periods of time, from just under a month to four years. By doing so they observe how the brightnesses of stars change over time.*Kepler* selected certain pixels around targeted stars to be downloaded from the spacecraft. These were stored as *target pixel files* that contain data for each observed cadence. In this tutorial, we will learn how to use Lightkurve to download these raw data, plot them, and understand their properties and units.It is recommended that you first read the tutorial on how to use *Kepler* light curve products with Lightkurve. That tutorial will introduce you to some specifics of how *Kepler*, *K2*, and *TESS* make observations, and how these are displayed as light curves. It also introduces some important terms and concepts that are referred to in this tutorial.*Kepler* observed a single field in the sky, although not all stars in this field were recorded. Instead, pixels were selected around certain targeted stars. This series of cutouts were downloaded and stored as an array of images in target pixel files, or TPFs. By summing up the amount of light (the *flux*) captured by the pixels in which the star appears, you can make a measurement of the brightness of a star over time.TPFs are an important resource when studying an astronomical object with *Kepler*, *K2*, or *TESS*. The files allow us to understand the original images that were collected, and identify potential sources of noise or instrument-induced trends which may be less obvious in derived light curves. In this tutorial, we will use the *Kepler* mission as the main example, but these tools equally work for *TESS* and *K2*. ImportsThis tutorial requires **[Lightkurve](https://docs.lightkurve.org)**, which in turn uses `matplotlib` for plotting.
###Code
import lightkurve as lk
%matplotlib inline
###Output
_____no_output_____
###Markdown
1. What is a Target Pixel File?The target pixel file (TPF) of a star contains an image for each observing cadence, either a 30-minute Long Cadence or one-minute Short Cadence exposure in the case of *Kepler*. The files also include metadata detailing how the observation was made, as well as post-processing information such as the estimated intensity of the astronomical background in each image. (Read the [*Kepler* Archive Manual](https://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/kepler/_documents/archive_manual.pdf), Section 2.3.2 for more information.)TPFs are stored in a [FITS file format](https://fits.gsfc.nasa.gov/fits_primer.html). The Lightkurve package allows us to work with these binary files without having to worry about the details of the file structure. For examples on how to work with FITS files directly, read this tutorial on [Plotting Images from *Kepler* Target Pixel Files](https://github.com/spacetelescope/notebooks/blob/master/notebooks/MAST/Kepler/Kepler_TPF/kepler_tpf.ipynb). 2. Downloading a Target Pixel File The TPFs of stars observed by the *Kepler* mission are stored on the [Mikulski Archive for Space Telescopes](https://archive.stsci.edu/kepler/) (MAST) archive, along with metadata about the observations, such as which charge-coupled device (CCD) was used at each time. Lightkurve's built-in tools allow us to search and download TPFs from the archive. As we did in the accompanying tutorial on light curves, we will start by downloading one quarter (a *Kepler* observing period approximately 90 days in duration) of *Kepler* data for the star named [Kepler-8](http://www.openexoplanetcatalogue.com/planet/Kepler-8%20b/), a star somewhat larger than the Sun, and the host of a [hot Jupiter planet](https://en.wikipedia.org/wiki/Hot_Jupiter). Using the [search_targetpixelfile](https://docs.lightkurve.org/reference/api/lightkurve.search_targetpixelfile.html?highlight=search_targetpixelfile) function, we can find an itemized list of different TPFs available for Kepler-8.
###Code
search_result = lk.search_targetpixelfile("Kepler-8", author="Kepler", cadence="long")
search_result
###Output
_____no_output_____
###Markdown
In this list, each row represents a different observing period. We find that *Kepler* recorded 18 quarters of data for this target across four years. The `search_targetpixelfile()` function takes several additional arguments, such as the `quarter` number or the `mission` name. You can find examples of its use in the [online documentation](https://docs.lightkurve.org/reference/api/lightkurve.search_targetpixelfile.html?highlight=search_targetpixelfile) for this function.The search function returns a `SearchResult` object which has several convenient operations. For example, we can select the fourth data product in the list as follows:
###Code
search_result[4]
###Output
_____no_output_____
###Markdown
We can download this data product using the [download()](https://docs.lightkurve.org/reference/api/lightkurve.SearchResult.download.html?highlight=downloadlightkurve.SearchResult.download) method.
###Code
tpf = search_result[4].download()
###Output
_____no_output_____
###Markdown
This instruction is identical to the following line:
###Code
tpf = lk.search_targetpixelfile("Kepler-8", author="Kepler", cadence="long", quarter=4).download()
###Output
_____no_output_____
###Markdown
The `tpf_file` variable we have obtained in this way is a [KeplerTargetPixelFile](https://docs.lightkurve.org/reference/api/lightkurve.KeplerTargetPixelFile.html?highlight=keplertargetpixelfile) object.
###Code
tpf
###Output
_____no_output_____
###Markdown
This file object provides a convenient way to interact with the data file that has been returned by the archive, which contains both the TPF as well as metadata about the observations.Before diving into the properties of the `KeplerTargetPixelFile`, we can plot the data, also using Lightkurve.
###Code
%matplotlib inline
tpf.plot();
###Output
_____no_output_____
###Markdown
What you are seeing in this figure are pixels on the CCD camera, with which Kepler-8 was observed. The color indicates the amount of flux in each pixel, in electrons per second. The y-axis shows the pixel row, and the x-axis shows the pixel column. The title tells us the *Kepler* Input Catalogue (KIC) identification number of the target, and the observing cadence of this image. By default, `plot()` shows the first observation cadence in the quarter, but this can be changed by passing optional keyword arguments. You can type `help(tpf.plot)` to see a full list of those options. NoteYou can also download TPF FITS files from the archive by hand, store them on your local disk, and open them using the `lk.read()` function. This function will return a [KeplerTargetPixelFile](https://docs.lightkurve.org/reference/api/lightkurve.KeplerTargetPixelFile.html?highlight=keplertargetpixelfile) object just as in the above example. You can find out where Lightkurve stored a given TPF by typing `tpf.path`:
###Code
tpf.path
###Output
_____no_output_____
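###Markdown
As noted above, `plot()` accepts optional keyword arguments. The sketch below shows a later cadence and overlays the pipeline aperture; the keyword names (`frame`, `aperture_mask`) and the `pipeline_mask` property are assumptions that may differ between Lightkurve versions, so check `help(tpf.plot)` before relying on them:
###Code
tpf.plot(frame=100, aperture_mask=tpf.pipeline_mask);
###Output
_____no_output_____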
###Markdown
3. Accessing the Metadata Our [KeplerTargetPixelFile](https://docs.lightkurve.org/reference/api/lightkurve.KeplerTargetPixelFile.html?highlight=keplertargetpixelfile) includes the observation's metadata, loaded from the header of the TPF files downloaded from MAST. Many of these are similar to the metadata stored in the `KeplerLightCurve`, which are discussed in the accompanying tutorial. The headers containing the metadata can be accessed from the [KeplerTargetPixelFile](https://docs.lightkurve.org/reference/api/lightkurve.KeplerTargetPixelFile.html?highlight=keplertargetpixelfile) through the [get_header()](https://docs.lightkurve.org/reference/api/lightkurve.KeplerTargetPixelFile.get_header.html?highlight=get_header) method.For example, the first extension ("extension 0") of the file provides metadata related to the star, such as its magnitude in different passbands, its movement and position on the sky, and its location on *Kepler*'s CCD detector:
###Code
tpf.get_header(ext=0)
###Output
_____no_output_____
###Markdown
This is an Astropy [`astropy.io.fits.Header`](https://docs.astropy.org/en/stable/io/fits/api/headers.html) object, which has many convenient features. For example, you can retrieve the value of an individual keyword as follows:
###Code
tpf.get_header(ext=0).get('QUARTER')
###Output
_____no_output_____
###Markdown
When constructing a [KeplerTargetPixelFile](https://docs.lightkurve.org/reference/api/lightkurve.KeplerTargetPixelFile.html?highlight=keplertargetpixelfile) from a FITS file, Lightkurve carries a subset of the metadata through into user-friendly object properties for convenience, which are available through shorthands (for example, `tpf.quarter`). You can view these properties with the `show_properties()` method:
###Code
tpf.show_properties()
###Output
_____no_output_____
###Markdown
A new piece of metadata not included in the `KeplerLightCurve` objects is the [World Coordinate System](https://fits.gsfc.nasa.gov/fits_wcs.html) (WCS). The WCS contains information about how pixel positions map to celestial sky coordinates. This is important when comparing a TPF from a *Kepler*, *K2*, or *TESS* observation to an observation of the same star with a different telescope.You can access the WCS using [tpf.wcs](https://docs.lightkurve.org/reference/api/lightkurve.KeplerTargetPixelFile.wcs.html?highlight=wcs), which is an Astropy WCS object:
###Code
type(tpf.wcs)
###Output
_____no_output_____
###Markdown
For example, you can obtain the sky coordinates for the bottom left corner of the TPF as follows:
###Code
tpf.wcs.pixel_to_world(0, 0)
###Output
_____no_output_____
###Markdown
Altogether, the metadata contains a lot of information, and you will rarely use it all, but it is important to know that it is available if you need it. For more details and a better overview of all of the metadata stored in a TPF, read the [*Kepler* Archive Manual](http://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/kepler/_documents/archive_manual.pdf), specifically: - Section 2.3.2 Target Pixel Data - Appendix A.1: Target Pixel File Headers 4. Time, Flux, and Background Finally, we have the most important properties of the TPF: the time and flux information. Just like a `KeplerLightCurve` object, we can access the time information as an Astropy `Time` object as follows:
###Code
tpf.time
###Output
_____no_output_____
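###Markdown
Because `tpf.time` is an Astropy `Time` object, the usual format conversions are available. A small sketch follows; the assumption here is that the native format for *Kepler* data is `'bkjd'` (Barycentric Kepler Julian Date):
###Code
print(tpf.time.format)   # native time format of the data
print(tpf.time[:3].iso)  # first few time stamps converted to ISO calendar dates
###Output
_____no_output_____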
###Markdown
The pixel brightness data is available as an Astropy `Quantity` object named `tpf.flux`:
###Code
tpf.flux
###Output
_____no_output_____
###Markdown
This object is a three-dimensional array, where each entry in the array represents one observing cadence. In our example, the flux array is composed of 4116 images, which are 5x5 pixels in size each:
###Code
tpf.flux.shape
###Output
_____no_output_____
###Markdown
We can access the values of the first 5x5 pixel image as a NumPy array as follows:
###Code
tpf.flux[0].value
###Output
_____no_output_____
###Markdown
At each cadence the TPF has four different flux-related data properties:- `tpf.flux`: the stellar brightness after the background is removed.- `tpf.flux_err`: the statistical uncertainty on the stellar flux after background removal.- `tpf.flux_bkg`: the astronomical background brightness of the image.- `tpf.flux_bkg_err`: the statistical uncertainty on the background flux.All four of these data arrays are in units of electrons per second.**Note**: for *Kepler*, the flux background isn't a measurement made using the local TPF data. Instead, at each cadence, the *Kepler* pipeline fits a model to thousands of empty pixels across each CCD in order to estimate a continuum background across the CCD. For more details read the [*Kepler* Instrument Handbook](https://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/kepler/_documents/KSCI-19033-002-instrument-hb.pdf), Section 2.6.2.4. In the case of *TESS*, local background pixels contained within a TPF are used instead. **Note**: The `tpf.flux` values seen above have been quality-masked. This means that cadences of observations that violated the `quality_bitmask` parameter are removed, and so `tpf.flux` represents the data that you probably want to use to do your science. The `quality_bitmask` can also be accessed as a property of a [`KeplerTargetPixelFile`](https://docs.lightkurve.org/reference/api/lightkurve.KeplerTargetPixelFile.html?highlight=keplertargetpixelfile). For specific details on the `quality` flags, read the [*Kepler* Archive Manual](https://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/k2/_documents/MAST_Kepler_Archive_Manual_2020.pdf), Section 2.3.1.1. If you want to access flux and background flux measurements that *have not* been quality masked, you can pass a custom `quality_bitmask` parameter to the `download()` or `read()` method as follows:
###Code
search = lk.search_targetpixelfile("Kepler-8", author="Kepler", cadence="long", quarter=4)
tpf = search.download(quality_bitmask=0)
###Output
_____no_output_____
###Markdown
You can see that the flux array of this object now has more cadences (4397) than the original one above (4116):
###Code
tpf.flux.shape
###Output
_____no_output_____
###Markdown
Alternatively, we can access the unmasked contents of the original TPF FITS file at any time using the `hdu` property:
###Code
tpf.hdu[1].data['FLUX'].shape
###Output
_____no_output_____
###Markdown
About this Notebook **Authors:** Oliver Hall ([email protected]), Geert Barentsen**Updated On**: 2020-09-15 Citing Lightkurve and AstropyIf you use `lightkurve` or `astropy` for published research, please cite the authors. Click the buttons below to copy BibTeX entries to your clipboard.
###Code
lk.show_citation_instructions()
###Output
_____no_output_____ |
Rectangle Circumference.ipynb | ###Markdown
Rectangle Circumference
###Code
length = float(input("Enter Length: "))
width = float(input("Enter Width: "))
circumference = 2 * (length+width)
print(f"The Circumference of Rectangle with Length {length} & Width {width} = {circumference}")
###Output
Enter Length: 2
Enter Width: 3
The Circumference of Rectangle with Length 2.0 & Width 3.0 = 10.0
|
tutorials/source_zh_cn/autograd.ipynb | ###Markdown
Automatic Differentiation[](https://gitee.com/mindspore/docs/blob/master/tutorials/source_zh_cn/autograd.ipynb) [](https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/master/quick_start/mindspore_autograd.ipynb) [](https://authoring-modelarts-cnnorth4.huaweicloud.com/console/lab?share-url-b64=aHR0cHM6Ly9vYnMuZHVhbHN0YWNrLmNuLW5vcnRoLTQubXlodWF3ZWljbG91ZC5jb20vbWluZHNwb3JlLXdlYnNpdGUvbm90ZWJvb2svbW9kZWxhcnRzL3F1aWNrX3N0YXJ0L21pbmRzcG9yZV9hdXRvZ3JhZC5pcHluYg==&imagename=MindSpore1.1.1) When training a neural network, the most commonly used algorithm is backpropagation, in which the parameters (model weights) are adjusted according to the gradient of the loss function with respect to those parameters. MindSpore computes first-order derivatives with the method `mindspore.ops.GradOperation (get_all=False, get_by_list=False, sens_param=False)`: when `get_all` is `False`, only the first input is differentiated, and when it is `True`, all inputs are; when `get_by_list` is `False`, the weights are not differentiated, and when it is `True`, they are; `sens_param` scales the output value of the network to change the final gradient. Below, the derivative of the MatMul operator is used for an in-depth analysis. First, import the modules and interfaces required by this document, as shown below:
###Code
import numpy as np
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore import Tensor
from mindspore import ParameterTuple, Parameter
from mindspore import dtype as mstype
###Output
_____no_output_____
###Markdown
First-Order Derivative of the Input To differentiate with respect to the input, first define a network to be differentiated, taking the network $f(x,y)=z * x * y$ built from the MatMul operator as an example. The network structure is defined as follows:
###Code
class Net(nn.Cell):
def __init__(self):
super(Net, self).__init__()
self.matmul = ops.MatMul()
self.z = Parameter(Tensor(np.array([1.0], np.float32)), name='z')
def construct(self, x, y):
x = x * self.z
out = self.matmul(x, y)
return out
###Output
_____no_output_____
###Markdown
Next, define the gradient network. The `__init__` function defines the network to be differentiated, `self.net`, and the `ops.GradOperation` operation, and the `construct` function differentiates `self.net`. The structure of the gradient network is as follows:
###Code
class GradNetWrtX(nn.Cell):
def __init__(self, net):
super(GradNetWrtX, self).__init__()
self.net = net
self.grad_op = ops.GradOperation()
def construct(self, x, y):
gradient_function = self.grad_op(self.net)
return gradient_function(x, y)
###Output
_____no_output_____
###Markdown
Define the inputs and print the output:
###Code
x = Tensor([[0.8, 0.6, 0.2], [1.8, 1.3, 1.1]], dtype=mstype.float32)
y = Tensor([[0.11, 3.3, 1.1], [1.1, 0.2, 1.4], [1.1, 2.2, 0.3]], dtype=mstype.float32)
output = GradNetWrtX(Net())(x, y)
print(output)
###Output
[[4.5099998 2.7 3.6000001]
[4.5099998 2.7 3.6000001]]
###Markdown
To differentiate with respect to both the `x` and `y` inputs, simply set `self.grad_op = GradOperation(get_all=True)` in `GradNetWrtX`. First-Order Derivative of the Weights To differentiate with respect to the weights, set `get_by_list` in `ops.GradOperation` to `True`. The structure of `GradNetWrtX` then becomes:
###Code
class GradNetWrtX(nn.Cell):
def __init__(self, net):
super(GradNetWrtX, self).__init__()
self.net = net
self.params = ParameterTuple(net.trainable_params())
self.grad_op = ops.GradOperation(get_by_list=True)
def construct(self, x, y):
gradient_function = self.grad_op(self.net, self.params)
return gradient_function(x, y)
###Output
_____no_output_____
###Markdown
Run and print the output:
###Code
output = GradNetWrtX(Net())(x, y)
print(output)
###Output
(Tensor(shape=[1], dtype=Float32, value= [ 2.15359993e+01]),)
###Markdown
If certain weights should not be differentiated, set `requires_grad` to `False` for the corresponding weights when defining the gradient network.```Pythonself.z = Parameter(Tensor(np.array([1.0], np.float32)), name='z', requires_grad=False)``` Gradient Value Scaling The output value of the network can be scaled via the `sens_param` parameter to change the final gradient. First set `sens_param` in `ops.GradOperation` to `True` and determine the scaling factor, whose dimensions must be consistent with the output dimensions. The scaling factor `self.grad_wrt_output` can be written in the following form:```pythonself.grad_wrt_output = Tensor([[s1, s2, s3], [s4, s5, s6]])```The structure of `GradNetWrtX` then becomes:
###Code
class GradNetWrtX(nn.Cell):
def __init__(self, net):
super(GradNetWrtX, self).__init__()
self.net = net
self.grad_op = ops.GradOperation(sens_param=True)
self.grad_wrt_output = Tensor([[0.1, 0.6, 0.2], [0.8, 1.3, 1.1]], dtype=mstype.float32)
def construct(self, x, y):
gradient_function = self.grad_op(self.net)
return gradient_function(x, y, self.grad_wrt_output)
output = GradNetWrtX(Net())(x, y)
print(output)
###Output
[[2.211 0.51 1.49 ]
[5.588 2.68 4.07 ]]
###Markdown
Stopping Gradient Computation We can use `stop_gradient` to prevent operators in the network from affecting the gradient, for example:
###Code
import numpy as np
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore import Tensor
from mindspore import ParameterTuple, Parameter
from mindspore import dtype as mstype
from mindspore.ops.functional import stop_gradient
class Net(nn.Cell):
def __init__(self):
super(Net, self).__init__()
self.matmul = ops.MatMul()
def construct(self, x, y):
out1 = self.matmul(x, y)
out2 = self.matmul(x, y)
out2 = stop_gradient(out2)
out = out1 + out2
return out
class GradNetWrtX(nn.Cell):
def __init__(self, net):
super(GradNetWrtX, self).__init__()
self.net = net
self.grad_op = ops.GradOperation()
def construct(self, x, y):
gradient_function = self.grad_op(self.net)
return gradient_function(x, y)
x = Tensor([[0.8, 0.6, 0.2], [1.8, 1.3, 1.1]], dtype=mstype.float32)
y = Tensor([[0.11, 3.3, 1.1], [1.1, 0.2, 1.4], [1.1, 2.2, 0.3]], dtype=mstype.float32)
output = GradNetWrtX(Net())(x, y)
print(output)
###Output
[[4.5 2.7 3.6]
[4.5 2.7 3.6]]
###Markdown
Here we applied `stop_gradient` to `out2`, so `out2` does not contribute to the gradient computation at all. If we delete `out2 = stop_gradient(out2)`, the output value becomes:
###Code
output = GradNetWrtX(Net())(x, y)
print(output)
###Output
[[9.0 5.4 7.2]
[9.0 5.4 7.2]]
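###Markdown
As mentioned earlier, passing `get_all=True` to `ops.GradOperation` returns the gradients for every input rather than only the first one. The cell below is a minimal sketch of that variant, reusing the `Net`, `x`, and `y` defined above; the class name `GradNetWrtXY` is purely illustrative.
###Code
class GradNetWrtXY(nn.Cell):
    def __init__(self, net):
        super(GradNetWrtXY, self).__init__()
        self.net = net
        self.grad_op = ops.GradOperation(get_all=True)

    def construct(self, x, y):
        gradient_function = self.grad_op(self.net)
        return gradient_function(x, y)

# A tuple is returned, with one gradient tensor per network input.
grad_x, grad_y = GradNetWrtXY(Net())(x, y)
print(grad_x)
print(grad_y)
###Output
_____no_output_____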
###Markdown
Automatic Differentiation`Ascend` `GPU` `CPU` `Beginner` `Model Development`[](https://authoring-modelarts-cnnorth4.huaweicloud.com/console/lab?share-url-b64=aHR0cHM6Ly9taW5kc3BvcmUtd2Vic2l0ZS5vYnMuY24tbm9ydGgtNC5teWh1YXdlaWNsb3VkLmNvbS9ub3RlYm9vay9tb2RlbGFydHMvcXVpY2tfc3RhcnQvbWluZHNwb3JlX2F1dG9ncmFkLmlweW5i&imageid=65f636a0-56cf-49df-b941-7d2a07ba8c8c) [](https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/master/tutorials/zh_cn/mindspore_autograd.ipynb) [](https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/master/tutorials/zh_cn/mindspore_autograd.py) [](https://gitee.com/mindspore/docs/blob/master/tutorials/source_zh_cn/autograd.ipynb) Automatic differentiation is a generalization of the backpropagation algorithm commonly used in network training. It allows users to decompose a multi-layer composite function into a series of simple basic operations, letting them skip the programming of complicated derivation procedures and thereby greatly lowering the barrier to using the framework. MindSpore computes first-order derivatives with the method `mindspore.ops.GradOperation (get_all=False, get_by_list=False, sens_param=False)`: when `get_all` is `False`, only the first input is differentiated, and when it is `True`, all inputs are; when `get_by_list` is `False`, the weights are not differentiated, and when it is `True`, they are; `sens_param` scales the output value of the network to change the final gradient. Below, the derivative of the MatMul operator is used for an in-depth analysis. First, import the modules and interfaces required by this document, as shown below:
###Code
import numpy as np
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore import Tensor
from mindspore import ParameterTuple, Parameter
from mindspore import dtype as mstype
###Output
_____no_output_____
###Markdown
First-Order Derivative of the Input To differentiate with respect to the input, first define a network to be differentiated, taking the network $f(x,y)=z*x*y$ built from the MatMul operator as an example. The network structure is defined as follows:
###Code
class Net(nn.Cell):
def __init__(self):
super(Net, self).__init__()
self.matmul = ops.MatMul()
self.z = Parameter(Tensor(np.array([1.0], np.float32)), name='z')
def construct(self, x, y):
x = x * self.z
out = self.matmul(x, y)
return out
###Output
_____no_output_____
###Markdown
Next, define the gradient network. The `__init__` function defines the network to be differentiated, `self.net`, and the `ops.GradOperation` operation, and the `construct` function differentiates `self.net`. The structure of the gradient network is as follows:
###Code
class GradNetWrtX(nn.Cell):
def __init__(self, net):
super(GradNetWrtX, self).__init__()
self.net = net
self.grad_op = ops.GradOperation()
def construct(self, x, y):
gradient_function = self.grad_op(self.net)
return gradient_function(x, y)
###Output
_____no_output_____
###Markdown
Define the inputs and print the output:
###Code
x = Tensor([[0.8, 0.6, 0.2], [1.8, 1.3, 1.1]], dtype=mstype.float32)
y = Tensor([[0.11, 3.3, 1.1], [1.1, 0.2, 1.4], [1.1, 2.2, 0.3]], dtype=mstype.float32)
output = GradNetWrtX(Net())(x, y)
print(output)
###Output
[[4.5099998 2.7 3.6000001]
[4.5099998 2.7 3.6000001]]
###Markdown
To differentiate with respect to both the `x` and `y` inputs, simply set `self.grad_op = GradOperation(get_all=True)` in `GradNetWrtX`. First-Order Derivative of the Weights To differentiate with respect to the weights, set `get_by_list` in `ops.GradOperation` to `True`. The structure of `GradNetWrtX` then becomes:
###Code
class GradNetWrtX(nn.Cell):
def __init__(self, net):
super(GradNetWrtX, self).__init__()
self.net = net
self.params = ParameterTuple(net.trainable_params())
self.grad_op = ops.GradOperation(get_by_list=True)
def construct(self, x, y):
gradient_function = self.grad_op(self.net, self.params)
return gradient_function(x, y)
###Output
_____no_output_____
###Markdown
Run and print the output:
###Code
output = GradNetWrtX(Net())(x, y)
print(output)
###Output
(Tensor(shape=[1], dtype=Float32, value= [ 2.15359993e+01]),)
###Markdown
If certain weights should not be differentiated, set `requires_grad` to `False` for the corresponding weights when defining the gradient network.```Pythonself.z = Parameter(Tensor(np.array([1.0], np.float32)), name='z', requires_grad=False)``` Gradient Value Scaling The output value of the network can be scaled via the `sens_param` parameter to change the final gradient. First set `sens_param` in `ops.GradOperation` to `True` and determine the scaling factor, whose dimensions must be consistent with the output dimensions. The scaling factor `self.grad_wrt_output` can be written in the following form:```pythonself.grad_wrt_output = Tensor([[s1, s2, s3], [s4, s5, s6]])```The structure of `GradNetWrtX` then becomes:
###Code
class GradNetWrtX(nn.Cell):
def __init__(self, net):
super(GradNetWrtX, self).__init__()
self.net = net
self.grad_op = ops.GradOperation(sens_param=True)
self.grad_wrt_output = Tensor([[0.1, 0.6, 0.2], [0.8, 1.3, 1.1]], dtype=mstype.float32)
def construct(self, x, y):
gradient_function = self.grad_op(self.net)
return gradient_function(x, y, self.grad_wrt_output)
output = GradNetWrtX(Net())(x, y)
print(output)
###Output
[[2.211 0.51 1.49 ]
[5.588 2.68 4.07 ]]
###Markdown
Stopping Gradient Computation We can use `stop_gradient` to prevent operators in the network from affecting the gradient, for example:
###Code
import numpy as np
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore import Tensor
from mindspore import ParameterTuple, Parameter
from mindspore import dtype as mstype
from mindspore.ops import stop_gradient
class Net(nn.Cell):
def __init__(self):
super(Net, self).__init__()
self.matmul = ops.MatMul()
def construct(self, x, y):
out1 = self.matmul(x, y)
out2 = self.matmul(x, y)
out2 = stop_gradient(out2)
out = out1 + out2
return out
class GradNetWrtX(nn.Cell):
def __init__(self, net):
super(GradNetWrtX, self).__init__()
self.net = net
self.grad_op = ops.GradOperation()
def construct(self, x, y):
gradient_function = self.grad_op(self.net)
return gradient_function(x, y)
x = Tensor([[0.8, 0.6, 0.2], [1.8, 1.3, 1.1]], dtype=mstype.float32)
y = Tensor([[0.11, 3.3, 1.1], [1.1, 0.2, 1.4], [1.1, 2.2, 0.3]], dtype=mstype.float32)
output = GradNetWrtX(Net())(x, y)
print(output)
###Output
[[4.5 2.7 3.6]
[4.5 2.7 3.6]]
###Markdown
Here we applied `stop_gradient` to `out2`, so `out2` does not contribute to the gradient computation at all. If we delete `out2 = stop_gradient(out2)`, the output value becomes:
###Code
output = GradNetWrtX(Net())(x, y)
print(output)
###Output
[[9.0 5.4 7.2]
[9.0 5.4 7.2]]
###Markdown
Automatic Differentiation[](https://gitee.com/mindspore/docs/blob/master/tutorials/source_zh_cn/autograd.ipynb) [](https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/master/tutorials/zh_cn/mindspore_autograd.ipynb) [](https://authoring-modelarts-cnnorth4.huaweicloud.com/console/lab?share-url-b64=aHR0cHM6Ly9vYnMuZHVhbHN0YWNrLmNuLW5vcnRoLTQubXlodWF3ZWljbG91ZC5jb20vbWluZHNwb3JlLXdlYnNpdGUvbm90ZWJvb2svbW9kZWxhcnRzL3F1aWNrX3N0YXJ0L21pbmRzcG9yZV9hdXRvZ3JhZC5pcHluYg==&imageid=65f636a0-56cf-49df-b941-7d2a07ba8c8c) When training a neural network, the most commonly used algorithm is backpropagation, in which the parameters (model weights) are adjusted according to the gradient of the loss function with respect to those parameters. MindSpore computes first-order derivatives with the method `mindspore.ops.GradOperation (get_all=False, get_by_list=False, sens_param=False)`: when `get_all` is `False`, only the first input is differentiated, and when it is `True`, all inputs are; when `get_by_list` is `False`, the weights are not differentiated, and when it is `True`, they are; `sens_param` scales the output value of the network to change the final gradient. Below, the derivative of the MatMul operator is used for an in-depth analysis. First, import the modules and interfaces required by this document, as shown below:
###Code
import numpy as np
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore import Tensor
from mindspore import ParameterTuple, Parameter
from mindspore import dtype as mstype
###Output
_____no_output_____
###Markdown
First-Order Derivative of the Input To differentiate with respect to the input, first define a network to be differentiated, taking the network $f(x,y)=z * x * y$ built from the MatMul operator as an example. The network structure is defined as follows:
###Code
class Net(nn.Cell):
def __init__(self):
super(Net, self).__init__()
self.matmul = ops.MatMul()
self.z = Parameter(Tensor(np.array([1.0], np.float32)), name='z')
def construct(self, x, y):
x = x * self.z
out = self.matmul(x, y)
return out
###Output
_____no_output_____
###Markdown
Next, define the gradient network. The `__init__` function defines the network to be differentiated, `self.net`, and the `ops.GradOperation` operation, and the `construct` function differentiates `self.net`. The structure of the gradient network is as follows:
###Code
class GradNetWrtX(nn.Cell):
def __init__(self, net):
super(GradNetWrtX, self).__init__()
self.net = net
self.grad_op = ops.GradOperation()
def construct(self, x, y):
gradient_function = self.grad_op(self.net)
return gradient_function(x, y)
###Output
_____no_output_____
###Markdown
Define the inputs and print the output:
###Code
x = Tensor([[0.8, 0.6, 0.2], [1.8, 1.3, 1.1]], dtype=mstype.float32)
y = Tensor([[0.11, 3.3, 1.1], [1.1, 0.2, 1.4], [1.1, 2.2, 0.3]], dtype=mstype.float32)
output = GradNetWrtX(Net())(x, y)
print(output)
###Output
[[4.5099998 2.7 3.6000001]
[4.5099998 2.7 3.6000001]]
###Markdown
To differentiate with respect to both the `x` and `y` inputs, simply set `self.grad_op = GradOperation(get_all=True)` in `GradNetWrtX`. First-Order Derivative of the Weights To differentiate with respect to the weights, set `get_by_list` in `ops.GradOperation` to `True`. The structure of `GradNetWrtX` then becomes:
###Code
class GradNetWrtX(nn.Cell):
def __init__(self, net):
super(GradNetWrtX, self).__init__()
self.net = net
self.params = ParameterTuple(net.trainable_params())
self.grad_op = ops.GradOperation(get_by_list=True)
def construct(self, x, y):
gradient_function = self.grad_op(self.net, self.params)
return gradient_function(x, y)
###Output
_____no_output_____
###Markdown
Run and print the output:
###Code
output = GradNetWrtX(Net())(x, y)
print(output)
###Output
(Tensor(shape=[1], dtype=Float32, value= [ 2.15359993e+01]),)
###Markdown
If certain weights should not be differentiated, set `requires_grad` to `False` for the corresponding weights when defining the gradient network.```Pythonself.z = Parameter(Tensor(np.array([1.0], np.float32)), name='z', requires_grad=False)``` Gradient Value Scaling The output value of the network can be scaled via the `sens_param` parameter to change the final gradient. First set `sens_param` in `ops.GradOperation` to `True` and determine the scaling factor, whose dimensions must be consistent with the output dimensions. The scaling factor `self.grad_wrt_output` can be written in the following form:```pythonself.grad_wrt_output = Tensor([[s1, s2, s3], [s4, s5, s6]])```The structure of `GradNetWrtX` then becomes:
###Code
class GradNetWrtX(nn.Cell):
def __init__(self, net):
super(GradNetWrtX, self).__init__()
self.net = net
self.grad_op = ops.GradOperation(sens_param=True)
self.grad_wrt_output = Tensor([[0.1, 0.6, 0.2], [0.8, 1.3, 1.1]], dtype=mstype.float32)
def construct(self, x, y):
gradient_function = self.grad_op(self.net)
return gradient_function(x, y, self.grad_wrt_output)
output = GradNetWrtX(Net())(x, y)
print(output)
###Output
[[2.211 0.51 1.49 ]
[5.588 2.68 4.07 ]]
###Markdown
Stopping Gradient Computation We can use `stop_gradient` to prevent operators in the network from affecting the gradient, for example:
###Code
import numpy as np
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore import Tensor
from mindspore import ParameterTuple, Parameter
from mindspore import dtype as mstype
from mindspore.ops.functional import stop_gradient
class Net(nn.Cell):
def __init__(self):
super(Net, self).__init__()
self.matmul = ops.MatMul()
def construct(self, x, y):
out1 = self.matmul(x, y)
out2 = self.matmul(x, y)
out2 = stop_gradient(out2)
out = out1 + out2
return out
class GradNetWrtX(nn.Cell):
def __init__(self, net):
super(GradNetWrtX, self).__init__()
self.net = net
self.grad_op = ops.GradOperation()
def construct(self, x, y):
gradient_function = self.grad_op(self.net)
return gradient_function(x, y)
x = Tensor([[0.8, 0.6, 0.2], [1.8, 1.3, 1.1]], dtype=mstype.float32)
y = Tensor([[0.11, 3.3, 1.1], [1.1, 0.2, 1.4], [1.1, 2.2, 0.3]], dtype=mstype.float32)
output = GradNetWrtX(Net())(x, y)
print(output)
###Output
[[4.5 2.7 3.6]
[4.5 2.7 3.6]]
###Markdown
Here we applied `stop_gradient` to `out2`, so `out2` does not contribute to the gradient computation at all. If we delete `out2 = stop_gradient(out2)`, the output value becomes:
###Code
output = GradNetWrtX(Net())(x, y)
print(output)
###Output
[[9.0 5.4 7.2]
[9.0 5.4 7.2]]
|
Notebooks/Linear_System.ipynb | ###Markdown
solving linear systemsThis is a basic demo of using angler to solve simple linear electromagnetic systems.For optimization & inverse design problems, check out the other notebooks as well.
###Code
import numpy as np
from angler import Simulation
%load_ext autoreload
%autoreload 2
%matplotlib inline
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Electric point dipole This example demonstrates solving for the fields of a radiating electric point dipole (out of plane electric current).
###Code
omega = 2*np.pi*200e12
dl = 0.02 # grid size (units of L0, which defaults to 1e-6)
eps_r = np.ones((200, 200)) # relative permittivity
NPML = [15, 15] # number of pml grid points on x and y borders
simulation = Simulation(omega, eps_r, dl, NPML, 'Ez')
simulation.src[100, 100] = 1
simulation.solve_fields()
simulation.plt_abs(outline=False, cbar=True);
simulation.plt_re(outline=False, cbar=True);
###Output
_____no_output_____
###Markdown
Ridge waveguide This example demonstrates solving for the fields of a waveguide excited by a modal source.
###Code
omega = 2*np.pi*200e12
dl = 0.01 # grid size (units of L0, which defaults to 1e-6)
eps_r = np.ones((600, 200)) # relative permittivity
eps_r[:,90:110] = 12.25 # set waveguide region
NPML = [15, 15] # number of pml grid points on x and y borders
simulation = Simulation(omega, eps_r, dl, NPML, 'Ez')
simulation.add_mode(3.5, 'x', [20, 100], 60, scale=10)
simulation.setup_modes()
simulation.solve_fields()
print('input power of {} in W/L0'.format(simulation.W_in))
simulation.plt_re(outline=True, cbar=False);
simulation.plt_abs(outline=True, cbar=False);
###Output
input power of 0.0016689050319357358 in W/L0
###Markdown
Making an animation This demonstrates how one can generate an animation of the field, which is saved to 'fields.gif'
###Code
from matplotlib import animation, rc
from IPython.display import HTML
from angler.plot import plt_base_ani
animation = plt_base_ani(simulation.fields["Ez"], cbar=True, Nframes=40, interval=80)
HTML(animation.to_html5_video())
animation.save('fields.gif', dpi=80, writer='imagemagick')
###Output
_____no_output_____ |
resource_allocation.ipynb | ###Markdown
The optimization problem modelling the resource allocation task is as follows$$\begin{alignedat}{3}& \min_{x_{c,s}, \delta_s, \gamma_s, \iota_c, t_s} & \quad & \sum_{s \in \mathcal{S}} \delta_s + \sum_{c \in \mathcal{C}} \iota_c + 2 \sum_{s \in \mathcal{S}} \gamma_s \\& \text{subject to} &&& t_s &= \sum_{c \in \mathcal{C}_s} x_{c,s} & \quad & \forall s \in \mathcal{S}, \\&&&& t_s + \gamma_s &\geq \mathrm{PA}_{\min,s} && \forall s \in \mathcal{S}, \\&&&& t_s + \gamma_s + \delta_s &= \mathrm{PA}_{\mathrm{est},s} && \forall s \in \mathcal{S}, \\&&&& \sum_{s \in \mathcal{S}_c} x_{c,s} + \iota_c &= \mathrm{DCC}_c && \forall c \in \mathcal{C}, \\&&&& x_{c, s} & \geq 0.5 && \forall s \in \mathcal{S}_c, \quad \forall c \in \mathcal{C}, \\&&&& x_{c, s} & \in \mathcal{J}_{c,s} && \forall s \in \mathcal{S}_c, \quad \forall c \in \mathcal{C}, \\&&&& \gamma_s, \delta_s, \iota_c & \geq 0 && \forall s \in \mathcal{S}, c \in \mathcal{C}. \end{alignedat}$$The sets $\mathcal{C}$ and $\mathcal{S}$ are the sets of all consultants and all specialities respectively. Their subscripted counterparts $\mathcal{C}_s$ and $\mathcal{S}_c$ are, respectively, the consultants able to work in Speciality $s$ and the specialities that Consultant $c$ can work in. The $\gamma_s$ are the missing PAs that are necessary to cover the minimum workload and emergencies in Speciality $s$. Any non-zero $\gamma_s$ is critical and impedes patient safety, as minimal coverage _cannot_ be guaranteed; this is why it carries twice the weight of the other terms in the objective. $\delta_s$ is the difference between the estimated workload in Speciality $s$ and the resources that were allocated. Finally, the variable $\iota_c$ captures the time, if any, that a particular Consultant $c$ is idle.The main optimization variables are the $x_{c,s}$, the allocated PAs for Consultant $c$ in Speciality $s$. The auxiliary variable $t_s$ is introduced for better legibility and contains the PAs covered in Speciality $s$ by all available consultants.The constraint $x_{c,s} \geq 0.5$ ensures that if a consultant is working in a speciality, that consultant keeps a minimum workload there so as not to lose practice. The constraint sets $\mathcal{J}_{c,s}$ capture particular requirements stemming from individual job plans, such as minimum or maximum contributions of a consultant to a particular speciality, if applicable.
###Code
consultants = data.iloc[:-2]
consultants
specialities = data.iloc[-2:, :-1]
specialities
reqs = pd.melt(consultants.iloc[:,:-1].reset_index(), id_vars='index').dropna()
reqs.reset_index(drop=True, inplace=True)
reqs.columns = ['consultant', 'speciality', 'availability']
reqs.consultant = reqs.consultant.astype('category')
reqs.availability = reqs.availability.astype('str')
reqs.loc[:, 'consultant_id'] = reqs.consultant.cat.codes
reqs.speciality = reqs.speciality.astype('category')
reqs.loc[:, 'speciality_id'] = reqs.speciality.cat.codes
reqs.head()
num_specialities = specialities.shape[1]
t_s = cp.Variable(num_specialities)
γ_s = cp.Variable(num_specialities, nonneg=True)
δ_s = cp.Variable(num_specialities, nonneg=True)
num_consultants = len(consultants)
ι_c = cp.Variable(num_consultants, nonneg=True)
x_cs = cp.Variable(len(reqs))
constr = [x_cs >= .5]
for (speciality_name, speciality_id), rows in reqs.groupby(['speciality', 'speciality_id']):
min_cover = specialities.loc['Emergency cover', speciality_name]
est_demand = specialities.loc['Estimated demand', speciality_name]
print(
speciality_name, ':', [x for x in rows.consultant],
'=', est_demand, f'(min: {min_cover})'
)
t = t_s[speciality_id]
constr.append(t == cp.sum(x_cs[rows.index]))
constr.append(t + γ_s[speciality_id] >= min_cover)
constr.append(t + γ_s[speciality_id] + δ_s[speciality_id] == est_demand)
print()
for (consultant_name, consultant_id), rows in reqs.groupby(['consultant', 'consultant_id']):
DCCs = consultants.loc[consultant_name, 'DCC PAs']
print(consultant_name, ':', [x for x in rows.speciality], '=', DCCs)
constr.append(
cp.sum(x_cs[rows.index]) + ι_c[consultant_id] == DCCs)
# flexible assignments --> do nothing
idx = reqs.availability == 'x'
# min value
idx = reqs.availability.str.startswith('>')
if idx.sum():
min_pa = reqs.loc[idx, 'availability'].str[1:].astype('float')
constr.append(
x_cs[min_pa.index] >= min_pa.values
)
# max value
idx = reqs.availability.str.startswith('<')
if idx.sum():
max_pa = reqs.loc[idx, 'availability'].str[1:].astype('float')
constr.append(
x_cs[max_pa.index] <= max_pa.values
)
# exact value
idx = reqs.availability.str.isnumeric()
if idx.sum():
exact_pa = reqs.loc[idx, 'availability'].astype('float')
constr.append(
x_cs[exact_pa.index] == exact_pa.values
)
obj = cp.Minimize(
2 * cp.sum(γ_s) + cp.sum(δ_s) + cp.sum(ι_c)
)
prob = cp.Problem(obj, constr)
prob.solve(solver=cp.SCS)
df = pd.DataFrame()
for s, c, v in zip(reqs.speciality, reqs.consultant, np.round(x_cs.value, 2)):
df.loc[c, s] = v
Cs = consultants.index
Ss = specialities.columns
df = df.loc[Cs, Ss]
df.loc[Cs, 'Idle'] = np.round(ι_c.value, 2)
df.loc[Cs, 'DCC PAs'] = consultants.loc[:, 'DCC PAs']
df.loc['sum', :] = df.loc[Cs, :].sum()
df.fillna(0, inplace=True)
df.loc['min cover gap', Ss] = np.round(γ_s.value, 2)
df.loc['demand gap', Ss] = np.round(δ_s.value, 2)
df.loc['Emergency cover', Ss] = specialities.loc['Emergency cover', :]
df.loc['Estimated demand', Ss] = specialities.loc['Estimated demand', :]
df.loc['min cover gap':, 'DCC PAs'] = df.loc['min cover gap':, Ss].sum('columns')
fn = Path(FILE_NAME)
df.to_excel(fn.with_suffix('.result' + fn.suffix))
df
###Output
_____no_output_____ |
concept.ipynb | ###Markdown
*One world*By modeling the relationship between services, trade, and consumption patterns across countries over time, I should gain enough insight to further develop the questions below into actual hypotheses:* what are the limitations of using money as the principal measure of economic activity (some will be obvious of course, others - hopefully - not so much)?* how does the (mis-/non-)valuation of services and the informal economy factor into the economic worldview? and consequently, how to deal with missing data and accurate estimates and/or valuations of intangible economic activity? and labour pricing in general?* what is the impact of the growing share of services (intangibles) in global economic activity on the world economy?* what part of the cycle of supply and demand is endogenous, and what part is exogenous? this question is closely related to the role of technological development, information diffusion, knowledge / knowhow transfer (education), and the exceedingly dominant role of services in the global economy.* could the working models of the **real** economy be improved in the light of the questions posed above?* could any of the resulting insights be used to improve the valuation of firms and the (e)valuation and remuneration of firm employees?* how would the automation of service work through cognitive computing affect the macroeconomic outlook?* what is the consumer exposure to macroeconomic policy decisions?The goal is to build a functioning predictive model of global human economic activity.*Reminder: continuously perform sanity checks on the completeness and sufficiency of the developed models and systems.* Research outlineCurrent technological advances make possible both unprecedented economies of scale and scope and the exact matching of supply and demand for labour and skills. These advances would in theory allow firms to significantly reduce overhead on non-productive employees, while at the same time creating more meaningful work opportunities for workers. However, given the difficulties involved in building up buffers for workers specializing in less in-demand activities, the gig economy currently disproportionately benefits highly skilled workers. Unfortunately, these workers constitute only a small percentage of the total workforce of a country. As a result, the same technologies that enable unprecedented economies of scale and scope at the level of the individual worker also work to aggravate existing economic inequalities across the workforce, perpetuating a cycle of privilege and access to education inherent in the socio-economic fabric of our early 21st-century world. Solutions such as universal basic income (UBI) fail to take into account the positive psychological, social, societal and health benefits of individuals participating in and contributing to a community. In the inverse sense, increasing the economic value of the population provides a buffer and leverage against any exclusivist tendencies or policies developed by the most powerful economic and political actors. Where in the last three hundred and fifty years (roughly 1650 CE to 1990 CE) the nation-state had a powerful military incentive to improve the education levels of the workforce and national economy, at the beginning of the 21st century these incentives appear of limited value given the global challenges mankind faces.
In fact, the incentivization schemes being maintained at the national level are counterproductive or even damaging when it comes to tackling issues such as climate change, epidemic diseases, resource scarcity, poverty reduction, economic development, and the protection of the earth ecosystem. What I argue in this paper is that a break is needed from the economic paradigm put forward in its clearest form in Adam Smith's *The Wealth of Nations* (1776 CE), and that a new social contract must be constructed, which - in contrast to the nation state - I will refer to as *the human state*. People-----------Of course merely changing the political and economic discourse won't change the world economic outlook. However, framing the political discourse and greater good along global lines rather than national borders and national interests should - over time - help force politicians and the general public away from the currents of misplaced large-scale bigotry, tribalism, and nepotism still all too common in today's world. Progress as a rallying cry has of course been used and abused throughout history, so a clear definition of what is meant by human development is merited. ... Technology--------------(innovation, adaptation, education, transference)... Trade----------(including any means of exchange, viz markets, currencies, labour)... Policy----------(roughly speaking, institutions in the broadest sense)..
###Code
import ipywidgets as widgets
from IPython.display import YouTubeVideo
out = widgets.Output(layout={'border': '1px solid black'})
out.append_stdout('Amartya Sen on demonetization in India')
out.append_display_data(YouTubeVideo('OknuVaSW4M0'))
out
out.clear_output()
###Output
_____no_output_____
###Markdown
[PHOENIX model ftp address](ftp://phoenix.astro.physik.uni-goettingen.de//v1.0/SpecIntFITS/PHOENIX-ACES-AGSS-COND-SPECINT-2011/Z-0.0/)[Some results on alkali metals in hot Jupiters](http://www.exoclimes.com/news/recent-results/a-survey-of-alkali-line-absorption-in-exoplanetary-atmospheres/)
###Code
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import numpy as np
import batman
from astropy.utils.data import download_file
wavelength_url = ('ftp://phoenix.astro.physik.uni-goettingen.de/v2.0/HiResFITS/'
'WAVE_PHOENIX-ACES-AGSS-COND-2011.fits')
wavelength_path = download_file(wavelength_url, cache=True, timeout=30)
from astropy.io import fits
wavelengths_vacuum = fits.getdata(wavelength_path)
mu = fits.open("data/lte04800-4.50-0.0.PHOENIX-ACES-AGSS-COND-SPECINT-2011.fits")[1].data
mu_vs_lam = fits.getdata("data/lte05800-4.50-0.0.PHOENIX-ACES-AGSS-COND-SPECINT-2011.fits")
mu_vs_lam /= mu_vs_lam.max(axis=0)
header = fits.getheader("data/lte05800-4.50-0.0.PHOENIX-ACES-AGSS-COND-SPECINT-2011.fits")
wavelengths = header['CRVAL1'] + np.arange(0, mu_vs_lam.shape[1]) * header['CDELT1']
one_minus_mu = 1 - mu
extent = [wavelengths.min(), wavelengths.max(),
one_minus_mu.max(), one_minus_mu.min()]
fig, ax = plt.subplots(figsize=(14, 10))
ax.imshow(np.log(mu_vs_lam), extent=extent, origin='lower',
interpolation='nearest')
ax.set_xlim([3000, 9000])
# ax.set_xlim([6560, 6570])
ax.set_aspect(1000)
# ax.set_aspect(10)
ax.set(xlabel='Wavelength [Angstrom]', ylabel='$1 - \mu$');
plt.hist(np.mean(np.diff(mu_vs_lam, axis=0), axis=0));
na_D = [5896, 5890]
ca_HK = [3968, 3934]
k_1 = 7664.8991
ref_lambda = 5010
ca_1 = 6122.219
fe_E = 5270
wl_names = ['continuum', 'Na D1', 'Na D2', 'Ca H',
'Ca K', 'K I', "Ca I"]#, 'Fe E']
test_wavelengths = [ref_lambda, na_D[0], na_D[1], ca_HK[0],
ca_HK[1], k_1, ca_1]#, fe_E]
index_ref = np.argmin(np.abs(wavelengths - ref_lambda))
index_d1 = np.argmin(np.abs(wavelengths - na_D[0]))
index_d2 = np.argmin(np.abs(wavelengths - na_D[1]))
def wavelength_to_quadratic(wavelength):
    mu_lower_limit = 0.05
    limit_mu = mu > mu_lower_limit
    ind = np.argmin(np.abs(wavelengths - wavelength))  # index of the nearest wavelength
    quad_params = np.polyfit(mu[limit_mu], mu_vs_lam[limit_mu, ind], 2)
    return quad_params[::-1]
from scipy.optimize import fmin_powell, fmin_slsqp, fmin_l_bfgs_b
mu_lower_limit = 0.06
def nonlinear(p, mu):
c_1, c_2, c_3, c_4 = p
return (1 - c_1*(1 - mu**0.5) - c_2 * (1-mu) -
c_3 * (1 - mu**1.5) - c_4 * (1 - mu**2))
def chi2_nl(p, mu, wl):
limit_mu = mu > mu_lower_limit
ind = np.argmin(np.abs(wavelengths - wl))
intensity = mu_vs_lam[limit_mu, ind]
return np.sum((nonlinear(p, mu[limit_mu]) - intensity)**2 / 0.01**2)
def wavelength_to_nonlinear(wavelength):
return fmin_slsqp(chi2_nl, [0.5, -0.1, 0.1, -0.1],
args=(mu, wavelength), disp=0)
def chi2_optical(p, mu):
limit_mu = mu > mu_lower_limit
intensity = mu_vs_lam.mean(1)
chi = np.sum((nonlinear(p, mu[limit_mu]) - intensity[limit_mu])**2 / 0.01**2)
return chi
def optical_to_nonlinear():
return fmin_slsqp(chi2_optical, [0.5, -0.1, 0.1, -0.1],
args=(mu, ), disp=0)
def logarithmic(p, mu):
c_1, c_2 = p
return (1 - c_1*(1-mu) - c_2 * mu * np.log(mu))
def chi2_log(p, mu, wl):
limit_mu = mu > mu_lower_limit
ind = np.argmin(np.abs(wavelengths - wl))
intensity = mu_vs_lam[limit_mu, ind]
return np.sum((logarithmic(p, mu[limit_mu]) - intensity)**2 /0.01**2)
def wavelength_to_log(wavelength):
return fmin_l_bfgs_b(chi2_log, [1, 1], args=(mu, wavelength),
approx_grad=True, bounds=[[-1, 2], [-1, 2]], disp=0)[0]
fig, ax = plt.subplots(len(test_wavelengths), 2, figsize=(8, 8), sharex=True)
for i, wl, label in zip(range(len(test_wavelengths)), test_wavelengths, wl_names):
ind = np.argmin(np.abs(wavelengths - wl))
for j in range(2):
ax[i, j].plot(mu, mu_vs_lam[:, ind], 'k')
ax[i, j].axvline(mu_lower_limit, ls=':', color='k')
u_nl = wavelength_to_nonlinear(wl)
ax[i, 0].plot(mu, nonlinear(u_nl, mu), 'r--')
u_log = wavelength_to_log(wl)
ax[i, 1].plot(mu, logarithmic(u_log, mu), 'r--')
ax[i, 0].set_ylabel('$I(\mu)/I(1)$')
ax[i, 0].set_title(label + ', nonlinear')
ax[i, 1].set_title(label + ', logarithmic')
for j in range(2):
ax[-1, j].set_xlabel('$\mu$')
#ax[-1, 0].set(xlabel='$\mu$', ylabel='$I(\mu)/I(1)$')
fig.tight_layout()
from astropy.constants import R_jup, R_sun
def hd189_wavelength_nl(u):
hd189_params = batman.TransitParams()
hd189_params.per = 2.21857567
hd189_params.t0 = 2454279.436714
hd189_params.inc = 85.7100
hd189_params.a = 8.84
hd189_params.rp = float((1.138 * R_jup)/(0.805 * R_sun))
hd189_params.limb_dark = 'nonlinear'
hd189_params.u = u
hd189_params.ecc = 0
hd189_params.w = 90
return hd189_params
def wavelength_to_transit(times, wavelength):
params = hd189_wavelength_nl(wavelength_to_nonlinear(wavelength))
model = batman.TransitModel(params, times)
f = model.light_curve(params)
return f
def nl_to_transit(times, u_nl):
    params = hd189_wavelength_nl(u_nl)
model = batman.TransitModel(params, times)
f = model.light_curve(params)
return f
def optical_to_transit(times):
params = hd189_wavelength_nl(optical_to_nonlinear())
model = batman.TransitModel(params, times)
f = model.light_curve(params)
return f
import astropy.units as u
t0 = 2454279.436714  # mid-transit time, same value as used in hd189_wavelength_nl
times = t0 + np.linspace(-1.5/24, 1.5/24, 200)
for i, wl, label in zip(range(len(test_wavelengths)), test_wavelengths, wl_names):
if label != 'continuum':
f = wavelength_to_transit(times, wl)
else:
f = optical_to_transit(times)
plt.plot(times, f, label=label)
plt.xlabel('Time')
plt.ylabel('Flux')
plt.legend()
plt.savefig('plots/transit.png')
f_continuum = optical_to_transit(times)
for i, wl, label in zip(range(len(test_wavelengths)-1), test_wavelengths[1:], wl_names[1:]):
f = wavelength_to_transit(times, wl)
plt.plot(times, f - f_continuum, label=label)
plt.xlabel('Time')
plt.ylabel('Residual')
plt.legend();
plt.savefig('plots/residuals.png', bbox_inches='tight', dpi=200)
###Output
_____no_output_____
###Markdown
***
###Code
mu = fits.open("data/lte04800-4.50-0.0.PHOENIX-ACES-AGSS-COND-SPECINT-2011.fits")[1].data
mu_vs_lam = fits.getdata("data/lte05800-4.50-0.0.PHOENIX-ACES-AGSS-COND-SPECINT-2011.fits")
mu_vs_lam /= mu_vs_lam.max(axis=0)
header = fits.getheader("data/lte05800-4.50-0.0.PHOENIX-ACES-AGSS-COND-SPECINT-2011.fits")
wavelengths = header['CRVAL1'] + np.arange(0, mu_vs_lam.shape[1]) * header['CDELT1']
one_minus_mu = 1 - mu
###Output
_____no_output_____
###Markdown
$R_{sim} = \frac{\lambda}{\Delta \lambda_{sim}}$
###Code
index_5000 = np.argmin(np.abs(wavelengths - 5000))
sim_dlam = (wavelengths[index_5000+1] - wavelengths[index_5000])
simulation_R = wavelengths[index_5000]/sim_dlam
goal_R = 50
def rebin_pixels(image, wavelength_grid, binning_factor=4):
# Courtesy of J.F. Sebastian: http://stackoverflow.com/a/8090605
    if binning_factor == 1:
        return image, wavelength_grid
    new_shape = (image.shape[0], image.shape[1] // binning_factor)
    sh = (new_shape[0], image.shape[0] // new_shape[0], new_shape[1],
          image.shape[1] // new_shape[1])
binned_image = image.reshape(sh).mean(-1).mean(1)
binned_wavelengths = wavelength_grid.reshape(new_shape[1],
image.shape[1]//new_shape[1]).mean(-1)
return binned_image, binned_wavelengths
bin_factor = int(simulation_R/goal_R)
binned_mu_vs_lam, binned_wavelengths = rebin_pixels(mu_vs_lam, wavelengths, bin_factor)
index_5000 = np.argmin(np.abs(binned_wavelengths - 5000))
binned_dlam = (binned_wavelengths[index_5000+1] - binned_wavelengths[index_5000])
binned_R = binned_wavelengths[index_5000]/binned_dlam
binned_R
na_D = np.mean([5896, 5890])
ca_HK = np.mean([3968, 3934])
k_1 = np.mean([7664.8991, 7698.9645])
ca_1 = 6122.219
wl_names = ["Na D", "CaII HK", "K I", "Ca I", r"$H\alpha$"]
test_wavelengths = [na_D, ca_HK, k_1, ca_1, 6562.8]
extent = [binned_wavelengths.min(), binned_wavelengths.max(),
mu.min(), mu.max()]
#one_minus_mu.max(), one_minus_mu.min()]
fig, ax = plt.subplots(figsize=(8, 6))
ax.imshow(np.log(binned_mu_vs_lam), extent=extent, origin='lower',
# ax.imshow(binned_mu_vs_lam, extent=extent, origin='lower',
interpolation='nearest', cmap=plt.cm.Greys_r)
ax.set_xlim([3000, 8000])
ax.set_aspect(2000)
ax2 = fig.add_axes(ax.get_position(), frameon=False)
ax2.tick_params(labelbottom='off',labeltop='on',
labelleft="off", labelright='off',
bottom='off', left='off', right='off')
ax2.set_xlim(ax.get_xlim())
ax2.set_xticks(test_wavelengths)
ax2.set_xticklabels(wl_names)
ax2.grid(axis='x', ls='--')
plt.setp(ax2.get_xticklabels(), rotation=45, ha='left')
plt.draw()
ax2.set_position(ax.get_position())
ax.set(xlabel='Wavelength [Angstrom]', ylabel='$\mu$');
ax2.set_title("$I(\mu) \,/\, I(1)$")
fig.savefig('plots/intensity.png', bbox_inches='tight', dpi=200)
def hd189_vary_rp(rp, u):
hd189_params = batman.TransitParams()
hd189_params.per = 2.21857567
hd189_params.t0 = 2454279.436714
hd189_params.inc = 85.7100
hd189_params.a = 8.84
hd189_params.rp = rp #float((1.138 * R_jup)/(0.805 * R_sun))
hd189_params.limb_dark = 'nonlinear'
hd189_params.u = u
hd189_params.ecc = 0
hd189_params.w = 90
return hd189_params
optical_mean_params = optical_to_nonlinear()  # nonlinear LD coefficients fit to the mean optical intensity profile
def transit_model(rp, times):
    params = hd189_vary_rp(rp, optical_mean_params)
model = batman.TransitModel(params, times)
f = model.light_curve(params)
return f
def transit_chi2(p, times, data):
rp = p[0]
return np.sum((transit_model(rp, times) - data)**2/0.01**2)
def fit_transit(times, data, initp):
return fmin_l_bfgs_b(transit_chi2, initp, args=(times, data), disp=0,
approx_grad=True, bounds=[[0.05, 0.2]])
def chi2_nl_binned(p, mu, wl):
limit_mu = mu > mu_lower_limit
ind = np.argmin(np.abs(binned_wavelengths - wl))
intensity = binned_mu_vs_lam[limit_mu, ind]
return np.sum((nonlinear(p, mu[limit_mu]) - intensity)**2 / 0.01**2)
def wavelength_to_nonlinear_binned(wavelength):
return fmin_slsqp(chi2_nl_binned, [0.5, -0.1, 0.1, -0.1],
args=(mu, wavelength), disp=0)
def wavelength_to_transit_binned(times, wavelength):
params = hd189_wavelength_nl(wavelength_to_nonlinear_binned(wavelength))
model = batman.TransitModel(params, times)
f = model.light_curve(params)
return f
from astropy.utils.console import ProgressBar
fit_wavelengths = binned_wavelengths[((binned_wavelengths < 8500) &
(binned_wavelengths > 3000))]
initp = [float((1.138 * R_jup)/(0.805 * R_sun))]
radii = np.zeros_like(fit_wavelengths)
with ProgressBar(len(fit_wavelengths), ipython_widget=True) as bar:
for i, wl in enumerate(fit_wavelengths):
data = wavelength_to_transit_binned(times, wl)
bestrp = fit_transit(times, data, initp)
radii[i] = bestrp[0]
bar.update()
#model = transit_model(bestrp, times)
#plt.plot(times, data)
fig, ax = plt.subplots()
ax.plot(fit_wavelengths, radii, 'o')
ax.axhline(initp[0])
ax2 = fig.add_axes(ax.get_position(), frameon=False)
ax2.tick_params(labelbottom='off',labeltop='on',
labelleft="off", labelright='off',
bottom='off', left='off', right='off')
ax2.set_xlim(ax.get_xlim())
ax2.set_xticks(test_wavelengths)
ax2.set_xticklabels(wl_names)
ax2.grid(axis='x', ls='--')
plt.setp(ax2.get_xticklabels(), rotation=45, ha='left')
ax.set_xlabel('Wavelength [Angstrom]')
ax.set_ylabel(r'$R_p/R_\star$')
ax.set_title('Nonlinear LD params fixed to their mean in the optical\n\n\n\n')
plt.savefig('plots/false_spectrum.png', bbox_inches='tight', dpi=200)
###Output
_____no_output_____ |
arrays_strings/reverse_string/reverse_string_challenge.ipynb | ###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement a function to reverse a string (a list of characters), in-place.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can I assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Since we need to do this in-place, it seems we cannot use the slice operator or the reversed function? * Correct* Since Python string are immutable, can I use a list of characters instead? * Yes Test Cases* None -> None* [''] -> ['']* ['f', 'o', 'o', ' ', 'b', 'a', 'r'] -> ['r', 'a', 'b', ' ', 'o', 'o', 'f'] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/reverse_string/reverse_string_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
def reverse_string(list_chars):
# TODO: Implement me
pass
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_reverse_string.py
from nose.tools import assert_equal
class TestReverse(object):
def test_reverse(self):
assert_equal(reverse_string(None), None)
assert_equal(reverse_string(['']), [''])
assert_equal(reverse_string(
['f', 'o', 'o', ' ', 'b', 'a', 'r']),
['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse')
def test_reverse_inplace(self):
target_list = ['f', 'o', 'o', ' ', 'b', 'a', 'r']
reverse_string(target_list)
assert_equal(target_list, ['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse_inplace')
def main():
test = TestReverse()
test.test_reverse()
test.test_reverse_inplace()
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement a function to reverse a string (a list of characters), in-place.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Since we need to do this in-place, it seems we cannot use the slice operator or the reversed function? * Correct* Since Python string are immutable, can we use a list of characters instead? * Yes Test Cases* None -> None* [''] -> ['']* ['f', 'o', 'o', ' ', 'b', 'a', 'r'] -> ['r', 'a', 'b', ' ', 'o', 'o', 'f'] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/reverse_string/reverse_string_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class ReverseString(object):
def reverse(self, chars):
if chars is None:
return None
if len(chars) <= 1:
return chars
        # swap symmetric elements in place
        for i in range(len(chars) // 2):
            chars[i], chars[len(chars) - i - 1] = chars[len(chars) - i - 1], chars[i]
        return chars
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_reverse_string.py
import unittest
class TestReverse(unittest.TestCase):
def test_reverse(self, func):
self.assertEqual(func(None), None)
self.assertEqual(func(['']), [''])
self.assertEqual(func(
['f', 'o', 'o', ' ', 'b', 'a', 'r']),
['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse')
def test_reverse_inplace(self, func):
target_list = ['f', 'o', 'o', ' ', 'b', 'a', 'r']
func(target_list)
self.assertEqual(target_list, ['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse_inplace')
def main():
test = TestReverse()
reverse_string = ReverseString()
test.test_reverse(reverse_string.reverse)
test.test_reverse_inplace(reverse_string.reverse)
if __name__ == '__main__':
main()
###Output
Success: test_reverse
Success: test_reverse_inplace
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement a function to reverse a string (a list of characters), in-place.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Since we need to do this in-place, it seems we cannot use the slice operator or the reversed function? * Correct* Since Python string are immutable, can we use a list of characters instead? * Yes Test Cases* None -> None* [''] -> ['']* ['f', 'o', 'o', ' ', 'b', 'a', 'r'] -> ['r', 'a', 'b', ' ', 'o', 'o', 'f'] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/reverse_string/reverse_string_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class ReverseString(object):
def reverse(self, chars):
if chars is None:
return None
l = len(chars)
for i in range(l // 2):
tmp = chars[i]
chars[i] = chars[l - 1 - i]
chars[l - 1 - i] = tmp
return chars
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_reverse_string.py
from nose.tools import assert_equal
class TestReverse(object):
def test_reverse(self, func):
assert_equal(func(None), None)
assert_equal(func(['']), [''])
assert_equal(func(
['f', 'o', 'o', ' ', 'b', 'a', 'r']),
['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse')
def test_reverse_inplace(self, func):
target_list = ['f', 'o', 'o', ' ', 'b', 'a', 'r']
func(target_list)
assert_equal(target_list, ['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse_inplace')
def main():
test = TestReverse()
reverse_string = ReverseString()
test.test_reverse(reverse_string.reverse)
test.test_reverse_inplace(reverse_string.reverse)
if __name__ == '__main__':
main()
###Output
Success: test_reverse
Success: test_reverse_inplace
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement a function to reverse a string (a list of characters), in-place.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Since we need to do this in-place, it seems we cannot use the slice operator or the reversed function? * Correct* Since Python string are immutable, can we use a list of characters instead? * Yes Test Cases* None -> None* [''] -> ['']* ['f', 'o', 'o', ' ', 'b', 'a', 'r'] -> ['r', 'a', 'b', ' ', 'o', 'o', 'f'] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/reverse_string/reverse_string_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class ReverseString(object):
def reverse(self, chars):
if chars is None:
return None
        for i in range(len(chars) // 2):
chars[i], chars[-(i+1)] = chars[-(i+1)], chars[i]
return chars
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_reverse_string.py
from nose.tools import assert_equal
class TestReverse(object):
def test_reverse(self, func):
assert_equal(func(None), None)
assert_equal(func(['']), [''])
assert_equal(func(
['f', 'o', 'o', ' ', 'b', 'a', 'r']),
['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse')
def test_reverse_inplace(self, func):
target_list = ['f', 'o', 'o', ' ', 'b', 'a', 'r']
func(target_list)
assert_equal(target_list, ['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse_inplace')
def main():
test = TestReverse()
reverse_string = ReverseString()
test.test_reverse(reverse_string.reverse)
test.test_reverse_inplace(reverse_string.reverse)
if __name__ == '__main__':
main()
###Output
Success: test_reverse
Success: test_reverse_inplace
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement a function to reverse a string (a list of characters), in-place.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Since we need to do this in-place, it seems we cannot use the slice operator or the reversed function? * Correct* Since Python string are immutable, can we use a list of characters instead? * Yes Test Cases* None -> None* [''] -> ['']* ['f', 'o', 'o', ' ', 'b', 'a', 'r'] -> ['r', 'a', 'b', ' ', 'o', 'o', 'f'] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/reverse_string/reverse_string_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class ReverseString(object):
def reverse(self, chars):
# TODO: Implement me
pass
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_reverse_string.py
import unittest
class TestReverse(unittest.TestCase):
def test_reverse(self, func):
self.assertEqual(func(None), None)
self.assertEqual(func(['']), [''])
self.assertEqual(func(
['f', 'o', 'o', ' ', 'b', 'a', 'r']),
['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse')
def test_reverse_inplace(self, func):
target_list = ['f', 'o', 'o', ' ', 'b', 'a', 'r']
func(target_list)
self.assertEqual(target_list, ['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse_inplace')
def main():
test = TestReverse()
reverse_string = ReverseString()
test.test_reverse(reverse_string.reverse)
test.test_reverse_inplace(reverse_string.reverse)
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement a function to reverse a string (a list of characters), in-place.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can I assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Since we need to do this in-place, it seems we cannot use the slice operator or the reversed function? * Correct* Since Python string are immutable, can I use a list of characters instead? * Yes Test Cases* None -> None* [''] -> ['']* ['f', 'o', 'o', ' ', 'b', 'a', 'r'] -> ['r', 'a', 'b', ' ', 'o', 'o', 'f'] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/reverse_string/reverse_string_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
def list_of_chars(list_chars):
# TODO: Implement me
pass
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_reverse_string.py
from nose.tools import assert_equal
class TestReverse(object):
def test_reverse(self):
assert_equal(list_of_chars(None), None)
assert_equal(list_of_chars(['']), [''])
assert_equal(list_of_chars(
['f', 'o', 'o', ' ', 'b', 'a', 'r']),
['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse')
def main():
test = TestReverse()
test.test_reverse()
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement a function to reverse a string (a list of characters), in-place.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Since we need to do this in-place, it seems we cannot use the slice operator or the reversed function? * Correct* Since Python string are immutable, can we use a list of characters instead? * Yes Test Cases* None -> None* [''] -> ['']* ['f', 'o', 'o', ' ', 'b', 'a', 'r'] -> ['r', 'a', 'b', ' ', 'o', 'o', 'f'] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/reverse_string/reverse_string_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class ReverseString(object):
def reverse(self, chars):
# TODO: Implement me
if chars is None or not chars:
return chars
return chars[::-1]
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_reverse_string.py
from nose.tools import assert_equal
class TestReverse(object):
def test_reverse(self, func):
assert_equal(func(None), None)
assert_equal(func(['']), [''])
assert_equal(func(
['f', 'o', 'o', ' ', 'b', 'a', 'r']),
['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse')
def test_reverse_inplace(self, func):
target_list = ['f', 'o', 'o', ' ', 'b', 'a', 'r']
func(target_list)
assert_equal(target_list, ['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse_inplace')
def main():
test = TestReverse()
reverse_string = ReverseString()
test.test_reverse(reverse_string.reverse)
test.test_reverse_inplace(reverse_string.reverse)
if __name__ == '__main__':
main()
###Output
Success: test_reverse
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement a function to reverse a string (a list of characters), in-place.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can I assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Since we need to do this in-place, it seems we cannot use the slice operator or the reversed function? * Correct* Since Python string are immutable, can I use a list of characters instead? * Yes Test Cases* None -> None* [''] -> ['']* ['f', 'o', 'o', ' ', 'b', 'a', 'r'] -> ['r', 'a', 'b', ' ', 'o', 'o', 'f'] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/reverse_string/reverse_string_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
def list_of_chars(list_chars):
if list_chars is None:
return None
return list_chars[::-1]
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_reverse_string.py
from nose.tools import assert_equal
class TestReverse(object):
def test_reverse(self):
assert_equal(list_of_chars(None), None)
assert_equal(list_of_chars(['']), [''])
assert_equal(list_of_chars(
['f', 'o', 'o', ' ', 'b', 'a', 'r']),
['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse')
def main():
test = TestReverse()
test.test_reverse()
if __name__ == '__main__':
main()
###Output
Success: test_reverse
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement a function to reverse a string (a list of characters), in-place.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Since we need to do this in-place, it seems we cannot use the slice operator or the reversed function? * Correct* Since Python string are immutable, can we use a list of characters instead? * Yes Test Cases* None -> None* [''] -> ['']* ['f', 'o', 'o', ' ', 'b', 'a', 'r'] -> ['r', 'a', 'b', ' ', 'o', 'o', 'f'] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/reverse_string/reverse_string_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class ReverseString(object):
def reverse(self, chars):
if chars:
mid = len(chars) // 2
for idx in range(mid):
# if a[0] is 1st item then a[-1] is last item
chars[idx], chars[-idx - 1] = chars[-idx - 1], chars[idx]
return chars
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_reverse_string.py
from nose.tools import assert_equal
class TestReverse(object):
def test_reverse(self, func):
assert_equal(func(None), None)
assert_equal(func(['']), [''])
assert_equal(func(
['f', 'o', 'o', ' ', 'b', 'a', 'r']),
['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse')
def test_reverse_inplace(self, func):
target_list = ['f', 'o', 'o', ' ', 'b', 'a', 'r']
func(target_list)
assert_equal(target_list, ['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse_inplace')
def main():
test = TestReverse()
reverse_string = ReverseString()
test.test_reverse(reverse_string.reverse)
test.test_reverse_inplace(reverse_string.reverse)
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement a function to reverse a string (a list of characters), in-place.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Since we need to do this in-place, it seems we cannot use the slice operator or the reversed function? * Correct* Since Python string are immutable, can we use a list of characters instead? * Yes Test Cases* None -> None* [''] -> ['']* ['f', 'o', 'o', ' ', 'b', 'a', 'r'] -> ['r', 'a', 'b', ' ', 'o', 'o', 'f'] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/reverse_string/reverse_string_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class ReverseString(object):
def reverse(self, chars):
# TODO: Implement me
pass
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_reverse_string.py
from nose.tools import assert_equal
class TestReverse(object):
def test_reverse(self, func):
assert_equal(func(None), None)
assert_equal(func(['']), [''])
assert_equal(func(
['f', 'o', 'o', ' ', 'b', 'a', 'r']),
['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse')
def test_reverse_inplace(self, func):
target_list = ['f', 'o', 'o', ' ', 'b', 'a', 'r']
func(target_list)
assert_equal(target_list, ['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse_inplace')
def main():
test = TestReverse()
reverse_string = ReverseString()
test.test_reverse(reverse_string.reverse)
test.test_reverse_inplace(reverse_string.reverse)
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement a function to reverse a string (a list of characters), in-place.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Since we need to do this in-place, it seems we cannot use the slice operator or the reversed function? * Correct* Since Python string are immutable, can we use a list of characters instead? * Yes Test Cases* None -> None* [''] -> ['']* ['f', 'o', 'o', ' ', 'b', 'a', 'r'] -> ['r', 'a', 'b', ' ', 'o', 'o', 'f'] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/reverse_string/reverse_string_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class ReverseString(object):
def reverse(self, chars):
if chars is None or not chars:
return chars
res = chars
left = 0
right = len(chars)-1
while left<right:
chars[left], chars[right] = chars[right], chars[left]
left +=1
right -= 1
return res
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_reverse_string.py
import unittest
class TestReverse(unittest.TestCase):
def test_reverse(self, func):
self.assertEqual(func(None), None)
self.assertEqual(func(['']), [''])
self.assertEqual(func(
['f', 'o', 'o', ' ', 'b', 'a', 'r']),
['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse')
def test_reverse_inplace(self, func):
target_list = ['f', 'o', 'o', ' ', 'b', 'a', 'r']
func(target_list)
self.assertEqual(target_list, ['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse_inplace')
def main():
test = TestReverse()
reverse_string = ReverseString()
test.test_reverse(reverse_string.reverse)
test.test_reverse_inplace(reverse_string.reverse)
if __name__ == '__main__':
main()
###Output
Success: test_reverse
Success: test_reverse_inplace
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement a function to reverse a string (a list of characters), in-place.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Since we need to do this in-place, it seems we cannot use the slice operator or the reversed function? * Correct* Since Python string are immutable, can we use a list of characters instead? * Yes Test Cases* None -> None* [''] -> ['']* ['f', 'o', 'o', ' ', 'b', 'a', 'r'] -> ['r', 'a', 'b', ' ', 'o', 'o', 'f'] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/reverse_string/reverse_string_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class ReverseString(object):
def reverse(self, chars):
if not chars:
return chars
        length = len(chars)
        for i in range(length // 2):
chars[i], chars[-(i + 1)] = chars[-(i + 1)], chars[i]
return chars
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_reverse_string.py
from nose.tools import assert_equal
class TestReverse(object):
def test_reverse(self, func):
assert_equal(func(None), None)
assert_equal(func(['']), [''])
assert_equal(func(
['f', 'o', 'o', ' ', 'b', 'a', 'r']),
['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse')
def test_reverse_inplace(self, func):
target_list = ['f', 'o', 'o', ' ', 'b', 'a', 'r']
func(target_list)
assert_equal(target_list, ['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse_inplace')
def main():
test = TestReverse()
reverse_string = ReverseString()
test.test_reverse(reverse_string.reverse)
test.test_reverse_inplace(reverse_string.reverse)
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement a function to reverse a string (a list of characters), in-place.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Since we need to do this in-place, it seems we cannot use the slice operator or the reversed function? * Correct* Since Python string are immutable, can we use a list of characters instead? * Yes Test Cases* None -> None* [''] -> ['']* ['f', 'o', 'o', ' ', 'b', 'a', 'r'] -> ['r', 'a', 'b', ' ', 'o', 'o', 'f'] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/reverse_string/reverse_string_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class ReverseString(object):
def reverse(self, chars):
if chars is None: return None
size = len(chars)
for i in range(size//2):
chars[i],chars[size-i-1] = chars[size-i-1], chars[i]
return chars
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_reverse_string.py
import unittest
class TestReverse(unittest.TestCase):
def test_reverse(self, func):
self.assertEqual(func(None), None)
self.assertEqual(func(['']), [''])
self.assertEqual(func(
['f', 'o', 'o', ' ', 'b', 'a', 'r']),
['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse')
def test_reverse_inplace(self, func):
target_list = ['f', 'o', 'o', ' ', 'b', 'a', 'r']
func(target_list)
self.assertEqual(target_list, ['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse_inplace')
def main():
test = TestReverse()
reverse_string = ReverseString()
test.test_reverse(reverse_string.reverse)
test.test_reverse_inplace(reverse_string.reverse)
if __name__ == '__main__':
main()
###Output
Success: test_reverse
Success: test_reverse_inplace
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement a function to reverse a string (a list of characters), in-place.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Since we need to do this in-place, it seems we cannot use the slice operator or the reversed function? * Correct* Since Python string are immutable, can we use a list of characters instead? * Yes Test Cases* None -> None* [''] -> ['']* ['f', 'o', 'o', ' ', 'b', 'a', 'r'] -> ['r', 'a', 'b', ' ', 'o', 'o', 'f'] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/reverse_string/reverse_string_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class ReverseString(object):
def reverse(self, chars):
if chars:
for i in range(len(chars) // 2):
temp = chars[i]
chars[i] = chars[len(chars) - i - 1]
chars[len(chars) - i - 1] = temp
return chars
return chars
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_reverse_string.py
from nose.tools import assert_equal
class TestReverse(object):
def test_reverse(self, func):
assert_equal(func(None), None)
assert_equal(func(['']), [''])
assert_equal(func(
['f', 'o', 'o', ' ', 'b', 'a', 'r']),
['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse')
def test_reverse_inplace(self, func):
target_list = ['f', 'o', 'o', ' ', 'b', 'a', 'r']
func(target_list)
assert_equal(target_list, ['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse_inplace')
def main():
test = TestReverse()
reverse_string = ReverseString()
test.test_reverse(reverse_string.reverse)
test.test_reverse_inplace(reverse_string.reverse)
if __name__ == '__main__':
main()
###Output
Success: test_reverse
Success: test_reverse_inplace
|
Experiments_FashionMNIST.ipynb | ###Markdown
ExperimentsWe'll go through learning feature embeddings using different loss functions on the FashionMNIST dataset. This is just for visualization purposes, so we'll be using 2-dimensional embeddings, which isn't the best choice in practice.For every experiment the same embedding network is used (32 conv 5x5 -> PReLU -> MaxPool 2x2 -> 64 conv 5x5 -> PReLU -> MaxPool 2x2 -> Dense 256 -> PReLU -> Dense 256 -> PReLU -> Dense 2) and we don't do any hyperparameter search. Prepare datasetWe'll be working on the FashionMNIST dataset
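The embedding network itself is defined in *networks.py*; as a reference, here is a minimal sketch of what the layer list above could translate to in PyTorch. The class name, kernel padding and flattened size are assumptions made for illustration, not the project's exact code:

```python
import torch.nn as nn

class EmbeddingNetSketch(nn.Module):
    """Hypothetical transcription of: 32 conv 5x5 -> PReLU -> MaxPool 2x2 ->
    64 conv 5x5 -> PReLU -> MaxPool 2x2 -> Dense 256 -> PReLU -> Dense 256 ->
    PReLU -> Dense 2, for 1x28x28 FashionMNIST inputs."""

    def __init__(self):
        super().__init__()
        self.convnet = nn.Sequential(
            nn.Conv2d(1, 32, 5), nn.PReLU(), nn.MaxPool2d(2, stride=2),
            nn.Conv2d(32, 64, 5), nn.PReLU(), nn.MaxPool2d(2, stride=2))
        self.fc = nn.Sequential(
            nn.Linear(64 * 4 * 4, 256), nn.PReLU(),
            nn.Linear(256, 256), nn.PReLU(),
            nn.Linear(256, 2))  # 2-dimensional embedding for plotting

    def forward(self, x):
        out = self.convnet(x)            # (N, 64, 4, 4) for 28x28 inputs
        out = out.view(out.size(0), -1)  # flatten
        return self.fc(out)

    def get_embedding(self, x):
        return self.forward(x)
```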
###Code
import torch
from torchvision.datasets import FashionMNIST
from torchvision import transforms
mean, std = 0.28604059698879553, 0.35302424451492237
batch_size = 256
train_dataset = FashionMNIST('../data/FashionMNIST', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((mean,), (std,))
]))
test_dataset = FashionMNIST('../data/FashionMNIST', train=False, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((mean,), (std,))
]))
cuda = torch.cuda.is_available()
kwargs = {'num_workers': 1, 'pin_memory': True} if cuda else {}
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False, **kwargs)
n_classes = 10
###Output
_____no_output_____
###Markdown
Common setup
###Code
import torch
from torch.optim import lr_scheduler
import torch.optim as optim
from torch.autograd import Variable
from trainer import fit
import numpy as np
cuda = torch.cuda.is_available()
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
fashion_mnist_classes = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
colors = ['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728',
'#9467bd', '#8c564b', '#e377c2', '#7f7f7f',
'#bcbd22', '#17becf']
mnist_classes = fashion_mnist_classes
def plot_embeddings(embeddings, targets, xlim=None, ylim=None):
plt.figure(figsize=(10,10))
for i in range(10):
inds = np.where(targets==i)[0]
plt.scatter(embeddings[inds,0], embeddings[inds,1], alpha=0.5, color=colors[i])
if xlim:
plt.xlim(xlim[0], xlim[1])
if ylim:
plt.ylim(ylim[0], ylim[1])
plt.legend(mnist_classes)
def extract_embeddings(dataloader, model):
with torch.no_grad():
model.eval()
embeddings = np.zeros((len(dataloader.dataset), 2))
labels = np.zeros(len(dataloader.dataset))
k = 0
for images, target in dataloader:
if cuda:
images = images.cuda()
embeddings[k:k+len(images)] = model.get_embedding(images).data.cpu().numpy()
labels[k:k+len(images)] = target.numpy()
k += len(images)
return embeddings, labels
###Output
_____no_output_____
###Markdown
Baseline: Classification with softmaxWe'll train the model for classification and use the outputs of the penultimate layer as embeddings.
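A rough sketch of such a wrapper is shown below. The single PReLU + linear head on top of the 2-D embedding and the log-probability output (because `torch.nn.NLLLoss` is used in the next cell) are assumptions; the actual `ClassificationNet` is defined in *networks.py*:

```python
import torch.nn as nn
import torch.nn.functional as F

class ClassificationNetSketch(nn.Module):
    """Classifier head on top of an embedding network; the penultimate
    activations are what we later plot as embeddings."""

    def __init__(self, embedding_net, n_classes):
        super().__init__()
        self.embedding_net = embedding_net
        self.nonlinear = nn.PReLU()
        self.fc1 = nn.Linear(2, n_classes)

    def forward(self, x):
        output = self.nonlinear(self.embedding_net(x))
        return F.log_softmax(self.fc1(output), dim=-1)  # NLLLoss expects log-probabilities

    def get_embedding(self, x):
        return self.nonlinear(self.embedding_net(x))
```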
###Code
# Set up data loaders
batch_size = 256
kwargs = {'num_workers': 1, 'pin_memory': True} if cuda else {}
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False, **kwargs)
# Set up the network and training parameters
from networks import EmbeddingNet, ClassificationNet
from metrics import AccumulatedAccuracyMetric
embedding_net = EmbeddingNet()
model = ClassificationNet(embedding_net, n_classes=n_classes)
if cuda:
model.cuda()
loss_fn = torch.nn.NLLLoss()
lr = 1e-2
optimizer = optim.Adam(model.parameters(), lr=lr)
scheduler = lr_scheduler.StepLR(optimizer, 8, gamma=0.1, last_epoch=-1)
n_epochs = 20
log_interval = 50
fit(train_loader, test_loader, model, loss_fn, optimizer, scheduler, n_epochs, cuda, log_interval, metrics=[AccumulatedAccuracyMetric()])
train_embeddings_baseline, train_labels_baseline = extract_embeddings(train_loader, model)
plot_embeddings(train_embeddings_baseline, train_labels_baseline)
val_embeddings_baseline, val_labels_baseline = extract_embeddings(test_loader, model)
plot_embeddings(val_embeddings_baseline, val_labels_baseline)
###Output
_____no_output_____
###Markdown
Siamese networkWe'll train a siamese network that takes a pair of images and trains the embeddings so that the distance between them is minimized if they're from the same class, or greater than some margin value if they represent different classes.We'll minimize a contrastive loss function*:$$L_{contrastive}(x_0, x_1, y) = \frac{1}{2} y \lVert f(x_0)-f(x_1)\rVert_2^2 + \frac{1}{2}(1-y)\{\max(0, m-\lVert f(x_0)-f(x_1)\rVert_2)\}^2$$*Raia Hadsell, Sumit Chopra, Yann LeCun, [Dimensionality reduction by learning an invariant mapping](http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf), CVPR 2006* Steps1. Create a dataset returning pairs - **SiameseMNIST** class from *datasets.py*, wrapper for MNIST-like classes.2. Define **embedding** *(mapping)* network $f(x)$ - **EmbeddingNet** from *networks.py*3. Define **siamese** network processing pairs of inputs - **SiameseNet** wrapping *EmbeddingNet*4. Train the network with **ContrastiveLoss** - *losses.py*
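The `ContrastiveLoss` used below comes from *losses.py*; a minimal, direct transcription of the formula above could look like the sketch here (the small epsilon before the square root is an assumption added to keep the gradient finite at zero distance):

```python
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveLossSketch(nn.Module):
    """L = 1/2 * y * d^2 + 1/2 * (1 - y) * max(0, m - d)^2, with y = 1 for
    same-class pairs and d the euclidean distance between the two embeddings."""

    def __init__(self, margin):
        super().__init__()
        self.margin = margin

    def forward(self, output1, output2, target):
        squared_distances = (output2 - output1).pow(2).sum(1)
        distances = (squared_distances + 1e-9).sqrt()
        losses = 0.5 * (target.float() * squared_distances +
                        (1 - target.float()) * F.relu(self.margin - distances).pow(2))
        return losses.mean()
```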
###Code
# Set up data loaders
from datasets import SiameseMNIST
# Step 1
siamese_train_dataset = SiameseMNIST(train_dataset) # Returns pairs of images and target same/different
siamese_test_dataset = SiameseMNIST(test_dataset)
batch_size = 128
kwargs = {'num_workers': 1, 'pin_memory': True} if cuda else {}
siamese_train_loader = torch.utils.data.DataLoader(siamese_train_dataset, batch_size=batch_size, shuffle=True, **kwargs)
siamese_test_loader = torch.utils.data.DataLoader(siamese_test_dataset, batch_size=batch_size, shuffle=False, **kwargs)
# Set up the network and training parameters
from networks import EmbeddingNet, SiameseNet
from losses import ContrastiveLoss
# Step 2
embedding_net = EmbeddingNet()
# Step 3
model = SiameseNet(embedding_net)
if cuda:
model.cuda()
# Step 4
margin = 1.
loss_fn = ContrastiveLoss(margin)
lr = 1e-3
optimizer = optim.Adam(model.parameters(), lr=lr)
scheduler = lr_scheduler.StepLR(optimizer, 8, gamma=0.1, last_epoch=-1)
n_epochs = 20
log_interval = 500
fit(siamese_train_loader, siamese_test_loader, model, loss_fn, optimizer, scheduler, n_epochs, cuda, log_interval)
train_embeddings_cl, train_labels_cl = extract_embeddings(train_loader, model)
plot_embeddings(train_embeddings_cl, train_labels_cl)
val_embeddings_cl, val_labels_cl = extract_embeddings(test_loader, model)
plot_embeddings(val_embeddings_cl, val_labels_cl)
###Output
_____no_output_____
###Markdown
Triplet networkWe'll train a triplet network that takes an anchor, a positive (same class as the anchor) and a negative (different class than the anchor) example. The objective is to learn embeddings such that the anchor is closer to the positive example than it is to the negative example by some margin value.Source: [2] *Schroff, Florian, Dmitry Kalenichenko, and James Philbin. [Facenet: A unified embedding for face recognition and clustering.](https://arxiv.org/abs/1503.03832) CVPR 2015.***Triplet loss**: $L_{triplet}(x_a, x_p, x_n) = \max(0, m + \lVert f(x_a)-f(x_p)\rVert_2^2 - \lVert f(x_a)-f(x_n)\rVert_2^2)$ Steps1. Create a dataset returning triplets - **TripletMNIST** class from *datasets.py*, wrapper for MNIST-like classes2. Define **embedding** *(mapping)* network $f(x)$ - **EmbeddingNet** from *networks.py*3. Define **triplet** network processing triplets - **TripletNet** wrapping *EmbeddingNet*4. Train the network with **TripletLoss** - *losses.py*
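The `TripletLoss` used below is defined in *losses.py*; a minimal transcription of the hinge above (on squared distances, as written) might look like:

```python
import torch.nn as nn
import torch.nn.functional as F

class TripletLossSketch(nn.Module):
    """max(0, m + ||f(x_a) - f(x_p)||^2 - ||f(x_a) - f(x_n)||^2), averaged over the batch."""

    def __init__(self, margin):
        super().__init__()
        self.margin = margin

    def forward(self, anchor, positive, negative):
        d_ap = (anchor - positive).pow(2).sum(1)  # squared anchor-positive distance
        d_an = (anchor - negative).pow(2).sum(1)  # squared anchor-negative distance
        return F.relu(d_ap - d_an + self.margin).mean()
```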
###Code
# Set up data loaders
from datasets import TripletMNIST
triplet_train_dataset = TripletMNIST(train_dataset) # Returns triplets of images
triplet_test_dataset = TripletMNIST(test_dataset)
batch_size = 128
kwargs = {'num_workers': 1, 'pin_memory': True} if cuda else {}
triplet_train_loader = torch.utils.data.DataLoader(triplet_train_dataset, batch_size=batch_size, shuffle=True, **kwargs)
triplet_test_loader = torch.utils.data.DataLoader(triplet_test_dataset, batch_size=batch_size, shuffle=False, **kwargs)
# Set up the network and training parameters
from networks import EmbeddingNet, TripletNet
from losses import TripletLoss
margin = 1.
embedding_net = EmbeddingNet()
model = TripletNet(embedding_net)
if cuda:
model.cuda()
loss_fn = TripletLoss(margin)
lr = 1e-3
optimizer = optim.Adam(model.parameters(), lr=lr)
scheduler = lr_scheduler.StepLR(optimizer, 8, gamma=0.1, last_epoch=-1)
n_epochs = 20
log_interval = 500
fit(triplet_train_loader, triplet_test_loader, model, loss_fn, optimizer, scheduler, n_epochs, cuda, log_interval)
train_embeddings_tl, train_labels_tl = extract_embeddings(train_loader, model)
plot_embeddings(train_embeddings_tl, train_labels_tl)
val_embeddings_tl, val_labels_tl = extract_embeddings(test_loader, model)
plot_embeddings(val_embeddings_tl, val_labels_tl)
###Output
_____no_output_____
###Markdown
Online pair/triplet selection - negative miningThere are a couple of problems with siamese and triplet networks.1. The number of possible pairs/triplets grows **quadratically/cubically** with the number of examples. It's infeasible to process them all.2. We generate pairs/triplets randomly. As the training continues, more and more pairs/triplets are easy to deal with (their loss value is very small or even 0), preventing the network from training. We need to provide the network with **hard examples**.3. Each image that is fed to the network is used to compute the contrastive/triplet loss of only one pair/triplet. The computation is somewhat wasted; once the embedding is computed, it could be reused for many pairs/triplets.To deal with that efficiently, we'll feed the network with standard mini-batches as we did for classification. The loss function will be responsible for the selection of hard pairs and triplets within the mini-batch. In this case, if we feed the network with 16 images from each of 10 classes, we can process up to $160*159/2 = 12720$ pairs and $10*16*15/2*(9*16) = 172800$ triplets, compared to 80 pairs and 53 triplets in the previous implementation.We can find some strategies on how to select triplets in [2] and [3] *Alexander Hermans, Lucas Beyer, Bastian Leibe, [In Defense of the Triplet Loss for Person Re-Identification](https://arxiv.org/pdf/1703.07737), 2017* Online pair selection Steps1. Create a **BalancedBatchSampler** that samples $N$ classes and $M$ samples per class - *datasets.py*2. Create data loaders with the batch sampler3. Define the **embedding** *(mapping)* network $f(x)$ - **EmbeddingNet** from *networks.py*4. Define a **PairSelector** that takes embeddings and original labels and returns valid pairs within a minibatch5. Define **OnlineContrastiveLoss** that will use a *PairSelector* and compute *ContrastiveLoss* on such pairs6. Train the network!
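To make the mining step concrete, here is an illustrative, deliberately simplified hard-negative pair selector: it computes the full pairwise distance matrix of the batch embeddings, keeps every positive pair, and keeps an equal number of the closest negative pairs. The helper name and the exact selection rule are assumptions for illustration; the project's selectors live in *utils.py*:

```python
import torch

def hard_negative_pairs_sketch(embeddings, labels):
    """Return (positive_pairs, hard_negative_pairs) as index tensors of shape (P, 2).
    Assumes a balanced batch, so there are at least as many negative pairs as positive ones."""
    # Squared pairwise distances via ||a||^2 - 2 a.b + ||b||^2
    dot = embeddings.mm(embeddings.t())
    sq_norms = dot.diag()
    dist = sq_norms.unsqueeze(0) - 2 * dot + sq_norms.unsqueeze(1)

    n = labels.size(0)
    pairs = torch.combinations(torch.arange(n, device=labels.device), r=2)
    same = labels[pairs[:, 0]] == labels[pairs[:, 1]]
    positive_pairs = pairs[same]
    negative_pairs = pairs[~same]

    # Hardest negatives = the different-class pairs with the smallest distances
    neg_dist = dist[negative_pairs[:, 0], negative_pairs[:, 1]]
    hardest = neg_dist.topk(len(positive_pairs), largest=False).indices
    return positive_pairs, negative_pairs[hardest]
```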
###Code
from datasets import BalancedBatchSampler
# We'll create mini batches by sampling labels that will be present in the mini batch and number of examples from each class
train_batch_sampler = BalancedBatchSampler(train_dataset.train_labels, n_classes=10, n_samples=25)
test_batch_sampler = BalancedBatchSampler(test_dataset.test_labels, n_classes=10, n_samples=25)
kwargs = {'num_workers': 1, 'pin_memory': True} if cuda else {}
online_train_loader = torch.utils.data.DataLoader(train_dataset, batch_sampler=train_batch_sampler, **kwargs)
online_test_loader = torch.utils.data.DataLoader(test_dataset, batch_sampler=test_batch_sampler, **kwargs)
# Set up the network and training parameters
from networks import EmbeddingNet
from losses import OnlineContrastiveLoss
from utils import AllPositivePairSelector, HardNegativePairSelector # Strategies for selecting pairs within a minibatch
margin = 1.
embedding_net = EmbeddingNet()
model = embedding_net
if cuda:
model.cuda()
loss_fn = OnlineContrastiveLoss(margin, HardNegativePairSelector())
lr = 1e-3
optimizer = optim.Adam(model.parameters(), lr=lr)
scheduler = lr_scheduler.StepLR(optimizer, 8, gamma=0.1, last_epoch=-1)
n_epochs = 20
log_interval = 250
all_embeddings = fit(online_train_loader, online_test_loader, model, loss_fn, optimizer, scheduler, n_epochs, cuda, log_interval)
train_embeddings_ocl, train_labels_ocl = extract_embeddings(train_loader, model)
plot_embeddings(train_embeddings_ocl, train_labels_ocl)
val_embeddings_ocl, val_labels_ocl = extract_embeddings(test_loader, model)
plot_embeddings(val_embeddings_ocl, val_labels_ocl)
###Output
_____no_output_____
###Markdown
Online triplet selection Steps1. Create a **BalancedBatchSampler** that samples $N$ classes and $M$ samples per class - *datasets.py*2. Create data loaders with the batch sampler3. Define the **embedding** *(mapping)* network $f(x)$ - **EmbeddingNet** from *networks.py*4. Define a **TripletSelector** that takes embeddings and original labels and returns valid triplets within a minibatch5. Define **OnlineTripletLoss** that will use a *TripletSelector* and compute *TripletLoss* on such triplets6. Train the network!
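For reference, the overall shape of such a loss could look like the sketch below. The `get_triplets(embeddings, labels)` interface on the selector is an assumption made for illustration; the actual `OnlineTripletLoss` and the selectors are in *losses.py* and *utils.py*:

```python
import torch.nn as nn
import torch.nn.functional as F

class OnlineTripletLossSketch(nn.Module):
    """Mine (anchor, positive, negative) index triplets from the mini-batch
    with a selector, then average the triplet hinge loss over them."""

    def __init__(self, margin, triplet_selector):
        super().__init__()
        self.margin = margin
        self.triplet_selector = triplet_selector

    def forward(self, embeddings, labels):
        # (T, 3) index tensor of mined triplets -- assumed selector API
        triplets = self.triplet_selector.get_triplets(embeddings, labels)
        anchor = embeddings[triplets[:, 0]]
        positive = embeddings[triplets[:, 1]]
        negative = embeddings[triplets[:, 2]]
        d_ap = (anchor - positive).pow(2).sum(1)
        d_an = (anchor - negative).pow(2).sum(1)
        return F.relu(d_ap - d_an + self.margin).mean()
```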
###Code
from datasets import BalancedBatchSampler
# We'll create mini batches by sampling labels that will be present in the mini batch and number of examples from each class
train_batch_sampler = BalancedBatchSampler(train_dataset.train_labels, n_classes=10, n_samples=25)
test_batch_sampler = BalancedBatchSampler(test_dataset.test_labels, n_classes=10, n_samples=25)
kwargs = {'num_workers': 1, 'pin_memory': True} if cuda else {}
online_train_loader = torch.utils.data.DataLoader(train_dataset, batch_sampler=train_batch_sampler, **kwargs)
online_test_loader = torch.utils.data.DataLoader(test_dataset, batch_sampler=test_batch_sampler, **kwargs)
# Set up the network and training parameters
from networks import EmbeddingNet
from losses import OnlineTripletLoss
from utils import AllTripletSelector,HardestNegativeTripletSelector, RandomNegativeTripletSelector, SemihardNegativeTripletSelector # Strategies for selecting triplets within a minibatch
from metrics import AverageNonzeroTripletsMetric
margin = 1.
embedding_net = EmbeddingNet()
model = embedding_net
if cuda:
model.cuda()
loss_fn = OnlineTripletLoss(margin, RandomNegativeTripletSelector(margin))
lr = 1e-3
optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=1e-4)
scheduler = lr_scheduler.StepLR(optimizer, 8, gamma=0.1, last_epoch=-1)
n_epochs = 20
log_interval = 150
fit(online_train_loader, online_test_loader, model, loss_fn, optimizer, scheduler, n_epochs, cuda, log_interval, metrics=[AverageNonzeroTripletsMetric()])
train_embeddings_otl, train_labels_otl = extract_embeddings(train_loader, model)
plot_embeddings(train_embeddings_otl, train_labels_otl)
val_embeddings_otl, val_labels_otl = extract_embeddings(test_loader, model)
plot_embeddings(val_embeddings_otl, val_labels_otl)
display_emb_online, display_emb, display_label_online, display_label = train_embeddings_ocl, train_embeddings_cl, train_labels_ocl, train_labels_cl
# display_emb_online, display_emb, display_label_online, display_label = val_embeddings_ocl, val_embeddings_cl, val_labels_ocl, val_labels_cl
x_lim = (np.min(display_emb_online[:,0]), np.max(display_emb_online[:,0]))
y_lim = (np.min(display_emb_online[:,1]), np.max(display_emb_online[:,1]))
x_lim = (min(x_lim[0], np.min(display_emb[:,0])), max(x_lim[1], np.max(display_emb[:,0])))
y_lim = (min(y_lim[0], np.min(display_emb[:,1])), max(y_lim[1], np.max(display_emb[:,1])))
plot_embeddings(display_emb, display_label, x_lim, y_lim)
plot_embeddings(display_emb_online, display_label_online, x_lim, y_lim)
display_emb_online, display_emb, display_label_online, display_label = train_embeddings_otl, train_embeddings_tl, train_labels_otl, train_labels_tl
# display_emb_online, display_emb, display_label_online, display_label = val_embeddings_otl, val_embeddings_tl, val_labels_otl, val_labels_tl
x_lim = (np.min(display_emb_online[:,0]), np.max(display_emb_online[:,0]))
y_lim = (np.min(display_emb_online[:,1]), np.max(display_emb_online[:,1]))
x_lim = (min(x_lim[0], np.min(display_emb[:,0])), max(x_lim[1], np.max(display_emb[:,0])))
y_lim = (min(y_lim[0], np.min(display_emb[:,1])), max(y_lim[1], np.max(display_emb[:,1])))
plot_embeddings(display_emb, display_label, x_lim, y_lim)
plot_embeddings(display_emb_online, display_label_online, x_lim, y_lim)
###Output
_____no_output_____
###Markdown
ExperimentsWe'll go through learning feature embeddings using different loss functions on MNIST dataset. This is just for visualization purposes, thus we'll be using 2-dimensional embeddings which isn't the best choice in practice.For every experiment the same embedding network is used (32 conv 5x5 -> PReLU -> MaxPool 2x2 -> 64 conv 5x5 -> PReLU -> MaxPool 2x2 -> Dense 256 -> PReLU -> Dense 256 -> PReLU -> Dense 2) and we don't do any hyperparameter search. Prepare datasetWe'll be working on MNIST dataset
###Code
import torch
from torchvision.datasets import FashionMNIST
from torchvision import transforms
mean, std = 0.28604059698879553, 0.35302424451492237
batch_size = 256
train_dataset = FashionMNIST('../data/FashionMNIST', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((mean,), (std,))
]))
test_dataset = FashionMNIST('../data/FashionMNIST', train=False, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((mean,), (std,))
]))
cuda = torch.cuda.is_available()
kwargs = {'num_workers': 1, 'pin_memory': True} if cuda else {}
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False, **kwargs)
n_classes = 10
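# For reference, a sketch of an embedding network matching the architecture described above
# (32 conv 5x5 -> PReLU -> MaxPool 2x2 -> 64 conv 5x5 -> PReLU -> MaxPool 2x2 ->
# Dense 256 -> PReLU -> Dense 256 -> PReLU -> Dense 2). The experiments below use the
# EmbeddingNet class from networks.py; this sketch is illustrative, not that exact code.
import torch.nn as nn

class EmbeddingNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.convnet = nn.Sequential(
            nn.Conv2d(1, 32, 5), nn.PReLU(), nn.MaxPool2d(2, stride=2),
            nn.Conv2d(32, 64, 5), nn.PReLU(), nn.MaxPool2d(2, stride=2))
        self.fc = nn.Sequential(
            nn.Linear(64 * 4 * 4, 256), nn.PReLU(),
            nn.Linear(256, 256), nn.PReLU(),
            nn.Linear(256, 2))

    def forward(self, x):
        x = self.convnet(x)
        x = x.view(x.size(0), -1)  # flatten to (batch, 64*4*4) for 28x28 inputs
        return self.fc(x)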
###Output
_____no_output_____
###Markdown
Common setup
###Code
import torch
from torch.optim import lr_scheduler
import torch.optim as optim
from torch.autograd import Variable
from trainer import fit
import numpy as np
cuda = torch.cuda.is_available()
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
fashion_mnist_classes = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
colors = ['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728',
'#9467bd', '#8c564b', '#e377c2', '#7f7f7f',
'#bcbd22', '#17becf']
mnist_classes = fashion_mnist_classes
def plot_embeddings(embeddings, targets, xlim=None, ylim=None):
plt.figure(figsize=(10,10))
for i in range(10):
inds = np.where(targets==i)[0]
plt.scatter(embeddings[inds,0], embeddings[inds,1], alpha=0.5, color=colors[i])
if xlim:
plt.xlim(xlim[0], xlim[1])
if ylim:
plt.ylim(ylim[0], ylim[1])
plt.legend(mnist_classes)
def extract_embeddings(dataloader, model):
with torch.no_grad():
model.eval()
embeddings = np.zeros((len(dataloader.dataset), 2))
labels = np.zeros(len(dataloader.dataset))
k = 0
for images, target in dataloader:
if cuda:
images = images.cuda()
embeddings[k:k+len(images)] = model.get_embedding(images).data.cpu().numpy()
labels[k:k+len(images)] = target.numpy()
k += len(images)
return embeddings, labels
###Output
_____no_output_____
###Markdown
Triplet networkWe'll train a triplet network that takes an anchor, a positive (same class as anchor) and a negative (different class than anchor) example. The objective is to learn embeddings such that the anchor is closer to the positive example than it is to the negative example by some margin value.Source: [2] *Schroff, Florian, Dmitry Kalenichenko, and James Philbin. [Facenet: A unified embedding for face recognition and clustering.](https://arxiv.org/abs/1503.03832) CVPR 2015.***Triplet loss**: $L_{triplet}(x_a, x_p, x_n) = \max(0, m + \lVert f(x_a)-f(x_p)\rVert_2^2 - \lVert f(x_a)-f(x_n)\rVert_2^2)$ Steps1. Create a dataset returning triplets - **TripletMNIST** class from *datasets.py*, wrapper for MNIST-like classes2. Define **embedding** *(mapping)* network $f(x)$ - **EmbeddingNet** from *networks.py*3. Define **triplet** network processing triplets - **TripletNet** wrapping *EmbeddingNet*4. Train the network with **TripletLoss** - *losses.py*
###Code
# Set up data loaders
from datasets import TripletMNIST
triplet_train_dataset = TripletMNIST(train_dataset) # Returns triplets of images
triplet_test_dataset = TripletMNIST(test_dataset)
batch_size = 128
kwargs = {'num_workers': 1, 'pin_memory': True} if cuda else {}
triplet_train_loader = torch.utils.data.DataLoader(triplet_train_dataset, batch_size=batch_size, shuffle=True, **kwargs)
triplet_test_loader = torch.utils.data.DataLoader(triplet_test_dataset, batch_size=batch_size, shuffle=False, **kwargs)
# Set up the network and training parameters
from networks import EmbeddingNet, TripletNet
from losses import TripletLoss
margin = 1.
embedding_net = EmbeddingNet()
model = TripletNet(embedding_net)
if cuda:
model.cuda()
loss_fn = TripletLoss(margin)
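# A minimal sketch of the triplet loss defined in the markdown above, assuming batched
# embedding tensors (illustrative only; training below uses losses.TripletLoss).
import torch.nn.functional as F

def triplet_loss_sketch(anchor, positive, negative, margin=1.0):
    d_pos = (anchor - positive).pow(2).sum(1)  # ||f(x_a) - f(x_p)||^2
    d_neg = (anchor - negative).pow(2).sum(1)  # ||f(x_a) - f(x_n)||^2
    return F.relu(margin + d_pos - d_neg).mean()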
lr = 1e-3
optimizer = optim.Adam(model.parameters(), lr=lr)
scheduler = lr_scheduler.StepLR(optimizer, 8, gamma=0.1, last_epoch=-1)
n_epochs = 20
log_interval = 500
fit(triplet_train_loader, triplet_test_loader, model, loss_fn, optimizer, scheduler, n_epochs, cuda, log_interval)
train_embeddings_tl, train_labels_tl = extract_embeddings(train_loader, model)
plot_embeddings(train_embeddings_tl, train_labels_tl)
val_embeddings_tl, val_labels_tl = extract_embeddings(test_loader, model)
plot_embeddings(val_embeddings_tl, val_labels_tl)
###Output
_____no_output_____ |
pgm/scatter.ipynb | ###Markdown
testing graph
###Code
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0,1,100)
y = np.linspace(0,5,100)
np.random.shuffle(x)
plt.scatter(x,y)
plt.bar(x,y)
###Output
_____no_output_____ |
gantry-jupyterhub/time_series/pd_arima/pd_arima.ipynb | ###Markdown
Time Series Forecasting. Reference site: [Time Series Forecasting](https://h3imdallr.github.io/2017-08-19/arima/). Analysis of gantry CPU usage using time series with ARIMA
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# EDA: inspect the time series data
import datetime
from dateutil.relativedelta import relativedelta
import statsmodels
import statsmodels.api as sm
from statsmodels.tsa.stattools import acf
from statsmodels.tsa.stattools import pacf
from statsmodels.tsa.seasonal import seasonal_decompose
df = pd.read_csv('../csv_data/datasets_20066_26008_portland-oregon-average-monthly-.csv', index_col='Month')
#df = pd.read_csv('../../../csv_data/prophet_data.csv', index_col='date')
df.head()
df.dropna(axis=0, inplace=True)
df.columns = ['ridership']
print(df.head(), '\n...\n', df.tail())
df = df.iloc[:-1]
print(df.tail())
df.index = pd.to_datetime(df.index)
type(df.index);
print(df.head(), '\n...\n', df.tail())
time_window_l = datetime.datetime(2020, 6, 29)
time_window_r = datetime.datetime(2020, 7, 10)
temp_df = df[
(df.index >= time_window_l)
& (df.index <= time_window_r)
]
print(temp_df)
temp_df = df[:time_window_l]
print(temp_df)
df['ridership'] = df['ridership'].astype(float)
print(df.dtypes)
df.plot()
decomposition = seasonal_decompose(df['ridership'], period=12)
plt.rcParams['figure.figsize'] = [10,10]
fig = plt.figure()
fig = decomposition.plot()
from statsmodels.tsa.stattools import adfuller
def test_stationarity(timeseries):
rolmean = timeseries.rolling(12).mean()
rolstd = timeseries.rolling(12).std()
fig = plt.figure(figsize=(10, 6 ))
orig = plt.plot(timeseries, color='blue', label='Original')
mean = plt.plot(rolmean, color='red', label='Rolling Mean')
std = plt.plot(rolstd, color='black', label = 'Rolling Std')
plt.legend(loc='best'); plt.title('Rolling Mean & Standard Deviation')
plt.show()
print('<Results of Dickey-Fuller Test>')
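    # Augmented Dickey-Fuller test: the null hypothesis is that the series has a unit
    # root (i.e. is non-stationary); a small p-value lets us reject non-stationarity.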
dftest = adfuller(timeseries, autolag='AIC')
dfoutput = pd.Series(dftest[0:4],
index = ['Test Statistic', 'p-value', '#Lags Used', 'Number of Observations Used'])
for key,value in dftest[4].items():
dfoutput['Critical Value (%s)'%key] = value
print(dfoutput)
test_stationarity(df['ridership'])
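# First-order differencing removes the trend; the additional lag-12 (seasonal) difference
# below removes the yearly seasonality before fitting a seasonal ARIMA model.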
df['first_difference'] = df['ridership'] - df['ridership'].shift(1)
test_stationarity(df.first_difference.dropna(inplace=False))
df['seasonal_first_difference'] = df['first_difference'] - df['first_difference'].shift(12)
test_stationarity(df.seasonal_first_difference.dropna(inplace=False))
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(df.seasonal_first_difference.iloc[13:], lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(df.seasonal_first_difference.iloc[13:],lags=40,ax=ax2)
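# SARIMAX order is (p, d, q) x (P, D, Q, s): here one regular difference (d=1) and one
# seasonal difference at s=12 (D=1) with seasonal AR and MA terms, matching the
# differencing applied above.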
mod = sm.tsa.SARIMAX(df['ridership'],order=(0,1,0), seasonal_order=(1,1,1,12))
results = mod.fit()
print (results.summary())
plt.rcParams['figure.figsize'] = [10,5]
results.plot_diagnostics();
plt.tight_layout(pad=0.4, w_pad=0.5, h_pad=1.0);
# Time series forecast: predict the last 12 months in-sample
df['forecast'] = results.predict(start = len(df)-12, end= len(df), dynamic= True)
df[['ridership', 'forecast']].plot()
df[-24:]
# Forecast the last 24 months
df['forecast'] = results.predict(start = len(df)-24, end= len(df), dynamic= True)
df[['ridership', 'forecast']].plot()
#start = datetime.datetime.strptime("2020-07-01", "%Y-%m-%d")
## >2020-07-01 00:00:00
#date_list = [start + relativedelta(months=x) for x in range(0,12)]
##> 1982/7/1,8/1, ... 1983/6/1
#future_df = pd.DataFrame(index=date_list, columns= df.columns)
#new_df = pd.concat([df, future_df]) #concatenated dataframe
## print(new_df.head(),'\n...\n',new_df.tail())
#new_df['forecast'] = results.predict(start = len(df), end = len(df)+11, dynamic= True)
#new_df[['ridership', 'forecast']].ix[-48:].plot()
#print(df.forecast[-12:])
###Output
_____no_output_____ |
codes/labs_lecture03/lab01_python/python_introduction.ipynb | ###Markdown
Lab 01: Introduction to Python Strings
###Code
s1='hello'
print(s1)
type(s1)
s2='world'
print(s1,s2)
print(s1,'\t',s2)
print(s1.upper())
###Output
HELLO
###Markdown
Numbers
###Code
x=5
y=-0.3
print(x)
print(y)
type(x)
type(y)
z=x+y
print(z)
print('x+y =' , z)
x=2.35789202950400
print('x={:2.5f}, x={:.1f}'.format(x,x))
###Output
x=2.35789, x=2.4
###Markdown
Boolean
###Code
b1=True
b2=False
print(b1)
print(b2)
type(b1)
###Output
_____no_output_____
###Markdown
Loops
###Code
for i in range(0,10):
print(i)
for i in range(1,10):
print('i=', i , '\t i^2=', i**2)
###Output
i= 1 i^2= 1
i= 2 i^2= 4
i= 3 i^2= 9
i= 4 i^2= 16
i= 5 i^2= 25
i= 6 i^2= 36
i= 7 i^2= 49
i= 8 i^2= 64
i= 9 i^2= 81
###Markdown
Loops with step size
###Code
for i in range(0,10,2):
print(i)
###Output
0
2
4
6
8
###Markdown
If statement
###Code
x=5
if x == 0:
print('x equal zero')
else:
print('x is non zero')
loud=False
mystring='hello'
if loud:
print(mystring.upper())
else:
print(mystring)
###Output
hello
###Markdown
Indentation
###Code
for i in range(-3,3):
if i<0:
print(i , ' is negative')
elif i>0:
print(i,' is positive' )
else:
print(i,' is equal to zero')
print('out of the loop')
for i in range(-3,3):
if i<0:
print(i , ' is negative')
elif i>0:
print(i,' is positive' )
else:
print(i,' is equal to zero')
print('in the loop')
for i in range(-3,3):
if i<0:
print(i , ' is negative')
elif i>0:
print(i,' is positive' )
else:
print(i,' is equal to zero')
print('in the else statement')
###Output
-3 is negative
-2 is negative
-1 is negative
0 is equal to zero
in the else statement
1 is positive
2 is positive
###Markdown
Load libraries List all library modules: lib. + Tab. List properties of a module: lib.abs + Shift + Tab
###Code
import numpy as np
np.abs
###Output
_____no_output_____
###Markdown
Functions
###Code
def myfun(x,y):
z = x**2 + y
return z
z=myfun(3,1)
print(z)
###Output
10
###Markdown
Functions are just another type of object, like strings, floats, or boolean
###Code
type(myfun)
###Output
_____no_output_____
###Markdown
The sign function
###Code
def sign(x):
if x > 0:
mystring='positive'
elif x < 0:
mystring= 'negative'
else:
mystring= 'zero'
return mystring
string= sign(0.5)
print(string)
###Output
positive
###Markdown
The greeting function
###Code
def hello(name, loud):
if loud:
print('HELLO', name.upper(), '!')
else:
print('Hello', name)
hello('Bob', False)
hello('Fred', True) # Prints "HELLO, FRED!"
###Output
Hello Bob
HELLO FRED !
###Markdown
Classes A simple class for complex numbers : real + i imag
###Code
class Complex:
# constructor:
def __init__( self, real_part, imaginary_part ):
self.re = real_part
self.im = imaginary_part
# methods/functions:
def display(self):
print( self.re,'+',self.im,'i')
def compute_modulus(self):
modulus = ( self.re**2 + self.im**2 )**(1/2)
return modulus
def multiply_by(self,scalar):
self.re = self.re * scalar
self.im = self.im * scalar
# Create an instance
z=Complex(4,3)
print(z)
# Access the attributes
print(z.re)
print(z.im)
# Use the display() method
z.display()
# Compute the modulus
m=z.compute_modulus()
print(m)
# Change the attributes
z.multiply_by(5)
z.display()
print(z.re)
###Output
20 + 15 i
20
###Markdown
Optional Load and save data
###Code
pwd
ls -al
cd ..
pwd
cd lab01_python
data = np.loadtxt('../data/misc/profit_population.txt', delimiter=',')
print(data)
new_data = 2* data
np.savetxt('../data/misc/profit_population_new.txt', new_data, delimiter=',', fmt='%2.5f')
%whos
###Output
Variable Type Data/Info
--------------------------------
Complex type <class '__main__.Complex'>
b1 bool True
b2 bool False
data ndarray 97x2: 194 elems, type `float64`, 1552 bytes
hello function <function hello at 0x10544b400>
i int 2
loud bool False
m float 5.0
myfun function <function myfun at 0x105447c80>
mystring str hello
new_data ndarray 97x2: 194 elems, type `float64`, 1552 bytes
np module <module 'numpy' from '/Us<...>kages/numpy/__init__.py'>
s1 str hello
s2 str world
sign function <function sign at 0x10544b268>
string str positive
x int 5
y float -0.3
z Complex <__main__.Complex object at 0x10544a710>
###Markdown
Plotting data
###Code
# Visualization library
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('png2x','pdf')
import matplotlib.pyplot as plt
x = np.linspace(0,6*np.pi,100)
#print(x)
y = np.sin(x)
plt.figure(1)
plt.plot(x, y,label='sin'.format(i=1))
plt.legend(loc='best')
plt.title('Sin plotting')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
###Output
_____no_output_____
###Markdown
Vectorization and efficient linear algebra computations No vectorization
###Code
import time
n = 10**7
x = np.linspace(0,1,n)
y = np.linspace(0,2,n)
start = time.time()
z = 0
for i in range(len(x)):
z += x[i]*y[i]
end = time.time() - start
print(z)
print('Time=',end)
###Output
6666666.999999491
Time= 3.809494733810425
###Markdown
Vectorization
###Code
start = time.time()
z = x.T.dot(y)
end = time.time() - start
print(z)
print('Time=',end)
###Output
6666667.000000033
Time= 0.010256052017211914
###Markdown
Lab 03.01: Introduction to Python Strings
###Code
s1='hello'
print(s1)
type(s1)
s2='world'
print(s1,s2)
print(s1,'\t',s2)
print(s1.upper())
###Output
HELLO
###Markdown
Numbers
###Code
x=5
y=-0.3
print(x)
print(y)
type(x)
type(y)
z=x+y
print(z)
print('x+y =' , z)
x=2.35789202950400
print('x={:2.5f}, x={:.1f}'.format(x,x))
###Output
x=2.35789, x=2.4
###Markdown
Boolean
###Code
b1=True
b2=False
print(b1)
print(b2)
type(b1)
###Output
_____no_output_____
###Markdown
Loops
###Code
for i in range(0,10):
print(i)
for i in range(1,10):
print('i=', i , '\t i^2=', i**2)
###Output
i= 1 i^2= 1
i= 2 i^2= 4
i= 3 i^2= 9
i= 4 i^2= 16
i= 5 i^2= 25
i= 6 i^2= 36
i= 7 i^2= 49
i= 8 i^2= 64
i= 9 i^2= 81
###Markdown
Loops with step size
###Code
for i in range(0,10,2):
print(i)
###Output
0
2
4
6
8
###Markdown
If statement
###Code
x=5
if x == 0:
print('x equal zero')
else:
print('x is non zero')
loud=False
mystring='hello'
if loud:
print(mystring.upper())
else:
print(mystring)
###Output
hello
###Markdown
Indentation
###Code
for i in range(-3,3):
if i<0:
print(i , ' is negative')
elif i>0:
print(i,' is positive' )
else:
print(i,' is equal to zero')
print('out of the loop')
for i in range(-3,3):
if i<0:
print(i , ' is negative')
elif i>0:
print(i,' is positive' )
else:
print(i,' is equal to zero')
print('in the loop')
for i in range(-3,3):
if i<0:
print(i , ' is negative')
elif i>0:
print(i,' is positive' )
else:
print(i,' is equal to zero')
print('in the else statement')
###Output
-3 is negative
-2 is negative
-1 is negative
0 is equal to zero
in the else statement
1 is positive
2 is positive
###Markdown
Load libraries List all library modules: lib. + Tab. List properties of a module: lib.abs + Shift + Tab
###Code
import numpy as np
np.abs
###Output
_____no_output_____
###Markdown
Functions
###Code
def myfun(x,y):
z = x**2 + y
return z
z=myfun(3,1)
print(z)
###Output
10
###Markdown
Functions are just another type of object, like strings, floats, or boolean
###Code
type(myfun)
###Output
_____no_output_____
###Markdown
The sign function
###Code
def sign(x):
if x > 0:
mystring='positive'
elif x < 0:
mystring= 'negative'
else:
mystring= 'zero'
return mystring
string= sign(0.5)
print(string)
###Output
positive
###Markdown
The greeting function
###Code
def hello(name, loud):
if loud:
print('HELLO', name.upper(), '!')
else:
print('Hello', name)
hello('Bob', False)
hello('Fred', True) # Prints "HELLO, FRED!"
###Output
Hello Bob
HELLO FRED !
###Markdown
Classes A simple class for complex numbers : real + i imag
###Code
class Complex:
# constructor:
def __init__( self, real_part, imaginary_part ):
self.re = real_part
self.im = imaginary_part
# methods/functions:
def display(self):
print( self.re,'+',self.im,'i')
def compute_modulus(self):
modulus = ( self.re**2 + self.im**2 )**(1/2)
return modulus
def multiply_by(self,scalar):
self.re = self.re * scalar
self.im = self.im * scalar
# Create an instance
z=Complex(4,3)
print(z)
# Access the attributes
print(z.re)
print(z.im)
# Use the display() method
z.display()
# Compute the modulus
m=z.compute_modulus()
print(m)
# Change the attributes
z.multiply_by(5)
z.display()
print(z.re)
###Output
20 + 15 i
20
###Markdown
Optional Load and save data
###Code
pwd
ls -al
cd ..
pwd
cd lab01_python
data = np.loadtxt('../data/misc/profit_population.txt', delimiter=',')
print(data)
new_data = 2* data
np.savetxt('../data/misc/profit_population_new.txt', new_data, delimiter=',', fmt='%2.5f')
%whos
###Output
Variable Type Data/Info
--------------------------------
Complex type <class '__main__.Complex'>
b1 bool True
b2 bool False
data ndarray 97x2: 194 elems, type `float64`, 1552 bytes
hello function <function hello at 0x109e14620>
i int 2
loud bool False
m float 5.0
myfun function <function myfun at 0x109e0ee18>
mystring str hello
new_data ndarray 97x2: 194 elems, type `float64`, 1552 bytes
np module <module 'numpy' from '/Us<...>kages/numpy/__init__.py'>
s1 str hello
s2 str world
sign function <function sign at 0x109e14158>
string str positive
x int 5
y float -0.3
z Complex <__main__.Complex object at 0x109e11710>
###Markdown
Plotting data
###Code
# Visualization library
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('png2x','pdf')
import matplotlib.pyplot as plt
x = np.linspace(0,6*np.pi,100)
#print(x)
y = np.sin(x)
plt.figure(1)
plt.plot(x, y,label='sin'.format(i=1))
plt.legend(loc='best')
plt.title('Sin plotting')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
###Output
_____no_output_____
###Markdown
Vectorization and efficient linear algebra computations No vectorization
###Code
import time
n = 10**7
x = np.linspace(0,1,n)
y = np.linspace(0,2,n)
start = time.time()
z = 0
for i in range(len(x)):
z += x[i]*y[i]
end = time.time() - start
print(z)
print('Time=',end)
###Output
6666667.0
Time= 3.323513984680176
###Markdown
Vectorization
###Code
start = time.time()
z = x.T.dot(y)
end = time.time() - start
print(z)
print('Time=',end)
###Output
6666667.0
Time= 0.015230894088745117
|
cleaning data in python/Exploring your data/.ipynb_checkpoints/05. Visualizing multiple variables with boxplots-checkpoint.ipynb | ###Markdown
Visualizing multiple variables with boxplotsHistograms are great ways of visualizing single variables. To visualize multiple variables, boxplots are useful, especially when one of the variables is categorical.In this exercise, your job is to use a boxplot to compare the 'initial_cost' across the different values of the 'Borough' column. The pandas .boxplot() method is a quick way to do this, in which you have to specify the column and by parameters. Here, you want to visualize how 'initial_cost' varies by 'Borough'.pandas and matplotlib.pyplot have been imported for you as pd and plt, respectively, and the DataFrame has been pre-loaded as df. Instructions- Using the .boxplot() method of df, create a boxplot of 'initial_cost' across the different values of 'Borough'.- Display the plot.
###Code
# Import pandas
import pandas as pd
# Read the file into a DataFrame: df
df = pd.read_csv('dob_job_application_filings_subset.csv')
# Import necessary modules
import pandas as pd
import matplotlib.pyplot as plt
# Create the boxplot
df.boxplot(column='Doc #', by='Lot', rot=90)
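# The call described in the exercise text above would be the following (assuming the
# DataFrame actually contains 'initial_cost' and 'Borough' columns):
# df.boxplot(column='initial_cost', by='Borough', rot=90)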
# Display the plot
plt.show()
###Output
_____no_output_____ |
mini-projects/aic-5_1_10-api/api_data_wrangling_mini_project.ipynb | ###Markdown
This exercise will require you to pull some data from the Quandl API. Quandl is currently the most widely used aggregator of financial market data. As a first step, you will need to register a free account on the http://www.quandl.com website. After you register, you will be provided with a unique API key that you should store:
###Code
# Store the API key as a string - according to PEP8, constants are always named in all upper case
API_KEY = 'VHstzZDzTg_hokZ4xq4J'
###Output
_____no_output_____
###Markdown
Quandl has a large number of data sources, but, unfortunately, most of them require a Premium subscription. Still, there are also a good number of free datasets. For this mini project, we will focus on equities data from the Frankfurt Stock Exchange (FSE), which is available for free. We'll try to analyze the stock prices of a company called Carl Zeiss Meditec, which manufactures tools for eye examinations, as well as medical lasers for laser eye surgery: https://www.zeiss.com/meditec/int/home.html. The company is listed under the stock ticker AFX_X. You can find the detailed Quandl API instructions here: https://docs.quandl.com/docs/time-series While there is a dedicated Python package for connecting to the Quandl API, we would prefer that you use the *requests* package, which can be easily downloaded using *pip* or *conda*. You can find the documentation for the package here: http://docs.python-requests.org/en/master/ Finally, apart from the *requests* package, you are encouraged to not use any third party Python packages, such as *pandas*, and instead focus on what's available in the Python Standard Library (the *collections* module might come in handy: https://pymotw.com/3/collections/). Also, since you won't have access to DataFrames, you are encouraged to use Python's native data structures - preferably dictionaries, though some questions can also be answered using lists. You can read more on these data structures here: https://docs.python.org/3/tutorial/datastructures.html Keep in mind that the JSON responses you will be getting from the API map almost one-to-one to Python's dictionaries. Unfortunately, they can be very nested, so make sure you read up on indexing dictionaries in the documentation provided above.
###Code
# First, import the relevant modules
import requests
import pprint
import json
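# Nested JSON maps onto nested dicts/lists; values are reached by chained indexing.
# A toy example (illustrative data only, not real market data):
nested = {'dataset': {'column_names': ['Date', 'Open'], 'data': [['2017-01-02', 34.99]]}}
first_open = nested['dataset']['data'][0][1]  # -> 34.99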
# Now, call the Quandl API and pull out a small sample of the data (only one day) to get a glimpse
# into the JSON structure that will be returned
# Detailed info: https://www.quandl.com/data/FSE/AFX_X-Carl-Zeiss-Meditec-AFX_X
response = requests.get(f'https://www.quandl.com/api/v3/datasets/FSE/AFX_X.json?api_key={API_KEY}&start_date=2017-01-01&end_date=2017-12-31')
# Detect the object type of the returned 'response' variable
type(response)
# Inspect available attributes and methods offered by 'response'
dir(response)
# Inspect the JSON structure of the object you created, and take note of how nested it is,
# as well as the overall structure
json_response = response.json()
pprint.pprint(json_response)
# Obtain the whole dataset
dataset = json_response.get('dataset')
# Obtain column names
column_names = dataset.get('column_names')
print(column_names)
# Store each row of the JSON response's data as a dict mapping column names to values
table = []
for row in dataset.get('data'):
table.append(dict(zip(column_names, row)))
pprint.pprint(table)
highest = table[0].get(column_names[2]) # High
lowest = table[0].get(column_names[3]) # Low
largest_1day = highest - lowest
largest_2days = 0
average = table[0].get('Traded Volume')
volumes = [average]
table_len = len(table)
for i in range(1, table_len): # skip the first one
# Calculate what the highest and lowest opening prices were for the stock in this period
high = table[i].get('High')
if highest < high:
highest = high
low = table[i].get('Low')
if lowest > low:
lowest = low
# What was the largest change in any one day (based on High and Low price)?
large_1day = high - low
if largest_1day < large_1day:
largest_1day = large_1day
# Largest change between any two days (based on Closing Price)?
prev_close = table[i - 1].get('Close')
close = table[i].get('Close')
large_2days = abs(close - prev_close)
if largest_2days < large_2days:
largest_2days = large_2days
# Construct volumes list
volume = table[i].get('Traded Volume')
volumes.append(volume)
# Sum daily trading volume
average += volume
print('Highest:', highest) # 53.54
print('Lowest:', lowest) # 33.62
print('Largest change in a day:', '%.2f' % largest_1day)
print('Largest change between two days:', '%.2f' % largest_2days)
print('Average daily trading volume:', '%.2f' % (average / table_len))
# What was the median trading volume during the year
volumes.sort()
middle = (table_len - 1) // 2
if table_len % 2 == 1: # odd size
median = volumes[middle]
else: # even size
median = (volumes[middle] + volumes[middle + 1]) / 2.0
print('Median trading volume during the year:', '%.2f' % median)
###Output
Median trading volume during the year: 76286.00
###Markdown
These are your tasks for this mini project:1. Collect data from the Frankfurt Stock Exchange, for the ticker AFX_X, for the whole year 2017 (keep in mind that the date format is YYYY-MM-DD).2. Convert the returned JSON object into a Python dictionary.3. Calculate what the highest and lowest opening prices were for the stock in this period.4. What was the largest change in any one day (based on High and Low price)?5. What was the largest change between any two days (based on Closing Price)?6. What was the average daily trading volume during this year?7. (Optional) What was the median trading volume during this year? (Note: you may need to implement your own function for calculating the median.) Verify the result using 3rd party libraries
###Code
# List installed packages
# help('modules') # Domino failed to run this command
# Run once for new environment
# ! pip install quandl
import pandas as pd
import quandl
# quandl returns a Pandas DataFrame
data = quandl.get("FSE/AFX_X", authtoken="VHstzZDzTg_hokZ4xq4J", start_date='2017-01-01', end_date='2017-12-31')
data.head()
# Highest opening price: max
data['High'].describe()
# Lowest closing price: min
data['Low'].describe()
# What was the largest change in any one day: top row
change_1day = data['High'] - data['Low']
change_1day.sort_values(ascending=False, inplace=True)
change_1day.head()
# What was the largest change between any two days:
data['Close'].diff().abs().max()
# What was the average (mean) and median (50%) of the trading volume during this year?
data['Traded Volume'].describe()
###Output
_____no_output_____ |
HeroesOfPymoli/HeroesOfPymoli_Pandas_Final.ipynb | ###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file_to_load = "purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file_to_load)
purchase_data.head()
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
#Finding the unique count of players using nunique function
unique_count = purchase_data["SN"].nunique(dropna = True)
#Creating a DataFrame for the unique value count of players
Total_players = pd.DataFrame([{"Total Players": unique_count }])
#Printing the DataFrame
Total_players
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#Calculating unique count of Items
unique_item = purchase_data["Item Name"].nunique(dropna = True)
# Calculating average price of the items from the given data and rounding it by 2 decimals
average_price = round(purchase_data["Price"].mean() , 2)
# Calculating the total number of unique purchases
purchase_count = purchase_data["Purchase ID"].nunique(dropna = True)
#Calculating the total revenue
Total_rev = purchase_data["Price"].sum()
#Defining the DataFrame
purchasing_analysis = pd.DataFrame ( {
"Number of Unique Items" : [unique_item],
"Average Price" : "$" + str(average_price),
"Number of Purchases": [purchase_count],
"Total Revenue" : "$" + str(Total_rev)
})
#Printing the DataFrame for summary
purchasing_analysis
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
#Finding the count of unique Players by doing Group by Gender
unique_gender = purchase_data.groupby("Gender") ["SN"].nunique(dropna = True)
#Creating a DataFrame
count_players = pd.DataFrame(unique_gender)
#Renaming the columns of DataFrame
count_players = count_players.rename(columns={"SN" : "Total Count"})
#Calculating the sum of Total unique players
sum = count_players["Total Count"].sum()
#Calculating the percentage of players by Gender
player_percent = count_players["Total Count"] / sum * 100
#Rounding the percentage of players to 2 decimals
count_players["Percentage of Players"] = round(player_percent, 2)
#Formatting the percentage column to add " % " at the end
count_players.style.format({"Percentage of Players" : "{:,.2f}%"} )
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#Calculating the total purchases
purchase_sum = purchase_data.groupby("Gender") ["Price"].sum()
purchase_sum
#Calculating the purchase average price by Gender and rounding it to 2 Decimals
purchase_avg = round(purchase_data.groupby("Gender") ["Price"].mean() , 2)
purchase_avg
#Counting the unique purchase ID's by Gender
purchase_count = purchase_data.groupby("Gender") ["Purchase ID"].nunique(dropna = True)
purchase_count
count = purchase_data.groupby("Gender") ["Price"].mean()
#Calculating and Rounding the Average Purchase per person to 2 decimal
avg_purchase_per_person = round(purchase_sum/unique_gender, 2)
#Defining the dataframe and adding columns for Purchase avg, purchase count and avg purchase per person
df1 = pd.DataFrame(purchase_sum)
df1 ["Average Purchase Price"] = purchase_avg
df1 ["Purchase count"] = purchase_count
df1 ["Avg Total Purchase per Person"] = avg_purchase_per_person
#Renaming columns and creating a new DataFrame for final output
Pur_analysis_gender = df1.rename(columns={"Price": "Total Purchase Value", })
#Formatting the currency columns to add "$" at the front of the values
Pur_analysis_gender.style.format({"Total Purchase Value": "${:,.2f}",
"Average Purchase Price": "${:,.2f}",
"Avg Total Purchase per Person": "${:,.2f}" })
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
#Define the bins for age groups
bins_age = [0, 9.99, 14, 19, 24, 29, 34, 39, 100]
#Define the lables group names
grouped_name = ["< 10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
#Adding Age group classification column and cut it to group into buckets
purchase_data["Age Group"] = pd.cut(purchase_data["Age"],bins_age, labels=grouped_name)
#Aggregating all the data sets by using group by on Age Group
age_group = purchase_data.groupby("Age Group")
#Counting the unique players (by SN) in each age group
total_age = age_group["SN"].nunique()
#Calculating the percentage of age group
percentage_age = (total_age/sum ) * 100
#Defining the Data Frame
age_demographics = pd.DataFrame({"Total Count":total_age,"Percentage of Players": percentage_age})
#Index name set to None before displaying the final summary
age_demographics.index.name = None
#Formatting the percentage column to add "%" at the end
age_demographics.style.format({"Percentage of Players":"{:,.2f}%"})
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#Define the bins for age groups
bins_age = [0, 9.99, 14, 19, 24, 29, 34, 39, 100]
#Define the lables group names
grouped_name = [">10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
#Aggregating all the data sets by using group by on Age Group
age_group = purchase_data.groupby("Age Group")
#Calculating the count of purchase ID by age group.
pur_age_count = age_group["Purchase ID"].count()
#Calculating the Avg Purchase Price i.e mean on Price by age
Avg_pur_price_age = age_group["Price"].mean()
#Calculating the total purchase value
total_purchase_value = age_group["Price"].sum()
#Calculating the avg purchase price age total
Avg_pur_price_age_tot = total_purchase_value/total_age
#Defining the DataFrame for Final output
age_pur_analysis = pd.DataFrame({"Purchase Count": pur_age_count,
"Average Purchase Price": Avg_pur_price_age,
"Total Purchase Value":total_purchase_value,
"Average Purchase Total per Person": Avg_pur_price_age_tot})
#Index name set to None before displaying the final summary
age_pur_analysis.index.name = None
#Formatting the currency columns to add "$" at the front of the values
age_pur_analysis.style.format({"Average Purchase Price":"${:,.2f}","Total Purchase Value":"${:,.2f}",
"Average Purchase Total per Person":"${:,.2f}"})
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
#Aggregating the Data by SN
top_spenders = purchase_data.groupby("SN")
#Counting the Spenders by purchase ID
pur_spender = top_spenders["Purchase ID"].count()
#Calculating the avg price mean
avg_price_spender = top_spenders["Price"].mean()
#Calculating the total spenders
purchase_total_spender = top_spenders["Price"].sum()
#Defining the data frame and assigning values
top_spenders = pd.DataFrame({"Purchase Count": pur_spender,
"Average Purchase Price": avg_price_spender,
"Total Purchase Value":purchase_total_spender})
#Displaying the sorted value ( top 5 )
top_spenders_summary = top_spenders.sort_values(["Total Purchase Value"], ascending=False).head()
#Formatting the currency columns to add "$" at the front of the values
top_spenders_summary.style.format({"Average Purchase Total":"${:,.2f}","Average Purchase Price":"${:,.2f}","Total Purchase Value":"${:,.2f}"})
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
#Retrieving the Item ID, Item Name, and Item Price columns in a data Frame
item_name = purchase_data[["Item ID", "Item Name", "Price"]]
#Aggregating and grouping by Item ID and Item Name
item_group = item_name.groupby(["Item ID","Item Name"])
#Calculating the count of items on the aggregated data
purchasing_item = item_group["Price"].count()
#Calculating the total sum of purchases on the aggregated data
purchasing_value = (item_group["Price"].sum())
# Calculating the Price per item
price = purchasing_value/purchasing_item
#Creating a Data Frame and adding values to display the final summary
popular_items = pd.DataFrame({"Purchase Count": purchasing_item,
"Item Price": price,
"Total Purchase Value":purchasing_value})
#Sorting the output by Purchase count and displaying the top 5
most_popular_item = popular_items.sort_values(["Purchase Count"], ascending=False).head()
#Formatting the currency columns to add "$" at the front of the values
most_popular_item.style.format({"Item Price":"${:,.2f}","Total Purchase Value":"${:,.2f}"})
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
#Sort the above Data Frame by total purchase value in descending order
most_profitable_item = popular_items.sort_values(["Total Purchase Value"],ascending=False).head()
#Formatting the currency columns to add "$" at the front of the values
most_profitable_item.style.format({"Item Price":"${:,.2f}","Total Purchase Value":"${:,.2f}"})
###Output
_____no_output_____ |
501-EVAL-1dCNN.ipynb | ###Markdown
Functions
###Code
def transform_target(target):
return (np.log(target + EPS) - MEAN) / STD
def inverse_target(target):
return np.exp(MEAN + STD * target) - EPS
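# Quick sanity check (assuming EPS, MEAN and STD are defined earlier in the notebook):
# the two transforms above should be inverses of each other up to floating-point error.
# _vals = np.array([1e-4, 5e-3, 2e-2])
# assert np.allclose(inverse_target(transform_target(_vals)), _vals)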
def np_rmspe(y_true, y_pred):
y_true = inverse_target(y_true)
y_pred = inverse_target(y_pred)
return np.sqrt(np.mean(np.square((y_true - y_pred) / y_true)))
def mspe_loss(y_true, y_pred):
y_true = K.exp(MEAN + STD * y_true) - EPS
y_pred = K.exp(MEAN + STD * y_pred) - EPS
return K.sqrt(K.mean(K.square((y_true - y_pred) / y_true)))
def rmspe_keras(y_true, y_pred):
return K.sqrt(K.mean(K.square((y_true - y_pred) / y_true)))
def create_1dcnn(num_columns, num_labels, learning_rate):
# input
inp = tf.keras.layers.Input(shape=(num_columns,))
x = tf.keras.layers.BatchNormalization()(inp)
# 1dcnn
x = tf.keras.layers.Dense(256, activation='relu')(x)
x = tf.keras.layers.Reshape((16, 16))(x)
x = tf.keras.layers.Conv1D(filters=12,
kernel_size=2,
strides=1,
activation='swish')(x)
x = tf.keras.layers.MaxPooling1D(pool_size=2)(x)
x = tf.keras.layers.Flatten()(x)
# ffn
for i in range(3):
x = tf.keras.layers.Dense(64 // (2 ** i), activation='swish')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.GaussianNoise(0.01)(x)
x = tf.keras.layers.Dropout(0.20)(x)
x = tf.keras.layers.Dense(num_labels)(x)
model = tf.keras.models.Model(inputs=inp, outputs=x)
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
loss=mspe_loss,
)
return model
###Output
_____no_output_____
###Markdown
Loading data
###Code
# train
df_train = dt.fread(f'./dataset/train_{DATA_NAME}_NN.csv').to_pandas()
fea_cols = [f for f in df_train.columns if f.startswith('B_') or f.startswith('T_') or f.startswith('Z_')]
# result
df_result = dt.fread('./dataset/train.csv').to_pandas()
df_result = gen_row_id(df_result)
fea_cols_TA = [f for f in fea_cols if 'min_' not in f]
df_time_mean = df_train.groupby('time_id')[fea_cols_TA].mean()
df_time_mean.columns = [f'{c}_TA_mean' for c in df_time_mean.columns]
df_time_mean = df_time_mean.reset_index()
df_train = df_train.merge(df_time_mean, on='time_id', how='left')
del df_time_mean
gc.collect()
df_train['target'] = transform_target(df_train['target'])
df_train = gen_row_id(df_train)
df_train = add_time_fold(df_train, N_FOLD)
###Output
_____no_output_____
###Markdown
Evaluation
###Code
def add_time_stats(df_train):
time_cols = [f for f in df_train.columns if f.endswith('_time')]
df_gp_stock = df_train.groupby('stock_id')
#
df_stats = df_gp_stock[time_cols].mean().reset_index()
df_stats.columns = ['stock_id'] + [f'{f}_mean' for f in time_cols]
df_train = df_train.merge(df_stats, on=['stock_id'], how='left')
#
df_stats = df_gp_stock[time_cols].std().reset_index()
df_stats.columns = ['stock_id'] + [f'{f}_std' for f in time_cols]
df_train = df_train.merge(df_stats, on=['stock_id'], how='left')
#
df_stats = df_gp_stock[time_cols].skew().reset_index()
df_stats.columns = ['stock_id'] + [f'{f}_skew' for f in time_cols]
df_train = df_train.merge(df_stats, on=['stock_id'], how='left')
#
df_stats = df_gp_stock[time_cols].min().reset_index()
df_stats.columns = ['stock_id'] + [f'{f}_min' for f in time_cols]
df_train = df_train.merge(df_stats, on=['stock_id'], how='left')
#
df_stats = df_gp_stock[time_cols].max().reset_index()
df_stats.columns = ['stock_id'] + [f'{f}_max' for f in time_cols]
df_train = df_train.merge(df_stats, on=['stock_id'], how='left')
#
df_stats = df_gp_stock[time_cols].quantile(0.25).reset_index()
df_stats.columns = ['stock_id'] + [f'{f}_q1' for f in time_cols]
df_train = df_train.merge(df_stats, on=['stock_id'], how='left')
#
df_stats = df_gp_stock[time_cols].quantile(0.50).reset_index()
df_stats.columns = ['stock_id'] + [f'{f}_q2' for f in time_cols]
df_train = df_train.merge(df_stats, on=['stock_id'], how='left')
#
df_stats = df_gp_stock[time_cols].quantile(0.75).reset_index()
df_stats.columns = ['stock_id'] + [f'{f}_q3' for f in time_cols]
df_train = df_train.merge(df_stats, on=['stock_id'], how='left')
return df_train
batch_size = 1024
learning_rate = 6e-3
epochs = 1000
list_seeds = [0, 11, 42, 777, 2045]
list_rmspe = []
for i_seed, seed in enumerate(list_seeds):
df_train = add_time_fold(df_train, N_FOLD, seed=seed)
list_rmspe += [[]]
for i_fold in range(N_FOLD):
gc.collect()
df_tr = df_train.loc[df_train.fold!=i_fold]
df_te = df_train.loc[df_train.fold==i_fold]
df_tr = add_time_stats(df_tr)
df_te = add_time_stats(df_te)
fea_cols = [f for f in df_tr if f.startswith('B_') or f.startswith('T_') or f.startswith('Z_')]
X_train = df_tr[fea_cols].values
y_train = df_tr[['target']].values
X_test = df_te[fea_cols].values
y_test = df_te[['target']].values
idx_test = df_train.loc[df_train.fold==i_fold].index
print(f'Fold {i_seed+1}/{len(list_seeds)} | {i_fold+1}/{N_FOLD}', X_train.shape, X_test.shape)
# Callbacks
ckp_path = f'./models/{SOL_NAME}/model_{i_seed}_{i_fold}.hdf5'
rlr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=5, min_delta=1e-5, verbose=2)
es = EarlyStopping(monitor='val_loss', min_delta=1e-5, patience=31, restore_best_weights=True, verbose=2)
model = create_1dcnn(X_train.shape[1], 1, learning_rate)
history = model.fit(X_train, y_train,
epochs=epochs,
validation_data=(X_test, y_test),
validation_batch_size=len(y_test),
batch_size=batch_size,
verbose=2,
callbacks=[rlr, es]
)
# model = tf.keras.models.load_model(ckp_path, custom_objects={'mspe_loss': mspe_loss})
y_pred = model.predict(X_test, batch_size=len(y_test))
curr_rmspe = np_rmspe(y_test, y_pred)
list_rmspe[-1] += [curr_rmspe]
model.save(ckp_path)
# generate and save preds
df_result.loc[idx_test, f'pred_{i_seed}'] = inverse_target(y_pred)
clear_output()
print(list_rmspe)
df_result.to_csv(f'./results/{SOL_NAME}.csv', index=False)
for i in range(len(list_seeds)):
print(i, rmspe(df_result['target'], df_result[f'pred_{i}']))
print('All: ', rmspe(df_result['target'], df_result[[f'pred_{i}' for i in range(len(list_seeds))]].mean(axis=1)))
###Output
0 0.21369911137011632
1 0.21460543478949706
2 0.21438267580899686
3 0.2146666805788994
4 0.21427824604107953
All: 0.21010537596822407
|
code/CNN/.ipynb_checkpoints/CrossValidation_final_marjolein-checkpoint.ipynb | ###Markdown
Le-Net 1 based architecture We start with 41x41 (I); after the first convolution (9x9) we have 33x33 (L1). The next pooling layer reduces the dimension by a factor of 3 to an output image of 11x11, using 3x3 pooling kernels (L2). Then we apply different types of 4x4 convolution kernels to the L2 layer, resulting in 8x8 (L3). This is followed by 2x2 pooling, resulting in a 4x4 output map (L4). So we have 16 connections for each element in layer L4 (which depend on the number of different convolutions in L3) \begin{equation}f(x)=\frac{1}{1+e^{-x}} \\F_{k}= f( \sum_{i} \mathbf{W^{k}_{i} \cdot y_{i}}-b_{k})\end{equation}\begin{equation}E=\sum_{k} \frac{1}{2}|t_k-F_{k}|^{2} \\\Delta W_{ij}= - \eta \frac{dE}{d W_{ij}}\end{equation}\begin{equation}\Delta W_{ij}= \sum_{k} - \eta \frac{dE}{d F_{k}} \frac{dF_{k}}{dx_{k}} \frac{dx_{k}}{dW_{ij}}=\sum_{k} \eta (t_{k}-F_{k})\frac{e^{-x_{k}}}{(1+e^{-x_{k}})^{2}} \frac{dx_{k}}{dW_{ij}} \\= \eta (t_{k}-F_{k})\frac{e^{-x_{k}}}{(1+e^{-x_{k}})^{2}} y_{ij}\end{equation}\begin{equation}\Delta b_{k}= - \eta \frac{dE}{d F_{k}} \frac{dF_{k}}{dx_{k}} \frac{dx_{k}}{d b_{k}}=\eta (t_{k}-F_{k})\frac{e^{-x_{k}}}{(1+e^{-x_{k}})^{2}} \cdot-1\end{equation}Since $\frac{e^{-x_{k}}}{(1+e^{-x_{k}})^{2}}$ is always positive, we can neglect this term in our programme\begin{equation}x_{k}=\sum_{ij} W^{k}[i,j] \; y^{4rb}[i,j] - b_{k}\end{equation}\begin{equation}y^{4rb}[i,j]= \sum_{u,v} W^{3rb}[u,v] \; y^{3rb} [2i+u,2j+v]\end{equation}\begin{equation}y^{3rb} [2i+u,2j+v]= f\left (x^{3rb}[2i+u,2j+v] \right)\end{equation}\begin{equation}x^{3rb}[2i+u,2j+v]=\sum_{nm} W^{2rb}[n,m] \; y^{2rb}[n+(2i+u),m+(2j+v)] -b^{3rb}[2i+u,2j+v]\end{equation}\begin{equation}\begin{split}\Delta W^{2rb}[n,m] =\sum_{k} - \eta \frac{dE}{dF_{k}} \frac{dF_{k}}{dx_{k}} \sum_{ij} \frac{dx_{k}}{dy^{4rb}[i,j]} \sum_{uv}\frac{dy^{4rb}[i,j]}{d y^{3rb} [2i+u,2j+v]} \frac{d y^{3rb} [2i+u,2j+v]}{d x^{3rb}[2i+u,2j+v]}\sum_{nm}\frac{d x^{3rb}[2i+u,2j+v]}{d W^{2rb}[n,m]}\end{split}\end{equation}\begin{equation}\begin{split}\Delta b^{3rb}[2i+u,2j+v] =\sum_{k} - \eta \frac{dE}{dF_{k}} \frac{dF_{k}}{dx_{k}} \sum_{ij} \frac{dx_{k}}{dy^{4rb}[i,j]} \sum_{uv}\frac{dy^{4rb}[i,j]}{d y^{3rb} [2i+u,2j+v]} \frac{d y^{3rb} [2i+u,2j+v]}{d x^{3rb}[2i+u,2j+v]}\frac{d x^{3rb}[2i+u,2j+v]}{d b^{3rb}[2i+u,2j+v]}\end{split}\end{equation}\begin{equation} \frac{dx_{k}}{dy^{4rb}[i,j]} = W^{4rbk}[i,j]\\\end{equation}\begin{equation} \frac{dy^{4rb}[i,j]}{d y^{3rb} [2i+u,2j+v]} = W^{3rb}[u,v] \\ \end{equation} \begin{equation}\frac{d y^{3rb} [2i+u,2j+v]}{d x^{3rb}[2i+u,2j+v]}=\frac{e^{-x^{3rb}[2i+u,2j+v]}}{(1+e^{-x^{3rb}[2i+u,2j+v]})^2}\end{equation}This term is at first not included since it is always positive. If the training does not converge, it is possible to include this term \begin{equation} \frac{d y^{3rb} [2i+u,2j+v]}{d W^{2rb}[n,m]}= y^{2rb} [n+(2i+u),m+(2j+v)] \\\end{equation}\begin{equation}\frac{d x^{3rb}[2i+u,2j+v]}{d b^{3rb}[2i+u,2j+v]}=-1\end{equation}
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from numpy import linalg as lin
import scipy.signal as sig
from PIL import Image
import glob
import matplotlib.cm as cm
import itertools
########### Load Input ############################################################################################################################
# In this script I used the brightness to determine structures, instead of one RGB color:
# this is determined by: 0.2126*R + 0.7152*G + 0.0722*B
# Source: https://en.wikipedia.org/wiki/Relative_luminance
patchSize=40 # patch size; the layer dimensions below are derived from this value, so the network only handles this size
# Open forest
Amount_data= len(glob.glob('Forest/F*'))
Patches_F=np.empty([1,patchSize,patchSize])
Patches_F_RGB=np.empty([1,patchSize,patchSize,3])
Patches_t=np.empty([3])
for k in range (0, Amount_data):
name="Forest/F%d.png" % (k+1)
img = Image.open(name)
data=img.convert('RGB')
data= np.asarray( data, dtype="int32" )
data=0.2126*data[:,:,0]+0.7152*data[:,:,1]+0.0722*data[:,:,2]
data2=img.convert('RGB')
data2= np.asarray( data2, dtype="int32" )
Yamount=data.shape[0]/patchSize # Counts how many times the windowsize fits in the picture
Xamount=data.shape[1]/patchSize # Counts how many times the windowsize fits in the picture
# Create patches for structure
data_t=np.array([[data[j*patchSize:(j+1)*patchSize,i*patchSize:(i+1)*patchSize] for i in range(0,Xamount)] for j in range(0,Yamount)])
data_t=np.reshape(data_t, [data_t.shape[0]*data_t.shape[1], patchSize, patchSize])
Patches_F=np.append(Patches_F,data_t,axis=0)
#Create patches for colour
data_t=np.array([[data2[j*patchSize:(j+1)*patchSize,i*patchSize:(i+1)*patchSize,:] for i in range(0,Xamount)] for j in range(0,Yamount)])
data_t=np.reshape(data_t, [data_t.shape[0]*data_t.shape[1], patchSize, patchSize, 3])
Patches_F_RGB=np.append(Patches_F_RGB, data_t,axis=0)
Patches_F=np.delete(Patches_F, 0,0)
Patches_F_RGB=np.delete(Patches_F_RGB, 0,0)
# Open city
Amount_data= len(glob.glob('City/C*'))
Patches_C=np.empty([1,patchSize,patchSize])
Patches_C_RGB=np.empty([1,patchSize,patchSize,3])
Patches_t=np.empty([3])
for k in range (0, Amount_data):
name="City/C%d.png" % (k+1)
img = Image.open(name)
data=img.convert('RGB')
data = np.asarray( data, dtype="int32" )
data=0.2126*data[:,:,0]+0.7152*data[:,:,1]+0.0722*data[:,:,2]
data2=img.convert('RGB')
data2= np.asarray( data2, dtype="int32" )
Yamount=data.shape[0]/patchSize # Counts how many times the windowsize fits in the picture
Xamount=data.shape[1]/patchSize # Counts how many times the windowsize fits in the picture
# Create patches for structure
data_t=np.array([[data[j*patchSize:(j+1)*patchSize,i*patchSize:(i+1)*patchSize] for i in range(0,Xamount)] for j in range(0,Yamount)])
data_t=np.reshape(data_t, [data_t.shape[0]*data_t.shape[1], patchSize, patchSize])
Patches_C=np.append(Patches_C,data_t,axis=0)
#Create patches for colour
data_t=np.array([[data2[j*patchSize:(j+1)*patchSize,i*patchSize:(i+1)*patchSize,:] for i in range(0,Xamount)] for j in range(0,Yamount)])
data_t=np.reshape(data_t, [data_t.shape[0]*data_t.shape[1], patchSize, patchSize, 3])
Patches_C_RGB=np.append(Patches_C_RGB, data_t,axis=0)
Patches_C=np.delete(Patches_C, 0,0)
Patches_C_RGB=np.delete(Patches_C_RGB, 0,0)
# Open water
Amount_data= len(glob.glob('Water/W*'))
Patches_W=np.empty([1,patchSize,patchSize])
Patches_W_RGB=np.empty([1,patchSize,patchSize,3])
Patches_t=np.empty([3])
for k in range (0, Amount_data):
name="Water/W%d.png" % (k+1)
img = Image.open(name)
data=img.convert('RGB')
data = np.asarray( data, dtype="int32" )
data=0.2126*data[:,:,0]+0.7152*data[:,:,1]+0.0722*data[:,:,2]
data2 = img.convert('RGB')
data2 = np.asarray( data2, dtype="int32" )
Yamount=data.shape[0]/patchSize # Counts how many times the windowsize fits in the picture
Xamount=data.shape[1]/patchSize # Counts how many times the windowsize fits in the picture
# Create patches for structure
data_t=np.array([[data[j*patchSize:(j+1)*patchSize,i*patchSize:(i+1)*patchSize] for i in range(0,Xamount)] for j in range(0,Yamount)])
data_t=np.reshape(data_t, [data_t.shape[0]*data_t.shape[1], patchSize, patchSize])
Patches_W=np.append(Patches_W,data_t,axis=0)
#Create patches for colour
data_t=np.array([[data2[j*patchSize:(j+1)*patchSize,i*patchSize:(i+1)*patchSize,:] for i in range(0,Xamount)] for j in range(0,Yamount)])
data_t=np.reshape(data_t, [data_t.shape[0]*data_t.shape[1], patchSize, patchSize, 3])
Patches_W_RGB=np.append(Patches_W_RGB, data_t,axis=0)
Patches_W=np.delete(Patches_W, 0,0)
Patches_W_RGB=np.delete(Patches_W_RGB, 0,0)
########### Functions ############################################################################################################################
# Define Activitation functions, pooling and convolution functions (the rules)
def Sigmoid(x):
return (1/(1+np.exp(-x)))
def Sigmoid_dx(x):
return np.exp(-x)/((1+np.exp(-x))**2)
def TanH(x):
return (1-np.exp(-x))/(1+np.exp(-x))
def Pool(I,W):
PoolImg=np.zeros((len(I)/len(W),len(I)/len(W))) # W must fit an integer times into I.
for i in range(0,len(PoolImg)):
for j in range(0,len(PoolImg)):
SelAr=I[i*len(W):(i+1)*len(W),j*len(W):(j+1)*len(W)]
PoolImg[i,j]=np.inner(SelAr.flatten(),W.flatten()) # Now this is just an inner product since we have vectors
return PoolImg
# To automatically make Gaussian kernels
def makeGaussian(size, fwhm = 3, center=None):
x = np.arange(0, size, 1, float)
y = x[:,np.newaxis]
if center is None:
x0 = y0 = size // 2
else:
x0 = center[0]
y0 = center[1]
return np.exp(-4*np.log(2) * ((x-x0)**2 + (y-y0)**2) / fwhm**2)
# To automatically define pooling nodes
def Pool_node(N):
s=(N,N)
a=float(N)*float(N)
return (1.0/a)*np.ones(s)
#################### Define pooling layers ###########################################################################
P12=Pool_node(4)*(1.0/100.0) # extra factor of 1/100 added to lower the values further
P34=Pool_node(1)*(1.0/10)
#################### Define Convolution layers #######################################################################
######### First C layer #########
C1=[]
## First Kernel
# Inspiration: http://en.wikipedia.org/wiki/Sobel_operator
# http://stackoverflow.com/questions/9567882/sobel-filter-kernel-of-large-size
Kernel=np.array([[4,3,2,1,0,-1,-2,-3,-4],
[5,4,3,2,0,-2,-3,-4,-5],
[6,5,4,3,0,-3,-4,-5,-6],
[7,6,5,4,0,-4,-5,-6,-7],
[8,7,6,5,0,-5,-6,-7,-8],
[7,6,5,4,0,-4,-5,-6,-7],
[6,5,4,3,0,-3,-4,-5,-6],
[5,4,3,2,0,-2,-3,-4,-5],
[4,3,2,1,0,-1,-2,-3,-4]])
C1.append(Kernel)
## Second Kernel
Kernel=np.matrix.transpose(Kernel)
C1.append(Kernel)
######### Initialize output weights and biases #########
# Define the number of branches in one row
N_branches= 3
ClassAmount=3 # Forest, City, Water
Size_C2=5
S_H3=((patchSize-C1[0].shape[0]+1)/P12.shape[1])-Size_C2+1
S_H4=S_H3/P34.shape[1]
C2INIT=np.random.rand(len(C1),N_branches, Size_C2, Size_C2) # second convolution weights
WINIT=np.random.rand(ClassAmount, len(C1), N_branches, S_H3, S_H3) # end-weight from output to classifier-neurons
W2INIT=np.random.rand(3,3)
H3_bias=np.random.rand(len(C1),N_branches) # bias in activation function from C2 to H3
Output_bias=np.random.rand(ClassAmount) # bias on the three classes
#learning rates
n_bias=1*10**-2
n_W=1*10**-2
n_C2=1*10**-2
n_H3_bias=1*10**-2
#Labda=5*10**-3
np.random.rand?
N_plts=len(C1)
for i in range(0,N_plts):
plt.subplot(4,3,i+1)
plt.imshow(C1[i])
###Output
_____no_output_____
###Markdown
Extra information regarding the code in the following cell: a random patch is chosen as follows. The program counts how many files and patches there are in total, then permutes this sequence so that a random patch (forest, city or water) is chosen at every iteration. After selecting an index, the corresponding file and patch are looked up.
###Code
N_F=Patches_F.shape[0]
N_C=Patches_C.shape[0]
N_W=Patches_W.shape[0]
N_total=N_F+N_C+N_W
Sequence = np.arange(N_total)
Sequence = np.random.permutation(Sequence)
##### TRAINING ON ALL DATA TO FIND FINAL WEIGHTS FOR IMPLEMENTATION #####
C2=np.copy(C2INIT)
W=np.copy(WINIT)
W2=np.copy(W2INIT)
Sample_iterations=0
n_W=10
for PP in range(0,N_total):
SS=Sequence[PP]
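    # SS indexes the concatenated [forest | city | water] patch sets; the N_F and
    # N_F+N_C offsets below map it back to the right patch array and class label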
if SS<N_F:
Class_label=np.array([1,0,0])
inputPatch=Patches_F[SS]
Int_RGB=np.mean(np.mean(Patches_F_RGB[SS,:,:,:], axis=0), axis=0)/255
elif(SS>=N_F) and (SS<(N_F+N_C)):
Class_label=np.array([0,1,0])
inputPatch=Patches_C[SS-N_F]
Int_RGB=np.mean(np.mean(Patches_C_RGB[SS-N_F,:,:,:], axis=0), axis=0)/255
else:
Class_label=np.array([0,0,1])
inputPatch=Patches_W[SS-N_F-N_C]
Int_RGB=np.mean(np.mean(Patches_W_RGB[SS-N_F-N_C,:,:,:], axis=0), axis=0)/255
### Layer 1 ###
H1=[]
H2=[]
H3=np.zeros((len(C1), N_branches, S_H3,S_H3))
H4=np.zeros((len(C1), N_branches, S_H4,S_H4))
x=np.zeros(ClassAmount)
f=np.zeros(ClassAmount)
for r in range (0, len(C1)):
H1.append(sig.convolve(inputPatch, C1[r], 'valid'))
H2.append(Pool(H1[r], P12))
for b in range(0,N_branches):
H3[r][b]=Sigmoid(sig.convolve(H2[r], C2[r][b],'valid')-H3_bias[r][b])
H4[r][b]=Pool(H3[r][b],P34)
#Now we have 3x3x4x4 inputs, connected to the 3 output nodes
y=np.append([H4.flatten()], [Int_RGB])
for k in range(0,ClassAmount):
W_t=np.append([W[k].flatten()], [W2[k]])
x[k]=np.inner(y, W_t)
f[k]=Sigmoid(x[k]-Output_bias[k])
###### Back-propagation #####
# First learning the delta's
e_k=f-Class_label
delta_k=e_k*Sigmoid_dx(x)
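    # delta rule for the output layer: error times the sigmoid derivative at the pre-activation x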
for k in range(0, ClassAmount):
#update weights output layer
W[k]=W[k]-n_W*delta_k[k]*H4
W2[k]=W2[k]-n_W*delta_k[k]*Int_RGB
Sample_iterations=Sample_iterations+1
if Sample_iterations==5000:
n_W=5
print Sample_iterations
if Sample_iterations==10000:
n_W=1
print Sample_iterations
if Sample_iterations==15000:
n_W=0.1
print Sample_iterations
print "Training completed"
CV_RGB=np.zeros([10])
##### CROSS VALIDATION, INFORMATION FROM RGB COLOURS INCLUDED, NO BACKPROPAGATION UNTIL C2 #####
for CROSSES in range(0,10):
# TRAINING PHASE
W=np.copy(WINIT)
W2=np.copy(W2INIT)
C2=np.copy(C2INIT)
n_W=10
Sample_iterations=0
###### Chooses patch and defines label #####
for PP in range(0,int(np.ceil(0.9*N_total))):
SS=Sequence[PP]
if SS<N_F:
Class_label=np.array([1,0,0])
inputPatch=Patches_F[SS]
Int_RGB=np.mean(np.mean(Patches_F_RGB[SS,:,:,:], axis=0), axis=0)/255
elif(SS>=N_F) and (SS<(N_F+N_C)):
Class_label=np.array([0,1,0])
inputPatch=Patches_C[SS-N_F]
Int_RGB=np.mean(np.mean(Patches_C_RGB[SS-N_F,:,:,:], axis=0), axis=0)/255
else:
Class_label=np.array([0,0,1])
inputPatch=Patches_W[SS-N_F-N_C]
Int_RGB=np.mean(np.mean(Patches_W_RGB[SS-N_F-N_C,:,:,:], axis=0), axis=0)/255
### Layer 1 ###
H1=[]
H2=[]
H3=np.zeros((len(C1), N_branches, S_H3,S_H3))
H4=np.zeros((len(C1), N_branches, S_H4,S_H4))
x=np.zeros(ClassAmount)
f=np.zeros(ClassAmount)
for r in range (0, len(C1)):
H1.append(sig.convolve(inputPatch, C1[r], 'valid'))
H2.append(Pool(H1[r], P12))
        #From here on BP takes place!
for b in range(0,N_branches):
H3[r][b]=Sigmoid(sig.convolve(H2[r], C2[r][b],'valid')-H3_bias[r][b])
H4[r][b]=Pool(H3[r][b],P34)
#Now we have 3x3x4x4 inputs, connected to the 3 output nodes
y=np.append([H4.flatten()], [Int_RGB])
for k in range(0,ClassAmount):
W_t=np.append([W[k].flatten()], [W2[k]])
x[k]=np.inner(y, W_t)
f[k]=Sigmoid(x[k]-Output_bias[k])
###### Back-propagation #####
# First learning the delta's
e_k=f-Class_label
delta_k=e_k*Sigmoid_dx(x)
for k in range(0, ClassAmount):
#update weights output layer
W[k]=W[k]-n_W*delta_k[k]*H4
W2[k]=W2[k]-n_W*delta_k[k]*Int_RGB
Sample_iterations=Sample_iterations+1
if Sample_iterations==5000:
n_W=5
print Sample_iterations
if Sample_iterations==10000:
n_W=1
print Sample_iterations
if Sample_iterations==15000:
n_W=0.1
print Sample_iterations
print "Training completed"
####### Test phase #######
N_correct=0
###### Chooses patch and defines label #####
for PP in range(int(np.ceil(0.9*N_total)),N_total):
SS=Sequence[PP]
if SS<N_F:
Class_label=np.array([1,0,0])
inputPatch=Patches_F[SS]
Int_RGB=np.mean(np.mean(Patches_F_RGB[SS,:,:,:], axis=0), axis=0)/255
elif(SS>=N_F) and (SS<(N_F+N_C)):
Class_label=np.array([0,1,0])
inputPatch=Patches_C[SS-N_F]
Int_RGB=np.mean(np.mean(Patches_C_RGB[SS-N_F,:,:,:], axis=0), axis=0)/255
else:
Class_label=np.array([0,0,1])
inputPatch=Patches_W[SS-N_F-N_C]
Int_RGB=np.mean(np.mean(Patches_W_RGB[SS-N_F-N_C,:,:,:], axis=0), axis=0)/255
### Layer 1 ###
H1=[]
H2=[]
H3=np.zeros((len(C1), N_branches, S_H3,S_H3))
#H4=np.zeros((len(C1), N_branches, S_H4,S_H4))
x=np.zeros(ClassAmount)
f=np.zeros(ClassAmount)
for r in range (0, len(C1)):
H1.append(sig.convolve(inputPatch, C1[r], 'valid'))
H2.append(Pool(H1[r], P12))
for b in range(0,N_branches):
H3[r][b]=Sigmoid(sig.convolve(H2[r], C2[r][b],'valid')-H3_bias[r][b])
H4[r][b]=Pool(H3[r][b],P34)
y=np.append([H4.flatten()], [Int_RGB])
#Now we have 3x3x4x4 inputs, connected to the 3 output nodes
for k in range(0,ClassAmount):
W_t=np.append([W[k].flatten()], [W2[k]])
x[k]=np.inner(y, W_t)
f[k]=Sigmoid(x[k]-Output_bias[k])
if np.argmax(f)==np.argmax(Class_label):
N_correct=N_correct+1
Perc_corr=float(N_correct)/(N_total-int(np.ceil(0.9*N_total)))
print Perc_corr
CV_RGB[CROSSES]=Perc_corr
Sequence=np.roll(Sequence,(N_total-int(np.ceil(0.9*N_total))))
# Save calculated parameters
CV1_withRGB=np.mean(CV_RGB)
Std_CV1_withRGB=np.std(CV_RGB)
with open("CV1_withRGB.txt", 'w') as f:
f.write(str(CV1_withRGB))
with open("Std_CV1_withRGB.txt", 'w') as f:
f.write(str(Std_CV1_withRGB))
##### CROSS VALIDATION, INFORMATION FROM RGB COLOURS NOT INCLUDED, NO BACKPROPAGATION UNTIL C2 #####
CV_noRGB=np.zeros([10])
for CROSSES in range(0,10):
# TRAINING PHASE
W=np.copy(WINIT)
W2=np.copy(W2INIT)
C2=np.copy(C2INIT)
n_W=25
Sample_iterations=0
###### Chooses patch and defines label #####
for PP in range(0,int(np.ceil(0.9*N_total))):
SS=Sequence[PP]
#SS=14000
if SS<N_F:
Class_label=np.array([1,0,0])
inputPatch=Patches_F[SS]
elif(SS>=N_F) and (SS<(N_F+N_C)):
Class_label=np.array([0,1,0])
inputPatch=Patches_C[SS-N_F]
else:
Class_label=np.array([0,0,1])
inputPatch=Patches_W[SS-N_F-N_C]
### Layer 1 ###
H1=[]
H2=[]
H3=np.zeros((len(C1), N_branches, S_H3,S_H3))
H4=np.zeros((len(C1), N_branches, S_H4,S_H4))
x=np.zeros(ClassAmount)
f=np.zeros(ClassAmount)
for r in range (0, len(C1)):
H1.append(sig.convolve(inputPatch, C1[r], 'valid'))
H2.append(Pool(H1[r], P12))
        #From here on BP takes place!
for b in range(0,N_branches):
H3[r][b]=Sigmoid(sig.convolve(H2[r], C2[r][b],'valid')-H3_bias[r][b])
H4[r][b]=Pool(H3[r][b],P34)
#Now we have 3x3x4x4 inputs, connected to the 3 output nodes
y=H4.flatten()
for k in range(0,ClassAmount):
W_t=W[k].flatten()
x[k]=np.inner(y, W_t)
f[k]=Sigmoid(x[k]-Output_bias[k])
###### Back-propagation #####
# First learning the delta's
e_k=f-Class_label
delta_k=e_k*Sigmoid_dx(x)
for k in range(0, ClassAmount):
#update weights output layer
W[k]=W[k]-n_W*delta_k[k]*H4
Sample_iterations=Sample_iterations+1
if Sample_iterations==5000:
n_W=10
print Sample_iterations
if Sample_iterations==10000:
n_W=2.5
print Sample_iterations
if Sample_iterations==15000:
n_W=0.5
print Sample_iterations
print "Training completed"
####### Test phase #######
N_correct=0
###### Chooses patch and defines label #####
for PP in range(int(np.ceil(0.9*N_total)),N_total):
SS=Sequence[PP]
if SS<N_F:
Class_label=np.array([1,0,0])
inputPatch=Patches_F[SS]
elif(SS>=N_F) and (SS<(N_F+N_C)):
Class_label=np.array([0,1,0])
inputPatch=Patches_C[SS-N_F]
else:
Class_label=np.array([0,0,1])
inputPatch=Patches_W[SS-N_F-N_C]
### Layer 1 ###
H1=[]
H2=[]
H3=np.zeros((len(C1), N_branches, S_H3,S_H3))
#H4=np.zeros((len(C1), N_branches, S_H4,S_H4))
x=np.zeros(ClassAmount)
f=np.zeros(ClassAmount)
for r in range (0, len(C1)):
H1.append(sig.convolve(inputPatch, C1[r], 'valid'))
H2.append(Pool(H1[r], P12))
for b in range(0,N_branches):
H3[r][b]=Sigmoid(sig.convolve(H2[r], C2[r][b],'valid')-H3_bias[r][b])
H4[r][b]=Pool(H3[r][b],P34)
y=H4.flatten()
#Now we have 3x3x4x4 inputs, connected to the 3 output nodes
for k in range(0,ClassAmount):
W_t=W[k].flatten()
x[k]=np.inner(y, W_t)
f[k]=Sigmoid(x[k]-Output_bias[k])
if np.argmax(f)==np.argmax(Class_label):
N_correct=N_correct+1
Perc_corr=float(N_correct)/(N_total-int(np.ceil(0.9*N_total)))
print Perc_corr
CV_noRGB[CROSSES]=Perc_corr
Sequence=np.roll(Sequence,(N_total-int(np.ceil(0.9*N_total))))
# Save calculated parameters
CV1_withoutRGB=np.mean(CV_noRGB)
Std_CV1_withoutRGB=np.std(CV_noRGB)
with open("CV1_withoutRGB.txt", 'w') as f:
f.write(str(CV1_withoutRGB))
with open("Std_CV1_withoutRGB.txt", 'w') as f:
f.write(str(Std_CV1_withoutRGB))
##### CROSS VALIDATION, INFORMATION FROM RGB COLOURS INCLUDED, BACKPROPAGATION UNTIL C2 #####
# TRAINING PHASE
ERROR_cv=np.zeros([10])
for CROSSES in range(0,10):
W=np.copy(WINIT)
W2=np.copy(W2INIT)
C2=np.copy(C2INIT)
n_W=10
n_C2=1*10**-2
Sample_iterations=0
###### Chooses patch and defines label #####
#for PP in range(0,len(Sequence)):
for PP in range(0,int(np.ceil(0.9*N_total))):
SS=Sequence[PP]
if SS<N_F:
Class_label=np.array([1,0,0])
inputPatch=Patches_F[SS]
Int_RGB=np.mean(np.mean(Patches_F_RGB[SS,:,:,:], axis=0), axis=0)/255
elif(SS>=N_F) and (SS<(N_F+N_C)):
Class_label=np.array([0,1,0])
inputPatch=Patches_C[SS-N_F]
Int_RGB=np.mean(np.mean(Patches_C_RGB[SS-N_F,:,:,:], axis=0), axis=0)/255
else:
Class_label=np.array([0,0,1])
inputPatch=Patches_W[SS-N_F-N_C]
Int_RGB=np.mean(np.mean(Patches_W_RGB[SS-N_F-N_C,:,:,:], axis=0), axis=0)/255
### Layer 1 ###
H1=[]
H2=[]
H3=np.zeros((len(C1), N_branches, S_H3,S_H3))
H4=np.zeros((len(C1), N_branches, S_H4,S_H4))
x=np.zeros(ClassAmount)
f=np.zeros(ClassAmount)
for r in range (0, len(C1)):
H1.append(sig.convolve(inputPatch, C1[r], 'valid'))
H2.append(Pool(H1[r], P12))
        #From here on BP takes place!
for b in range(0,N_branches):
H3[r][b]=Sigmoid(sig.convolve(H2[r], C2[r][b],'valid')-H3_bias[r][b])
H4[r][b]=Pool(H3[r][b],P34)
#Now we have 3x3x4x4 inputs, connected to the 3 output nodes
y=np.append([H4.flatten()], [Int_RGB])
for k in range(0,ClassAmount):
W_t=np.append([W[k].flatten()], [W2[k]])
x[k]=np.inner(y, W_t)
f[k]=Sigmoid(x[k]-Output_bias[k])
###### Back-propagation #####
# First learning the delta's
delta_H4=np.zeros([ClassAmount,len(C1),N_branches,S_H4,S_H4])
e_k=f-Class_label
delta_k=e_k*Sigmoid_dx(x)
for k in range(0, ClassAmount):
#update weights output layer
W[k]=W[k]-n_W*delta_k[k]*H4
W2[k]=W2[k]-n_W*delta_k[k]*Int_RGB
            delta_H4[k]=delta_k[k]*W[k]
delta_H4=np.sum(delta_H4, axis=0)
delta_H3=(float(1)/10)*delta_H4
C2_diff=np.zeros([len(C1),N_branches, Size_C2, Size_C2])
for r in range(0, len(C1)):
C2_t=np.array([[delta_H3[r][:]*H2[r][(0+u):(4+u),(0+v):(4+v)] for u in range(0,Size_C2)] for v in range (0,Size_C2)])
C2_t=np.sum(np.sum(C2_t, axis=4),axis=3)
C2_t=np.rollaxis(C2_t,2)
C2_diff[r]=-n_C2*C2_t
C2=C2+C2_diff
        Normalization_factor=np.sum(np.sum(np.abs(C2),axis=3),axis=2)/np.sum(np.sum(np.abs(C2INIT),axis=3),axis=2)
for r in range(0,len(C1)):
for b in range(0, N_branches):
C2[r][b]=C2[r][b]/Normalization_factor[r][b]
Sample_iterations=Sample_iterations+1
if Sample_iterations==5000:
n_W=5
n_C2=5*10**-2
print Sample_iterations
if Sample_iterations==10000:
n_W=1
n_C2=1*10**-2
print Sample_iterations
if Sample_iterations==15000:
n_W=0.1
n_C2=0.75*10**-2
print Sample_iterations
print "Training completed"
###### test phase!
N_correct=0
for PP in range(int(np.ceil(0.9*N_total)),N_total):
SS=Sequence[PP]
if SS<N_F:
Class_label=np.array([1,0,0])
inputPatch=Patches_F[SS]
Int_RGB=np.mean(np.mean(Patches_F_RGB[SS,:,:,:], axis=0), axis=0)/255
elif(SS>=N_F) and (SS<(N_F+N_C)):
Class_label=np.array([0,1,0])
inputPatch=Patches_C[SS-N_F]
Int_RGB=np.mean(np.mean(Patches_C_RGB[SS-N_F,:,:,:], axis=0), axis=0)/255
elif(SS>=(N_F+N_C)) and (SS<N_F+N_C+N_W):
Class_label=np.array([0,0,1])
inputPatch=Patches_W[SS-N_F-N_C]
Int_RGB=np.mean(np.mean(Patches_W_RGB[SS-N_F-N_C,:,:,:], axis=0), axis=0)/255
### Layer 1 ###
H1=[]
H2=[]
H3=np.zeros((len(C1), N_branches, S_H3,S_H3))
H4=np.zeros((len(C1), N_branches, S_H4,S_H4))
x=np.zeros(ClassAmount)
f=np.zeros(ClassAmount)
for r in range (0, len(C1)):
H1.append(sig.convolve(inputPatch, C1[r], 'valid'))
H2.append(Pool(H1[r], P12))
for b in range(0,N_branches):
H3[r][b]=Sigmoid(sig.convolve(H2[r], C2[r][b],'valid')-H3_bias[r][b])
H4[r][b]=Pool(H3[r][b],P34)
y=np.append([H4.flatten()], [Int_RGB])
#Now we have 3x3x4x4 inputs, connected to the 3 output nodes
for k in range(0,ClassAmount):
W_t=np.append([W[k].flatten()], [W2[k]])
x[k]=np.inner(y, W_t)
f[k]=Sigmoid(x[k]-Output_bias[k])
if np.argmax(f)==np.argmax(Class_label):
#print True
N_correct=N_correct+1
Perc_corr=float(N_correct)/(N_total-int(np.ceil(0.9*N_total)))
print Perc_corr
ERROR_cv[CROSSES]=Perc_corr
Sequence=np.roll(Sequence,(N_total-int(np.ceil(0.9*N_total))))
##### CROSS VALIDATION, INFORMATION FROM RGB COLOURS NOT INCLUDED, BACKPROPAGATION UNTIL C2 #####
# TRAINING PHASE
ERROR_cv_without=np.zeros([10])
for CROSSES in range(0,10):
W=np.copy(WINIT)
W2=np.copy(W2INIT)
C2=np.copy(C2INIT)
n_W=25
n_C2=12.5*10**-2
Sample_iterations=0
###### Chooses patch and defines label #####
#for PP in range(0,len(Sequence)):
for PP in range(0,int(np.ceil(0.9*N_total))):
SS=Sequence[PP]
if SS<N_F:
Class_label=np.array([1,0,0])
inputPatch=Patches_F[SS]
elif(SS>=N_F) and (SS<(N_F+N_C)):
Class_label=np.array([0,1,0])
inputPatch=Patches_C[SS-N_F]
else:
Class_label=np.array([0,0,1])
inputPatch=Patches_W[SS-N_F-N_C]
### Layer 1 ###
H1=[]
H2=[]
H3=np.zeros((len(C1), N_branches, S_H3,S_H3))
H4=np.zeros((len(C1), N_branches, S_H4,S_H4))
x=np.zeros(ClassAmount)
f=np.zeros(ClassAmount)
for r in range (0, len(C1)):
H1.append(sig.convolve(inputPatch, C1[r], 'valid'))
H2.append(Pool(H1[r], P12))
        #From here on BP takes place!
for b in range(0,N_branches):
H3[r][b]=Sigmoid(sig.convolve(H2[r], C2[r][b],'valid')-H3_bias[r][b])
H4[r][b]=Pool(H3[r][b],P34)
#Now we have 3x3x4x4 inputs, connected to the 3 output nodes
y=H4.flatten()
for k in range(0,ClassAmount):
W_t=W[k].flatten()
x[k]=np.inner(y, W_t)
f[k]=Sigmoid(x[k]-Output_bias[k])
###### Back-propagation #####
# First learning the delta's
delta_H4=np.zeros([ClassAmount,len(C1),N_branches,S_H4,S_H4])
e_k=f-Class_label
delta_k=e_k*Sigmoid_dx(x)
for k in range(0, ClassAmount):
#update weights output layer
            W[k]=W[k]-n_W*delta_k[k]*H4
            delta_H4[k]=delta_k[k]*W[k]
delta_H4=np.sum(delta_H4, axis=0)
delta_H3=(float(1)/10)*delta_H4
C2_diff=np.zeros([len(C1),N_branches, Size_C2, Size_C2])
for r in range(0, len(C1)):
C2_t=np.array([[delta_H3[r][:]*H2[r][(0+u):(4+u),(0+v):(4+v)] for u in range(0,Size_C2)] for v in range (0,Size_C2)])
C2_t=np.sum(np.sum(C2_t, axis=4),axis=3)
C2_t=np.rollaxis(C2_t,2)
C2_diff[r]=-n_C2*C2_t
C2=C2+C2_diff
        Normalization_factor=np.sum(np.sum(np.abs(C2),axis=3),axis=2)/np.sum(np.sum(np.abs(C2INIT),axis=3),axis=2)
for r in range(0,len(C1)):
for b in range(0, N_branches):
C2[r][b]=C2[r][b]/Normalization_factor[r][b]
Sample_iterations=Sample_iterations+1
if Sample_iterations==5000:
n_W=10
n_C2=5*10**-2
print Sample_iterations
if Sample_iterations==10000:
n_W=2.5
n_C2=1*10**-2
print Sample_iterations
if Sample_iterations==15000:
n_W=0.5
n_C2=0.1*10**-2
print Sample_iterations
print "Training completed"
###### test phase!
N_correct=0
for PP in range(int(np.ceil(0.9*N_total)),N_total):
SS=Sequence[PP]
if SS<N_F:
Class_label=np.array([1,0,0])
inputPatch=Patches_F[SS]
elif(SS>=N_F) and (SS<(N_F+N_C)):
Class_label=np.array([0,1,0])
inputPatch=Patches_C[SS-N_F]
elif(SS>=(N_F+N_C)) and (SS<N_F+N_C+N_W):
Class_label=np.array([0,0,1])
inputPatch=Patches_W[SS-N_F-N_C]
### Layer 1 ###
H1=[]
H2=[]
H3=np.zeros((len(C1), N_branches, S_H3,S_H3))
H4=np.zeros((len(C1), N_branches, S_H4,S_H4))
x=np.zeros(ClassAmount)
f=np.zeros(ClassAmount)
for r in range (0, len(C1)):
H1.append(sig.convolve(inputPatch, C1[r], 'valid'))
H2.append(Pool(H1[r], P12))
for b in range(0,N_branches):
H3[r][b]=Sigmoid(sig.convolve(H2[r], C2[r][b],'valid')-H3_bias[r][b])
H4[r][b]=Pool(H3[r][b],P34)
y=H4.flatten()
#Now we have 3x3x4x4 inputs, connected to the 3 output nodes
for k in range(0,ClassAmount):
W_t=W[k].flatten()
x[k]=np.inner(y, W_t)
f[k]=Sigmoid(x[k]-Output_bias[k])
if np.argmax(f)==np.argmax(Class_label):
#print True
N_correct=N_correct+1
Perc_corr=float(N_correct)/(N_total-int(np.ceil(0.9*N_total)))
print Perc_corr
ERROR_cv_without[CROSSES]=Perc_corr
Sequence=np.roll(Sequence,(N_total-int(np.ceil(0.9*N_total))))
###Output
_____no_output_____ |
Convolutional Neural Networks.ipynb | ###Markdown
Different types of layers- **Convolutional layer**: several kernels of size `kernel_size` (called *filters*) are passed across the image and an element-wise product summed over the window is applied, which filters (transforms) the image. After the convolution, you may apply a non-linear function.- **Pooling layer:** pass a window across the transformed feature maps and keep a representative value (max or average pixel value), which reduces the image size.- **Flatten layer:** bring everything into a flat vector. - Once we have a flat vector (through a flattening layer) we can apply dense layers to get to the desired output size (e.g. a vector of length equal to the number of classes). - A popular pattern is (Conv, Conv, Pool) followed by (Flatten, Dense, Dense for output). Example
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPool2D
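# Added note (not in the original): with these 'valid' 3x3 convolutions the spatial size
# shrinks 28 -> 26 -> 24, MaxPool2D halves it to 12, so Flatten sees 12*12*32 = 4608 features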
model = Sequential([
    Conv2D(filters=32, kernel_size=3, activation='relu', input_shape=(28, 28, 1)), # (3*3*1 weights + 1 bias) per filter, times 32 filters = 320 params
    Conv2D(filters=32, kernel_size=3, activation='relu'), # (3*3*32 weights + 1 bias) per filter, times 32 filters = 9248 params
MaxPool2D(pool_size=2),
Flatten(),
Dense(128, activation='relu'),
Dense(10, activation='softmax')
])
model.summary()
128*4608+128 # weights + bias
128*10+10
###Output
_____no_output_____
###Markdown
Reshape the data to fit it into the model
###Code
X_train = X_train.reshape(-1, 28, 28, 1) # (number of samples, height, width, number of channels)
X_test = X_test.reshape(-1, 28, 28, 1)
X_train = X_train/255. # Note: This rescaling is suitable for this problem, (assuming range of 0-255)
X_test = X_test/255.
model.compile(loss="sparse_categorical_crossentropy", # loss function for integer-valued categories
metrics=["accuracy", "mse"]
)
history = model.fit(X_train, y_train, epochs=10, validation_split=0.1)
plt.imshow(model.variables[0][:,:,:,0].numpy().reshape(3,3), cmap='gray')
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.plot(loss, label='Training loss')
plt.plot(val_loss, label='Validation loss')
plt.legend()
plt.xlabel("Number of epochs")
X.shape
model.variables[1].shape
model.variables[1]
model.variables[3]
###Output
_____no_output_____
###Markdown
COVID-19 Classification using 4-layer Convolutional Neural Networks.
###Code
! pip install opencv-python
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
import cv2
import numpy as np
import shutil
from sklearn import preprocessing
from keras.preprocessing import image
from sklearn.model_selection import train_test_split
from keras.applications.vgg16 import VGG16
from keras.applications import xception
from keras.applications import inception_v3
from keras.applications.vgg16 import preprocess_input
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss, accuracy_score
from sklearn.metrics import precision_score, \
recall_score, confusion_matrix, classification_report, \
accuracy_score, f1_score
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.metrics import f1_score
%matplotlib inline
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
from glob import glob
base_dir = './data/'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'test')
import tensorflow as tf
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(150, 150, 3)),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(3, activation='softmax')
])
model.summary()
model.compile(optimizer='adam', loss= 'categorical_crossentropy', metrics=['accuracy'])
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255.
train_datagen = ImageDataGenerator( rescale = 1.0/255. )
test_datagen = ImageDataGenerator( rescale = 1.0/255. )
# --------------------
# Flow training images in batches of 20 using train_datagen generator
# --------------------
train_generator = train_datagen.flow_from_directory(train_dir,
batch_size=20,
class_mode='categorical',
target_size=(150, 150))
# --------------------
# Flow validation images in batches of 20 using test_datagen generator
# --------------------
validation_generator = test_datagen.flow_from_directory(validation_dir,
batch_size=20,
class_mode = 'categorical',
target_size = (150, 150),
shuffle=False)
history = model.fit(train_generator,
validation_data=validation_generator,
steps_per_epoch=10,
epochs=20,
validation_steps = 3,
verbose=2)
from sklearn.metrics import classification_report
from sklearn import preprocessing
y_pred = model.predict(validation_generator, verbose=1)
y_pred_bool = np.argmax(y_pred, axis=1)
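# Note (added): comparing against validation_generator.classes is only valid because the
# generator was created with shuffle=False, so the class order matches the prediction order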
print(classification_report(validation_generator.classes, y_pred_bool))
###Output
4/4 [==============================] - 1s 318ms/step
precision recall f1-score support
0 1.00 1.00 1.00 26
1 1.00 0.70 0.82 20
2 0.77 1.00 0.87 20
accuracy 0.91 66
macro avg 0.92 0.90 0.90 66
weighted avg 0.93 0.91 0.91 66
###Markdown
One Hot Encoding with 10 classes:
0 -> [1,0,0,0,0,0,0,0,0,0]
1 -> [0,1,0,0,0,0,0,0,0,0]
2 -> [0,0,1,0,0,0,0,0,0,0]
3 -> [0,0,0,1,0,0,0,0,0,0]
4 -> [0,0,0,0,1,0,0,0,0,0]
5 -> [0,0,0,0,0,1,0,0,0,0]
6 -> [0,0,0,0,0,0,1,0,0,0]
7 -> [0,0,0,0,0,0,0,1,0,0]
8 -> [0,0,0,0,0,0,0,0,1,0]
9 -> [0,0,0,0,0,0,0,0,0,1]
###Code
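# Added illustration (not from the original notebook): the one-hot mapping listed
# above can be generated with plain NumPy, assuming integer digit labels 0-9
example_one_hot = np.eye(10)[np.array([0, 3, 9])]  # rows of the 10x10 identity matrix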
image1 = x[10]
image2 = x[4]
image1 = np.array(image1)
image2 = np.array(image2)
image1 = image1.reshape(28,28)
image2 = image2.reshape(28,28)
plt.subplot(121)
plt.title('{label}'.format(label=y[10]))
plt.imshow(image1, cmap='gray')
plt.subplot(122)
plt.title('{label}'.format(label=y[4]))
plt.imshow(image2, cmap='gray')
plt.tight_layout()
plt.show()
print(x.shape[0])
x = x.reshape([-1, 28, 28, 1])
test_x = test_x.reshape([-1, 28, 28, 1])
print(x.shape)
###Output
(55000, 28, 28, 1)
###Markdown
Architecture
###Code
# Building the convolutional neural network
network = input_data(shape=[None, 28, 28, 1],name='input') #input layer
network = conv_2d(network, nb_filter=4, filter_size=5, activation='relu') #conv layer with 4 5x5 conv kernels and rectifier activiation
network = max_pool_2d(network, 2) #max pool subsampling layer with 2x2 sampling window
network = conv_2d(network, nb_filter=4, filter_size=5, activation='relu') #conv layer with 4 5x5 conv kernels and rectifier activiation
network = max_pool_2d(network, 2) #max pool subsampling layer with 2x2 sampling window
network = fully_connected(network, 128, activation='tanh') #fully connected layer with 128 neurons and tanh activation function
network = fully_connected(network, 10, activation='softmax') #output layer with 10 neurons and softmax activation function
network = regression(network, optimizer='adam', learning_rate=0.01, loss='categorical_crossentropy', name='target') #regression layer with adam optimizer and crossentropy loss function
model = tflearn.DNN(network, tensorboard_verbose=0)
###Output
_____no_output_____
###Markdown
Training
###Code
model.fit({'input': x}, {'target': y}, n_epoch=5,
validation_set=({'input': test_x}, {'target': test_y}), show_metric=True, run_id='convnet_mnist')
###Output
Training Step: 4299 | total loss: [1m[32m0.23708[0m[0m | time: 20.230s
| Adam | epoch: 005 | loss: 0.23708 - acc: 0.9455 -- iter: 54976/55000
Training Step: 4300 | total loss: [1m[32m0.22711[0m[0m | time: 21.798s
| Adam | epoch: 005 | loss: 0.22711 - acc: 0.9478 | val_loss: 0.10935 - val_acc: 0.9657 -- iter: 55000/55000
--
###Markdown
TestingCreate a test image: http://www.onemotion.com/flash/sketch-paint/
###Code
from scipy import misc
image = misc.imread("test2.png", flatten=True)
print(image.shape)
print(image.dtype)
#image = 1 - image
image = image.reshape(28,28)
plt.imshow(image, cmap='gray')
plt.show()
image = image.reshape([-1,28,28,1])
predict = model.predict({'input': image})
print(np.round_(predict, decimals=3))
print("prediction: " + str(np.argmax(predict)))
###Output
_____no_output_____ |
2. Numpy.ipynb | ###Markdown
Numpy Basics
###Code
array_1 = np.array([1,5,656])
array_1
array_1.shape
arr2 = np.array([[1,2,3], [4,5,6]], dtype = 'int8')
arr2
###Output
_____no_output_____
###Markdown
###Code
arr2.ndim
arr2.shape
arr2.dtype
arr2[0, 1] = 100
arr2
arr2[0, 1]
###Output
_____no_output_____
###Markdown
Changing specific rows, columns, and specific elements
###Code
arr2[:, 1:-1:1]
arr2[:,2] = [5]
arr2
###Output
_____no_output_____
###Markdown
3D numpy arrays
###Code
arr3 = np.array([[[1, 2, 3, 10], [4, 5, 6, 11], [7, 8, 9, 12]]])
arr3
arr3.dtype
type(arr3)
arr3.ndim
arr3.shape
arr3
arr3[0, 0]
arr3
###Output
_____no_output_____
###Markdown
Get specific elements (inside out)
###Code
arr3[0,0,3]
arr3[0][1]
###Output
_____no_output_____
###Markdown
Initialising different types of arrays
###Code
arr4 = np.zeros((3, 4), dtype = 'int32')
arr4
arr5 = np.ones((4,5), dtype = 'int32')
arr5
arr6 = np.full((3,2,8), 9)
arr6
arr7 = np.full_like(arr6, 5)
arr7
arr8 = np.random.randint(4, 7, size = (4, 4))
arr8
np.identity(4)
###Output
_____no_output_____
###Markdown
problem statement
###Code
output = np.ones((5, 5))
output
inside = np.zeros((3, 3))
print(inside)
inside[1, 1] = 9
print(inside)
output[1:-1, 1:-1] = inside
print(output)
###Output
[[1. 1. 1. 1. 1.]
[1. 0. 0. 0. 1.]
[1. 0. 9. 0. 1.]
[1. 0. 0. 0. 1.]
[1. 1. 1. 1. 1.]]
###Markdown
be careful while copying arrays
###Code
a = np.array([10,34,45])
b = np.copy(a)
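# Note (added): plain assignment (b = a) would not copy the data; both names would refer
# to the same array, so changing b would also change a. np.copy avoids this.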
b[0] = 9999
print(a)
print(b)
###Output
[10 34 45]
[9999 34 45]
###Markdown
Mathematics
###Code
a = np.array([1,2,3,4,5,6])
a += 2
print(a)
a - 1
a * 2
a * 2
a
np.sin(a)
###Output
_____no_output_____
###Markdown
Linear algebra
###Code
a = np.ones((4,5))
print(a)
b = np.full((5, 4), 3)
print(b)
c = np.matmul(a, b)
print(c)
d = np.linalg.det(c)
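# Note (added): every row of c is identical (all 15s), so c is singular and its determinant is 0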
print(d)
###Output
0.0
###Markdown
Statistics
###Code
a = np.array([[[1, 2, 3, 10], [4, 5, 6, 11], [7, 8, 9, 12]]])
np.max(a)
np.min(a)
np.sum(a)
###Output
_____no_output_____
###Markdown
Rearranging Arrays
###Code
a = np.array([[[1, 2, 3, 10], [4, 5, 6, 11], [7, 8, 9, 12]]])
a.shape
a = a.reshape((4, 3))
a.shape
###Output
_____no_output_____
###Markdown
Stacks -- vertical and horizontal
###Code
s1 = np.array([1, 2, 3, 4])
s2 = np.array([6, 7, 8, 9])
p = np.vstack([s1, s2, s2, s1])
p
p = np.hstack([s1, s2, s2, s1])
p
###Output
_____no_output_____
###Markdown
Krishna Naik
###Code
q = np.random.randint(4, size = (4, 4))
q
a = np.arange(0, stop = 10, step = 1)
a
b = np.linspace(2, 10, 10, True)
c = b.reshape(5, 2)
c
c.shape
###Output
_____no_output_____ |
models/Ethan_Models.ipynb | ###Markdown
Ethan's Modeling
###Code
import numpy as np
import pandas as pd
from sklearn.linear_model import Lasso, Ridge, LinearRegression, ElasticNet
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
from sklearn.impute import KNNImputer
from sklearn.metrics import mean_squared_error, accuracy_score, mean_absolute_error
from sklearn.model_selection import GridSearchCV
from matplotlib import pyplot as plt
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
plt.rcParams['figure.figsize'] = [12, 8]
plt.rcParams['figure.dpi'] = 100
###Output
_____no_output_____
###Markdown
Get the Data. Cleaning the Data: Most of the cleaning was done outside of this file; the cleaned data is in the `men_lead_no_drop.csv` and `women_lead_no_drop.csv` files.
###Code
def impute(df):
countries = df.country.unique()
#dataset averages
global_h = np.mean(df.height)
global_w = np.mean(df.weight)
global_a = np.mean(df.age)
heights = []
weights = []
ages = []
#steps through each country
for co in countries:
group = df[df['country'] == co]
# counting datapoints within country
count_h = np.count_nonzero(~np.isnan(group.height))
count_w = np.count_nonzero(~np.isnan(group.weight))
count_a = np.count_nonzero(~np.isnan(group.age))
# sets thresholds between accepting the countries average or using dataset average to fill in NaN's
if count_h >= 5:
avg_h = np.mean(group.height)
else:
avg_h = global_h
if count_w >= 5:
avg_w = np.mean(group.weight)
else:
avg_w = global_w
if count_a >= 10:
avg_a = np.mean(group.age)
else:
avg_a = global_a
# steps through each person creating lists to replace current columns in dataframe
for i in range(len(group)):
if np.isnan(group.iloc[i].height):
heights.append(avg_h)
else:
heights.append(group.iloc[i].height)
if np.isnan(group.iloc[i].weight):
weights.append(avg_w)
else:
weights.append(group.iloc[i].weight)
if np.isnan(group.iloc[i].age) or group.iloc[i].age==0:
ages.append(avg_a)
else:
ages.append(group.iloc[i].age)
#replacing columns of dataframe
imputed = df.copy()
imputed['height'] = heights
imputed['weight'] = weights
imputed['age'] = ages
return imputed.fillna(0)
df = pd.read_csv('../data/women_lead_no_drop.csv')
df = impute(df)
df = df.drop(['id', 'last_name', 'first_name', 'points', 'rank', 'event_count'], axis=1)
df = pd.get_dummies(df)
df = df.drop(['Unnamed: 0'], axis=1)
test = df[df['year'] >= 2019]
X_test = test.drop(['avg_points', 'year'], axis=1)
y_test = test['avg_points']
train = df[df['year'] <= 2018]
X_train = train.drop(['avg_points', 'year'], axis=1)
y_train = train['avg_points']
# df.loc[:,['height', 'weight', 'age', 'career_len', 'avg_points', 't-1', 't-2', 't-3', 'country_CZE', 'country_ESP', 'country_CAN']].head().to_markdown(index=False)
df
###Output
_____no_output_____
###Markdown
Regression Models: These fit a number of regression models, with the training data taken from before 2019, and then predict the 2019 numbers. The target we're using is the `avg_points` value for that year, and we judge how well each model predicted using the `mean_squared_error` scoring function. Naive Baseline Model
###Code
pred = X_test['t-1']
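# Naive baseline: use last year's average points (the 't-1' lag feature) as this year's prediction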
print("MSE Value:", mean_squared_error(pred, y_test))
print("MAE Value:", mean_absolute_error(pred, y_test))
plt.scatter(range(len(pred)), pred, label='Prediction')
plt.scatter(range(len(pred)), y_test, label='Actual AVG Points')
plt.title('Naive Baseline Model')
plt.legend()
plt.show()
# plt.bar(range(len(pred)), np.abs(pred - y_test))
# plt.title
###Output
MSE Value: 362.0169801875552
MAE Value: 12.358878127522194
###Markdown
Boilerplate code for the following models: testing the models below follows a lot of the same patterns, so let's just implement it in a simple function
###Code
def run_regression_model(model, param_grid, model_name):
grid = GridSearchCV(model, param_grid, n_jobs=-1, scoring='neg_mean_squared_error').fit(X_train, y_train)
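    # scoring uses negated MSE because GridSearchCV always maximizes its score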
pred = grid.predict(X_test)
print("MSE Value:", mean_squared_error(pred, y_test))
print("MAE Value:", mean_absolute_error(pred, y_test))
print("Best Params:", grid.best_params_)
plt.scatter(range(len(pred)), pred, label='Prediction')
plt.scatter(range(len(pred)), y_test, label='Actual AVG Points')
plt.title(f'{model_name} - GridSearchCV')
plt.xlabel('Rank')
plt.ylabel('Average Score')
plt.legend()
plt.show()
plt.bar(range(len(pred)), np.abs(pred - y_test))
plt.title(f'{model_name} - Absolute Errors')
plt.xlabel('Rank')
plt.ylabel('Absolute Error - `np.abs(pred - y_test)`')
    plt.show()
    return grid.best_params_
###Output
_____no_output_____
###Markdown
Linear Regression
###Code
param_grid = {
'fit_intercept': [True, False],
'normalize': [True, False]
}
model = LinearRegression(n_jobs=-1)
run_regression_model(model, param_grid, 'Linear Regression')
###Output
MSE Value: 249.3605326941072
MAE Value: 10.858999816690613
Best Params: {'fit_intercept': True, 'normalize': False}
###Markdown
Lasso
###Code
param_grid = {
'alpha': [1e-4, 1e-3, 1e-2, 1e-1, 1e0, 1e1, 1e2, 1e3, 1e4],
'fit_intercept': [True, False],
'normalize': [True, False]
}
model = Lasso()
run_regression_model(model, param_grid, 'Linear Regression (Lasso)')
###Output
MSE Value: 159.64927661478197
MAE Value: 9.315498326989573
Best Params: {'alpha': 1.0, 'fit_intercept': True, 'normalize': False}
###Markdown
Ridge
###Code
param_grid = {
'alpha': [1e-4, 1e-3, 1e-2, 1e-1, 1e0, 1e1, 1e2, 1e3, 1e4],
'fit_intercept': [True, False],
'normalize': [True, False]
}
model = Ridge(max_iter=100000)
run_regression_model(model, param_grid, 'Linear Regression (Ridge)')
###Output
MSE Value: 155.149902694903
MAE Value: 9.076325199731976
Best Params: {'alpha': 1000.0, 'fit_intercept': False, 'normalize': True}
###Markdown
Elastic Net
###Code
param_grid = {
'alpha': [1e-3, 1e-1, 1e0, 1e2, 1e4],
'l1_ratio': [1e-3, 1e-1, 1e0, 1e2, 1e4],
'fit_intercept': [True, False],
'normalize': [True, False]
}
model = ElasticNet(max_iter=100000)
run_regression_model(model, param_grid, 'Linear Regression (Elastic Net)')
###Output
MSE Value: 159.64927661478197
MAE Value: 9.315498326989573
Best Params: {'alpha': 1.0, 'fit_intercept': True, 'l1_ratio': 1.0, 'normalize': False}
###Markdown
Decision Tree Regressor
###Code
param_grid = {
'max_depth': [1, 2, 5, 10, None],
'max_leaf_nodes': [2, 5, 10, None],
'min_samples_leaf': [1, 2, 5, 10],
'min_samples_split': [2, 5, 10]
}
model = DecisionTreeRegressor()
run_regression_model(model, param_grid, 'Decision Tree Regressor')
###Output
MSE Value: 168.46543912879326
MAE Value: 9.543745800916518
Best Params: {'max_depth': 10, 'max_leaf_nodes': 10, 'min_samples_leaf': 1, 'min_samples_split': 2}
###Markdown
Random Forest Regressor
###Code
param_grid = {
'n_estimators': [10, 50, 100, 200],
'max_depth': [1, 2, 5, 10, None],
'min_samples_leaf': [1, 2, 5, 10],
'max_features': ['auto', 'sqrt']
}
model = RandomForestRegressor()
best_params_ = run_regression_model(model, param_grid, 'Random Forest Regressor')
model.set_params(**best_params_)
model.fit(X_train, y_train)
features = pd.Series(model.feature_importances_, index=X_train.columns)
print(features.sort_values(ascending=False))
###Output
MSE Value: 169.7604041763395
MAE Value: 9.66181708060917
Best Params: {'max_depth': 5, 'max_features': 'auto', 'min_samples_leaf': 10, 'n_estimators': 50}
###Markdown
Gradient Boosted Regressor
###Code
param_grid = {
'n_estimators': [10, 50, 100, 200],
'learning_rate': [0.01, 0.1, 0.5],
'max_depth': [2, 5, 10, None],
'min_samples_leaf': [1, 2, 5, 10],
'max_features': ['auto', 'sqrt']
}
model = GradientBoostingRegressor()
run_regression_model(model, param_grid, 'Gradient Boosting Regressor')
###Output
MSE Value: 170.11196507004445
MAE Value: 9.645725743590715
Best Params: {'learning_rate': 0.1, 'max_depth': 2, 'max_features': 'auto', 'min_samples_leaf': 10, 'n_estimators': 50}
###Markdown
Classification Methods
###Code
df = pd.read_csv('../data/men_lead_no_drop.csv')
df = impute(df)
df = df.drop(['id', 'last_name', 'first_name', 'points', 'rank'], axis=1)
le = LabelEncoder()
le.fit(df.country)
test = df[df['year'] >= 2019]
X_test = test.drop(['country'], axis=1).to_numpy()
y_test = le.transform(test['country'])
train = df[df['year'] <= 2018]
X_train = train.drop(['country'], axis=1).to_numpy()
y_train = le.transform(train['country'])
###Output
_____no_output_____
###Markdown
Decision Tree Classifier
###Code
param_grid = {
'criterion': ['gini', 'entropy'],
'max_depth': [1, 3, 5, 6, None],
'max_leaf_nodes': [2, 3, 4, 5, 6, 10],
'min_samples_leaf': [1, 2, 3, 5, 10]
}
model = DecisionTreeClassifier()
grid = GridSearchCV(model, param_grid, n_jobs=-1, scoring='accuracy').fit(X_train, y_train)
pred = grid.predict(X_test)
print("Accuracy Score:", accuracy_score(pred, y_test))
print("Best Params:", grid.best_params_)
# plt.scatter(range(len(pred)), pred, label='Prediction')
# plt.scatter(range(len(pred)), y_test, label='Actual AVG Points')
# plt.title('Decision Tree Classifier - GridSearchCV')
# plt.legend()
# plt.show()
###Output
/home/ebrouwerdev/.virtualenvs/ACME/lib/python3.8/site-packages/sklearn/model_selection/_split.py:670: UserWarning: The least populated class in y has only 1 members, which is less than n_splits=5.
warnings.warn(("The least populated class in y has only %d"
|
Day 4.ipynb | ###Markdown
Day 4 Part 1
###Code
import pandas as pd
from copy import deepcopy
# Function that checks if a card is winning
def check_card_is_winning(card, nums):
bool_card = card.isin(nums)
result_df = pd.DataFrame({"columns": bool_card.all(axis=0),
"rows": bool_card.all(axis=1)})
return result_df.any().any()
# Function that calculate the score of the winning card
def card_calc_score(card, nums):
bool_card = card.isin(nums)
bool_card = bool_card.astype(int)
bool_card = bool_card.applymap(lambda x: 1 if x == 0 else 0)
return (bool_card * card).sum().sum()
# Load the inputs
input_list = []
with open("inputs/day4.txt") as input_file:
input_list = [x for x in input_file.read().splitlines()]
# Separate the draw order (first line) and the bingo cards
numbers_list = list(map(int, input_list[0].split(",")))
bingo_lines = [list(map(int, x.split())) for x in input_list[1:]]
# Format the bingo cards as a list of list
bingo_cards = []
for line in bingo_lines:
if len(line) == 0:
bingo_cards.append(line)
else:
bingo_cards[len(bingo_cards)-1].append(line)
# Format the bingo cards as a list of Pandas DataFrames
for i in range(len(bingo_cards)):
bingo_cards[i] = pd.DataFrame(bingo_cards[i])
# Find the winning card and the winning number
number_drawned = []
winning_card = 0
winning_number = 0
for num in numbers_list:
number_drawned.append(num)
for card in bingo_cards:
if check_card_is_winning(card, number_drawned):
winning_card = card
winning_number = num
break
if winning_number != 0:
break
print("Winning bingo card :\n", winning_card)
print("Winning number :", winning_number)
print("Winning score :", card_calc_score(winning_card, number_drawned) * winning_number)
###Output
Winning score : 44088
###Markdown
Part 2
###Code
# Find the losing card
bingo_cards_copy = deepcopy(bingo_cards)
number_to_draw = 0
number_drawned = []
while len(bingo_cards_copy) > 1:
number_drawned.append(numbers_list[number_to_draw])
new_bingo_card_list = []
for card in bingo_cards_copy:
if not check_card_is_winning(card, number_drawned):
new_bingo_card_list.append(card)
bingo_cards_copy = new_bingo_card_list
number_to_draw += 1
losing_card = bingo_cards_copy[0]
print("Losing card :\n", losing_card)
# Find the winning number for the losing card
losing_number = 0
number_drawned = []
for num in numbers_list:
number_drawned.append(num)
if check_card_is_winning(losing_card, number_drawned):
losing_number = num
break
print("Winning number for the losing card :", losing_number)
# Find the score for the losing card
print("Winning score :", card_calc_score(losing_card, number_drawned) * losing_number)
###Output
Winning score : 23670
###Markdown
Armstrong Number
###Code
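# An Armstrong (narcissistic) number equals the sum of its digits, each raised to the
# power of the number of digits, e.g. 153 = 1**3 + 5**3 + 3**3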
i = 1042000
while i<=702648265:
r=0
s=str(i)
l=len(s)
for j in range(l):
r+=int(s[j])**l
if r==i:
print("The first armstrong number is",r)
break
i+=1
###Output
The first armstrong number is 1741725
###Markdown
Day 4 String Concatenation To concatenate, or combine, two strings you can use the + operator.
###Code
a = "Hello"
b = "World"
c = a + " " + b
print(c)
###Output
Hello World
###Markdown
String FormatAs we learned in the Python Variables chapter, we cannot combine strings and numbers like this:
###Code
age = 36
txt = "My name is John, I am " + age
print(txt)
###Output
_____no_output_____
###Markdown
But we can combine strings and numbers by using the `format()` method!The `format()` method takes the passed arguments, formats them, and places them in the string where the placeholders` {}` are:
###Code
age = 36
txt = "My name is John, and I am {}"
print(txt.format(age))
###Output
My name is John, and I am 36
###Markdown
The format() method takes unlimited number of arguments, and are placed into the respective placeholders:
###Code
quantity = 3
itemno = 567
price = 49.95
myorder = "I want {} pieces of item {} for {} dollars."
print(myorder.format(quantity, itemno, price))
quantity = 3
itemno = 567
price = 49.95
myorder = "I want to pay {2} dollars for {0} pieces of item {1}."
print(myorder.format(quantity, itemno, price))
###Output
I want to pay 49.95 dollars for 3 pieces of item 567.
###Markdown
Escape Characters To insert characters that are illegal in a string, use an escape character.An escape character is a backslash `\` followed by the character you want to insert.An example of an illegal character is a double quote inside a string that is surrounded by double quotes:
###Code
# You will get an error if you use double quotes inside a string that is surrounded by double quotes:
txt = "We are the so-called "Vikings" from the north."
txt = "We are the so-called \"Vikings\" from the north."
txt
###Output
_____no_output_____
###Markdown
Deleting/Updating from a String In Python, updating or deleting individual characters of a String is not allowed: item assignment and item deletion raise an error because Strings are `immutable`, so their elements cannot be changed once assigned. Deletion of an entire String is still possible with the built-in `del` keyword, and only whole new strings can be reassigned to the same name. Updating a character:
###Code
# Python Program to Update
# character of a String
String1 = "Hello, I'm a Geek"
print("Initial String: ")
print(String1)
# Updating a character
# of the String
String1[2] = 'p'
print("\nUpdating character at 2nd Index: ")
print(String1)
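# Note (added): since strings are immutable, the assignment above raises a TypeError;
# the usual workaround is to build a new string, e.g. String1 = String1[:2] + 'p' + String1[3:]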
###Output
Initial String:
Hello, I'm a Geek
###Markdown
Updating Entire String:
###Code
# Python Program to Update
# entire String
String1 = "Hello, I'm a Geek"
print("Initial String: ")
print(String1)
# Updating a String
String1 = "Welcome to the Geek World"
print("\nUpdated String: ")
print(String1)
###Output
Initial String:
Hello, I'm a Geek
Updated String:
Welcome to the Geek World
###Markdown
Deletion of a character:
###Code
# Python Program to Delete
# characters from a String
String1 = "Hello, I'm a Geek"
print("Initial String: ")
print(String1)
# Deleting a character
# of the String
del String1[2]
print("\nDeleting character at 2nd Index: ")
print(String1)
###Output
Initial String:
Hello, I'm a Geek
###Markdown
Deleting Entire String: Deletion of the entire string is possible with the `del` keyword. If we then try to print the string, this produces an error because the String has been deleted and is no longer available.
###Code
# Python Program to Delete
# entire String
String1 = "Hello, I'm a Geek"
print("Initial String: ")
print(String1)
# Deleting a String
# with the use of del
del String1
print("\nDeleting entire String: ")
print(String1)
###Output
Initial String:
Hello, I'm a Geek
Deleting entire String:
###Markdown
q)Importing the numpy package
###Code
import numpy as np
myarr = np.array([1,2,3,4,5])
print(myarr)
###Output
_____no_output_____
###Markdown
Note : Arrays can store elemements of the same type Advantages of array in numpy Uses less space and its faster in computation q)Lets check the size of an integer type in Python
###Code
import sys
b = 10
sys.getsizeof(b)
###Output
_____no_output_____
###Markdown
q) Lets check the size of an empty list in Python
###Code
myl = []
sys.getsizeof(myl)
###Output
_____no_output_____
###Markdown
q) Let's add an item to the list and check the size
###Code
myl = [1]
sys.getsizeof(myl)
###Output
_____no_output_____
###Markdown
Note : every item added to the list increases its size by 8 bytes, because the list stores an 8-byte reference to each element (the integer objects themselves take additional memory) q) Let's create a list of 100 elements
###Code
myl=list(range(1,100))
print(myl)
type(myl)
###Output
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99]
###Markdown
q)Lets now check the size of the entire list
###Code
size_l = sys.getsizeof(myl[1])*len(myl)
print(size_l)
###Output
2772
###Markdown
Note : getsizeof(myl[1]) gives the size of one element of the list in bytes q) Now let's create an array of 100 elements and check the size of the array
###Code
import numpy as np
myar = np.arange(100)
print(myar)
###Output
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
96 97 98 99]
###Markdown
q) How to find out the number of elements in an array
###Code
myar.size
###Output
_____no_output_____
###Markdown
q) How to find the size (memory) occupied by every item in an array
###Code
print(myar.itemsize)
###Output
4
###Markdown
Now we need to multiply the length of the array by the memory occupied by each array element to get the total memory occupied by all the array elements.
###Code
size_a = (myar.size)*(myar.itemsize)
print(size_a)
###Output
400
###Markdown
Now compare how much memory an array and a list occupy (scroll up and compare with the list result) q) How to verify that numpy arrays are faster than Python lists
###Code
import sys
import time
import numpy as np
def list_add(l1,l2):
i=0
for a in l2:
l1[i]=l1[i]+a
i=i+1
count = 10000000
l1=list(range(count))
l2=list(range(count))
start=time.time()
list_add(l1,l2)
end=time.time()
elapsed = 1000*(end-start)
print("List addition time is ",elapsed)
a1 = np.arange(count)
a2 = np.arange(count)
start=time.time()
a3=a1+a2
end=time.time()
elapsed = 1000*(end-start)
print("Array addition time is ",elapsed)
###Output
List addition time is 2515.1050090789795
Array addition time is 36.522626876831055
###Markdown
WAP - In class exe : Modify the above program to perform a slightly more complex operation on a larger data set to observe the time difference. q) Demonstration of arithmetic operations on arrays: adding 2 arrays, subtracting 2 arrays, multiplication, division q) Creating a 2 dimensional array
###Code
import numpy as np
a = np.array([[1,2],[3,4]])
print(a)
a.ndim #The ndim will print the dimensionality of the array i.e 1 dimentional or 2 dimensional
###Output
[[1 2]
[3 4]]
###Markdown
q)How to count the number of elements in any array
###Code
print(a.size)
print("shape =",a.shape) # would print how many rows and cols are there in the array
###Output
4
shape = (2, 2)
###Markdown
q) Lets see another example of using the shape function
###Code
a=np.array([[1,2],[3,4],[5,6]])
a.shape
print(a)
###Output
[[1 2]
[3 4]
[5 6]]
###Markdown
q)Reshaping an array
###Code
b = a.reshape((2,3))
b.shape
print(b)
###Output
[[1 2 3]
[4 5 6]]
###Markdown
q)Creating a zero value array
###Code
a = np.zeros( (2,3) ) #will result in creating a 2 row and 3 col array, filled with zeroes
print(a)
b = np.ones((2,3)) #This will create an array with value 1's
print(b)
###Output
[[0. 0. 0.]
[0. 0. 0.]]
[[1. 1. 1.]
[1. 1. 1.]]
###Markdown
q)Using the arange function to create arrays
###Code
a = np.arange(5)
print(a)
###Output
[0 1 2 3 4]
###Markdown
q) Understanding the ravel function; it's used to convert an N dimensional array into a 1 dimensional array
###Code
a = b.ravel()
print(a)
type(a)
###Output
[1. 1. 1. 1. 1. 1.]
###Markdown
q)Finding out the min and max element from an array
###Code
b.min()
b.max()
b.sum()
# Note: try the built in function b.sum()
###Output
_____no_output_____
###Markdown
WAP In-class exe : Develop a custom user-defined function to find the min and max element of a 2 by 3 array of elements as shown below (rows 1 2, 3 4, 5 6). Hint : you may use the ravel function to flatten it out and then find the min and max elements of the array. Note : you are not supposed to use the built-in min or max functions
###Code
a = np.arange(6)
print(a)
def min_max(a):
results_list = sorted(a)
return results_list[0], results_list[-1]
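# Added sketch (not in the original): the same result without sorted(), tracking a
# running min and max in a single pass over the flattened array
def min_max_loop(arr):
    mn = mx = arr.ravel()[0]
    for v in arr.ravel():
        if v < mn:
            mn = v
        if v > mx:
            mx = v
    return mn, mx  # min_max_loop(a) also gives (0, 5)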
r = min_max(a)
print(r)
###Output
[0 1 2 3 4 5]
(0, 5)
###Markdown
q) Operations on arrays (addition, subtraction and multiplication are possible)
###Code
a=np.ones((2,2))
b=np.ones((2,2))
print(a)
print(b)
c = a+b
d = a-b
e = a*b
print("addition result follows\n\n:",c)
print("subtraction result follows\n \n :",d)
print("multiplication result follows\n \n :",e)
###Output
_____no_output_____
###Markdown
Slicing python lists
###Code
l = [1,2,3,4,5]
l[2:4]
###Output
_____no_output_____
###Markdown
A similar kind of slicing is possible on NumPy arrays
###Code
d2 = np.array([[1,2,3],[4,5,6],[7,8,9]])
print(d2)
print(d2[0,0]) #This would print the element from 0th row and 0th column i.e 1
print(d2[1,1]) #This would print the element from 0th row and 0th column i.e 5
###Output
_____no_output_____
###Markdown
The 3 by 3 array d2 contains the elements [[1, 2, 3], [4, 5, 6], [7, 8, 9]] q) How to interpret the below statement with respect to the above 3 by 3 array
###Code
print(d2[0:2,2]) #would print [3 6]
# [0:2,2] would translate to [(0,1),2] i.e to print the row 0 and row 1 elements of column 2 which is 3,6
print(d2[:,0:2]) # Print all the row elements from col 0 and col 1
###Output
_____no_output_____
###Markdown
q) Iterating the array using for loops. The array d2 contains the elements [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
###Code
for row in d2:
print(row)
###Output
_____no_output_____
###Markdown
q)Conditional statements on arrays
###Code
ca = np.arange(9).reshape(3,3)
print("The original array is \n" ,ca)
con = ca > 5
print(" \nThe boolean array after the conditional statement is applied \n")
print(con)
ca[con] # The boolean TRUE/FALSE array is now the index and print only the numbers satisfying the the > 5 condition
###Output
_____no_output_____
###Markdown
In class - exe WAP : Create an array with 100 elements (M, N is 5, 10) and filter only those elements between 75 and 100, then create a new array with 5 rows and 5 cols q) Replacing all the elements of an array which are greater than 4 with zero
###Code
ca = np.arange(9).reshape(3,3)
print("The original array is : \n", ca)
con = ca > 4
ca[con]=0
print("\n",ca)
###Output
_____no_output_____
###Markdown
q)Taking a 1 dimensional array and reshaping it to a 2 dimensional array
###Code
newa = np.arange(10).reshape(2,5)
newb = np.arange(10,20).reshape(2,5)
print(newa)
print(newb)
###Output
_____no_output_____
###Markdown
understanding stacking operations on an array
###Code
newc=np.vstack((newa,newb)) # vstack is for vertical stacking and similarly hstack can be used too
print(newc)
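newd=np.hstack((newa,newb)) # added example: hstack joins the same arrays along columns, giving shape (2, 10)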
type(newc)
###Output
_____no_output_____
###Markdown
q) We can split a large array into smaller arrays
###Code
biga = np.arange(60).reshape(6,10)
print(biga)
biga.shape
sma = np.hsplit(biga,2) # sma is a list consisting of 2 smaller arrays (the 10 columns split into two halves)
print(sma[0])
print(sma[1])
###Output
_____no_output_____
###Markdown
Day 4 of 15PetroNumpyDays Using inbuilt methods for generating arrays Arange: Return an array with evenly spaced elements in the given interval. - np.arange(start,stop,step)
###Code
#Make an array of pressures ranging from 0 to 5000 psi with a step size of 500 psi
pressures = np.arange(0,5500,500)
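# Note (added): arange excludes the stop value, so 5500 is used to include the 5000 psi point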
pressures
pressures.ndim
###Output
_____no_output_____
###Markdown
Linspace: Creates a linearly spaced array- Evenly spaced numbers over a specified interval- Creates n datapoints between two points
###Code
# Create saturation array from 0 to 1 having 100 points
saturations = np.linspace(0,1,100)
saturations
saturations.shape
###Output
_____no_output_____
###Markdown
List Widget
###Code
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_Form(object):
def setupUi(self, Form):
Form.setObjectName("Form")
Form.resize(539, 401)
font = QtGui.QFont()
font.setPointSize(14)
font.setBold(True)
font.setWeight(75)
Form.setFont(font)
self.verticalLayout_2 = QtWidgets.QVBoxLayout(Form)
self.verticalLayout_2.setObjectName("verticalLayout_2")
self.verticalLayout = QtWidgets.QVBoxLayout()
self.verticalLayout.setObjectName("verticalLayout")
self.listWidget = QtWidgets.QListWidget(Form)
self.listWidget.setStyleSheet("QListWidget{\n"
"font: 75 14pt \"MS Shell Dlg 2\";\n"
"background-color:rgb(255, 0, 0)\n"
"}")
self.listWidget.setObjectName("listWidget")
item = QtWidgets.QListWidgetItem()
self.listWidget.addItem(item)
item = QtWidgets.QListWidgetItem()
self.listWidget.addItem(item)
item = QtWidgets.QListWidgetItem()
self.listWidget.addItem(item)
item = QtWidgets.QListWidgetItem()
self.listWidget.addItem(item)
item = QtWidgets.QListWidgetItem()
self.listWidget.addItem(item)
item = QtWidgets.QListWidgetItem()
self.listWidget.addItem(item)
self.listWidget.clicked.connect(self.item_clicked)
self.verticalLayout.addWidget(self.listWidget)
self.label = QtWidgets.QLabel(Form)
font = QtGui.QFont()
font.setPointSize(14)
font.setBold(True)
font.setWeight(75)
self.label.setFont(font)
self.label.setStyleSheet("QLabel{\n"
"color:red;\n"
"}")
self.label.setText("")
self.label.setObjectName("label")
self.verticalLayout.addWidget(self.label)
self.verticalLayout_2.addLayout(self.verticalLayout)
self.retranslateUi(Form)
QtCore.QMetaObject.connectSlotsByName(Form)
def item_clicked(self):
item = self.listWidget.currentItem()
self.label.setText("You have selected : " + str(item.text()))
def retranslateUi(self, Form):
_translate = QtCore.QCoreApplication.translate
Form.setWindowTitle(_translate("Form", "Form"))
__sortingEnabled = self.listWidget.isSortingEnabled()
self.listWidget.setSortingEnabled(False)
item = self.listWidget.item(0)
item.setText(_translate("Form", "Python"))
item = self.listWidget.item(1)
item.setText(_translate("Form", "Java"))
item = self.listWidget.item(2)
item.setText(_translate("Form", "C++"))
item = self.listWidget.item(3)
item.setText(_translate("Form", "C#"))
item = self.listWidget.item(4)
item.setText(_translate("Form", "JavaScript"))
item = self.listWidget.item(5)
item.setText(_translate("Form", "Kotlin"))
self.listWidget.setSortingEnabled(__sortingEnabled)
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
Form = QtWidgets.QWidget()
ui = Ui_Form()
ui.setupUi(Form)
Form.show()
sys.exit(app.exec_())
from PyQt5.QtWidgets import QApplication, QWidget,QLabel,QVBoxLayout, QListWidget
import sys
from PyQt5.QtGui import QIcon, QFont
class Window(QWidget):
def __init__(self):
super().__init__()
#window requirements like title,icon
self.setGeometry(200,200,400,300)
self.setWindowTitle("PyQt5 QListWidget")
self.setWindowIcon(QIcon("python.png"))
self.create_list()
def create_list(self):
#create vbox layout object
vbox = QVBoxLayout()
#create object of list_widget
self.list_widget = QListWidget()
#add items to the listwidget
self.list_widget.insertItem(0, "Python")
self.list_widget.insertItem(1, "Java")
self.list_widget.insertItem(2, "C++")
self.list_widget.insertItem(3, "C#")
self.list_widget.insertItem(4, "Kotlin")
self.list_widget.setStyleSheet('background-color:red')
self.list_widget.setFont(QFont("Sanserif", 15))
self.list_widget.clicked.connect(self.item_clicked)
#create label
self.label = QLabel("")
self.setFont(QFont("Sanserif", 13))
self.setStyleSheet('color:green')
#add widgets to the vboxlyaout
vbox.addWidget(self.list_widget)
vbox.addWidget(self.label)
#set the layout for the main window
self.setLayout(vbox)
def item_clicked(self):
item = self.list_widget.currentItem()
self.label.setText("You have selected: " + str(item.text()))
App = QApplication(sys.argv)
window = Window()
window.show()
sys.exit(App.exec())
###Output
_____no_output_____
###Markdown
QDial
###Code
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_Form(object):
def setupUi(self, Form):
Form.setObjectName("Form")
Form.resize(532, 398)
self.verticalLayout_2 = QtWidgets.QVBoxLayout(Form)
self.verticalLayout_2.setObjectName("verticalLayout_2")
self.verticalLayout = QtWidgets.QVBoxLayout()
self.verticalLayout.setObjectName("verticalLayout")
self.dial = QtWidgets.QDial(Form)
self.dial.valueChanged.connect(self.dial_changed)
self.dial.setMaximum(360)
self.dial.setStyleSheet("QDial{\n"
"background-color:rgb(255, 0, 0);\n"
"}")
self.dial.setObjectName("dial")
self.verticalLayout.addWidget(self.dial)
self.label = QtWidgets.QLabel(Form)
font = QtGui.QFont()
font.setPointSize(14)
font.setBold(True)
font.setWeight(75)
self.label.setFont(font)
self.label.setText("")
self.label.setObjectName("label")
self.verticalLayout.addWidget(self.label)
self.verticalLayout_2.addLayout(self.verticalLayout)
self.retranslateUi(Form)
QtCore.QMetaObject.connectSlotsByName(Form)
def dial_changed(self):
getValue = self.dial.value()
self.label.setText("Dial is changing : "+ str(getValue))
def retranslateUi(self, Form):
_translate = QtCore.QCoreApplication.translate
Form.setWindowTitle(_translate("Form", "Form"))
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
Form = QtWidgets.QWidget()
ui = Ui_Form()
ui.setupUi(Form)
Form.show()
sys.exit(app.exec_())
from PyQt5.QtWidgets import QApplication, QWidget,QVBoxLayout, QDial, QLabel
import sys
from PyQt5.QtGui import QIcon, QFont
class Window(QWidget):
def __init__(self):
super().__init__()
self.setGeometry(200,200,400,300)
self.setWindowTitle("PyQt5 QDial Application")
self.setWindowIcon(QIcon("python.png"))
self.create_dial()
def create_dial(self):
vbox = QVBoxLayout()
self.dial = QDial()
self.dial.setMinimum(0)
self.dial.setMaximum(360)
self.dial.setValue(30)
self.dial.setStyleSheet('background-color:green')
self.dial.valueChanged.connect(self.dial_changed)
self.label = QLabel("")
self.label.setFont(QFont("Sanserif", 15))
self.label.setStyleSheet('color:red')
vbox.addWidget(self.dial)
vbox.addWidget(self.label)
self.setLayout(vbox)
def dial_changed(self):
getValue = self.dial.value()
self.label.setText("Dial is changing : "+ str(getValue))
App = QApplication(sys.argv)
window = Window()
window.show()
sys.exit(App.exec())
###Output
_____no_output_____
###Markdown
ComboBox
###Code
from PyQt5.QtWidgets import QApplication,\
QWidget, QComboBox, QLabel
from PyQt5 import uic
class UI(QWidget):
def __init__(self):
super().__init__()
# loading the ui file with uic module
uic.loadUi('combobox.ui', self)
#find widgets in the ui file
self.combo = self.findChild(QComboBox, "comboBox")
self.combo.currentTextChanged.connect(self.combo_selected)
self.label = self.findChild(QLabel, "label")
def combo_selected(self):
item = self.combo.currentText()
self.label.setText("You selected : " + item)
app = QApplication([])
window = UI()
window.show()
app.exec_()
from PyQt5.QtWidgets import QApplication, QWidget, \
QVBoxLayout, QComboBox, QLabel
import sys
from PyQt5.QtGui import QIcon, QFont
class Window(QWidget):
def __init__(self):
super().__init__()
        #window requirements like geometry, icon and title
self.setGeometry(200,200,400,200)
self.setWindowTitle("PyQt5 QComboBox")
self.setWindowIcon(QIcon("python.png"))
#our vboxlayout
vbox = QVBoxLayout()
#create the object of combobox
self.combo = QComboBox()
#add items to the combobox
self.combo.addItem("Python")
self.combo.addItem("Java")
self.combo.addItem("C++")
self.combo.addItem("C#")
self.combo.addItem("JavaScript")
#connected combobox signal
self.combo.currentTextChanged.connect(self.combo_selected)
#create label
self.label = QLabel("")
self.label.setFont(QFont("Sanserif", 15))
self.label.setStyleSheet('color:red')
#added widgets in the vbox layout
vbox.addWidget(self.combo)
vbox.addWidget(self.label)
self.setLayout(vbox)
def combo_selected(self):
item = self.combo.currentText()
self.label.setText("You selected : " + item)
App = QApplication(sys.argv)
window = Window()
window.show()
sys.exit(App.exec())
###Output
_____no_output_____
###Markdown
Slider
###Code
from PyQt5.QtWidgets import QApplication, QWidget,QSlider, QLabel
from PyQt5 import uic
class UI(QWidget):
def __init__(self):
super().__init__()
# loading the ui file with uic module
uic.loadUi('slider.ui', self)
#finding the widgets
self.slider = self.findChild(QSlider, "horizontalSlider")
self.slider.valueChanged.connect(self.changed_slider)
self.label = self.findChild(QLabel, "label")
def changed_slider(self):
value = self.slider.value()
self.label.setText(str(value))
app = QApplication([])
window = UI()
window.show()
app.exec_()
from PyQt5.QtWidgets import QApplication, QWidget, QSlider, QLabel, QHBoxLayout
import sys
from PyQt5.QtGui import QIcon, QFont
from PyQt5.QtCore import Qt
class Window(QWidget):
def __init__(self):
super().__init__()
        #window requirements like geometry, icon and title
self.setGeometry(200,200,400,200)
self.setWindowTitle("PyQt5 Slider")
self.setWindowIcon(QIcon("python.png"))
hbox = QHBoxLayout()
self.slider = QSlider()
self.slider.setOrientation(Qt.Horizontal)
self.slider.setTickPosition(QSlider.TicksBelow)
self.slider.setTickInterval(10)
self.slider.setMinimum(0)
self.slider.setMaximum(100)
self.slider.valueChanged.connect(self.changed_slider)
self.label =QLabel("")
self.label.setFont(QFont("Sanserif", 15))
hbox.addWidget(self.slider)
hbox.addWidget(self.label)
self.setLayout(hbox)
def changed_slider(self):
value = self.slider.value()
self.label.setText(str(value))
App = QApplication(sys.argv)
window = Window()
window.show()
sys.exit(App.exec())
###Output
_____no_output_____
###Markdown
Menu
###Code
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(800, 600)
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
MainWindow.setCentralWidget(self.centralwidget)
self.menubar = QtWidgets.QMenuBar(MainWindow)
self.menubar.setGeometry(QtCore.QRect(0, 0, 800, 26))
self.menubar.setObjectName("menubar")
self.menuFile = QtWidgets.QMenu(self.menubar)
self.menuFile.setObjectName("menuFile")
MainWindow.setMenuBar(self.menubar)
self.statusbar = QtWidgets.QStatusBar(MainWindow)
self.statusbar.setObjectName("statusbar")
MainWindow.setStatusBar(self.statusbar)
self.actionName = QtWidgets.QAction(MainWindow)
icon = QtGui.QIcon()
icon.addPixmap(QtGui.QPixmap(":/image/new.png"), QtGui.QIcon.Normal, QtGui.QIcon.Off)
self.actionName.setIcon(icon)
self.actionName.setObjectName("actionName")
self.actionSave = QtWidgets.QAction(MainWindow)
icon1 = QtGui.QIcon()
icon1.addPixmap(QtGui.QPixmap(":/image/save.png"), QtGui.QIcon.Normal, QtGui.QIcon.Off)
self.actionSave.setIcon(icon1)
self.actionSave.setObjectName("actionSave")
self.actionCopy = QtWidgets.QAction(MainWindow)
icon2 = QtGui.QIcon()
icon2.addPixmap(QtGui.QPixmap(":/image/copy.png"), QtGui.QIcon.Normal, QtGui.QIcon.Off)
self.actionCopy.setIcon(icon2)
self.actionCopy.setObjectName("actionCopy")
self.actionPaste = QtWidgets.QAction(MainWindow)
icon3 = QtGui.QIcon()
icon3.addPixmap(QtGui.QPixmap(":/image/paste.png"), QtGui.QIcon.Normal, QtGui.QIcon.Off)
self.actionPaste.setIcon(icon3)
self.actionPaste.setObjectName("actionPaste")
self.actionExit = QtWidgets.QAction(MainWindow)
icon4 = QtGui.QIcon()
icon4.addPixmap(QtGui.QPixmap(":/image/exit.png"), QtGui.QIcon.Normal, QtGui.QIcon.Off)
self.actionExit.setIcon(icon4)
self.actionExit.setObjectName("actionExit")
self.menuFile.addAction(self.actionName)
self.menuFile.addAction(self.actionSave)
self.menuFile.addAction(self.actionCopy)
self.menuFile.addAction(self.actionPaste)
self.menuFile.addAction(self.actionExit)
self.menubar.addAction(self.menuFile.menuAction())
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
self.menuFile.setTitle(_translate("MainWindow", "File"))
self.actionName.setText(_translate("MainWindow", "New"))
self.actionSave.setText(_translate("MainWindow", "Save"))
self.actionCopy.setText(_translate("MainWindow", "Copy"))
self.actionPaste.setText(_translate("MainWindow", "Paste"))
self.actionExit.setText(_translate("MainWindow", "Exit"))
import images_rc
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
MainWindow = QtWidgets.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())
from PyQt5.QtWidgets import QApplication, QMainWindow,\
QAction
import sys
from PyQt5.QtGui import QIcon
class Window(QMainWindow):
def __init__(self):
super().__init__()
        #window requirements like geometry, icon and title
self.setGeometry(200,200,400,200)
self.setWindowTitle("PyQt5 Menu")
self.setWindowIcon(QIcon("python.png"))
self.create_menu()
def create_menu(self):
main_menu = self.menuBar()
fileMenu = main_menu.addMenu("File")
newAction = QAction(QIcon('image/new.png'), "New", self)
newAction.setShortcut("Ctrl+N")
fileMenu.addAction(newAction)
saveAction = QAction(QIcon('image/save.png'), "Save", self)
saveAction.setShortcut("Ctrl+S")
fileMenu.addAction(saveAction)
fileMenu.addSeparator()
copyAction = QAction(QIcon('image/copy.png'), "Copy", self)
copyAction.setShortcut("Ctrl+C")
fileMenu.addAction(copyAction)
pasteAction = QAction(QIcon('image/paste.png'), "Paste", self)
        pasteAction.setShortcut("Ctrl+V")
fileMenu.addAction(pasteAction)
exitAction = QAction(QIcon('image/exit.png'), "Exit", self)
exitAction.triggered.connect(self.close_window)
fileMenu.addAction(exitAction)
def close_window(self):
self.close()
App = QApplication(sys.argv)
window = Window()
window.show()
sys.exit(App.exec())
###Output
_____no_output_____
###Markdown
QPainter Rectangle
###Code
from PyQt5.QtWidgets import QApplication, QWidget
import sys
from PyQt5.QtGui import QIcon
from PyQt5.QtGui import QPainter, QPen, QBrush
from PyQt5.QtCore import Qt
class Window(QWidget):
def __init__(self):
super().__init__()
self.setGeometry(200,200,400,300)
self.setWindowTitle("PyQt5 Drawing")
self.setWindowIcon(QIcon("python.png"))
def paintEvent(self, e):
painter = QPainter(self)
painter.setPen(QPen(Qt.black, 5, Qt.SolidLine))
painter.setBrush(QBrush(Qt.red, Qt.SolidPattern))
# painter.setBrush(QBrush(Qt.green, Qt.DiagCrossPattern))
painter.drawRect(100, 15, 300, 100)
App = QApplication(sys.argv)
window = Window()
window.show()
sys.exit(App.exec())
###Output
_____no_output_____
###Markdown
Ellipse
###Code
from PyQt5.QtWidgets import QApplication, QWidget
import sys
from PyQt5.QtGui import QIcon
from PyQt5.QtGui import QPainter, QPen, QBrush
from PyQt5.QtCore import Qt
class Window(QWidget):
def __init__(self):
super().__init__()
self.setGeometry(200, 200, 700, 400)
self.setWindowTitle("PyQt5 Ellipse ")
self.setWindowIcon(QIcon("python.png"))
def paintEvent(self, e):
painter = QPainter(self)
painter.setPen(QPen(Qt.black, 5, Qt.SolidLine))
painter.setBrush(QBrush(Qt.red, Qt.SolidPattern))
painter.setBrush(QBrush(Qt.green, Qt.DiagCrossPattern))
painter.drawEllipse(100, 100, 400, 200)
App = QApplication(sys.argv)
window = Window()
window.show()
sys.exit(App.exec())
###Output
_____no_output_____
###Markdown
Polygon
###Code
from PyQt5.QtWidgets import QApplication, QWidget
import sys
from PyQt5.QtGui import QIcon, QPolygon
from PyQt5.QtGui import QPainter, QPen, QBrush
from PyQt5.QtCore import Qt, QPoint
class Window(QWidget):
def __init__(self):
super().__init__()
self.setGeometry(200, 200, 700, 400)
self.setWindowTitle("PyQt5 Drawing")
self.setWindowIcon(QIcon("python.png"))
def paintEvent(self, e):
painter = QPainter(self)
painter.setPen(QPen(Qt.black, 5, Qt.SolidLine))
painter.setBrush(QBrush(Qt.red, Qt.SolidPattern))
painter.setBrush(QBrush(Qt.green, Qt.VerPattern))
points = QPolygon([
QPoint(10, 10),
QPoint(10, 100),
QPoint(100, 10),
QPoint(100, 100)
])
painter.drawPolygon(points)
App = QApplication(sys.argv)
window = Window()
window.show()
sys.exit(App.exec())
###Output
_____no_output_____ |
5_student_performance_prediction.ipynb | ###Markdown
Higher Education Students Performance Evaluation
###Code
# Dataset - https://www.kaggle.com/csafrit2/higher-education-students-performance-evaluation/code
from google.colab import drive
drive.mount('/content/drive')
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.style
df=pd.read_csv('/content/drive/MyDrive/28 feb/student_prediction.csv')
df.head(3)
df.tail(3)
df.shape
df.isnull().sum()
df.drop('STUDENTID',axis=1,inplace=True)
df.shape
figure = plt.figure(figsize=(28, 26))
sns.heatmap(df.corr(), annot=True,cmap=plt.cm.cool)
cols = ['AGE','GENDER', 'HS_TYPE', 'SCHOLARSHIP', 'ACTIVITY', 'PARTNER','SALARY'] # columns to check for outliers
for i in cols:
sns.boxplot(df[i])
plt.show();
#df.drop(['MOTHER_JOB','FATHER_JOB'],axis=1,inplace=True)
df.shape
#df.drop(['#_SIBLINGS','PARTNER'],axis=1,inplace=True)
df.shape
df.info()
#df.drop(['AGE','GENDER','HS_TYPE','SCHOLARSHIP'],axis=1,inplace=True)
df.info()
#df.drop(['WORK','ACTIVITY','SALARY','TRANSPORT','LIVING','MOTHER_EDU','FATHER_EDU','KIDS','READ_FREQ'],axis=1,inplace=True)
df.info()
#df.drop(['CLASSROOM','CUML_GPA','COURSE ID'],axis=1,inplace=True)
df.shape
#df.drop(['READ_FREQ_SCI','ATTEND_DEPT','IMPACT','ATTEND','PREP_STUDY','NOTES','LISTENS','LIKES_DISCUSS','EXP_GPA'],axis=1,inplace=True)
df.shape
df['GRADE'].value_counts()
# Copy all the predictor variables into X dataframe
X = df.drop('GRADE', axis=1)
# Copy target into the y dataframe.This is the dependent variable
y = df[['GRADE']]
X.head()
X.shape
#Let us break the X and y dataframes into training set and test set. For this we will use
#Sklearn package's data splitting function which is based on random function
from sklearn.model_selection import train_test_split
# Split X and y into training and test set in 85:15 ratio
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15 , random_state=10)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
X_train # now all value in same range
import tensorflow as tf
# Importing the Keras libraries and packages
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LeakyReLU,PReLU,ELU
from keras.layers import Dropout
# Initialising the ANN
ann = Sequential()
ann.add(Dense(units=15,kernel_initializer='he_normal', activation = 'relu'))
ann.add(Dropout(0.2))
#ann.add(Dense(units=13,kernel_initializer='he_normal', activation = 'relu'))
#ann.add(Dense(units=26,kernel_initializer='he_normal', activation = 'relu'))
ann.add(Dense(8,kernel_initializer='glorot_uniform', activation = 'softmax'))
input_shape = X.shape
ann.build(input_shape)
ann.summary() # ,input_dim = 31 >>> (31+1)=32*16=512
ann.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy', metrics = ['accuracy'])
model_history = ann.fit(X_train, y_train,validation_split = 0.15, batch_size = 12, epochs = 92)
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state = 10)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print ("Confusion Matrix : \n", cm)
print (classification_report(y_test, y_pred))
from sklearn.metrics import accuracy_score
print ("Accuracy : ", accuracy_score(y_test, y_pred))
# Import necessary modules
from sklearn.neighbors import KNeighborsClassifier
# Split into training and test set
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size = 0.1, random_state=32)
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
# Calculate the accuracy of the model
print(knn.score(X_test, y_test))
from sklearn.tree import DecisionTreeClassifier
# Splitting the dataset into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 100)
# Function to perform training with giniIndex.
def train_using_gini(X_train, X_test, y_train):
# Creating the classifier object
clf_gini = DecisionTreeClassifier(criterion = "gini",
random_state = 100,max_depth=3, min_samples_leaf=5)
# Performing training
clf_gini.fit(X_train, y_train)
return clf_gini
# Function to perform training with entropy.
def train_using_entropy(X_train, X_test, y_train):
# Decision tree with entropy
clf_entropy = DecisionTreeClassifier(
criterion = "entropy", random_state = 100,
max_depth = 3, min_samples_leaf = 5)
# Performing training
clf_entropy.fit(X_train, y_train)
return clf_entropy
# Function to make predictions
def prediction(X_test, clf_object):
    # Prediction on test with giniIndex
y_pred = clf_object.predict(X_test)
print("Predicted values:")
print(y_pred)
return y_pred
# Function to calculate accuracy
def cal_accuracy(y_test, y_pred):
print("Confusion Matrix: ",
confusion_matrix(y_test, y_pred))
print ("Accuracy : ",
accuracy_score(y_test,y_pred)*100)
print("Report : ",
classification_report(y_test, y_pred))
clf_gini = train_using_gini(X_train, X_test, y_train)
clf_entropy = train_using_entropy(X_train, X_test, y_train)
# Operational Phase
print("Results Using Gini Index:")
# Prediction using gini
y_pred_gini = prediction(X_test, clf_gini)
cal_accuracy(y_test, y_pred_gini)
print("Results Using Entropy:")
# Prediction using entropy
y_pred_entropy = prediction(X_test, clf_entropy)
cal_accuracy(y_test, y_pred_entropy)
from sklearn.ensemble import RandomForestClassifier
# creating a RF classifier
clf = RandomForestClassifier(n_estimators = 100)
# Training the model on the training dataset
# fit function is used to train the model using the training sets as parameters
clf.fit(X_train, y_train)
# performing predictions on the test dataset
y_pred = clf.predict(X_test)
# metrics are used to find accuracy or error
from sklearn import metrics
print()
# using metrics module for accuracy calculation
print("ACCURACY OF THE MODEL: ", metrics.accuracy_score(y_test, y_pred))
rfc = RandomForestClassifier()
rfc.fit(X_train, y_train)
rfc_pred = rfc.predict(X_test)
rfc_acc = rfc.score(X_test, y_test)
print("The training accuracy for Random Forest is:", rfc.score(X_train, y_train)*100, "%")
print("The testing accuracy for Random Forest is:", rfc_acc * 100, "%")
from xgboost import XGBClassifier
xgb = XGBClassifier(verbosity=0)
xgb.fit(X_train, y_train)
xgb_pred = xgb.predict(X_test)
xgb_acc = xgb.score(X_test, y_test)
print("The training accuracy for XGB is:", xgb.score(X_train, y_train)*100, "%")
print("The testing accuracy for XGB is:", xgb_acc * 100, "%")
from sklearn.linear_model import LinearRegression
regression_model = LinearRegression()
regression_model.fit(X_train, y_train)
regression_model.score(X_train, y_train)
regression_model.score(X_test, y_test)
###Output
_____no_output_____ |
01_copy_get_flights_data.ipynb | ###Markdown
How to download flights csv file from transtats website **In this notebook, we will** 1. Download a csv file for your chosen year(s) and month(s), 2. Prepare the data for further processing, 3. Push the prepared data to a table in the database
###Code
# Import all necessary libraries
import pandas as pd
import numpy as np
import psycopg2
import requests #package for getting data from the web
from zipfile import * #package for unzipping zip files
from sql import get_engine #adjust this as necessary to match your sql.py connection methods
###Output
_____no_output_____
###Markdown
1. Download csv file with flight data for your specific year/month In the following, you are going to download a csv file containing flight data from [this website](https://transtats.bts.gov). You can specify which data you want to download; choose a month/year that you want to explore further. With the following command lines, you will download a csv file of public flight data from [this website](https://transtats.bts.gov) containing the data of your chosen month/year. The file will be stored in a data folder.
###Code
# Specifies path for saving file
path ='data/'
# Create the data folder
!mkdir {path}
years = [2012] # list of years you want to look at, specify one year
months = [10, 11] # list of months you want to look at
# Here: October and November 2012
# Loop through months
for year in years:
for month in months:
# Get the file from the website https://transtats.bts.gov
zip_file = f'On_Time_Reporting_Carrier_On_Time_Performance_1987_present_{year}_{month}.zip'
csv_file = f'On_Time_Reporting_Carrier_On_Time_Performance_(1987_present)_{year}_{month}.csv'
url = (f'https://transtats.bts.gov/PREZIP/{zip_file}')
# Download the database
r = requests.get(f'{url}', verify=False)
# Save database to local file storage
with open(path+zip_file, 'wb') as f:
f.write(r.content)
# Unzip your file
for month in months:
z_file = f'On_Time_Reporting_Carrier_On_Time_Performance_1987_present_2012_{month}.zip'
with ZipFile(path+z_file, 'r') as zip_ref:
zip_ref.extractall(path)
# Read in your data
csv_file = f'On_Time_Reporting_Carrier_On_Time_Performance_(1987_present)_2012_10.csv'
df_10 = pd.read_csv(path+csv_file, low_memory = False)
csv_file = f'On_Time_Reporting_Carrier_On_Time_Performance_(1987_present)_2012_11.csv'
df_11 = pd.read_csv(path+csv_file, low_memory = False)
# df_names = ['df_10', 'df_11']
# for i, month in enumerate(months):
# csv_file = f'On_Time_Reporting_Carrier_On_Time_Performance_(1987_present)_2012_{month}.csv'
# df_names[i] = pd.read_csv(path+csv_file, low_memory = False)
print(df_10.shape)
print(df_11.shape)
pd.set_option('display.max_columns', None)
display(df_10.head())
df_10[df_10['OriginState'] == 'NJ']
# Combine your data
df = pd.concat([df_10, df_11])  # DataFrame.append is deprecated; concat does the same job here
display(df.shape)
display(df.head())
# Read in your data
# df = pd.read_csv(path+csv_file, low_memory = False)
# display(df.shape)
# display(df.head())
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1003260 entries, 0 to 488005
Columns: 110 entries, Year to Unnamed: 109
dtypes: float64(70), int64(21), object(19)
memory usage: 849.6+ MB
###Markdown
2. Prepare the csv file for further processing In the next step, we clean and prepare our dataset. a) Since the dataset consists of a lot of columns, we define which ones to keep.
###Code
# Columns from downloaded file that are to be kept
columns_to_keep = [
'FlightDate',
'DepTime',
'CRSDepTime',
'DepDelay',
'ArrTime',
'CRSArrTime',
'ArrDelay',
'Reporting_Airline',
'Tail_Number',
'Flight_Number_Reporting_Airline',
'Origin',
'Dest',
'AirTime',
'Distance',
'Cancelled',
'Diverted'
]
df[columns_to_keep].info()
# set up your database connection
engine = get_engine()
# The columns in the DB have different naming than in the source csv files. Let's get the names from the DB
table_name_sql = '''SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'flights'
AND TABLE_SCHEMA ='public'
ORDER BY ordinal_position'''
c_names = engine.execute(table_name_sql).fetchall()
c_names
# we can clean up the results into a clean list
new_column_names=[]
for name in c_names:
new_column_names.append(name[0])
new_column_names
# Just in case the above fails here are the results
new_column_names_alternate = ['flight_date',
'dep_time',
'sched_dep_time',
'dep_delay',
'arr_time',
'sched_arr_time',
'arr_delay',
'airline',
'tail_number',
'flight_number',
'origin',
'dest',
'air_time',
'distance',
'cancelled',
'diverted' ]
###Output
_____no_output_____
###Markdown
b) With the next function, we make our csv file ready to be uploaded to SQL. We only keep the above-specified columns and convert the datatypes.
###Code
def clean_airline_df(df):
'''
Transforms a df made from BTS csv file into a df that is ready to be uploaded to SQL
Set rows=0 for no filtering
'''
# Build dataframe including only the columns you want to keep
df_airline = df.loc[:,columns_to_keep]
# Clean data types and NULLs
df_airline['FlightDate']= pd.to_datetime(df_airline['FlightDate'], yearfirst=True)
df_airline['CRSArrTime']= pd.to_numeric(df_airline['CRSArrTime'], downcast='integer', errors='coerce')
df_airline['Cancelled']= pd.to_numeric(df_airline['Cancelled'], downcast='integer')
df_airline['Diverted']= pd.to_numeric(df_airline['Diverted'], downcast='integer')
# Rename columns
df_airline.columns = new_column_names
return df_airline
# Call function and check resulting dataframe
df_clean = clean_airline_df(df)
df_clean.head()
###Output
_____no_output_____
###Markdown
If you decide to only look at specific airports, it is a good idea to filter for them in advance. This function does the filtering.
###Code
# Specify the airports you are interested in and put them as a list in the function.
def select_airport(df, airports):
''' Helper function for filtering airline df for a subset of airports'''
df_out = df.loc[(df.origin.isin(airports)) | (df.dest.isin(airports))]
return df_out
# Execute function, filtering for the selected airports
airports=['BOS', 'EWR', 'JFK', 'MIA', 'PHI', 'SJU']
if len(airports) > 0:
df_selected_airports = select_airport(df_clean, airports)
else:
df_selected_airports = df_clean
df_selected_airports.head()
###Output
_____no_output_____
###Markdown
3. Push the prepared data to a table in the database
###Code
# Specify which table within your database you want to push your data to. Give your table an unambiguous name.
# Example: flights_sp for Sina's flights table
table_name = 'flight_api_proj_gr4_raw'
# If the specified table doesn't exist yet, it will be created
# With 'replace', your data will be replaced if the table already exists.
# This will take a minute or two...
# Write records stored in a dataframe to SQL database
if engine!=None:
try:
df_selected_airports.to_sql(name=table_name, # Name of SQL table
con=engine, # Engine or connection
if_exists='replace', # Drop the table before inserting new values
index=False, # Write DataFrame index as a column
chunksize=5000, # Specify the number of rows in each batch to be written at a time
method='multi') # Pass multiple values in a single INSERT clause
print(f"The {table_name} table was imported successfully.")
# Error handling
except (Exception, psycopg2.DatabaseError) as error:
print(error)
engine = None
# Check the number of rows match
table_name_sql = f'''SELECT count(*)
FROM {table_name}
'''
engine.execute(table_name_sql).fetchall()[0][0] == df_selected_airports.shape[0]
###Output
_____no_output_____ |
notebooks/quantum-machine-learning/qbm.ipynb | ###Markdown
Quantum Boltzmann Machine In this section we will introduce a probabilistic model, the Boltzmann Machine in its quantum version, which learns a distribution $P(\vec{x})$ from a finite set of samples and can generate new samples. Introduction One focus of machine learning is probabilistic modelling, in which a probability distribution is obtained from a finite set of samples. If the training process is successful, the learned distribution $P(\vec{x})$ has sufficient similarity to the actual distribution of the data that it can make correct predictions about unknown situations. Depending on the details of the distributions and the approximation technique, machine learning can be used to perform classification, clustering, compression, denoising, inpainting, or other tasks [1]. In recent years the popularity of quantum computing applications has increased; this tutorial builds one of these machine learning models in a way that facilitates training. The purpose of this tutorial is to explain a probabilistic model based on the Boltzmann distribution, i.e. a Quantum Boltzmann Machine (QBM). Classical Boltzmann Machines Boltzmann Machines (BMs) offer a powerful framework for modelling probability distributions. These types of neural networks use an undirected graph structure to encode relevant information. More precisely, the respective information is stored in bias coefficients and connection weights of network nodes, which are typically related to binary spin systems and grouped into those that determine the output, the visible nodes, and those that act as latent variables, the hidden nodes [1], [2]. Applications Applications have been studied in a large variety of domains such as the analysis of quantum many-body systems, statistics, biochemistry, social networks, signal processing and finance [2]. Quantum Boltzmann Machine Figure 1. General structure of a QBM. Quantum Boltzmann Machines (QBMs) are a natural adaptation of BMs to the quantum computing framework. Instead of an energy function with nodes represented by binary spin values, QBMs define the underlying network using a Hermitian operator, a parameterized Hamiltonian. The training loop is: 1. Initialize the circuit with random parameters $\vec{\theta} = (\theta^1, \dots, \theta^n)$, 2. Measure, 3. Estimate the mismatch between the data and the quantum outcomes, 4. Update $\vec{\theta}$, and repeat 2 through 4 until convergence. The image below outlines the processes in a quantum Boltzmann machine: Implementation This example is based on the following article: *Benedetti, M., Garcia-Pintos, D., Perdomo, O. et al. A generative modeling approach for benchmarking and training shallow quantum circuits. npj Quantum Inf 5, 45 (2019). https://doi.org/10.1038/s41534-019-0157-8* To begin this tutorial we must import the classes needed to build circuits, including variational ones such as [ParameterVector](https://qiskit.org/documentation/stubs/qiskit.circuit.ParameterVector.html).
###Code
import numpy as np
from qiskit import QuantumCircuit, Aer, execute
from qiskit.circuit import ParameterVector
from qiskit.quantum_info import Statevector
# Classes to work with three optimizers
from qiskit.algorithms.optimizers import NELDER_MEAD, SPSA, COBYLA
# For visualizations
import matplotlib.pyplot as plt
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
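###Markdown
Before building the full model, the four-step hybrid loop described above can be sketched in a few lines. This is only a minimal illustration that reuses the imports from the previous cell, with a hypothetical `toy_cost` function standing in for the data/quantum mismatch; the real QBM cost function is developed later in this notebook.
###Code
# Hypothetical toy cost: the "mismatch" is just the squared distance to a fixed target vector
toy_target = np.array([0.3, 0.7])
def toy_cost(theta):
    return float(np.sum((np.array(theta) - toy_target)**2))

initial_theta = np.random.random(2)     # step 1: random initial parameters
toy_optimizer = COBYLA(maxiter=50)      # step 4: classical parameter update
# steps 2 and 3 (measure and estimate the mismatch) would happen inside the objective function
ret = toy_optimizer.optimize(num_vars=2,
                             objective_function=toy_cost,
                             initial_point=initial_theta)
print(ret[0], ret[1])                   # optimized parameters and final cost
###Output
_____no_output_____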
###Markdown
Dataset Considering the Bars and Stripes (or BAS) dataset, which is composed of binary images of size 2x2, for this tutorial only the subset that is enclosed in a circle is used, as follows Figure 3. BAS dataset. Mapping image to qubits There are various ways of mapping values to qubits, as we saw in the Data Encoding section. As these are binary images, they can only take 2 values: black-white, 0-255, 0-1. To work on the quantum computer, the Basis Encoding method will be used to convert images into states, i.e. each pixel is represented as a qubit, as described in the following figure Figure 4. Image to quantum state. Now consider the following conditions: - if the pixel is white, then the qubit state value is 0, - if the pixel is black, then the qubit state value is 1. For example, in Figure 4 the image of size $2 \times 2$ can be rewritten into a matrix $$\begin{pmatrix} c_0r_0 & c_0r_1\\ c_1r_0 & c_1r_1\end{pmatrix}. (1)$$ Based on the conditions, the pixel at position $c_0r_0$ is white, and this is equivalent to the qubit $|q_0\rangle$, so its state value is $|0\rangle$; the same logic is applied to the pixels $c_0r_1$, $c_1r_0$, $c_1r_1$, which are the qubits $|q_1\rangle, |q_2\rangle, |q_3\rangle$ respectively. As they are all white, their state is $|0\rangle$ for all of them. The result is the quantum state $|0000\rangle$ of the 4 qubits. Performing this process for each of the images of the subset would look like this Figure 5. Mapping the images to qubits. In total there are six quantum states; this can be rewritten as a linear combination. For this purpose it is necessary to consider the property that $$\sum_{i=0}^{2^n-1} | \alpha_i |^2 = 1 \text{ (2)}$$ where $n$ is the number of qubits and $\alpha_i \in \mathbb{C}$ are the scalar amplitudes of each state (for this case we consider them purely real). As each state has the same probability of being measured, the resulting quantum state $|\psi\rangle$ is $$|\psi \rangle = \frac{1}{\sqrt{6}} (|0000\rangle+|0011\rangle+|0101\rangle+|1010\rangle+|1100\rangle+|1111\rangle)$$ which represents a probability distribution $P(x)$. Note: Check that each quantum state represents a binary number and therefore a decimal value, i.e.: - $|0000 \rangle \rightarrow 0$ - $|0011 \rangle \rightarrow 3$ - $|0101 \rangle \rightarrow 5$ - $|1010 \rangle \rightarrow 10$ - $|1100 \rangle \rightarrow 12$ - $|1111 \rangle \rightarrow 15$ Question What happens if we use another set of states of interest, e.g. for an image of size 3x3? Starting $P(x)$ with the variable `px_output`, to generate the equivalent state it is necessary that
###Code
num_qubits = 4
init_list = [0,3,5,10,12,15] # indices of interest
# start from the |0000> basis statevector (length 2**num_qubits)
px_output = Statevector.from_label('0'*num_qubits)
for init_value in init_list:
px_output.data[init_value] = 1
px_output /= np.sqrt(len(init_list)) # normalize the statevector
px_output = Statevector(px_output)
print(px_output) # print to check it's correct
# px_output = [1,0,0,1,0,1,0,0,0,0,1,0,1,0,0,1]/16 expected output
###Output
Statevector([0.40824829+0.j, 0. +0.j, 0. +0.j,
0.40824829+0.j, 0. +0.j, 0.40824829+0.j,
0. +0.j, 0. +0.j, 0. +0.j,
0. +0.j, 0.40824829+0.j, 0. +0.j,
0.40824829+0.j, 0. +0.j, 0. +0.j,
0.40824829+0.j],
dims=(2, 2, 2, 2))
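###Markdown
As a quick sanity check of the mapping described above, the small helper below (a hypothetical function, not part of the original dataset code) converts a 2x2 binary image into its bitstring $|q_0 q_1 q_2 q_3\rangle$ and the decimal index of the corresponding basis state, which should land in `init_list`.
###Code
# Hypothetical helper: flatten a 2x2 binary image into a bitstring and its decimal index
def image_to_index(image):
    bits = ''.join(str(pixel) for row in image for pixel in row)  # q0 q1 q2 q3 in reading order
    return bits, int(bits, 2)

print(image_to_index([[0, 0], [1, 1]]))   # a "bar" image    -> ('0011', 3)
print(image_to_index([[1, 0], [1, 0]]))   # a "stripe" image -> ('1010', 10)
###Output
_____no_output_____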
###Markdown
It is important to verify that the state vector satisfies the normalization property of Eq. (2); this is possible with the next line.
###Code
np.sum(px_output.data**2)
###Output
_____no_output_____
###Markdown
The result must be a uniform distribution over the 6 states: 0000, 0011, 0101, 1010, 1100, 1111, and 0 otherwise. The probability of each state can be obtained from the following expression: $|\frac{1}{\sqrt{6}}|^2 = \frac{1}{6} \approx 0.167$.
###Code
dict_px_output = px_output.probabilities_dict()
plot_histogram(dict_px_output, title='p(x) of the QBM')
###Output
_____no_output_____
###Markdown
Design a Variational Quantum Circuit (Layer) The QBM design based on [3] requires a Variational Quantum Circuit or [ansatz](https://qiskit.org/documentation/tutorials/circuits_advanced/01_advanced_circuits.html) that we can call a layer, and this can be repeated L times in order to obtain the desired distribution, in our case `px_output`. Qiskit already has some ansatzes implemented, such as [RealAmplitudes](https://qiskit.org/documentation/stubs/qiskit.circuit.library.RealAmplitudes.html); this method requires as parameters the number of qubits and the number of repetitions, i.e. the number of layers *L*.
###Code
# import for the ansatz is in circuit.library from qiskit
from qiskit.circuit.library import RealAmplitudes
# this ansatz only needs the number of the qubits and the repetitions
ansatz_example = RealAmplitudes(num_qubits, reps=1)
ansatz_example.draw()
###Output
/usr/local/lib/python3.9/site-packages/sympy/core/expr.py:3949: SymPyDeprecationWarning:
expr_free_symbols method has been deprecated since SymPy 1.9. See
https://github.com/sympy/sympy/issues/21494 for more info.
SymPyDeprecationWarning(feature="expr_free_symbols method",
###Markdown
It is important to know that there are other ansatzes besides RealAmplitudes in Qiskit [here](https://qiskit.org/documentation/apidoc/circuit_library.html#n-local-circuits); however, this tutorial is based on an ansatz that follows the idea of the paper [3]: "in this work, we use arbitrary single qubit rotations for the odd layers, and Mølmer-Sørensen XX gates for the even layers". We should remark that the paper works with a trapped-ion architecture. To design the ansatzes we must use the [ParameterVector](https://qiskit.org/documentation/stubs/qiskit.circuit.ParameterVector.html) class, in order to generate gates whose values can vary. Question What happens if we use a predefined ansatz for this problem? A short sketch is shown below.
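###Markdown
As a minimal sketch of the question above (this is an illustration only, and the rest of the tutorial keeps the custom layers from [3]), a predefined ansatz such as EfficientSU2 from the Qiskit circuit library could be instantiated in the same way as RealAmplitudes and, in principle, dropped into the training loop later in this notebook:
###Code
from qiskit.circuit.library import EfficientSU2

# predefined alternative ansatz; like RealAmplitudes it only needs the qubit count and repetitions
predefined_ansatz = EfficientSU2(num_qubits, reps=1)
predefined_ansatz.decompose().draw()
###Output
_____no_output_____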
###Code
## Design any ansatz
# Here, we use arbitrary single qubit rotations for the odd layers,
# and Mølmer-Sørensen XX gates for the even layers
def ansatz_odd(n,parameters):
# arbitrary single qubit rotations for the odd layer
qc = QuantumCircuit(n)
for i in range(n):
# Use variable value for ry with the values parameters[i]
qc.u(parameters[3*i],parameters[1+(3*i)],parameters[2+(3*i)],i)
return qc
def ansatz_even(n,parameters):
# Mølmer-Sørensen XX gates for the even layer
qc = QuantumCircuit(n)
k = n//2
for i in range(k):
qc.rxx(parameters[i],i*2,(2*i)+1)
for i in range(k):
qc.rxx(parameters[i+k],k-i-1,k)
for i in range(k):
qc.rxx(parameters[i+k*2],k-i-1,k+1)
return qc
###Output
_____no_output_____
###Markdown
For the odd layer we can consider that the arbitrary single qubit rotation has the form $U\left(\theta_{l}^{j}\right)=R_{z}\left(\theta_{l}^{j, 1}\right) R_{x}\left(\theta_{l}^{j, 2}\right) R_{z}\left(\theta_{l}^{j, 3}\right)$. This is possible if we use the [U gate](https://qiskit.org/documentation/stubs/qiskit.circuit.library.UGate.html) with a list of parameters.
###Code
# depending on design, we can change the num_params value
num_params_odd = 12 # for 4 qubits we need 12 parameters
parameters_odd = ParameterVector('θ', num_params_odd)
ansatz_odd(num_qubits,parameters_odd).draw()
###Output
_____no_output_____
###Markdown
For the case of the even layer, the Mølmer-Sørensen XX interaction can be represented as in [4], which tells us that by applying ${RXX}(\theta)$ gates across all the qubits of the quantum circuit we can obtain such a circuit.
###Code
num_params_even = 6 # for 4 qubits we need 6 parameters
parameters_even = ParameterVector('θ', num_params_even)
ansatz_even(num_qubits,parameters_even).draw()
###Output
_____no_output_____
###Markdown
Applying n layers Using a Qiskit method of the QuantumCircuit object, both of our ansatzes can be converted into a quantum gate, and we can indicate with the variable `num_layers` the number of repetitions required to fit the expected output distribution.
###Code
# ansatz to quantum gate
def gate_layer(n, params,flag):
if flag == 1:
parameters = ParameterVector('θ', num_params_odd)
qc = ansatz_odd(n,parameters) # call the odd layer
else:
parameters = ParameterVector('θ', num_params_even)
qc = ansatz_even(n,parameters) # call the even layer
params_dict = {}
j = 0
for p in parameters:
# The name of the value will be the string identifier and an
# integer specifying the vector length
params_dict[p] = params[j]
j += 1
# Assign parameters using the assign_parameters method
qc = qc.assign_parameters(parameters = params_dict)
qc_gate = qc.to_gate()
qc_gate.name = "layer" # To show when we display the circuit
return qc_gate # return a quantum gate
###Output
_____no_output_____
###Markdown
We are going to make a quantum circuit with 3 layers. At the same time it is important to consider that the two gates that are interleaved layer by layer have different numbers of parameters, so we allocate space for the larger one, i.e. ```max(num_params_odd, num_params_even)```, and only the necessary ones are read per layer.
###Code
# example of a quantum circuit
num_layers = 3
list_n = range(num_qubits)
num_params = max(num_params_odd, num_params_even)
params = np.random.random([num_layers*num_params]) # all parameters
qc_gate = QuantumCircuit(num_qubits)
for i in range(len(params)//num_params):
# apply a function to consider m layers
qc_gate.append(gate_layer(num_qubits,
params[num_params*i:num_params*(i+1)],
(i+1)%2),
list_n)
qc_gate.barrier()
qc_gate.draw()
###Output
_____no_output_____
###Markdown
In order to verify that they are interleaving we use the decompose() method, to see that the same circuit is repeated in layer 1 and layer 3, using the odd layer, and the second circuit is the even layer.
###Code
qc_gate.decompose().draw(fold=-1, )
###Output
_____no_output_____
###Markdown
Suggestion Play with the number of layers and see how the drawn circuit changes. Build the whole algorithm At this point we have the data mapping process and the quantum circuit that represents the QBM; we now need the optimization step and the cost function to be evaluated. For this we use the advantages of quantum computing in the model and of classical computing in the optimization, as shown in Figure 6. Figure 6. Hybrid algorithm process. Cost Function Considering the BAS data set, our goal is to obtain an approximation to the target probability distribution `px_output`, or $P(\vec{x})$. This is possible with a quantum circuit whose gates are parameterized by a vector $\vec{\theta}$, where the layer index $l$ runs from 0 to $d$, with $d$ the maximum depth of the quantum circuit [5]; the circuit prepares a wave function $|\psi(\vec{\theta})\rangle$ from which probabilities are obtained as $P(\vec{x})=|\langle\vec{x} \mid \psi(\vec{\theta})\rangle|^{2}$. Minimization of this quantity is directly related to the minimization of a well known cost function: the negative [log-likelihood](https://en.wikipedia.org/wiki/Likelihood_function) $\mathcal{C}(\vec{\theta})=-\frac{1}{D} \sum_{d=1}^{D} \ln \left(P\left(\vec{x}^{(d)}\right)\right)$. It is important to consider that all the probabilities are estimated from a finite number of measurements, and as a way to avoid singularities in the cost function [3] we use a simple variant $\mathcal{C}(\vec{\theta})=-\frac{1}{D} \sum_{d=1}^{D} \ln \left(\max \left(\varepsilon, P_{\vec{\theta}}\left(\vec{x}^{(d)}\right)\right)\right)$ (3) where $\varepsilon>0$ is a small number to be chosen. Following this equation we have a method called `boltzman_machine(params)`, which is the function that integrates the whole quantum process and the cost function required to perform the optimization.
###Code
def boltzman_machine(params):
n = 4
D = int(n**2)
cost = 0
list_n = range(n)
qc = QuantumCircuit(n)
for i in range(len(params)//num_params):
qc.append(gate_layer(n,
params[num_params*i:num_params*(i+1)],
(i+1)%2),
list_n)
shots = 8192
sv_sim = Aer.get_backend('statevector_simulator')
result = execute(qc, sv_sim).result()
statevector = result.get_statevector(qc)
for j in range(D):
cost += np.log10(max(0.001,
statevector[j].real*px_output.data[j].real
+(statevector[j].imag*px_output.data[j].imag)
)
)
cost = -cost/D
return cost
num_layers = 6
params = np.random.random([num_layers*num_params])
boltzman_machine(params)
###Output
_____no_output_____
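###Markdown
Before running the optimizers, the clipped cost of Eq. (3) can be sanity-checked on plain numbers. This is a toy illustration with made-up probabilities (note that the implementation above uses log10 instead of the natural logarithm, which only rescales the cost by a constant factor):
###Code
# toy probabilities assigned by some hypothetical model to D = 4 samples
toy_probs = np.array([0.25, 0.10, 0.0, 0.40])
eps = 0.001
clipped = np.maximum(eps, toy_probs)    # max(eps, P(x)) avoids taking the log of zero
toy_nll = -np.mean(np.log10(clipped))   # same log10 convention as boltzman_machine
print(toy_nll)
###Output
_____no_output_____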
###Markdown
Having the quantum process that returns the cost, we use the classical process implemented in Qiskit, which has a series of classical [optimizers](https://qiskit.org/documentation/stubs/qiskit.algorithms.optimizers.html). Consider 10 epochs with 500 iterations each.
###Code
cost_cobyla = []
cost_nm = []
cost_spsa = []
params_cobyla = params
params_nm = params
params_spsa = params
epoch = 10
maxiter = 500
for i in range(epoch):
optimizer_cobyla = COBYLA(maxiter=maxiter)
ret = optimizer_cobyla.optimize(num_vars=len(params),
objective_function=boltzman_machine,
initial_point=params_cobyla)
params_cobyla = ret[0]
cost_cobyla.append(ret[1])
optimizer_nm = NELDER_MEAD(maxiter=maxiter)
ret = optimizer_nm.optimize(num_vars=len(params),
objective_function=boltzman_machine,
initial_point=params_nm)
params_nm = ret[0]
cost_nm.append(ret[1])
optimizer_spsa = SPSA(maxiter=maxiter)
ret = optimizer_spsa.optimize(num_vars=len(params),
objective_function=boltzman_machine,
initial_point=params_spsa)
params_spsa = ret[0]
cost_spsa.append(ret[1])
###Output
_____no_output_____
###Markdown
From the plot of each result we can see that the best optimizer is COBYLA for this algorithm.
###Code
xfit = range(epoch)
plt.plot(xfit, cost_cobyla, label='COBYLA')
plt.plot(xfit, cost_nm, label='Nelder-Mead')
plt.plot(xfit, cost_spsa, label='SPSA')
plt.legend()
plt.title("C(x) ")
plt.xlabel("epoch")
plt.ylabel("cost");
###Output
_____no_output_____
###Markdown
Qiskit offers the possibility to work with more optimizers, listed [here](https://qiskit.org/documentation/stubs/qiskit.algorithms.optimizers.html). Suggestion Try changing the value of the `maxiter` variable and the optimizers in order to identify the best case. The `boltzman_machine_valid` method is defined to give us $P(\vec{x})=|\langle\vec{x} \mid \psi(\vec{\theta})\rangle|^{2}$
###Code
def boltzman_machine_valid(params):
n = 4
list_n = range(n)
qc = QuantumCircuit(n)
for i in range(len(params)//num_params):
qc.append(gate_layer(n,
params[num_params*i:num_params*(i+1)],
(i+1)%2),
list_n)
shots = 8192
simulator = Aer.get_backend('statevector_simulator')
result = execute(qc, simulator).result()
return result
###Output
_____no_output_____
###Markdown
Visualization We obtain the output of the three different optimizers
###Code
psi_vqc_spsa = boltzman_machine_valid(params_spsa)
psi_spsa = psi_vqc_spsa.get_statevector()
psi_vqc_nm = boltzman_machine_valid(params_nm)
psi_nm = psi_vqc_nm.get_statevector()
psi_vqc_cobyla = boltzman_machine_valid(params_cobyla)
psi_cobyla = psi_vqc_cobyla.get_statevector()
###Output
_____no_output_____
###Markdown
In order to compare all the results of each optimizer with the expected output, the plot_histogram method is used and we can see at a glance the similarities of each distribution.
###Code
psi_dict_cobyla = psi_vqc_cobyla.get_counts()
psi_dict_spsa = psi_vqc_spsa.get_counts()
psi_dict_nm = psi_vqc_nm.get_counts()
plot_histogram([dict_px_output, psi_dict_cobyla,
psi_dict_spsa, psi_dict_nm],
title='p(x) of the QBM',
legend=['correct distribution', 'simulation cobyla',
'simulation spsa', 'simulation nm'])
###Output
_____no_output_____
###Markdown
It can be seen that of all the optimizers, the closest result is the one obtained with COBYLA, which will be used to generate new samples from this dataset. But you are still invited to try other optimizers and change the value of the maxiter variable. Using our QBM To apply this circuit we use the final parameters and see what kind of results they produce as images. This is shown below.
###Code
def boltzman_machine_valid_random(params):
n = 4
list_n = range(n)
qc = QuantumCircuit(n,n)
for i in range(len(params)//num_params):
qc.append(
gate_layer(n,
params[num_params*i:num_params*(i+1)],(i+1)%2),
list_n)
qc.measure(list_n,list_n)
shots = 1
job = execute( qc, Aer.get_backend('qasm_simulator'),shots=shots )
counts = job.result().get_counts()
return counts.keys()
# Plot results as 2x2 images
matrix = np.zeros((2,2))
for i in range(0,16):
img = list(boltzman_machine_valid_random(params))[0]
matrix[0][0] = int(img[0])
matrix[0][1] = int(img[1])
matrix[1][0] = int(img[2])
matrix[1][1] = int(img[3])
plt.subplot(4, 4, 1+i)
plt.imshow(matrix)
plt.tight_layout();
###Output
_____no_output_____
###Markdown
Check how many images do not follow the expected distribution. Depending on the final distribution we expect the desired states with higher probability, but if an incorrect state has some residual probability it may occasionally appear. Noise model We managed to build a quantum circuit that is a QBM; to see its effectiveness in a more complex environment than the simulation we will perform the process with a noise model, and for this we must have our platform key configured in our system. For this example we will use the real `ibmq_lima` backend, but you can choose from the ones found [here](https://quantum-computing.ibm.com/services?services=systems).
###Code
from qiskit.test.mock import FakeLima
backend = FakeLima()
# Uncomment the code below to use the real device:
# from qiskit import IBMQ
# provider = IBMQ.load_account()
# backend = provider.get_backend('ibmq_lima')
###Output
_____no_output_____
###Markdown
We can use the ibmq_lima device properties to configure our simulator so that it behaves with the same characteristics as that device. To know more about the features of the NoiseModel class, see [here](https://qiskit.org/documentation/stubs/qiskit.providers.aer.noise.NoiseModel.html?highlight=noisemodel)
###Code
from qiskit.providers.aer.noise import NoiseModel
noise_model = NoiseModel.from_backend(backend)
# Get coupling map from backend
coupling_map = backend.configuration().coupling_map
# Get basis gates from noise model
basis_gates = noise_model.basis_gates
###Output
_____no_output_____
###Markdown
We performed the same process that we did previously in simulation but adapted it with the noise model variables.
###Code
def noise_boltzman_machine(params):
n = 4
D = int(n**2)
cost = 0
list_n = range(n)
qc = QuantumCircuit(n)
for i in range(len(params)//num_params):
qc.append(gate_layer(n,
params[num_params*i:num_params*(i+1)],
(i+1)%2),
list_n)
shots = 8192
simulator = Aer.get_backend('statevector_simulator')
result = execute(qc, simulator, shots = 8192,
# These parameters are for our noise model:
coupling_map=coupling_map,
basis_gates=basis_gates,
noise_model=noise_model,
cals_matrix_refresh_period=30).result()
statevector = result.get_statevector(qc)
for j in range(D):
cost += np.log10(max(0.001,
statevector[j].real*px_output.data[j].real
+(statevector[j].imag*px_output.data[j].imag)
)
)
cost = -cost/D
return cost
num_layers = 6
noise_params = np.random.random([num_layers*num_params])
noise_boltzman_machine(noise_params)
###Output
_____no_output_____
###Markdown
Consider running again, but using a new variable called `noise_params`, and keeping only the best optimizer, COBYLA, instead of using the other ones.
###Code
print("cost:")
print(noise_boltzman_machine(noise_params))
for i in range(10):
optimizer = COBYLA(maxiter=500)
ret = optimizer.optimize(num_vars=len(noise_params),
objective_function=noise_boltzman_machine,
initial_point=noise_params)
noise_params = ret[0]
print(ret[1])
###Output
cost:
2.860379448430914
2.2843559495106893
2.2876193723013847
2.370747462669242
2.4158515841223926
2.3237948329775584
2.288052241158968
2.3179851856821525
2.3626775056067344
2.3128664509231
2.260656287576424
###Markdown
At this point we have the $P(x)$ distribution that results from the noise model
###Code
noise_psi_vqc = boltzman_machine_valid(noise_params)
noise_psi_vqc.get_statevector()
###Output
_____no_output_____
###Markdown
The expected output, the best simulated result and the best simulated result using a NoiseModel are compared.
###Code
noise_model_psi = noise_psi_vqc.get_counts()
plot_histogram([dict_px_output, psi_dict_cobyla, noise_model_psi],
title='p(x) of the QBM',
legend=['correct distribution', 'simulation',
'noise model'])
###Output
_____no_output_____
###Markdown
Since each result follows the trend of the expected output, though with certain errors, we can see that our circuit is working, and we can continue by trying to obtain new samples from the circuit with noise.
###Code
matrix = np.zeros((2,2))
for i in range(0,16):
img = list(boltzman_machine_valid_random(noise_params))[0]
matrix[0][0] = int(img[0])
matrix[0][1] = int(img[1])
matrix[1][0] = int(img[2])
matrix[1][1] = int(img[3])
plt.subplot(4 , 4 , 1+i)
plt.imshow(matrix)
plt.tight_layout();
###Output
_____no_output_____
###Markdown
Check how many images do not follow the expected distribution. Real quantum computer Here we set up the backend for a real, freely available quantum computer (try changing the value passed to provider.get_backend()); in this tutorial we use ibmq_lima, the same device as in the noise model. More information about this quantum computer can be found [here](https://quantum-computing.ibm.com/services?services=systems&system=ibmq_lima).
###Code
real_backend = FakeLima()
# Uncomment the code below to use a real backend:
# from qiskit import IBMQ
# provider = IBMQ.load_account()
# real_backend = provider.get_backend('ibmq_lima')
###Output
_____no_output_____
###Markdown
We follow the same process as in the simulation, but in this case the backend is the real computer.
###Code
def real_boltzman_machine(params):
n = 4
D = int(n**2)
cost = 0
list_n = range(n)
qc = QuantumCircuit(n,n)
for i in range(len(params)//num_params):
qc.append(gate_layer(n,
params[num_params*i:num_params*(i+1)],
(i+1)%2),
list_n)
qc.measure(list_n,list_n)
shots= 8192
result = execute(qc, real_backend,
shots = 8192).result()
counts = result.get_counts(qc)
for j in range(D):
bin_index = bin(j)[2:]
while len(bin_index) < 4:
bin_index = '0' + bin_index
        statevector_index = counts.get(bin_index, 0)/8192  # .get avoids a KeyError if a basis state was never measured
cost += np.log10(max(0.001,
statevector_index*px_output.data[j].real))
cost = -cost/D
return cost
num_layers = 6
real_params = np.random.random([num_layers*num_params])
real_boltzman_machine(real_params)
###Output
_____no_output_____
###Markdown
Since this runs on a real, free computer that anyone with an IBM account can use, the job can take some time; for this tutorial we will only use 10 iterations
###Code
print("cost:")
print(real_boltzman_machine(real_params))
for i in range(1):
optimizer = COBYLA(maxiter=10)
ret = optimizer.optimize(num_vars=len(real_params),
objective_function=real_boltzman_machine,
initial_point=real_params)
real_params = ret[0]
print(ret[1])
###Output
cost:
2.4908683878403566
2.48536011404098
###Markdown
At this point we have the result of the real quantum computer for our QBM.
###Code
real_psi_vqc = boltzman_machine_valid(real_params)
real_psi_vqc.get_statevector()
###Output
_____no_output_____
###Markdown
We can compare all the results in the same plot_histogram and see that the worst case is the result from the real quantum computer; this could be improved by using more iterations, mitigating the errors, modifying the ansatz, or a combination of these.
###Code
real_model_psi = real_psi_vqc.get_counts()
plot_histogram([dict_px_output, psi_dict_cobyla,
noise_model_psi, real_model_psi],
title='p(x) of the QBM',
legend=['correct distribution', 'simulation',
'noise model', 'real quantum computer'])
###Output
_____no_output_____
###Markdown
Just to confirm, the possible outputs of our circuit on the real computer are determined by the distribution obtained from that device.
###Code
matrix = np.zeros((2,2))
for i in range(0,16):
img = list(boltzman_machine_valid_random(real_params))[0]
matrix[0][0] = int(img[0])
matrix[0][1] = int(img[1])
matrix[1][0] = int(img[2])
matrix[1][1] = int(img[3])
plt.subplot(4 , 4 , 1+i)
plt.imshow(matrix)
plt.tight_layout();
###Output
_____no_output_____
###Markdown
Check how many images do not follow the expected distribution. Another perspective For this part we can follow the same process, using reference [5] and its proposal to design another ansatz model. The characteristics are: - use the same layer of arbitrary single-qubit rotation gates, - and employ CNOT gates with no parameters for the entangling layers. The new ansatz could be
###Code
## Design any ansatz
# Here, we use arbitrary single qubit rotations for the odd layers,
# and Mølmer-Sørensen XX gates for the even layers
def ansatz_layer(n,parameters): # this ansatz is equivalent a layer
qc = QuantumCircuit(n)
for i in range(n):
# use variable value for ry with the values parameters[i]
qc.u(parameters[i*3],parameters[(i*3)+1],parameters[(i*3)+2],i)
for i in range(n-1):
qc.cx(i,i+1)
return qc
# depending on design, we can change the num_params value
num_params_v2 = 12 # for 4 qubits we need 12 parameters
parameters_v2 = ParameterVector('θ', num_params_v2)
ansatz_layer(num_qubits,parameters_v2).draw()
###Output
/usr/local/lib/python3.9/site-packages/sympy/core/expr.py:3949: SymPyDeprecationWarning:
expr_free_symbols method has been deprecated since SymPy 1.9. See
https://github.com/sympy/sympy/issues/21494 for more info.
SymPyDeprecationWarning(feature="expr_free_symbols method",
###Markdown
We apply this ansatz as a gate, like in the previous part; since there is only one kind of layer, we no longer need the flag parameter.
###Code
# ansatz to quantum gate
def gate_layer_v2(n, params):
parameters = ParameterVector('θ', num_params_v2)
qc = ansatz_layer(n,parameters)
params_dict = {}
j = 0
for p in parameters:
# The name of the value will be the string identifier,
# and an integer specifying the vector length
params_dict[p] = params[j]
j += 1
# Assign parameters using the assign_parameters method
qc = qc.assign_parameters(parameters = params_dict)
qc_gate = qc.to_gate()
qc_gate.name = "layer" # To show when we display the circuit
return qc_gate # return a quantum gate
###Output
_____no_output_____
###Markdown
We are going to make a quantum circuit with 3 layers, where each gate has the same structure.
###Code
# example of a quantum circuit
num_layers = 3
list_n = range(num_qubits)
params = np.random.random([num_layers*num_params_v2]) # all parameters
qc_gate = QuantumCircuit(num_qubits)
for i in range(len(params)//num_params):
# apply a function to consider m layers
qc_gate.append(gate_layer_v2(num_qubits,
params[num_params_v2*i:num_params_v2*(i+1)]),
list_n)
qc_gate.barrier()
qc_gate.draw()
###Output
_____no_output_____
###Markdown
Now, using decompose(), we can observe that the new structure consists of the same circuit for each layer.
###Code
qc_gate.decompose().draw(fold=-1)
###Output
_____no_output_____
###Markdown
Experiments As in the previous section, we will train with simulation. With this ansatz we use only 3 layers.
###Code
def boltzman_machine_v2(params):
n = 4
D = int(n**2)
cost = 0
list_n = range(n)
qc = QuantumCircuit(n)
for i in range(len(params)//num_params):
qc.append(gate_layer_v2(n,params[num_params*i:num_params*(i+1)]),
list_n)
shots= 8192
simulator = Aer.get_backend('statevector_simulator')
result = execute(qc, simulator).result()
statevector = result.get_statevector(qc)
for j in range(D):
cost += np.log10(max(0.001,
statevector[j].real*px_output.data[j].real
+(statevector[j].imag*px_output.data[j].imag)
)
)
cost = -cost/D
return cost
num_layers = 3
params = np.random.random([num_layers*num_params])
boltzman_machine_v2(params)
cost_cobyla = []
cost_nm = []
cost_spsa = []
params_cobyla = params
params_nm = params
params_spsa = params
epoch = 10
maxiter = 500
for i in range(epoch):
optimizer_cobyla = COBYLA(maxiter=maxiter)
ret = optimizer_cobyla.optimize(num_vars=len(params),
objective_function=boltzman_machine_v2,
initial_point=params_cobyla)
params_cobyla = ret[0]
cost_cobyla.append(ret[1])
optimizer_nm = NELDER_MEAD(maxiter=maxiter)
ret = optimizer_nm.optimize(num_vars=len(params),
objective_function=boltzman_machine_v2,
initial_point=params_nm)
params_nm = ret[0]
cost_nm.append(ret[1])
optimizer_spsa = SPSA(maxiter=maxiter)
ret = optimizer_spsa.optimize(num_vars=len(params),
objective_function=boltzman_machine_v2,
initial_point=params_spsa)
params_spsa = ret[0]
cost_spsa.append(ret[1])
###Output
_____no_output_____
###Markdown
The process is repeated to confirm which optimizer performs best with this new ansatz.
###Code
xfit = range(epoch)
plt.plot(xfit, cost_cobyla, label='COBYLA')
plt.plot(xfit, cost_nm, label='Nelder-Mead')
plt.plot(xfit, cost_spsa, label='SPSA')
plt.legend()
plt.title("C(x) ")
plt.xlabel("epoch")
plt.ylabel("cost")
plt.show()
def boltzman_machine_valid_v2(params):
n = 4
list_n = range(n)
qc = QuantumCircuit(n)
    for i in range(len(params)//num_params_v2):
        qc.append(gate_layer_v2(n,
                                params[num_params_v2*i:num_params_v2*(i+1)]),
list_n)
shots= 8192
simulator = Aer.get_backend('statevector_simulator')
result = execute(qc, simulator).result()
return result
###Output
_____no_output_____
###Markdown
Obtain the $P(x)$ for each optimizer
###Code
psi_vqc_spsa = boltzman_machine_valid_v2(params_spsa)
psi_spsa = psi_vqc_spsa.get_statevector()
psi_vqc_nm = boltzman_machine_valid_v2(params_nm)
psi_nm = psi_vqc_nm.get_statevector()
psi_vqc_cobyla = boltzman_machine_valid_v2(params_cobyla)
psi_cobyla = psi_vqc_cobyla.get_statevector()
###Output
_____no_output_____
###Markdown
It is reconfirmed that the best case is using COBYLA
###Code
psi_dict_cobyla = psi_vqc_cobyla.get_counts()
psi_dict_spsa = psi_vqc_spsa.get_counts()
psi_dict_nm = psi_vqc_nm.get_counts()
plot_histogram([dict_px_output, psi_dict_cobyla,
psi_dict_spsa, psi_dict_nm],
title='p(x) of the QBM',
legend=['correct distribution', 'simulation cobyla',
'simulation spsa', 'simulation nm'])
###Output
_____no_output_____
###Markdown
Now we draw samples from the distribution obtained from the simulation result.
###Code
def boltzman_machine_valid_random_v2(params):
n = 4
list_n = range(n)
qc = QuantumCircuit(n,n)
    for i in range(len(params)//num_params_v2):
        qc.append(gate_layer_v2(n,
                                params[num_params_v2*i:num_params_v2*(i+1)]),
list_n)
qc.measure(list_n,list_n)
shots= 1
job = execute( qc, Aer.get_backend('qasm_simulator'), shots=shots)
counts = job.result().get_counts()
return counts.keys()
matrix = np.zeros((2,2))
for i in range(0,16):
img = list(boltzman_machine_valid_random_v2(params_cobyla))[0]
matrix[0][0] = int(img[0])
matrix[0][1] = int(img[1])
matrix[1][0] = int(img[2])
matrix[1][1] = int(img[3])
plt.subplot(4 , 4 , 1+i)
plt.imshow(matrix)
plt.tight_layout();
###Output
_____no_output_____
###Markdown
Check how many images do not follow the expected distribution. Exercise Redesign the methods for the noise model and the real computer and see what the results are. Suggestion Modify the ansatz or use a new ansatz and apply it with a simulation, a noise model and a real quantum computer. You can use a 3x3 or 4x4 image; for that, consider reference [5]. Extending the problem For this section, consider the next set of images of size $3\times 3$ and generate the $|\psi\rangle$ state representing this distribution. We'll use the corresponding pixel-to-qubit mapping; you can use the "Mapping image to qubits" section above to help. Use the next code to represent it; you only need to identify the init_list values that represent our set of images, and consider 9 qubits for this problem.
###Code
num_qubits_3x3 = 9
# list of the basis-state indices that represent our images
init_list = []
# start from a statevector of dimension 2**num_qubits_3x3 (9 qubits)
px_3x3 = Statevector.from_label('0'*num_qubits_3x3)
for init_value in init_list:
px_3x3.data[init_value] = 1
px_3x3 /= np.sqrt(len(init_list)) # normalize the statevector
px_3x3 = Statevector(px_3x3) # use Qiskit's Statevector object
print(px_3x3) # print to check it's correct
###Output
_____no_output_____
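###Markdown
The image set referenced above is not reproduced here, so purely as a hedged illustration: the sketch below shows one way you might convert a hypothetical 3x3 binary image into the basis-state index that would go into init_list, assuming a row-major pixel-to-bit mapping analogous to the 2x2 case. The example images are placeholders, not the ones from the original figure.
###Code
# Hypothetical helper (an assumption): flatten a 3x3 binary image row by row and
# read the 9 pixels as the bits of a basis-state index.
def image_to_index(image_3x3):
    bits = ''.join(str(int(pixel)) for row in image_3x3 for pixel in row)
    return int(bits, 2)

# Placeholder images only -- replace them with the actual set from the figure.
example_images = [
    [[1, 0, 0], [0, 1, 0], [0, 0, 1]],   # diagonal line
    [[0, 0, 1], [0, 1, 0], [1, 0, 0]],   # anti-diagonal line
]
example_init_list = [image_to_index(img) for img in example_images]
print(example_init_list)
###Output
_____no_output_____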
###Markdown
Now we can plot the distribution $P_{3x3}(x)$ using the plot_histogram method.
###Code
dict_px_3x3 = px_3x3.probabilities_dict()
plot_histogram(dict_px_3x3)
###Output
_____no_output_____
###Markdown
Now you are going to design an ansatz; keep in mind that it has to work with 9 qubits.
###Code
## Design your own ansatz for 9 qubits
def ansatz_layer_3x3(n,parameters): # this ansatz is equivalent to a single layer
qc = QuantumCircuit(n)
# Your code goes here
return qc
###Output
_____no_output_____
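###Markdown
The ansatz design is left to you, so the cell below is only one possible sketch and not the intended solution: a hardware-efficient layer that mirrors the 4-qubit ansatz above, with a parameterized u gate on every qubit followed by a linear chain of CNOTs. It assumes 3 parameters per qubit, i.e. 27 parameters per layer, and reuses the imports already loaded in this notebook.
###Code
# One possible (hedged) 9-qubit layer, analogous to ansatz_layer above
def ansatz_layer_3x3_example(n, parameters):
    qc = QuantumCircuit(n)
    for i in range(n):
        # three parameters per qubit
        qc.u(parameters[i*3], parameters[(i*3)+1], parameters[(i*3)+2], i)
    for i in range(n-1):
        # linear chain of CNOT entanglers
        qc.cx(i, i+1)
    return qc
###Output
_____no_output_____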
###Markdown
Validate with the following code that a quantum gate is correctly built from your proposed ansatz.
###Code
num_params_3x3 = # check the parameters that you need in your ansatz
num_layers = # check the number of layers
list_n = range(num_qubits_3x3)
parameters_3x3 = ParameterVector('θ', num_params_3x3)
params = np.random.random([num_layers*num_params_3x3]) # all parameters
qc_gate_3x3 = QuantumCircuit(num_qubits_3x3)
for i in range(len(params)//num_params_3x3):
qc_gate_3x3.append(
ansatz_layer_3x3(num_qubits_3x3,
params[num_params_3x3*i:num_params_3x3*(i+1)]),
list_n)
qc_gate_3x3.barrier()
qc_gate_3x3.draw()
###Output
_____no_output_____
###Markdown
For each gate, we check that it is equivalent to our ansatz using decompose()
###Code
qc_gate_3x3.decompose().draw()
###Output
_____no_output_____
###Markdown
Based on the above examples, fill in what we are missing
###Code
def boltzman_machine_3x3(params):
    D = int(2**num_qubits_3x3)  # dimension of the 9-qubit statevector
cost = 0
list_n = range(num_qubits_3x3)
qc = QuantumCircuit(num_qubits_3x3)
for i in range(len(params)//num_params_3x3):
qc.append(
ansatz_layer_3x3(num_qubits_3x3,
params[num_params_3x3*i:num_params_3x3*(i+1)]),
list_n)
shots= 8192
simulator = Aer.get_backend('statevector_simulator')
result = execute(qc, simulator).result()
statevector = result.get_statevector(qc)
# how do we check the cost?
return cost
###Output
_____no_output_____
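###Markdown
The cost is left for you to fill in; purely as a hedged sketch (not the official answer), the function below mirrors the log-overlap cost of boltzman_machine_v2, assuming the 3x3 target state px_3x3 plays the role that px_output played in the 4-qubit case and that num_params_3x3 and your ansatz_layer_3x3 have already been defined.
###Code
# Hedged sketch of a completed cost function, mirroring boltzman_machine_v2
def boltzman_machine_3x3_example(params):
    D = int(2**num_qubits_3x3)          # number of statevector amplitudes
    cost = 0
    list_n = range(num_qubits_3x3)
    qc = QuantumCircuit(num_qubits_3x3)
    for i in range(len(params)//num_params_3x3):
        qc.append(
            ansatz_layer_3x3(num_qubits_3x3,
                             params[num_params_3x3*i:num_params_3x3*(i+1)]),
            list_n)
    result = execute(qc, Aer.get_backend('statevector_simulator')).result()
    statevector = result.get_statevector(qc)
    # log-overlap between the prepared state and the target px_3x3
    for j in range(D):
        cost += np.log10(max(0.001,
                             statevector[j].real*px_3x3.data[j].real
                             + (statevector[j].imag*px_3x3.data[j].imag)))
    return -cost/D
###Output
_____no_output_____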
###Markdown
Complete the code to run our QBM. For this, choose an optimizer and the number of iterations to use; remember that the optimizer has the maxiter variable.
###Code
params = np.random.random([num_layers*num_params_3x3])
print("cost:")
for i in range(): # number of iterations
optimizer = # which iterations and steps?
ret = optimizer.optimize(num_vars=len(params),
objective_function=boltzman_machine_3x3,
initial_point=params)
    params = ret[0]  # update the parameters for the next iteration
print(ret[1])
###Output
_____no_output_____
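###Markdown
The optimizer choice is up to you; as one hedged possibility (not the required answer), the cell below uses COBYLA, which performed best in the 4-qubit experiments, with 10 epochs of 500 iterations each.
###Code
# Hedged example of a completed training loop using COBYLA
params_3x3_example = np.random.random([num_layers*num_params_3x3])
for i in range(10):                      # number of epochs (an assumption)
    optimizer = COBYLA(maxiter=500)      # optimizer and step budget (an assumption)
    ret = optimizer.optimize(num_vars=len(params_3x3_example),
                             objective_function=boltzman_machine_3x3,
                             initial_point=params_3x3_example)
    params_3x3_example = ret[0]          # carry the parameters over to the next epoch
    print(ret[1])
###Output
_____no_output_____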
###Markdown
Now, we obtain the result in a distribution $P_{3\times 3}(x)$
###Code
def boltzman_machine_valid_3x3(params):
    n = num_qubits_3x3  # the 3x3 problem uses 9 qubits
    list_n = range(n)
    qc = QuantumCircuit(n)
    for i in range(len(params)//num_params_3x3):
        qc.append(
            ansatz_layer_3x3(n,
                             params[num_params_3x3*i:num_params_3x3*(i+1)]),
            list_n)
shots= 8192
simulator = Aer.get_backend('statevector_simulator')
result = execute(qc, simulator).result()
return result
psi_sv_3x3 = boltzman_machine_valid_3x3(params)
psi_3x3 = psi_sv_3x3.get_statevector()
###Output
_____no_output_____
###Markdown
Finally, we plot the results
###Code
psi_3x3_dict = psi_sv_3x3.get_counts()
plot_histogram([dict_px_3x3,psi_3x3_dict], title='p(x) of the QBM',
legend=['correct distribution', 'simulation'])
###Output
_____no_output_____
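###Markdown
For the noise-model exercise suggested earlier, the cell below is only a hedged starting point (an assumption, not the worked solution): it builds a simple depolarizing noise model with Qiskit Aer and samples the trained 4-qubit circuit through the qasm simulator, so the noisy distribution can be compared against the noiseless one. The import path and error rates depend on your installed Qiskit/Aer version.
###Code
# Hedged sketch: a simple depolarizing noise model for the sampling step
from qiskit.providers.aer.noise import NoiseModel, depolarizing_error

noise_model = NoiseModel()
noise_model.add_all_qubit_quantum_error(depolarizing_error(0.001, 1), ['u'])
noise_model.add_all_qubit_quantum_error(depolarizing_error(0.01, 2), ['cx'])

qc_noisy = QuantumCircuit(4, 4)
for i in range(len(params_cobyla)//num_params_v2):
    qc_noisy.append(gate_layer_v2(4, params_cobyla[num_params_v2*i:num_params_v2*(i+1)]),
                    range(4))
qc_noisy.measure(range(4), range(4))
job = execute(qc_noisy, Aer.get_backend('qasm_simulator'),
              noise_model=noise_model,
              basis_gates=noise_model.basis_gates,
              shots=8192)
plot_histogram([dict_px_output, job.result().get_counts()],
               legend=['correct distribution', 'noisy simulation'])
###Output
_____no_output_____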
###Markdown
If there is no problem up to this point, you will have a distribution similar to the one we have designed,congratulations!But for the end we leave the following question, can we decrease the number of qubits? References1. Amin, Mohammad & Andriyash, Evgeny & Rolfe, Jason & Kulchytskyy, Bohdan & Melko, Roger. (2016). Quantum Boltzmann Machine. Physical Review X. 8. 10.1103/PhysRevX.8.021050 [https://arxiv.org/pdf/1601.02036.pdf](https://arxiv.org/pdf/1601.02036.pdf) .2. Zoufal, Christa & Lucchi, Aurelien & Woerner, Stefan. (2021). Variational quantum Boltzmann machines. Quantum Machine Intelligence. 3. 10.1007/s42484-020-00033-7 [https://arxiv.org/pdf/2006.06004.pdf](https://arxiv.org/pdf/2006.06004.pdf). 3. Benedetti, Marcello & Garcia-Pintos, Delfina & Nam, Yunseong & Perdomo-Ortiz, Alejandro. (2018). A generative modeling approach for benchmarking and training shallow quantum circuits. npj Quantum Information. 5. 10.1038/s41534-019-0157-8. [https://arxiv.org/pdf/1801.07686.pdf](https://arxiv.org/pdf/1801.07686.pdf) [paper](https://www.nature.com/articles/s41534-019-0157-8)4. Rudolph, Manuel & Bashige, Ntwali & Katabarwa, Amara & Johr, Sonika & Peropadre, Borja & Perdomo-Ortiz, Alejandro. (2020). Generation of High Resolution Handwritten Digits with an Ion-Trap Quantum Computer. [https://arxiv.org/pdf/2012.03924.pdf](https://arxiv.org/pdf/2012.03924.pdf)5. Jinguo, Liu & Wang, Lei. (2018). Differentiable Learning of Quantum Circuit Born Machine. Physical Review A. 98. 10.1103/PhysRevA.98.062324. [https://arxiv.org/pdf/1804.04168.pdf](https://arxiv.org/pdf/1804.04168.pdf)
###Code
import qiskit.tools.jupyter
%qiskit_version_table
###Output
_____no_output_____ |
Day_044_HW.ipynb | ###Markdown
Homework 1. Try adjusting the parameters in RandomForestClassifier(...) and observe whether the results change. 2. Switch to other datasets (boston, wine) and compare the results with the regression model and the decision tree.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston, load_wine
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import mean_squared_error, r2_score, classification_report
###Output
_____no_output_____
###Markdown
Boston Data
###Code
boston = load_boston()
X, y = boston.data, boston.target
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
grid_para = {'n_estimators': np.arange(10, 20), 'max_depth': np.arange(10, 20)}
randforest_reg = RandomForestRegressor()
grid_randforest_reg = GridSearchCV(randforest_reg, grid_para, cv=5)
grid_randforest_reg.fit(x_train, y_train)
print('best parameters: {}'.format(grid_randforest_reg.best_params_))
y_pred = grid_randforest_reg.predict(x_test)
r_square = r2_score(y_test, y_pred)
mse = mean_squared_error(y_test, y_pred)
print('R-square: {}, MSE: {}'.format(r_square, mse))
feature_importance_dict = {'feature': boston.feature_names,
'importance': grid_randforest_reg.best_estimator_.feature_importances_}
feature_importance_df = pd.DataFrame(feature_importance_dict).sort_values(by='importance', ascending=False)
plt.figure(figsize=(10, 2))
plt.bar(x=feature_importance_df['feature'], height=feature_importance_df['importance'], align='center')
plt.xlabel('feature name')
plt.ylabel('importance')
plt.show()
###Output
_____no_output_____
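###Markdown
The homework also asks for a comparison against a regression model and a decision tree. The cell below is a sketch added for that comparison (it was not part of the original solution): it fits both baselines on the same train/test split so their R-square and MSE can be compared with the tuned random forest above. The max_depth value for the tree is an arbitrary choice.
###Code
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

# Baselines on the same Boston split as the random forest above
for name, model in [('LinearRegression', LinearRegression()),
                    ('DecisionTreeRegressor', DecisionTreeRegressor(max_depth=10))]:
    model.fit(x_train, y_train)
    y_pred = model.predict(x_test)
    print('{}: R-square: {:.3f}, MSE: {:.3f}'.format(
        name, r2_score(y_test, y_pred), mean_squared_error(y_test, y_pred)))
###Output
_____no_output_____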
###Markdown
Wine Data
###Code
wine = load_wine()
X, y = wine.data, wine.target
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
grid_param = {'n_estimators': np.arange(12, 20), 'max_depth': np.arange(2, 10)}
randforest_cls = RandomForestClassifier()
grid_randforest_cls = GridSearchCV(randforest_cls, grid_param, cv=5)
grid_randforest_cls.fit(x_train, y_train)
print('Best Parameters: {}'.format(grid_randforest_cls.best_params_))
print('-' * 50)
y_pred = grid_randforest_cls.predict(x_test)
report = classification_report(y_test, y_pred,
labels=np.unique(wine.target),
target_names=wine.target_names)
print(report)
feature_importance_dict = {'feature': wine.feature_names,
'importance': grid_randforest_cls.best_estimator_.feature_importances_}
feature_importance_df = pd.DataFrame(feature_importance_dict).sort_values(by='importance', ascending=False)
plt.figure(figsize=(20, 6))
plt.bar(x=feature_importance_df['feature'], height=feature_importance_df['importance'],
align='center', color='brown')
plt.xlabel('feature name')
plt.xticks(rotation=40)
plt.ylabel('importance')
plt.show()
###Output
_____no_output_____
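###Markdown
Likewise for the wine data, the sketch cell below (not part of the original solution) compares the tuned random forest with a logistic regression and a decision tree classifier on the same split; the max_iter and max_depth values are arbitrary choices.
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Baselines on the same wine split as the random forest above
for name, model in [('LogisticRegression', LogisticRegression(max_iter=5000)),
                    ('DecisionTreeClassifier', DecisionTreeClassifier(max_depth=5))]:
    model.fit(x_train, y_train)
    print('{} accuracy: {:.3f}'.format(name, accuracy_score(y_test, model.predict(x_test))))
###Output
_____no_output_____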
###Markdown
Homework Q1. Try adjusting the parameters in RandomForestClassifier(...) and observe whether the results change.
###Code
from sklearn import datasets, metrics
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
# Load the iris dataset
iris = datasets.load_iris()
# Split into training/test sets
x_train, x_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.25, random_state=4)
# Build the model
for x in range(1,100,10):
clf = RandomForestClassifier(n_estimators=x)
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
acc = metrics.accuracy_score(y_test, y_pred)
print('n_estimator = %d' % x)
print("Acuuracy: ", acc)
print('-'*30)
###Output
n_estimator = 1
Acuuracy: 0.9736842105263158
------------------------------
n_estimator = 11
Acuuracy: 0.9473684210526315
------------------------------
n_estimator = 21
Acuuracy: 0.9736842105263158
------------------------------
n_estimator = 31
Acuuracy: 0.9736842105263158
------------------------------
n_estimator = 41
Acuuracy: 0.9473684210526315
------------------------------
n_estimator = 51
Acuuracy: 0.9473684210526315
------------------------------
n_estimator = 61
Acuuracy: 0.9736842105263158
------------------------------
n_estimator = 71
Acuuracy: 0.9736842105263158
------------------------------
n_estimator = 81
Acuuracy: 0.9736842105263158
------------------------------
n_estimator = 91
Acuuracy: 0.9736842105263158
------------------------------
###Markdown
Q2. Switch to other datasets (boston, wine) and compare the results with the regression model and the decision tree.
###Code
# Load the Boston dataset
boston = datasets.load_boston()
# Split into training/test sets
x_train, x_test, y_train, y_test = train_test_split(boston.data, boston.target, test_size=0.1, random_state=4)
# Build the model
for x in range(1,100,10):
clf = RandomForestRegressor(n_estimators=x)
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
print('n_estimator = %d' % x)
print("Mean squared error: %.2f" % mean_squared_error(y_test, y_pred))
print('-'*30)
###Output
n_estimator = 1
Mean squared error: 28.68
------------------------------
n_estimator = 11
Mean squared error: 11.48
------------------------------
n_estimator = 21
Mean squared error: 10.40
------------------------------
n_estimator = 31
Mean squared error: 9.74
------------------------------
n_estimator = 41
Mean squared error: 9.58
------------------------------
n_estimator = 51
Mean squared error: 11.15
------------------------------
n_estimator = 61
Mean squared error: 10.07
------------------------------
n_estimator = 71
Mean squared error: 9.65
------------------------------
n_estimator = 81
Mean squared error: 10.06
------------------------------
n_estimator = 91
Mean squared error: 10.20
------------------------------
###Markdown
[Homework focus] Make sure you understand the meaning of every hyperparameter in the random forest model, and observe how adjusting the hyperparameters affects the results. Homework 1. Try adjusting the parameters in RandomForestClassifier(...) and observe whether the results change. 2. Switch to other datasets (boston, wine) and compare the results with the regression model and the decision tree.
###Code
from sklearn import datasets, metrics
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.model_selection import train_test_split
wine = datasets.load_wine()
x = wine.data
y = wine.target
print('x shape: ', x.shape)
print('y sample: ', y[: 6]) # classification
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=4)
# baseline logistic regression
logreg = LogisticRegression(solver='newton-cg')
logreg.fit(x_train, y_train)
print('params: ', logreg.coef_)
print('acc: ', logreg.score(x_test, y_test))
clf = RandomForestClassifier(n_estimators=10, max_depth=4)
clf.fit(x_train, y_train)
print('acc: ', clf.score(x_test, y_test))
print('feature importances: ', {name:value for (name, value) in zip(wine.feature_names, clf.feature_importances_)})
# boston
boston = datasets.load_boston()
x = boston.data
y = boston.target
print('x shape: ', x.shape)
print('y sample: ', y[: 6]) # linear regression
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=5)
# baseline linear regression
linear = LinearRegression()
linear.fit(x_train, y_train)
print('params: ', linear.coef_)
print('R2: ', linear.score(x_test, y_test))
clf2 = RandomForestRegressor(n_estimators=10, max_depth=4)
clf2.fit(x_train, y_train)
print('R2: ', clf2.score(x_test, y_test))
print('feature importances: ', {name:value for (name, value) in zip(boston.feature_names, clf2.feature_importances_)})
###Output
R2: 0.837963895356755
feature importances: {'CRIM': 0.03946751128402972, 'ZN': 0.0, 'INDUS': 0.002276176307010874, 'CHAS': 0.0, 'NOX': 0.009136104068026727, 'RM': 0.3577398862093797, 'AGE': 0.0029516843209303235, 'DIS': 0.08636260460188355, 'RAD': 0.002480758878488432, 'TAX': 0.00436853956189145, 'PTRATIO': 0.0034113614528555186, 'B': 0.0033432546939453226, 'LSTAT': 0.4884621186215584}
|
Data Analyst with Python/11_Cleaning_Data_in_Python/11_2_Text and categorical data problems.ipynb | ###Markdown
2. Text and categorical data problems**Categorical and text data can often be some of the messiest parts of a dataset due to their unstructured nature. In this chapter, you’ll learn how to fix whitespace and capitalization inconsistencies in category labels, collapse multiple categories into one, and reformat strings for consistency.** Membership constraints Categories and membership constraints**Predifined finite set of categories**Type of data | Example values | Numeric representation:---|:---|:---Marriage Status | `unmarried`, `married` | `0`, `1`Household Income Category | `0-20k`, `20-40k`, ... | `0`, `1`, ...Loan Status | `default`, `payed`, `no_loan` | `0`, `1`, `2`*Marriage status can **only** be `unmarried` _or_ `married`*To run machine learning models on categorical data, they are often coded as numbers. Since categorical data represent a predefined set of categories, they can't have values that go beyond these predefined categories. Why could we have these problems?We can have inconsistencies in our categorical data for a variety of reasons. This could be due to data entry issues with free text vs dropdown fields, *data parsing errors* and *other types of errors*. How do we treat these problems?There's a variety of ways we can treat these, with increasingly specific solutions for different types of inconsistencies. Most simply, we can drop the rows with incorrect categories. We can attempt remapping incorrect categories to correct ones, and more. An exampleHere's a DataFrame named `study_data` containing a list of first names, birth dates, and blood types. Additionally, a DataFrame named categories, containing the correct possible categories for the blood type column has been created as well.```python Read study data and print itstudy_data = pd.read_csv('study.csv')study_data`````` name birthday blood_type1 Beth 2019-10-20 B-2 Ignatius 2020-07-08 A-3 Paul 2019-08-12 O+4 Helen 2019-03-17 O-5 Jennifer 2019-12-17 Z+ <--6 Kennedy 2020-04-27 A+7 Keith 2018-04-19 AB+```There's definitely no blood type named `Z+`. Luckily, the `categories` DataFrame will help us systematically spot all rows with these inconsistencies. ```python Correct possible blood typescategories`````` blood_type1 O-2 O+3 A-4 A+5 B+6 B-7 AB+8 AB-```It's always good practice to keep a log of all possible values of your categorical data, as it will make dealing with these types of inconsistencies way easier. A note on joins- Anti Joins: What is **in A and not in B**- Inner Joins: What is **in *both* A and B** Finding inconsistent categoriesWe first get all inconsistent categories in the `blood_type` column of the `study_data` DataFrame. We do that by creating a set out of the `blood_type` column which stores its unique values, and use the `difference` method which takes in as argument the `blood_type` column from the `categories` DataFrame. ```pythoninconsistent_categories = set(study_data['blood_type']).difference(categories['blood_type'])print(inconsistent_categories)```This returns all the categories in `blood_type` that are not in categories. ```{'Z+'}```We then find the inconsistent rows by finding all the rows of the `blood_type` columns that are equal to inconsistent categories by using the `isin` method, this returns a series of boolean values that are `True` for inconsistent rows and `False` for consistent ones. 
We then subset the `study_data` DataFrame based on these boolean values, ```python Get and print rows with incinsistent categoriesinconsistent_rows = study_data['blood_type'].isin(inconsistent_categories)study_data[inconsistent_rows]```and we have our inconsistent data.``` name birthday blood_type5 Jennifer 2019-12-17 Z+``` Dropping inconsistent categoriesTo drop inconsistent rows and keep ones that are only consistent. We just use the tilde(`~`) symbol while subsetting which returns everything except inconsistent rows.```pythoninconsistent_categories = set(study_data['blood_type']).difference(categories['blood_type'])inconsistent_rows = study_data['blood_type'].isin(inconsistent_categories)inconsistent_data = study_data[inconsistent_rows] Drop inconsistent categories and get consistent data onlyconsistent_data = study_data[~inconsistent_rows]``` Finding consistencyIn this exercise and throughout this chapter, you'll be working with the `airlines` DataFrame which contains survey responses on the San Francisco Airport from airline customers.The DataFrame contains flight metadata such as the airline, the destination, waiting times as well as answers to key questions regarding cleanliness, safety, and satisfaction. Another DataFrame named `categories` was created, containing all correct possible values for the survey columns.In this exercise, you will use both of these DataFrames to find survey answers with inconsistent values, and drop them, effectively performing an outer and inner join on both these DataFrames as seen in the video exercise.
###Code
import pandas as pd
airlines = pd.read_csv('airlines.csv', index_col=0)
airlines.head(3)
###Output
_____no_output_____
###Markdown
- Print the `categories` DataFrame and take a close look at all possible correct categories of the survey columns.
###Code
# Print categories DataFrame
categories = pd.read_csv('categories.csv')
print(categories)
###Output
cleanliness safety satisfaction
0 Clean Neutral Very satisfied
1 Average Very safe Neutral
2 Somewhat clean Somewhat safe Somewhat satisfied
3 Somewhat dirty Very unsafe Somewhat unsatisfied
4 Dirty Somewhat unsafe Very unsatisfied
###Markdown
- Print the unique values of the survey columns in `airlines` using the `.unique()` method.
###Code
# Print unique values of survey columns in airlines
print('Cleanliness: ', airlines['cleanliness'].unique(), "\n")
print('Safety: ', airlines['safety'].unique(), "\n")
print('Satisfaction: ', airlines['satisfaction'].unique(), "\n")
###Output
Cleanliness: ['Clean' 'Average' 'Unacceptable' 'Somewhat clean' 'Somewhat dirty'
'Dirty']
Safety: ['Neutral' 'Very safe' 'Somewhat safe' 'Very unsafe' 'Somewhat unsafe']
Satisfaction: ['Very satisfied' 'Neutral' 'Somewhat satsified' 'Somewhat unsatisfied'
'Very unsatisfied']
###Markdown
- Create a set out of the `cleanliness` column in `airlines` using `set()` and find the inconsistent category by finding the **difference** in the `cleanliness` column of `categories`.- Find rows of `airlines` with a `cleanliness` value not in `categories` and print the output.
###Code
# Find the cleanliness category in airlines not in categories
cat_clean = set(airlines['cleanliness']).difference(categories['cleanliness'])
# Find rows with that category
cat_clean_rows = airlines['cleanliness'].isin(cat_clean)
# View rows with inconsistent category
display(airlines[cat_clean_rows])
###Output
_____no_output_____
###Markdown
- Print the rows with the consistent categories of `cleanliness` only.
###Code
# View rows with consistent categories only
display(airlines[~cat_clean_rows])
###Output
_____no_output_____
###Markdown
--- Categorical variables What type of errors could we have?1. **Value Inconsistency** - *Inconsistent fields*: `'married'`, `'Maried'`, `'UNMARRIED'`, `'not married'`... - _Trailling white spaces: _`'married '`, `' married '` ...2. **Collapsing too many categories to few** - *Creating new groups*: `0-20k`, `20-40k` categories ... from continuous household income data - *Mapping groups to new ones*: Mapping household income categories to 2 `'rich'`, `'poor'`3. **Making sure data is of type `category`** Value consistencyA common categorical data problem is having values that slightly differ because of capitalization. Not treating this could lead to misleading results when we decide to analyze our data, for example, let's assume we're working with a demographics dataset, and we have a marriage status column with inconsistent capitalization. ***Capitalization***: `'married'`, `'Married'`, `'UNMARRIED'`, `'unmarried'` ...Here's what counting the number of married people in the `marriage_status` Series would look like. Note that the `.value_counts()` methods works on Series only.```python Get marriage status columnmarriage_status = demographics['marriage_status']marriage_status.value_counts()``````unmarried 352married 268MARRIED 204UNMARRIED 176dtype: int64```For a DataFrame, we can `groupby` the column and use the `.count()` method.```python Get value counts on DataFramemarriage_status.groupby('marriage_status').count()`````` household_income gendermarriage_status MARRIED 204 204UNMARRIED 176 176married 268 268unmarried 352 352```To deal with this, we can either capitalize or lowercase the marriage_status column. This can be done with the `str.upper()` or `str.lower()` functions respectively.```python Caplitalizemarriage_status['marriage_status'] = marriage_status['marriage_status'].str.upper()marriage_status['marriage_status'].value.count()``````UNMARRIED 528MARRIED 472``````python Lowercasemarriage_status['marriage_status'] = marriage_status['marriage_status'].str.lower()marriage_status['marriage_status'].value.count()``````unmarried 528married 472``` Another common problem with categorical values are leading or trailing spaces. ***Trailling spaces***: `'married '`, `'married'`, `'unmarried'`, `' unmarried'` ...For example, imagine the same demographics DataFrame containing values with leading spaces. Here's what the counts of married vs unmarried people would look like.```python Get marriage status columnmarriage_status = demographics['marriage_status']marriage_status.value_counts()`````` unmarried 352unmarried 268married 204married 176dtype: int64```Note that there is a married category with a trailing space on the right, which makes it hard to spot on the output, as opposed to unmarried.To remove leading spaces, we can use the `str.strip()` method which when given no input, strips all leading and trailing white spaces.```python Strip all spacesmarriage_status = demographics['marriage_status'].str.strip()demographics['marriage_status'].value_counts()``````unmarried 528married 472``` Collapsing data into categories***Create categories out of data***: `income_group` column from `income` columnTo create categories out of data, let's use the example of creating an income group column in the demographics DataFrame. We can do this in 2 ways. 
The first method utilizes the `qcut` function from `pandas`, which automatically divides our data based on its distribution into the number of categories we set in the `q` argument, we created the category names in the group_names list and fed it to the labels argument, returning the following. ```python Using qcut()import padnas as pdgroup_names = ['0-200k', '200-500k', '500k+']demographics['income_group'] = pd.qcut(demographics['household_income'], q = 3, labels = group_names) Print income_group columndemographics[['income_group', 'household_income']]`````` category household_income0 200k-500k 1892431 500K+ 778533...```Notice that the first row actually misrepresents the actual income of the income group, as we didn't instruct qcut where our ranges actually lie.We can do this with the `cut` function instead, which lets us define category cutoff ranges with the `bins` argument. It takes in a list of cutoff points for each category, with the final one being infinity represented with `np.inf()`. From the output, we can see this is much more correct.```python Using cut() - create category ranges and namesranges = [0, 200000, 500000, np.inf]group_names = ['0-200k', '200-500k', '500k+'] Create income group columndemographics['income_group'] = pd.cut(demographics['household_income'], bins=ranges, labels = group_names) Print income_group columndemographics[['income_group', 'household_income']]`````` category Income0 200k-500k 1892431 500K+ 778533``` Collapsing data into categoriesSometimes, we may want to reduce the amount of categories we have in our data. Let's move on to mapping categories to fewer ones. For example, assume we have a column containing the operating system of different devices, and contains these unique values. Say we want to collapse these categories into 2, `DesktopOS`, and `MobileOS`. We can do this using the replace method. It takes in a dictionary that maps each existing category to the category name you desire. ***Map categories to fewer ones***: reducing categories in categorical column`operating_system` column is: `'Microsoft'`, `'MacOS'`, `'IOS'`, `'Android'`, `'Linux'``operating_system` column should become: `'DesktopOS'`, `'MobileOS'````python Create mapping dictionary and replacemapping = {'Microsoft':'DesktopOS', 'MacOS':'DesktopOS' , 'Linux':'DesktopOS' , 'IOS':'MobileOS' , 'Android':'MobileOS'}device['operating_system'] = devices['operating_system'].replace(mapping)device['operating_system'].unique()``````array(['DesktopOS', 'MobileOS'], dtype=object)```In this case, this is the mapping dictionary. A quick print of the unique values of operating system shows the mapping has been complete. Inconsistent categoriesIn this exercise, you'll be revisiting the `airlines` DataFrame from the previous lesson.As a reminder, the DataFrame contains flight metadata such as the airline, the destination, waiting times as well as answers to key questions regarding cleanliness, safety, and satisfaction on the San Francisco Airport.In this exercise, you will examine two categorical columns from this DataFrame, `dest_region` and `dest_size` respectively, assess how to address them and make sure that they are cleaned and ready for analysis. - Print the unique values in `dest_region` and `dest_size` respectively.
###Code
# Print unique values of both columns
print(airlines['dest_region'].unique())
print(airlines['dest_size'].unique())
###Output
['Asia' 'Canada/Mexico' 'West US' 'East US' 'Midwest US' 'EAST US'
'Middle East' 'Europe' 'eur' 'Central/South America'
'Australia/New Zealand' 'middle east']
['Hub' 'Small' 'Medium' 'Large' ' Hub' 'Hub ' ' Small'
'Medium ' ' Medium' ' Large' 'Small ' 'Large ']
###Markdown
QuestionFrom looking at the output, what do you think is the problem with these columns?1. ~~The `dest_region` column has only inconsistent values due to capitalization.~~2. The `dest_region` column has inconsistent values due to capitalization and has one value that needs to be remapped.3. The `dest_size` column has only inconsistent values due to leading and trailing spaces.**Answer: 2,3** - Change the capitalization of all values of `dest_region` to lowercase.- Replace the `'eur'` with `'europe'` in `dest_region` using the `.replace()` method.
###Code
# Lower dest_region column and then replace "eur" with "europe"
airlines['dest_region'] = airlines['dest_region'].str.lower()
airlines['dest_region'] = airlines['dest_region'].replace({'eur':'europe'})
###Output
_____no_output_____
###Markdown
- Strip white spaces from the `dest_size` column using the `.strip()` method.
###Code
# Remove white spaces from `dest_size`
airlines['dest_size'] = airlines['dest_size'].str.strip()
###Output
_____no_output_____
###Markdown
- Verify that the changes have been into effect by printing the unique values of the columns using `.unique()`.
###Code
# Verify changes have been effected
print(airlines['dest_region'].unique())
print(airlines['dest_size'].unique())
###Output
['asia' 'canada/mexico' 'west us' 'east us' 'midwest us' 'middle east'
'europe' 'central/south america' 'australia/new zealand']
['Hub' 'Small' 'Medium' 'Large']
###Markdown
Remapping categoriesTo better understand survey respondents from `airlines`, you want to find out if there is a relationship between certain responses and the day of the week and wait time at the gate.The `airlines` DataFrame contains the `day` and `wait_min` columns, which are categorical and numerical respectively. The `day` column contains the exact day a flight took place, and `wait_min` contains the amount of minutes it took travelers to wait at the gate. To make your analysis easier, you want to create two new categorical variables:`wait_type`: `'short'` for 0-60 min, `'medium'` for 60-180 and `long` for 180+`day_week`: `'weekday'` if day is in the weekday, `'weekend'` if day is in the weekend. - Create the ranges and labels for the `wait_type` column mentioned in the description above.- Create the `wait_type` column by from `wait_min` by using `pd.cut()`, while inputting `label_ranges` and `label_names` in the correct arguments.- Create the `mapping` dictionary mapping weekdays to `'weekday'` and weekend days to `'weekend'`.- Create the `day_week` column by using `.replace()`.
###Code
import numpy as np
# Create ranges for categories
label_ranges = [0, 60, 180, np.inf]
label_names = ['short', 'medium', 'long']
# Create wait_type column
airlines['wait_type'] = pd.cut(airlines['wait_min'], bins = label_ranges,
labels = label_names)
# Create mappings and replace
mappings = {'Monday':'weekday', 'Tuesday':'weekday', 'Wednesday': 'weekday',
'Thursday': 'weekday', 'Friday': 'weekday',
'Saturday': 'weekend', 'Sunday': 'weekend'}
airlines['day_week'] = airlines['day'].replace(mappings)
###Output
_____no_output_____
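###Markdown
As an optional aside (a sketch, not part of the exercise): pd.qcut could be used instead of pd.cut if you wanted wait-time groups of roughly equal size based on the observed distribution, rather than the fixed 0-60/60-180/180+ cutoffs. The column name wait_type_qcut is an illustrative choice.
###Code
# Hedged alternative: distribution-based binning with qcut
airlines['wait_type_qcut'] = pd.qcut(airlines['wait_min'], q=3,
                                     labels=['short', 'medium', 'long'])
print(airlines['wait_type_qcut'].value_counts())
###Output
_____no_output_____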
###Markdown
*You just created two new categorical variables, that when combined with other columns, could produce really interesting analysis. Don't forget, you can always use an* `assert` *statement to check your changes passed.*
###Code
print(airlines[['wait_type', 'wait_min', 'day_week']])
import matplotlib.pyplot as plt
import seaborn as sns
sns.barplot(data=airlines, x='wait_min', y='satisfaction', hue='day_week')
plt.show()
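# A hedged extra check (not part of the original exercise): assert statements
# verifying that the cleaning steps above actually took effect.
assert airlines['dest_region'].str.islower().all()                 # capitalization fixed
assert 'eur' not in airlines['dest_region'].unique()               # 'eur' remapped to 'europe'
assert set(airlines['wait_type'].dropna().unique()) <= {'short', 'medium', 'long'}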
###Output
_____no_output_____ |