**Dataset schema** (one row per notebook file; ranges are min–max, and `stringclasses` columns take a single distinct value):

| Column | Type | Range |
|---|---|---|
| hexsha | stringlengths | 40–40 |
| size | int64 | 6–14.9M |
| ext | stringclasses | 1 value |
| lang | stringclasses | 1 value |
| max_stars_repo_path | stringlengths | 6–260 |
| max_stars_repo_name | stringlengths | 6–119 |
| max_stars_repo_head_hexsha | stringlengths | 40–41 |
| max_stars_repo_licenses | list | |
| max_stars_count | int64 | 1–191k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24–24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24–24 |
| max_issues_repo_path | stringlengths | 6–260 |
| max_issues_repo_name | stringlengths | 6–119 |
| max_issues_repo_head_hexsha | stringlengths | 40–41 |
| max_issues_repo_licenses | list | |
| max_issues_count | int64 | 1–67k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24–24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24–24 |
| max_forks_repo_path | stringlengths | 6–260 |
| max_forks_repo_name | stringlengths | 6–119 |
| max_forks_repo_head_hexsha | stringlengths | 40–41 |
| max_forks_repo_licenses | list | |
| max_forks_count | int64 | 1–105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24–24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24–24 |
| avg_line_length | float64 | 2–1.04M |
| max_line_length | int64 | 2–11.2M |
| alphanum_fraction | float64 | 0–1 |
| cells | list | |
| cell_types | list | |
| cell_type_groups | list | |
ec960b52db7a2e0622d9832152945074f5cf72e0
5,295
ipynb
Jupyter Notebook
tasks/task_02_making_materials/4_example_materials_parameter_study.ipynb
pshriwise/neutronics-workshop
d2b80b2f73c50b94a56b98f0bb180c03ecb0a906
[ "MIT" ]
1
2021-08-23T22:49:31.000Z
2021-08-23T22:49:31.000Z
tasks/task_02_making_materials/4_example_materials_parameter_study.ipynb
pshriwise/neutronics-workshop
d2b80b2f73c50b94a56b98f0bb180c03ecb0a906
[ "MIT" ]
null
null
null
tasks/task_02_making_materials/4_example_materials_parameter_study.ipynb
pshriwise/neutronics-workshop
d2b80b2f73c50b94a56b98f0bb180c03ecb0a906
[ "MIT" ]
null
null
null
29.093407
219
0.593012
[ [ [ "# Part 4 - Changing materials\n\nAs we saw in Part 3, the Neutronics Material Maker makes it easy to find the density of materials.\n\nIt is important to account for material density correctly in neutronics simulations as the density of a material impacts the number density of atoms and therefore the neutronics reaction rate.\n\nDensity is impacted by material properties such as temperature, enrichment and pressure.", "_____no_output_____" ] ], [ [ "# imports packages needed for the example\n\nimport numpy as np\nimport plotly.graph_objs as go\n\nimport neutronics_material_maker as nmm", "_____no_output_____" ] ], [ [ "The following example calculates water density as a function of temperature (at constant pressure) using the Neutronics Material Maker. The Neutronics Material Maker uses the Python CoolProp package to do this.\n\nUsing input parameters from the WCLL blanket, we will show density as a function of temperature over a large range (at constant pressure).\n\nWCLL input parameters:\n- pressure = 15.5 MPa\n- inlet temperature = 285 degrees C\n- outlet temperature = 325 degrees C", "_____no_output_____" ] ], [ [ "temperatures = np.linspace(400, 800., 100)\n\nwater_densities = [nmm.Material.from_library('H2O', temperature=temperature, pressure=15500000).openmc_material.density for temperature in temperatures]\n\nfig = go.Figure()\n\nfig.add_trace(go.Scatter(\n x=temperatures,\n y=water_densities,\n mode='lines+markers',\n showlegend=False),\n)\n\nfig.update_layout(\n title=\"Water density as a function of temperature (at constant pressure)\",\n xaxis_title=\"Temperature in C\",\n yaxis_title=\"Density (g/cm3)\"\n)\n\nfig.show()", "_____no_output_____" ] ], [ [ "Similarly, the next example shows how Helium density changes as a function of pressure (at constant temperature).", "_____no_output_____" ] ], [ [ "pressures = np.linspace(1000000., 10000000., 100)\n\nhelium_densities = [nmm.Material.from_library('He', temperature=700, pressure=pressure).openmc_material.density for pressure in pressures]\n\nfig = go.Figure()\n\nfig.add_trace(go.Scatter(\n x=pressures,\n y=helium_densities,\n mode='lines+markers',\n showlegend=False),\n)\n\nfig.update_layout(\n title=\"Helium density as a function of pressure (at constant temperature)\",\n xaxis_title=\"Pressure in Pa\",\n yaxis_title=\"Density (g/cm3)\"\n)\n\nfig.show()", "_____no_output_____" ] ], [ [ "This example shows how the density of a Lithium ceramic changes as a function of Lithium-6 enrichment.", "_____no_output_____" ] ], [ [ "enrichments = np.linspace(0., 100., 50)\n\nli4sio4_densities = [nmm.Material.from_library('Li4SiO4', enrichment=enrichment).openmc_material.density for enrichment in enrichments]\n\nfig =go.Figure()\n\nfig.add_trace(go.Scatter(\n x=enrichments,\n y=li4sio4_densities,\n mode='lines+markers',\n showlegend=False),\n)\n\nfig.update_layout(\n title=\"Lithium ceramic density as a function of Li-6 enrichment\",\n xaxis_title=\"Li-6 enrichment\",\n yaxis_title=\"Density (g/cm3)\"\n)\n\nfig.show()", "_____no_output_____" ] ], [ [ "(Note: A density parameter study like this is not possible using in-built OpenMC functions as material densities must be specified explicitly).", "_____no_output_____" ], [ "**Learning Outcomes for Part 4:**\n\n- Understand that density varies for materials as a function of pressure, temperature and enrichment.\n- It is important to account for materials properties correctly, especially when the impact material density.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
ec9614574ec90c7cfa8b564b7cf4393bc9cf6ce1
90,984
ipynb
Jupyter Notebook
round/figures/distance.ipynb
tagordon/round
65e6329087f007e763893dd5103073390c9cbeb6
[ "MIT" ]
null
null
null
round/figures/distance.ipynb
tagordon/round
65e6329087f007e763893dd5103073390c9cbeb6
[ "MIT" ]
null
null
null
round/figures/distance.ipynb
tagordon/round
65e6329087f007e763893dd5103073390c9cbeb6
[ "MIT" ]
null
null
null
241.978723
49,392
0.915018
[ [ [ "import numpy as np\nimport matplotlib.pyplot as pl\nimport matplotlib as mpl\nimport pandas as pd\nimport sys\nsys.path.append(\"../\")\nimport gyrochrones as gyr\n\npl.rc('xtick', labelsize=20)\npl.rc('ytick', labelsize=20)\npl.rc('axes', labelsize=25)\npl.rc('axes', titlesize=30)\npl.rc('legend', handlelength=3)\npl.rc('legend', fontsize=20)\n\n%matplotlib inline", "_____no_output_____" ], [ "df = pd.read_hdf('../../output/good.h5')", "_____no_output_____" ], [ "dist = np.array([0, 150, 200, 300, 1000000])\ngyr_age = 600\ngyrochrone = gyr.Gordon2019(df['B_V'])\nfinite_age_mask = np.isfinite(gyrochrone)\ncolor_mask = (df['B_V'] > 1.2) & (df['B_V'] < 1.4)\nfig, axs = pl.subplots(2, 2, figsize=(20, 10))\naxs = np.concatenate(axs)\npl.subplots_adjust(wspace=0.5, hspace=0.5)\n#xs = np.concatenate(axs)\nfor i in range(1, len(dist)):\n cut = (df['r_est'] > dist[i-1]) & (df['r_est'] < dist[i])\n mask = finite_age_mask & cut & color_mask\n perdiff = np.exp(df['logperiod_mean'][mask]) - gyrochrone[mask]\n if i == len(dist)-1:\n axs[i-1].annotate(\"{0} < r \\nN={1}\".format(dist[i-1], np.sum(mask)), \n xy=(0.05, 0.8), xycoords='axes fraction', fontsize=15)\n else:\n axs[i-1].annotate(\"{0} < r < {1}\\nN={2}\".format(dist[i-1], dist[i], np.sum(mask)), \n xy=(0.05, 0.8), xycoords='axes fraction', fontsize=15)\n axs[i-1].hist(perdiff, color='b', density=True, bins=30, alpha=0.5)\n axs[i-1].set_xlabel(r'$P_\\mathrm{{rot}} - P({0})$'.format(gyr_age))\n axs[i-1].set_ylabel('frequency')\npl.savefig(\"distance.pdf\")", "_____no_output_____" ], [ "dist = np.array([0, 150, 200, 300, 1000000])\ngyr_age = 600\ngyrochrone = gyr.MM09e3(df['B_V'], gyr_age)\nfinite_age_mask = np.isfinite(gyrochrone)\nnoclusters = (df['k2_campaign_str'] != b'16') & (df['k2_campaign_str'] != b'5') & (df['k2_campaign_str'] != b'4')\nfig, axs = pl.subplots(2, 2, figsize=(20, 10))\naxs = np.concatenate(axs)\npl.subplots_adjust(wspace=0.5, hspace=0.5)\n#xs = np.concatenate(axs)\nfor i in range(1, len(dist)):\n cut = (df['r_est'] > dist[i-1]) & (df['r_est'] < dist[i])\n mask = finite_age_mask & cut & noclusters & color_mask\n perdiff = np.exp(df['logperiod_mean'][mask]) - gyrochrone[mask]\n if i == len(dist)-1:\n axs[i-1].annotate(\"{0} < r \\nN={1}\".format(dist[i-1], np.sum(mask)), \n xy=(0.05, 0.8), xycoords='axes fraction', fontsize=15)\n else:\n axs[i-1].annotate(\"{0} < r < {1}\\nN={2}\".format(dist[i-1], dist[i], np.sum(mask)), \n xy=(0.05, 0.8), xycoords='axes fraction', fontsize=15)\n axs[i-1].hist(perdiff, color='b', density=True, bins=30, alpha=0.5)\n axs[i-1].set_xlabel(r'$P_\\mathrm{{rot}} - P({0})$'.format(gyr_age))\n axs[i-1].set_ylabel('frequency')\nfinite_age_mask = np.isfinite(gyrochrone)\npl.savefig(\"distance_noclusters.pdf\")", "_____no_output_____" ], [ "fig, axs = pl.subplots(2, 2, figsize=(40, 20))\naxs = np.concatenate(axs)\npl.subplots_adjust(wspace=0.5, hspace=0.5)\nfor i in range(1, len(dist)):\n cut = (df['r_est'] > dist[i-1]) & (df['r_est'] < dist[i])\n mask = finite_age_mask & cut \n if i == len(dist)-1:\n axs[i-1].annotate(\"{0} < r \\nN={1}\".format(dist[i-1], np.sum(mask)), \n xy=(0.05, 0.8), xycoords='axes fraction', fontsize=15)\n else:\n axs[i-1].annotate(\"{0} < r < {1}\\nN={2}\".format(dist[i-1], dist[i], np.sum(mask)), \n xy=(0.05, 0.8), xycoords='axes fraction', fontsize=15)\n axs[i-1].semilogy(df['B_V'][mask], np.exp(df['logperiod_mean'][mask]), 'k.')\n axs[i-1].semilogy(df['B_V'][mask], gyr.MM09e3(df['B_V'][mask], 600), 'b.')\n axs[i-1].semilogy(df['B_V'][mask & color_mask], 
np.exp(df['logperiod_mean'][mask & color_mask]), 'r.')\n axs[i-1].set_xlim(0.3, 2.0)\n axs[i-1].set_ylim(1, 35)\n #axs[i-1].set_xlabel(r'$P_\\mathrm{{rot}} - P({0})$'.format(gyr_age))\n #axs[i-1].set_ylabel('frequency')\n#finite_age_mask = np.isfinite(gyrochrone)\n#pl.savefig(\"distance_noclusters.pdf\")", "_____no_output_____" ], [ "pl.plot(df['logperiod_mean'], df['logperiod_sd'], '.')", "_____no_output_____" ], [ "for k in df.keys():\n print(k)", "solution_id\ndesignation\nsource_id\nrandom_index\ngaia_ref_epoch\nra\nra_error\ndec\ndec_error\nparallax\nparallax_error\nparallax_over_error\npmra\npmra_error\npmdec\npmdec_error\nra_dec_corr\nra_parallax_corr\nra_pmra_corr\nra_pmdec_corr\ndec_parallax_corr\ndec_pmra_corr\ndec_pmdec_corr\nparallax_pmra_corr\nparallax_pmdec_corr\npmra_pmdec_corr\nastrometric_n_obs_al\nastrometric_n_obs_ac\nastrometric_n_good_obs_al\nastrometric_n_bad_obs_al\nastrometric_gof_al\nastrometric_chi2_al\nastrometric_excess_noise\nastrometric_excess_noise_sig\nastrometric_params_solved\nastrometric_primary_flag\nastrometric_weight_al\nastrometric_pseudo_colour\nastrometric_pseudo_colour_error\nmean_varpi_factor_al\nastrometric_matched_observations\nvisibility_periods_used\nastrometric_sigma5d_max\nframe_rotator_object_type\nmatched_observations\nduplicated_source\nphot_g_n_obs\nphot_g_mean_flux\nphot_g_mean_flux_error\nphot_g_mean_flux_over_error\nphot_g_mean_mag\nphot_bp_n_obs\nphot_bp_mean_flux\nphot_bp_mean_flux_error\nphot_bp_mean_flux_over_error\nphot_bp_mean_mag\nphot_rp_n_obs\nphot_rp_mean_flux\nphot_rp_mean_flux_error\nphot_rp_mean_flux_over_error\nphot_rp_mean_mag\nphot_bp_rp_excess_factor\nphot_proc_mode\nbp_rp\nbp_g\ng_rp\nradial_velocity\nradial_velocity_error\nrv_nb_transits\nrv_template_teff\nrv_template_logg\nrv_template_fe_h\nphot_variable_flag\nl\nb\necl_lon\necl_lat\npriam_flags\nteff_val\nteff_percentile_lower\nteff_percentile_upper\na_g_val\na_g_percentile_lower\na_g_percentile_upper\ne_bp_min_rp_val\ne_bp_min_rp_percentile_lower\ne_bp_min_rp_percentile_upper\nflame_flags\nradius_val\nradius_percentile_lower\nradius_percentile_upper\nlum_val\nlum_percentile_lower\nlum_percentile_upper\ndatalink_url\nepoch_photometry_url\nepic_number\nra_epic\ndec_epic\nr_est\nr_lo\nr_hi\nr_length_prior\nr_result_flag\nr_modality_flag\ntm_name\nk2_campaign_str\nk2_type\nk2_lcflag\nk2_scflag\nk2_teff\nk2_tefferr1\nk2_tefferr2\nk2_logg\nk2_loggerr1\nk2_loggerr2\nk2_metfe\nk2_metfeerr1\nk2_metfeerr2\nk2_rad\nk2_raderr1\nk2_raderr2\nk2_mass\nk2_masserr1\nk2_masserr2\nk2_kepmag\nk2_kepmagerr\nk2_kepmagflag\nk2c_disp\nk2c_note\nk2_gaia_ang_dist\nmix_mean\nmix_sd\nmix_neff\nmix_rhat\nlogdeltaQ_mean\nlogdeltaQ_sd\nlogdeltaQ_neff\nlogdeltaQ_rhat\nlogQ0_mean\nlogQ0_sd\nlogQ0_neff\nlogQ0_rhat\nlogperiod_mean\nlogperiod_sd\nlogperiod_neff\nlogperiod_rhat\nlogamp_mean\nlogamp_sd\nlogamp_neff\nlogamp_rhat\nlogs2_mean\nlogs2_sd\nlogs2_neff\nlogs2_rhat\nacfpeak\nB_V\ngalcen_x\ngalcen_y\ngalcen_z\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
ec962c7003d296165f7c7e699e479e53c3c490f5
210,982
ipynb
Jupyter Notebook
Playing_with_matrices.ipynb
afedynitch/ISAPPMCEq
11292ccbb6eb80f55157a8cbf2a5b3e8adf39092
[ "MIT" ]
null
null
null
Playing_with_matrices.ipynb
afedynitch/ISAPPMCEq
11292ccbb6eb80f55157a8cbf2a5b3e8adf39092
[ "MIT" ]
null
null
null
Playing_with_matrices.ipynb
afedynitch/ISAPPMCEq
11292ccbb6eb80f55157a8cbf2a5b3e8adf39092
[ "MIT" ]
null
null
null
522.232673
71,044
0.944284
[ [ [ "## Let's look at matrices together\n\nHere I demonstrate how to look deeper in what happens in MCEq.\n\nFollow my presentation. Afterwards I will commit changes to github.\n\nTo pull changes (if any) to your directory, you can then do:\n\n git pull\n \nExercise:\n\n 1) Look at the distribution of kaons (321, -321), protons (2212) and neutrons (2112).\n\nParticles are indexed according to the PDG convention http://pdg.lbl.gov/2018/reviews/rpp2018-rev-monte-carlo-numbering.pdf\n \n ", "_____no_output_____" ] ], [ [ "#usual imports and jupyter setup\n%load_ext autoreload\n%matplotlib inline\n%autoreload 2\nimport os\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# Path to MCEq. Can be uncommented if in PYTHONPATH\nimport sys\nsys.path.append('../MCEq/')\n\n#import solver related modules\nfrom MCEq.core import MCEqRun\nfrom mceq_config import config\n#import primary model choices\nimport CRFluxModels as pm", "_____no_output_____" ], [ "# Disable semi-analytical approximation\nconfig['adv_set']['no_mixing'] = True", "_____no_output_____" ], [ "mceq_run_no_mix = MCEqRun(\n#provide the string of the interaction model\ninteraction_model='SIBYLL2.3c',\n#primary cosmic ray flux model\n#support a tuple (primary model class (not instance!), arguments)\nprimary_model=(pm.Thunman,None), #(pm.HillasGaisser2012, 'H3a'),\n# Zenith angle in degrees. 0=vertical, 90=horizontal\ntheta_deg=0.0,\n#expand the rest of the options from mceq_config.py\n**config\n)", "InteractionYields::_load(): Looking for /home/isapp2018/work/MCEqExercise/MCEq/data/SIBYLL23C_yields_compact_ledpm.bz2\nDecayYields:_load():: Loading file /home/isapp2018/work/MCEqExercise/MCEq/data/compact_decay_tables.ppd\n\nHadrons and stable particles:\n\n\"p\", \"p-bar\", \"n-bar\", \"n\", \"pi-\", \"pi+\", \"K0L\", \"K-\", \n\"K+\", \"Lambda0\", \"Lambda0-bar\", \"K0S\", \"D+\", \"D-\", \"Ds+\", \"Ds-\", \n\"D0\", \"D0-bar\"\n\nMixed:\n\n\n\nResonances:\n\n\n\nLeptons:\n\n\"e-\", \"nue\", \"numu\", \"nutau\", \"gamma\", \"antinutau\", \"antinumu\", \"antinue\", \n\"e+\", \"mu-\", \"mu+\"\n\nAliases:\n\"obs_numu\", \"obs_nutau\", \"pr_antinutau\", \"pr_antinumu\", \"pr_antinue\", \"obs_antinue\", \"k_nue\", \"k_numu\", \n\"k_nutau\", \"pi_antinutau\", \"pi_antinue\", \"pi_antinumu\", \"pi_nue\", \"pi_numu\", \"pi_nutau\", \"k_antinutau\", \n\"k_antinumu\", \"k_antinue\", \"obs_nue\", \"pr_nue\", \"pr_numu\", \"pr_nutau\", \"obs_antinutau\", \"obs_antinumu\", \n\"k_mu-\", \"obs_mu-\", \"pr_mu+\", \"pi_mu+\", \"pi_mu-\", \"k_mu+\", \"pr_mu-\", \"obs_mu+\"\n\nTotal number of species: 61\nMCEqRun::set_interaction_model(): SIBYLL23C\nInteractionYields:set_interaction_model():: Model SIBYLL23C already loaded.\nInteractionYields:set_interaction_model():: Model SIBYLL23C already loaded.\nMCEqRun::_init_default_matrices():Start filling matrices. Skip_D_matrix = False\nMCEqRun::_convert_to_sparse():Converting to sparse (CSR) matrix format.\nC Matrix info:\n density : 3.82%\n shape : 5368 x 5368\n nnz : 1100563\nD Matrix info:\n density : 1.42%\n shape : 5368 x 5368\n nnz : 408473\nMCEqRun::_init_default_matrices():Done filling matrices.\nMCEqRun::set_density_model(): CORSIKA ('BK_USStd', None)\nMCEqRun::set_theta_deg(): 0.0\nCorsikaAtmosphere::calculate_density_spline(): Calculating spline of rho(X) for zenith 0.0 degrees.\n.. took 0.14s\nMCEqRun::set_primary_model(): Thunman None\n" ] ], [ [ "## Shows the structure of interaction and decay matrix. 
These are the C and D matrix", "_____no_output_____" ] ], [ [ "M_int = mceq_run_no_mix.int_m #C\nM_dec = mceq_run_no_mix.dec_m*mceq_run_no_mix.density_model.rho_inv(1000.,1.) #D\n\nplt.figure(figsize=(9,9))\nplt.spy(M_int,marker='o',markersize=0.1)\ntickloc = np.arange(mceq_run_no_mix.n_tot_species)*mceq_run_no_mix.d + mceq_run_no_mix.d/2.\ntickstr = [p.name for p in mceq_run_no_mix.particle_species]\nt = plt.xticks(tickloc, tickstr,fontsize=8, rotation=90)\nt = plt.yticks(tickloc, tickstr,fontsize=8, rotation=0)\nplt.tight_layout()\n\nplt.figure(figsize=(9,9))\nplt.spy(M_dec,marker='o',markersize=0.1)\ntickloc = np.arange(mceq_run_no_mix.n_tot_species)*mceq_run_no_mix.d + mceq_run_no_mix.d/2.\ntickstr = [p.name for p in mceq_run_no_mix.particle_species]\nt = plt.xticks(tickloc, tickstr,fontsize=8, rotation=90)\nt = plt.yticks(tickloc, tickstr,fontsize=8, rotation=0)\nplt.tight_layout()", "_____no_output_____" ], [ "## Let's look at particle production matrices", "_____no_output_____" ], [ "#Shortcut to MCEq.data.InteractionYields\nprod_mat = mceq_run_no_mix.y", "_____no_output_____" ], [ "# Pion production matrix\npi_mat = prod_mat.get_y_matrix(2212, 130)\nplt.imshow(np.log(pi_mat))", "/home/isapp2018/work/MCEqExercise/miniconda2/lib/python2.7/site-packages/ipykernel_launcher.py:3: RuntimeWarning: divide by zero encountered in log\n This is separate from the ipykernel package so we can avoid doing imports until\n" ], [ "# Select vertical slice\negrid = mceq_run_no_mix.e_grid\nfor eidx in [30,50,70]:\n print 'proton-air collision lab energy=%3.2e GeV' % egrid[eidx]\n plt.loglog(egrid,pi_mat[:,eidx]/mceq_run_no_mix.e_widths)\nplt.ylabel('dN/dE (1/GeV)')\nplt.xlabel('Elab (GeV)')\nplt.title('Pion inclusive spectra')", "proton-air collision lab energy=6.49e+03 GeV\nproton-air collision lab energy=2.05e+06 GeV\nproton-air collision lab energy=6.49e+08 GeV\n" ], [ "xlab, dndxlab = prod_mat.get_xlab_dist(1e3,2212,211)\nplt.plot(xlab, xlab**1.7*dndxlab,label='pi+')\n\nxlab, dndxlab = prod_mat.get_xlab_dist(1e3,2212,-211)\nplt.plot(xlab, xlab**1.7*dndxlab,label='pi-')\n\nxlab, dndxlab = prod_mat.get_xlab_dist(1e3,2212,2212)\nplt.plot(xlab, xlab**1.7*dndxlab,label='p')\n\nxlab, dndxlab = prod_mat.get_xlab_dist(1e3,2212,-2112)\nplt.plot(xlab, xlab**1.7*dndxlab,label='n')\n\nplt.ylabel('dN/dxlab')\nplt.xlabel('xlab')\nplt.title('Pion inclusive spectra')\nplt.legend()\nplt.loglog()", "Nearest energy, index: 865.9643233600654 23\nNearest energy, index: 865.9643233600654 23\nNearest energy, index: 865.9643233600654 23\nNearest energy, index: 865.9643233600654 23\n" ], [ "#Shortcut to MCEq.data.InteractionYields\ndec_mat = mceq_run_no_mix.decays", "_____no_output_____" ], [ "# Pion production matrix\npi_mat = prod_mat.get_y_matrix(2212, 130)\nplt.imshow(np.log(pi_mat))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ec9630dcbf1375f2b5c3ca6dea540fa9681342e4
3,694
ipynb
Jupyter Notebook
scripts_cocalc/calcnum_aula02_parte5.ipynb
americocunhajr/UERJ_CalculoNumerico
4aff9861251d9762358bc299e2e93e2eff625fe1
[ "MIT" ]
1
2020-11-16T16:32:48.000Z
2020-11-16T16:32:48.000Z
scripts_cocalc/calcnum_aula02_parte5.ipynb
arturmartins9/CalculoNumerico
3cc0e217f9a4ce7b431256fc035060f4271c4b20
[ "MIT" ]
null
null
null
scripts_cocalc/calcnum_aula02_parte5.ipynb
arturmartins9/CalculoNumerico
3cc0e217f9a4ce7b431256fc035060f4271c4b20
[ "MIT" ]
2
2020-09-14T16:19:13.000Z
2020-11-16T16:34:43.000Z
21.108571
170
0.51895
[ [ [ "<img align=\"left\" src=\"https://www.uerj.br/wp-content/uploads/2017/09/logo_uerj_cor.jpg\" width=\"10%\">", "_____no_output_____" ], [ "# Cálculo Numérico\n\n## Aula 2 - Noções de Programação para Computação Científica\n\n#### Prof. Americo Cunha\n#### Prof. Augusto Barbosa\n#### Prof. Luiz Mariano Carvalho\n#### Profa. Nancy Baygorrea", "_____no_output_____" ], [ "### Funções e Scripts no GNU Octave", "_____no_output_____" ], [ "O programa a seguir calcula as raízes de uma equação do segundo grau. Implemente esse programa no ambiente GNU Octave e teste-o para diferentes valores de a, b e c.", "_____no_output_____" ], [ "###### I. Primeiro deve-se definir a função da equação do segundo grau (nome do código: eq2.m)", "_____no_output_____" ] ], [ [ "% função\nfunction [x1,x2] = eq2(a,b,c)\n x1 = (-b + sqrt(b^2 - 4*a*c))/(2*a);\n x2 = (-b - sqrt(b^2 - 4*a*c))/(2*a);\nend", "_____no_output_____" ] ], [ [ "###### II. Em seguida executar o código principal com a definição dos parâmetros (nome do código: main.m)", "_____no_output_____" ] ], [ [ "% script\na = 2.0\nb = 1.0\nc = 1.0\n\n[x1,x2] = eq2(a,b,c)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
ec963127d4c390a30cee6e1f2235173629c5598a
43,374
ipynb
Jupyter Notebook
sentiment-rnn/Sentiment_RNN_Solution.ipynb
hangdeng/deep-learning-v2-pytorch
dcf7260384d0084685a43d434d50874d938dea8e
[ "MIT" ]
null
null
null
sentiment-rnn/Sentiment_RNN_Solution.ipynb
hangdeng/deep-learning-v2-pytorch
dcf7260384d0084685a43d434d50874d938dea8e
[ "MIT" ]
null
null
null
sentiment-rnn/Sentiment_RNN_Solution.ipynb
hangdeng/deep-learning-v2-pytorch
dcf7260384d0084685a43d434d50874d938dea8e
[ "MIT" ]
null
null
null
39.075676
843
0.560013
[ [ [ "# Sentiment Analysis with an RNN\n\nIn this notebook, you'll implement a recurrent neural network that performs sentiment analysis. \n>Using an RNN rather than a strictly feedforward network is more accurate since we can include information about the *sequence* of words. \n\nHere we'll use a dataset of movie reviews, accompanied by sentiment labels: positive or negative.\n\n<img src=\"assets/reviews_ex.png\" width=40%>\n\n### Network Architecture\n\nThe architecture for this network is shown below.\n\n<img src=\"assets/network_diagram.png\" width=40%>\n\n>**First, we'll pass in words to an embedding layer.** We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the Word2Vec lesson. You can actually train an embedding with the Skip-gram Word2Vec model and use those embeddings as input, here. However, it's good enough to just have an embedding layer and let the network learn a different embedding table on its own. *In this case, the embedding layer is for dimensionality reduction, rather than for learning semantic representations.*\n\n>**After input words are passed to an embedding layer, the new embeddings will be passed to LSTM cells.** The LSTM cells will add *recurrent* connections to the network and give us the ability to include information about the *sequence* of words in the movie review data. \n\n>**Finally, the LSTM outputs will go to a sigmoid output layer.** We're using a sigmoid function because positive and negative = 1 and 0, respectively, and a sigmoid will output predicted, sentiment values between 0-1. \n\nWe don't care about the sigmoid outputs except for the **very last one**; we can ignore the rest. We'll calculate the loss by comparing the output at the last time step and the training label (pos or neg).", "_____no_output_____" ], [ "---\n### Load in and visualize the data", "_____no_output_____" ] ], [ [ "import numpy as np\n\n# read data from text files\nwith open('data/reviews.txt', 'r') as f:\n reviews = f.read()\nwith open('data/labels.txt', 'r') as f:\n labels = f.read()", "_____no_output_____" ], [ "print(reviews[:1000])\nprint()\nprint(labels[:20])", "bromwell high is a cartoon comedy . it ran at the same time as some other programs about school life such as teachers . my years in the teaching profession lead me to believe that bromwell high s satire is much closer to reality than is teachers . the scramble to survive financially the insightful students who can see right through their pathetic teachers pomp the pettiness of the whole situation all remind me of the schools i knew and their students . when i saw the episode in which a student repeatedly tried to burn down the school i immediately recalled . . . . . . . . . at . . . . . . . . . . high . a classic line inspector i m here to sack one of your teachers . student welcome to bromwell high . i expect that many adults of my age think that bromwell high is far fetched . what a pity that it isn t \nstory of a man who has unnatural feelings for a pig . starts out with a opening scene that is a terrific example of absurd comedy . a formal orchestra audience is turn\n\npositive\nnegative\npo\n" ] ], [ [ "## Data pre-processing\n\nThe first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. 
We'll also want to clean it up a bit.\n\nYou can see an example of the reviews data above. Here are the processing steps, we'll want to take:\n>* We'll want to get rid of periods and extraneous punctuation.\n* Also, you might notice that the reviews are delimited with newline characters `\\n`. To deal with those, I'm going to split the text into each review using `\\n` as the delimiter. \n* Then I can combined all the reviews back together into one big string.\n\nFirst, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.", "_____no_output_____" ] ], [ [ "from string import punctuation\n\n# get rid of punctuation\nreviews = reviews.lower() # lowercase, standardize\nall_text = ''.join([c for c in reviews if c not in punctuation])\n\n# split by new lines and spaces\nreviews_split = all_text.split('\\n')\nall_text = ' '.join(reviews_split)\n\n# create a list of words\nwords = all_text.split()", "_____no_output_____" ], [ "words[:30]", "_____no_output_____" ] ], [ [ "### Encoding the words\n\nThe embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.\n\n> **Exercise:** Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers **start at 1, not 0**.\n> Also, convert the reviews to integers and store the reviews in a new list called `reviews_ints`. ", "_____no_output_____" ] ], [ [ "# feel free to use this import \nfrom collections import Counter\n\n## Build a dictionary that maps words to integers\ncounts = Counter(words)\nvocab = sorted(counts, key=counts.get, reverse=True)\nvocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}\n\n## use the dict to tokenize each review in reviews_split\n## store the tokenized reviews in reviews_ints\nreviews_ints = []\nfor review in reviews_split:\n reviews_ints.append([vocab_to_int[word] for word in review.split()])", "_____no_output_____" ] ], [ [ "**Test your code**\n\nAs a text that you've implemented the dictionary correctly, print out the number of unique words in your vocabulary and the contents of the first, tokenized review.", "_____no_output_____" ] ], [ [ "# stats about vocabulary\nprint('Unique words: ', len((vocab_to_int))) # should ~ 74000+\nprint()\n\n# print tokens in first review\nprint('Tokenized review: \\n', reviews_ints[:1])", "Unique words: 74072\n\nTokenized review: \n [[21025, 308, 6, 3, 1050, 207, 8, 2138, 32, 1, 171, 57, 15, 49, 81, 5785, 44, 382, 110, 140, 15, 5194, 60, 154, 9, 1, 4975, 5852, 475, 71, 5, 260, 12, 21025, 308, 13, 1978, 6, 74, 2395, 5, 613, 73, 6, 5194, 1, 24103, 5, 1983, 10166, 1, 5786, 1499, 36, 51, 66, 204, 145, 67, 1199, 5194, 19869, 1, 37442, 4, 1, 221, 883, 31, 2988, 71, 4, 1, 5787, 10, 686, 2, 67, 1499, 54, 10, 216, 1, 383, 9, 62, 3, 1406, 3686, 783, 5, 3483, 180, 1, 382, 10, 1212, 13583, 32, 308, 3, 349, 341, 2913, 10, 143, 127, 5, 7690, 30, 4, 129, 5194, 1406, 2326, 5, 21025, 308, 10, 528, 12, 109, 1448, 4, 60, 543, 102, 12, 21025, 308, 6, 227, 4146, 48, 3, 2211, 12, 8, 215, 23]]\n" ] ], [ [ "### Encoding the labels\n\nOur labels are \"positive\" or \"negative\". 
To use these labels in our network, we need to convert them to 0 and 1.\n\n> **Exercise:** Convert labels from `positive` and `negative` to 1 and 0, respectively, and place those in a new list, `encoded_labels`.", "_____no_output_____" ] ], [ [ "# 1=positive, 0=negative label conversion\nlabels_split = labels.split('\\n')\nencoded_labels = np.array([1 if label == 'positive' else 0 for label in labels_split])", "_____no_output_____" ] ], [ [ "### Removing Outliers\n\nAs an additional pre-processing step, we want to make sure that our reviews are in good shape for standard processing. That is, our network will expect a standard input text size, and so, we'll want to shape our reviews into a specific length. We'll approach this task in two main steps:\n\n1. Getting rid of extremely long or short reviews; the outliers\n2. Padding/truncating the remaining data so that we have reviews of the same length.\n\nBefore we pad our review text, we should check for reviews of extremely short or long lengths; outliers that may mess with our training.", "_____no_output_____" ] ], [ [ "# outlier review stats\nreview_lens = Counter([len(x) for x in reviews_ints])\nprint(\"Zero-length reviews: {}\".format(review_lens[0]))\nprint(\"Maximum review length: {}\".format(max(review_lens)))", "Zero-length reviews: 1\nMaximum review length: 2514\n" ] ], [ [ "Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. We'll have to remove any super short reviews and truncate super long reviews. This removes outliers and should allow our model to train more efficiently.\n\n> **Exercise:** First, remove *any* reviews with zero length from the `reviews_ints` list and their corresponding label in `encoded_labels`.", "_____no_output_____" ] ], [ [ "print('Number of reviews before removing outliers: ', len(reviews_ints))\n\n## remove any reviews/labels with zero length from the reviews_ints list.\n\n# get indices of any reviews with length 0\nnon_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]\n\n# remove 0-length reviews and their labels\nreviews_ints = [reviews_ints[ii] for ii in non_zero_idx]\nencoded_labels = np.array([encoded_labels[ii] for ii in non_zero_idx])\n\nprint('Number of reviews after removing outliers: ', len(reviews_ints))", "Number of reviews before removing outliers: 25001\nNumber of reviews after removing outliers: 25000\n" ] ], [ [ "---\n## Padding sequences\n\nTo deal with both short and very long reviews, we'll pad or truncate all our reviews to a specific length. For reviews shorter than some `seq_length`, we'll pad with 0s. For reviews longer than `seq_length`, we can truncate them to the first `seq_length` words. A good `seq_length`, in this case, is 200.\n\n> **Exercise:** Define a function that returns an array `features` that contains the padded data, of a standard size, that we'll pass to the network. \n* The data should come from `review_ints`, since we want to feed integers to the network. \n* Each row should be `seq_length` elements long. \n* For reviews shorter than `seq_length` words, **left pad** with 0s. That is, if the review is `['best', 'movie', 'ever']`, `[117, 18, 128]` as integers, the row will look like `[0, 0, 0, ..., 0, 117, 18, 128]`. 
\n* For reviews longer than `seq_length`, use only the first `seq_length` words as the feature vector.\n\nAs a small example, if the `seq_length=10` and an input review is: \n```\n[117, 18, 128]\n```\nThe resultant, padded sequence should be: \n\n```\n[0, 0, 0, 0, 0, 0, 0, 117, 18, 128]\n```\n\n**Your final `features` array should be a 2D array, with as many rows as there are reviews, and as many columns as the specified `seq_length`.**\n\nThis isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.", "_____no_output_____" ] ], [ [ "def pad_features(reviews_ints, seq_length):\n ''' Return features of review_ints, where each review is padded with 0's \n or truncated to the input seq_length.\n '''\n \n # getting the correct rows x cols shape\n features = np.zeros((len(reviews_ints), seq_length), dtype=int)\n\n # for each review, I grab that review and \n for i, row in enumerate(reviews_ints):\n features[i, -len(row):] = np.array(row)[:seq_length]\n \n return features", "_____no_output_____" ], [ "# Test your implementation!\n\nseq_length = 200\n\nfeatures = pad_features(reviews_ints, seq_length=seq_length)\n\n## test statements - do not change - ##\nassert len(features)==len(reviews_ints), \"Your features should have as many rows as reviews.\"\nassert len(features[0])==seq_length, \"Each feature row should contain seq_length values.\"\n\n# print first 10 values of the first 30 batches \nprint(features[:30,:10])", "[[ 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0]\n [22382 42 46418 15 706 17139 3389 47 77 35]\n [ 4505 505 15 3 3342 162 8312 1652 6 4819]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 54 10 14 116 60 798 552 71 364 5]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 1 330 578 34 3 162 748 2731 9 325]\n [ 9 11 10171 5305 1946 689 444 22 280 673]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 1 307 10399 2069 1565 6202 6528 3288 17946 10628]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 21 122 2069 1565 515 8181 88 6 1325 1182]\n [ 1 20 6 76 40 6 58 81 95 5]\n [ 54 10 84 329 26230 46427 63 10 14 614]\n [ 11 20 6 30 1436 32317 3769 690 15100 6]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 40 26 109 17952 1422 9 1 327 4 125]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 10 499 1 307 10399 55 74 8 13 30]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0]]\n" ] ], [ [ "## Training, Validation, Test\n\nWith our data in nice shape, we'll split it into training, validation, and test sets.\n\n> **Exercise:** Create the training, validation, and test sets. \n* You'll need to create sets for the features and the labels, `train_x` and `train_y`, for example. \n* Define a split fraction, `split_frac` as the fraction of data to **keep** in the training set. Usually this is set to 0.8 or 0.9. 
\n* Whatever data is left will be split in half to create the validation and *testing* data.", "_____no_output_____" ] ], [ [ "split_frac = 0.8\n\n## split data into training, validation, and test data (features and labels, x and y)\n\nsplit_idx = int(len(features)*split_frac)\ntrain_x, remaining_x = features[:split_idx], features[split_idx:]\ntrain_y, remaining_y = encoded_labels[:split_idx], encoded_labels[split_idx:]\n\ntest_idx = int(len(remaining_x)*0.5)\nval_x, test_x = remaining_x[:test_idx], remaining_x[test_idx:]\nval_y, test_y = remaining_y[:test_idx], remaining_y[test_idx:]\n\n## print out the shapes of your resultant feature data\nprint(\"\\t\\t\\tFeature Shapes:\")\nprint(\"Train set: \\t\\t{}\".format(train_x.shape), \n \"\\nValidation set: \\t{}\".format(val_x.shape),\n \"\\nTest set: \\t\\t{}\".format(test_x.shape))", "\t\t\tFeature Shapes:\nTrain set: \t\t(20000, 200) \nValidation set: \t(2500, 200) \nTest set: \t\t(2500, 200)\n" ] ], [ [ "**Check your work**\n\nWith train, validation, and test fractions equal to 0.8, 0.1, 0.1, respectively, the final, feature data shapes should look like:\n```\n Feature Shapes:\nTrain set: \t\t (20000, 200) \nValidation set: \t(2500, 200) \nTest set: \t\t (2500, 200)\n```", "_____no_output_____" ], [ "---\n## DataLoaders and Batching\n\nAfter creating training, test, and validation data, we can create DataLoaders for this data by following two steps:\n1. Create a known format for accessing our data, using [TensorDataset](https://pytorch.org/docs/stable/data.html#) which takes in an input set of data and a target set of data with the same first dimension, and creates a dataset.\n2. Create DataLoaders and batch our training, validation, and test Tensor datasets.\n\n```\ntrain_data = TensorDataset(torch.from_numpy(train_x), torch.from_numpy(train_y))\ntrain_loader = DataLoader(train_data, batch_size=batch_size)\n```\n\nThis is an alternative to creating a generator function for batching our data into full batches.", "_____no_output_____" ] ], [ [ "import torch\nfrom torch.utils.data import TensorDataset, DataLoader\n\n# create Tensor datasets\ntrain_data = TensorDataset(torch.from_numpy(train_x), torch.from_numpy(train_y))\nvalid_data = TensorDataset(torch.from_numpy(val_x), torch.from_numpy(val_y))\ntest_data = TensorDataset(torch.from_numpy(test_x), torch.from_numpy(test_y))\n\n# dataloaders\nbatch_size = 50\n\n# make sure the SHUFFLE your training data\ntrain_loader = DataLoader(train_data, shuffle=True, batch_size=batch_size)\nvalid_loader = DataLoader(valid_data, shuffle=True, batch_size=batch_size)\ntest_loader = DataLoader(test_data, shuffle=True, batch_size=batch_size)", "_____no_output_____" ], [ "# obtain one batch of training data\ndataiter = iter(train_loader)\nsample_x, sample_y = dataiter.next()\n\nprint('Sample input size: ', sample_x.size()) # batch_size, seq_length\nprint('Sample input: \\n', sample_x)\nprint()\nprint('Sample label size: ', sample_y.size()) # batch_size\nprint('Sample label: \\n', sample_y)", "Sample input size: torch.Size([50, 200])\nSample input: \n tensor([[ 0, 0, 0, ..., 43, 12952, 123],\n [ 256, 11, 6, ..., 21, 181, 3],\n [ 1761, 154, 587, ..., 15018, 400, 55],\n ...,\n [ 157, 4834, 1369, ..., 19, 250, 762],\n [ 0, 0, 0, ..., 467, 4, 1845],\n [ 10, 14, 966, ..., 1, 2209, 13]], dtype=torch.int32)\n\nSample label size: torch.Size([50])\nSample label: \n tensor([1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 
1,\n 0, 1], dtype=torch.int32)\n" ] ], [ [ "---\n# Sentiment Network with PyTorch\n\nBelow is where you'll define the network.\n\n<img src=\"assets/network_diagram.png\" width=40%>\n\nThe layers are as follows:\n1. An [embedding layer](https://pytorch.org/docs/stable/nn.html#embedding) that converts our word tokens (integers) into embeddings of a specific size.\n2. An [LSTM layer](https://pytorch.org/docs/stable/nn.html#lstm) defined by a hidden_state size and number of layers\n3. A fully-connected output layer that maps the LSTM layer outputs to a desired output_size\n4. A sigmoid activation layer which turns all outputs into a value 0-1; return **only the last sigmoid output** as the output of this network.\n\n### The Embedding Layer\n\nWe need to add an [embedding layer](https://pytorch.org/docs/stable/nn.html#embedding) because there are 74000+ words in our vocabulary. It is massively inefficient to one-hot encode that many classes. So, instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using Word2Vec, then load it here. But, it's fine to just make a new layer, using it for only dimensionality reduction, and let the network learn the weights.\n\n\n### The LSTM Layer(s)\n\nWe'll create an [LSTM](https://pytorch.org/docs/stable/nn.html#lstm) to use in our recurrent network, which takes in an input_size, a hidden_dim, a number of layers, a dropout probability (for dropout between multiple layers), and a batch_first parameter.\n\nMost of the time, you're network will have better performance with more layers; between 2-3. Adding more layers allows the network to learn really complex relationships. \n\n> **Exercise:** Complete the `__init__`, `forward`, and `init_hidden` functions for the SentimentRNN model class.\n\nNote: `init_hidden` should initialize the hidden and cell state of an lstm layer to all zeros, and move those state to GPU, if available.", "_____no_output_____" ] ], [ [ "# First checking if GPU is available\ntrain_on_gpu=torch.cuda.is_available()\n\nif(train_on_gpu):\n print('Training on GPU.')\nelse:\n print('No GPU available, training on CPU.')", "No GPU available, training on CPU.\n" ], [ "import torch.nn as nn\n\nclass SentimentRNN(nn.Module):\n \"\"\"\n The RNN model that will be used to perform Sentiment analysis.\n \"\"\"\n\n def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, drop_prob=0.5):\n \"\"\"\n Initialize the model by setting up the layers.\n \"\"\"\n super(SentimentRNN, self).__init__()\n\n self.output_size = output_size\n self.n_layers = n_layers\n self.hidden_dim = hidden_dim\n \n # embedding and LSTM layers\n self.embedding = nn.Embedding(vocab_size, embedding_dim)\n self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, \n dropout=drop_prob, batch_first=True)\n \n # dropout layer\n self.dropout = nn.Dropout(0.3)\n \n # linear and sigmoid layers\n self.fc = nn.Linear(hidden_dim, output_size)\n self.sig = nn.Sigmoid()\n \n\n def forward(self, x, hidden):\n \"\"\"\n Perform a forward pass of our model on some input and hidden state.\n \"\"\"\n batch_size = x.size(0)\n # fix Expected tensor for argument #1 ‘indices’ to have scalar type Long; \n # but got CPUDoubleTensor instead (while checking arguments for embedding)\n x = x.long()\n # embeddings and lstm_out\n x = x.long()\n embeds = self.embedding(x)\n lstm_out, hidden = self.lstm(embeds, hidden)\n \n # stack up lstm outputs\n lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)\n \n # 
dropout and fully-connected layer\n out = self.dropout(lstm_out)\n out = self.fc(out)\n # sigmoid function\n sig_out = self.sig(out)\n \n # reshape to be batch_size first\n sig_out = sig_out.view(batch_size, -1)\n sig_out = sig_out[:, -1] # get last batch of labels\n \n # return last sigmoid output and hidden state\n return sig_out, hidden\n \n \n def init_hidden(self, batch_size):\n ''' Initializes hidden state '''\n # Create two new tensors with sizes n_layers x batch_size x hidden_dim,\n # initialized to zero, for hidden state and cell state of LSTM\n weight = next(self.parameters()).data\n \n if (train_on_gpu):\n hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),\n weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())\n else:\n hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),\n weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())\n \n return hidden\n ", "_____no_output_____" ] ], [ [ "## Instantiate the network\n\nHere, we'll instantiate the network. First up, defining the hyperparameters.\n\n* `vocab_size`: Size of our vocabulary or the range of values for our input, word tokens.\n* `output_size`: Size of our desired output; the number of class scores we want to output (pos/neg).\n* `embedding_dim`: Number of columns in the embedding lookup table; size of our embeddings.\n* `hidden_dim`: Number of units in the hidden layers of our LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.\n* `n_layers`: Number of LSTM layers in the network. Typically between 1-3\n\n> **Exercise:** Define the model hyperparameters.\n", "_____no_output_____" ] ], [ [ "# Instantiate the model w/ hyperparams\nvocab_size = len(vocab_to_int)+1 # +1 for the 0 padding + our word tokens\noutput_size = 1\nembedding_dim = 400\nhidden_dim = 256\nn_layers = 2\n\nnet = SentimentRNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers)\n\nprint(net)", "SentimentRNN(\n (embedding): Embedding(74073, 400)\n (lstm): LSTM(400, 256, num_layers=2, batch_first=True, dropout=0.5)\n (dropout): Dropout(p=0.3)\n (fc): Linear(in_features=256, out_features=1, bias=True)\n (sig): Sigmoid()\n)\n" ] ], [ [ "---\n## Training\n\nBelow is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. You can also add code to save a model by name.\n\n>We'll also be using a new kind of cross entropy loss, which is designed to work with a single Sigmoid output. 
[BCELoss](https://pytorch.org/docs/stable/nn.html#bceloss), or **Binary Cross Entropy Loss**, applies cross entropy loss to a single value between 0 and 1.\n\nWe also have some data and training hyparameters:\n\n* `lr`: Learning rate for our optimizer.\n* `epochs`: Number of times to iterate through the training dataset.\n* `clip`: The maximum gradient value to clip at (to prevent exploding gradients).", "_____no_output_____" ] ], [ [ "# loss and optimization functions\nlr=0.001\n\ncriterion = nn.BCELoss()\noptimizer = torch.optim.Adam(net.parameters(), lr=lr)\n", "_____no_output_____" ], [ "# training params\n\nepochs = 4 # 3-4 is approx where I noticed the validation loss stop decreasing\n\ncounter = 0\nprint_every = 100\nclip=5 # gradient clipping\n\n# move model to GPU, if available\nif(train_on_gpu):\n net.cuda()\n\nnet.train()\n# train for some number of epochs\nfor e in range(epochs):\n # initialize hidden state\n h = net.init_hidden(batch_size)\n\n # batch loop\n for inputs, labels in train_loader:\n counter += 1\n\n if(train_on_gpu):\n inputs, labels = inputs.cuda(), labels.cuda()\n\n # Creating new variables for the hidden state, otherwise\n # we'd backprop through the entire training history\n h = tuple([each.data for each in h])\n\n # zero accumulated gradients\n net.zero_grad()\n\n # get the output from the model\n output, h = net(inputs, h)\n\n # calculate the loss and perform backprop\n loss = criterion(output.squeeze(), labels.float())\n loss.backward()\n # `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.\n nn.utils.clip_grad_norm_(net.parameters(), clip)\n optimizer.step()\n\n # loss stats\n if counter % print_every == 0:\n # Get validation loss\n val_h = net.init_hidden(batch_size)\n val_losses = []\n net.eval()\n for inputs, labels in valid_loader:\n\n # Creating new variables for the hidden state, otherwise\n # we'd backprop through the entire training history\n val_h = tuple([each.data for each in val_h])\n\n if(train_on_gpu):\n inputs, labels = inputs.cuda(), labels.cuda()\n\n output, val_h = net(inputs, val_h)\n val_loss = criterion(output.squeeze(), labels.float())\n\n val_losses.append(val_loss.item())\n\n net.train()\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Step: {}...\".format(counter),\n \"Loss: {:.6f}...\".format(loss.item()),\n \"Val Loss: {:.6f}\".format(np.mean(val_losses)))", "_____no_output_____" ] ], [ [ "---\n## Testing\n\nThere are a few ways to test your network.\n\n* **Test data performance:** First, we'll see how our trained model performs on all of our defined test_data, above. We'll calculate the average loss and accuracy over the test data.\n\n* **Inference on user-generated data:** Second, we'll see if we can input just one example review at a time (without a label), and see what the trained model predicts. 
Looking at new, user input data like this, and predicting an output label, is called **inference**.", "_____no_output_____" ] ], [ [ "# Get test data loss and accuracy\n\ntest_losses = [] # track loss\nnum_correct = 0\n\n# init hidden state\nh = net.init_hidden(batch_size)\n\nnet.eval()\n# iterate over test data\nfor inputs, labels in test_loader:\n\n # Creating new variables for the hidden state, otherwise\n # we'd backprop through the entire training history\n h = tuple([each.data for each in h])\n\n if(train_on_gpu):\n inputs, labels = inputs.cuda(), labels.cuda()\n \n # get predicted outputs\n output, h = net(inputs, h)\n \n # calculate loss\n test_loss = criterion(output.squeeze(), labels.float())\n test_losses.append(test_loss.item())\n \n # convert output probabilities to predicted class (0 or 1)\n pred = torch.round(output.squeeze()) # rounds to the nearest integer\n \n # compare predictions to true label\n correct_tensor = pred.eq(labels.float().view_as(pred))\n correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())\n num_correct += np.sum(correct)\n\n\n# -- stats! -- ##\n# avg test loss\nprint(\"Test loss: {:.3f}\".format(np.mean(test_losses)))\n\n# accuracy over all test data\ntest_acc = num_correct/len(test_loader.dataset)\nprint(\"Test accuracy: {:.3f}\".format(test_acc))", "_____no_output_____" ] ], [ [ "### Inference on a test review\n\nYou can change this test_review to any text that you want. Read it and think: is it pos or neg? Then see if your model predicts correctly!\n \n> **Exercise:** Write a `predict` function that takes in a trained net, a plain text_review, and a sequence length, and prints out a custom statement for a positive or negative review!\n* You can use any functions that you've already defined or define any helper functions you want to complete `predict`, but it should just take in a trained net, a text review, and a sequence length.\n", "_____no_output_____" ] ], [ [ "# negative test review\ntest_review_neg = 'The worst movie I have seen; acting was terrible and I want my money back. 
This movie had bad acting and the dialogue was slow.'\n", "_____no_output_____" ], [ "from string import punctuation\n\ndef tokenize_review(test_review):\n test_review = test_review.lower() # lowercase\n # get rid of punctuation\n test_text = ''.join([c for c in test_review if c not in punctuation])\n\n # splitting by spaces\n test_words = test_text.split()\n\n # tokens\n test_ints = []\n test_ints.append([vocab_to_int[word] for word in test_words])\n\n return test_ints\n\n# test code and generate tokenized review\ntest_ints = tokenize_review(test_review_neg)\nprint(test_ints)", "_____no_output_____" ], [ "# test sequence padding\nseq_length=200\nfeatures = pad_features(test_ints, seq_length)\n\nprint(features)", "_____no_output_____" ], [ "# test conversion to tensor and pass into your model\nfeature_tensor = torch.from_numpy(features)\nprint(feature_tensor.size())", "_____no_output_____" ], [ "def predict(net, test_review, sequence_length=200):\n \n net.eval()\n \n # tokenize review\n test_ints = tokenize_review(test_review)\n \n # pad tokenized sequence\n seq_length=sequence_length\n features = pad_features(test_ints, seq_length)\n \n # convert to tensor to pass into your model\n feature_tensor = torch.from_numpy(features)\n \n batch_size = feature_tensor.size(0)\n \n # initialize hidden state\n h = net.init_hidden(batch_size)\n \n if(train_on_gpu):\n feature_tensor = feature_tensor.cuda()\n \n # get the output from the model\n output, h = net(feature_tensor, h)\n \n # convert output probabilities to predicted class (0 or 1)\n pred = torch.round(output.squeeze()) \n # printing output value, before rounding\n print('Prediction value, pre-rounding: {:.6f}'.format(output.item()))\n \n # print custom response\n if(pred.item()==1):\n print(\"Positive review detected!\")\n else:\n print(\"Negative review detected.\")\n ", "_____no_output_____" ], [ "# positive test review\ntest_review_pos = 'This movie had the best acting and the dialogue was so good. I loved it.'\n", "_____no_output_____" ], [ "# call function\nseq_length=200 # good to use the length that was trained on\n\npredict(net, test_review_neg, seq_length)", "_____no_output_____" ] ], [ [ "### Try out test_reviews of your own!\n\nNow that you have a trained model and a predict function, you can pass in _any_ kind of text and this model will predict whether the text has a positive or negative sentiment. Push this model to its limits and try to find what words it associates with positive or negative.\n\nLater, you'll learn how to deploy a model like this to a production environment so that it can respond to any kind of user data put into a web app!", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
ec9632b715d0a0031a2fd1148650780ed3039147
4,126
ipynb
Jupyter Notebook
Python-Session/Submissions/RITIKA1201.ipynb
priyankasaini69/ML-Circle-20-21
f16c8a6a1141fbaca3cffcd3ddb8599e84faf548
[ "MIT" ]
1
2021-05-16T09:44:15.000Z
2021-05-16T09:44:15.000Z
Python-Session/Submissions/RITIKA1201.ipynb
priyankasaini69/ML-Circle-20-21
f16c8a6a1141fbaca3cffcd3ddb8599e84faf548
[ "MIT" ]
18
2021-01-27T20:27:16.000Z
2021-04-16T17:58:28.000Z
Python-Session/Submissions/RITIKA1201.ipynb
priyankasaini69/ML-Circle-20-21
f16c8a6a1141fbaca3cffcd3ddb8599e84faf548
[ "MIT" ]
27
2021-01-27T18:46:52.000Z
2022-01-03T19:38:41.000Z
25.469136
98
0.500485
[ [ [ "#### 1. Write a function that returns the maximum of two numbers.", "_____no_output_____" ] ], [ [ "def max(a,b):\n if a<b:\n print(\"MAXIMUM NUMBER : \"+str(b))\n elif b<a:\n print(\"MAXIMUM NUMBER : \"+str(a))\n", "_____no_output_____" ] ], [ [ "#### 2. Write a function called fizz_buzz that takes a number.\n• If the number is divisible by 3, it should return “Fizz”.\n• If it is divisible by 5, it should return “Buzz”.\n• If it is divisible by both 3 and 5, it should return “FizzBuzz”.\n• Otherwise, it should return the same number ", "_____no_output_____" ] ], [ [ "def fizz_buzz(c):\n if (c%3==0 and c%5!=0):\n return \"Fizz\"\n elif (c%5==0 and c%3!=0):\n return \"Buzz\"\n elif (c%3==0 and c%5==0):\n return \"FizzBuzz\"\n else:\n return c", "_____no_output_____" ] ], [ [ "#### 3. Write a function for checking the speed of drivers. This function should have one\nparameter: speed.\n• If speed is less than 70, it should print “Ok”.\n• Otherwise, for every 5km above the speed limit (70), it should give the\ndriver one demerit point and print the total number of demerit points. For\nexample, if the speed is 80, it should print: “Points: 2”.\n• If the driver gets more than 12 points, the function should print: “License\nsuspended” ", "_____no_output_____" ] ], [ [ "def speed(speed):\n if speed<=70:\n print(\"OK\")\n else:\n demerit = (speed-70)//5\n print(\"Points: \"+str(demerit))\n if demerit>12:\n print(\"License suspended\")", "_____no_output_____" ] ], [ [ "#### 4. Write a function called showNumbers that takes a parameter called limit. It\nshould print all the numbers between 0 and limit with a label to identify the even\nand odd numbers. For example, if the limit is 3, it should print:\n• 0 EVEN\n• 1 ODD\n• 2 EVEN\n• 3 ODD ", "_____no_output_____" ] ], [ [ "def showNumbers(limit):\n for i in range(0,limit+1):\n if (i%2==0):\n print(\"*\"+str(i)+\" EVEN\")\n else:\n print(\"*\"+str(i)+\" ODD\")", "_____no_output_____" ] ], [ [ "#### 5. Write a function that returns the sum of multiples of 3 and 5 between 0\nand limit (parameter). For example, if limit is 20, it should return the sum of 3, 5,\n6, 9, 10, 12, 15, 18, 20. ", "_____no_output_____" ] ], [ [ "def sum_3(lim):\n sume=0\n for i in range(1,lim):\n if i%3==0:\n sume+=i\n sume+=lim\n return sume", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
ec9653d789fdbf2dbca24d182c80216b55baafa8
1,793
ipynb
Jupyter Notebook
nbs/index.ipynb
choct155/nbdev_test
f0dd10f1d346be7c6d2883d89a2a051952f79a4f
[ "Apache-2.0" ]
null
null
null
nbs/index.ipynb
choct155/nbdev_test
f0dd10f1d346be7c6d2883d89a2a051952f79a4f
[ "Apache-2.0" ]
null
null
null
nbs/index.ipynb
choct155/nbdev_test
f0dd10f1d346be7c6d2883d89a2a051952f79a4f
[ "Apache-2.0" ]
null
null
null
16.757009
81
0.491355
[ [ [ "#hide\nfrom nbdev_test.core import *", "_____no_output_____" ] ], [ [ "# Test nbdev\n\n> Just a place to experiment with the nbdev workflow", "_____no_output_____" ], [ "This file will become your README and also the index of your documentation.", "_____no_output_____" ], [ "## Install", "_____no_output_____" ], [ "`pip install nbdev_test`", "_____no_output_____" ], [ "## How to use", "_____no_output_____" ], [ "Fill me in please! Don't forget code examples:", "_____no_output_____" ] ], [ [ "say_hello(\"Muprhy\")", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ] ]
ec965646104b4416540a0d4600e4c6ff5b586380
2,152
ipynb
Jupyter Notebook
_notebooks/draft/Roadmap_AI.ipynb
phucnsp/blog
343e5987628276ff2c74e4e3ebdcb5a4a1baa1df
[ "Apache-2.0" ]
2
2021-06-06T07:17:53.000Z
2022-01-18T17:12:17.000Z
_notebooks/draft/Roadmap_AI.ipynb
phucnsp/blog
343e5987628276ff2c74e4e3ebdcb5a4a1baa1df
[ "Apache-2.0" ]
7
2020-03-08T02:50:29.000Z
2022-02-26T06:55:02.000Z
_notebooks/draft/Roadmap_AI.ipynb
phucnsp/blog
343e5987628276ff2c74e4e3ebdcb5a4a1baa1df
[ "Apache-2.0" ]
2
2021-08-30T07:19:54.000Z
2022-01-18T17:12:26.000Z
19.044248
61
0.52184
[ [ [ "# Roadmap to dominate in AI\n> in progress\n- toc: true \n- badges: true\n- comments: true\n- categories: [self-taught]\n- image: images/bone.jpeg\n- hide: true", "_____no_output_____" ], [ "1. dataquest.io\n2. Andrew Ng Machine learning course\n3. Andrew Ng. DL specialization\n4. fastai course: 2018v1, 2018v2, 2019v1, npl course\n5. deeplizard pytorch", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown" ] ]
ec9656d5a8fdc3fa7fb98dbb4e6c2aff77a69bbb
39,508
ipynb
Jupyter Notebook
Notebooks/07_Employer_Level_Measures.ipynb
Coleridge-Initiative/ada-tdc-2020
64bddd3797fd1a517feb3fdb1cc91f6c6b9b6f9b
[ "CC0-1.0" ]
1
2021-11-30T18:06:21.000Z
2021-11-30T18:06:21.000Z
Notebooks/07_Employer_Level_Measures.ipynb
Coleridge-Initiative/ada-tdc-2020
64bddd3797fd1a517feb3fdb1cc91f6c6b9b6f9b
[ "CC0-1.0" ]
null
null
null
Notebooks/07_Employer_Level_Measures.ipynb
Coleridge-Initiative/ada-tdc-2020
64bddd3797fd1a517feb3fdb1cc91f6c6b9b6f9b
[ "CC0-1.0" ]
null
null
null
32.786722
554
0.599904
[ [ [ "<center>\n<img style=\"float: center;\" src=\"images/CI_horizontal.png\" width=\"400\">\n</center>\n<center>\n <span style=\"font-size: 1.5em;\">\n <a href='https://www.coleridgeinitiative.org'>Website</a>\n </span>\n</center>\n\n<center> Julia Lane, Clayton Hunter, Brian Kim, Benjamin Feder, Ekaterina Levitskaya, Tian Lou, Lisa Osorio-Copete. \n</center>", "_____no_output_____" ], [ "# Using Employment and Employer-Level Measures to Understand Indiana's Labor Market", "_____no_output_____" ], [ "## Introduction\n\nWhile in the [Data Exploration](Data_Exploration.ipynb) notebook we focused primarily on understanding our cohort's earnings, here we will first look at two measures of stable employment before switching gears to the demand side of employment: the employers. For the second part of this notebook, we will analyze some employer-level measures created in a supplementary [notebook](Create_Employer_Characteristics.ipynb) to get a better sense of Indiana's labor market and how employers of individuals in our cohort fit into the overall labor market.", "_____no_output_____" ], [ "### Learning Objectives\n\nWe will cover two prominent analyses:\n\n1. Different measures of stable employment\n1. Labor market interactions\n\nThese two sections will have two different units of analysis: the first will focus directly on the individuals in our cohort, and then will switch onto their employers. \n\nBefore we start looking at their employers, a logical prelude would be taking a deeper dive into our cohort's employment. Here, we will walk through two different measures of stable employment within a cohort and see if their earnings differed significantly from those without stable employment. From there, we will load in our employer-level measures file and look at the differences in employers of members in our cohort who experienced different levels in employment.\n\nWe would like to find out if there are any distinguishing factors between the overall labor market in Indiana and the employers that hired members of our 2016Q4 cohort. Ultimately, we want to gain a better understanding of the demand side when it comes to employment opportunities for our TANF leavers.\n\nSimilar to the [Data Exploration](Data_Exploration.ipynb) notebook, we will pose a few direct questions we will use to answer our ultimate question: **How can we use labor market interactions to help explain employment outcomes of TANF leavers?**\n\nBefore we do so, we need to load our external R packages and connect to the database.", "_____no_output_____" ], [ "### R Setup", "_____no_output_____" ] ], [ [ "#database interaction imports\nlibrary(DBI)\nlibrary(RPostgreSQL)\n\n# for data manipulation/visualization\nlibrary(tidyverse)\n\n# scaling data\nlibrary(scales)", "_____no_output_____" ], [ "# create an RPostgreSQL driver\ndrv <- dbDriver(\"PostgreSQL\")\n\n# connect to the database\ncon <- dbConnect(drv,dbname = \"postgresql://stuffed.adrf.info/appliedda\")", "_____no_output_____" ] ], [ [ "## Stable Employment Measures\n\nAs discussed above, we will spend some time in this section taking a look at our 2016Q4 cohort's employment outcomes. We will examine two different defintions of stable employment and see how average quarterly earnings differ for individuals who satisfy these definitions of stable employment. We have listed the two questions we will seek to answer in this section below:\n\n1. How many leavers found stable employment? What percentage is this of our total cohort?\n1. 
What were the average quarterly earnings within these stable jobs?\n\nLet's first load our table matching our 2016Q4 cohort to their employment outcomes into R.", "_____no_output_____" ] ], [ [ "# read table into R\nqry = \"\nselect *\nfrom ada_tdc_2020.cohort_2016_earnings\n\"\ndf_2016_wages = dbGetQuery(con, qry)", "_____no_output_____" ], [ "# take a look at df_2016_wages\nglimpse(df_2016_wages)", "_____no_output_____" ] ], [ [ "Now, we're ready to start answering our first guiding question for this section.", "_____no_output_____" ], [ "<font color=green><h3>Question 1: How many leavers found stable employment? What percentage is this of our total cohort? </h3></font> \n\nHow would you define stable employment? In fact, it is quite a subjective measure. Here are the two definitions of stable employment we will look at: \n\n1. Those with positive earnings in all four quarters after exit with the same employer\n2. Those that experienced full-quarter employment. By full-quarter employment, we mean that an individual had earnings in quarters t-1, t, and t+1 from the same employer.\n\n> These are not the only two, but just two common measures of stable employment. If you choose to analyze stable employment within a specific cohort (highly recommended), make sure you clearly state your definition of stable employment.", "_____no_output_____" ], [ "### Stable Employment Measure #1: Positive earnings all four quarters with the same employer\n\nThis calculation is relatively simple given that we just have to manipulate `df_2016_wages`. We will approach this calculation by counting the number of quarters each individual (`ssn`) received wages from each employer (`uiacct`), and then filter for just those `ssn`/`uiacct` combinations that appear in all four quarters in 2017.", "_____no_output_____" ] ], [ [ "# see if we can calculate stable employment measure #1\ndf_2016_wages %>%\n    group_by(ssn, uiacct) %>%\n    summarize(n_quarters = n_distinct(quarter)\n    ) %>%\n    ungroup() %>%\n    filter(n_quarters==4) %>%\n    head()", "_____no_output_____" ] ], [ [ "From here, we can add one line of code, `summarize(n_distinct(ssn))`, to calculate the number of individuals in our cohort that experienced this measure of stable employment.", "_____no_output_____" ] ], [ [ "# calculate number of individuals in our cohort that experienced stable employment measure #1\ndf_2016_wages %>%\n    group_by(ssn, uiacct) %>%\n    summarize(n_quarters = n_distinct(quarter)\n    ) %>%\n    ungroup() %>%\n    filter(n_quarters==4) %>%\n    summarize(n_distinct(ssn))", "_____no_output_____" ] ], [ [ "If you are curious about how many members of our cohort found stable employment (according to this definition) with multiple employers, you can find out with a few more lines of code.", "_____no_output_____" ] ], [ [ "# count members with stable employment measure #1 at more than one employer\ndf_2016_wages %>%\n    group_by(ssn, uiacct) %>%\n    summarize(n_quarters = n_distinct(quarter)\n    ) %>%\n    ungroup() %>%\n    filter(n_quarters==4) %>%\n    group_by(ssn) %>%\n    summarize(n=n()) %>%\n    ungroup() %>%\n    filter(n>1) %>%\n    summarize(num=n())", "_____no_output_____" ] ], [ [ "Anyway, we can calculate the percentage of our cohort that experienced stable employment within this time frame quite easily now; we just need to load our original cohort into R as a frame of reference.", "_____no_output_____" ] ], [ [ "# 2016Q4 cohort with most recent case information\nqry <- \"\nSELECT *\nFROM ada_tdc_2020.cohort_2016\n\"\n\n#read into R as df\ndf_2016 <- dbGetQuery(con,qry)", "_____no_output_____" ], [ "# 
save to calculate stable employment percentage\nstable <- df_2016_wages %>%\n group_by(ssn, uiacct) %>%\n summarize(n_quarters = n_distinct(quarter)\n ) %>%\n ungroup() %>%\n filter(n_quarters==4) %>%\n summarize(num = n_distinct(ssn))", "_____no_output_____" ], [ "# percentage employed all four quarters\npercent((stable$num/n_distinct(df_2016$ssn)), .01)", "_____no_output_____" ] ], [ [ "Now, let's see how the percentage changes when we use our second definition of stable employment.", "_____no_output_____" ], [ "### Stable Employment Measure #2: Full-Quarter Employment\n\nFinding full-quarter employment is a bit more complicated. Instead of using R, we will venture back into SQL, since we will need to find earnings for our cohort from 2016Q4 through 2018Q1 to calculate if an individual experienced full-quarter employment some time in 2017. We have already created this table, named `full_q_wages_2016` in the `ada_tdc_2020` schema for you using the code below:\n> To satisfy full-quarter employment in 2017Q1, an individual needed to have earnings from the same employer in 2016Q4, 2017Q1, and 2017Q2. Therefore, if we want to see all full-quarter employment from 2017Q1 to 2017Q4, we would need all earnings data from 2016Q4 to 2018Q1.", "_____no_output_____" ], [ " create table ada_tdc_2020.full_q_wages_2016 as\n select a.ssn, a.tanf_spell_months, a.tanf_total_months, a.county,\n b.year, b.quarter, b.uiacct, b.wages, b.naics_3_digit, b.cnty, \n format('%s-%s-1', b.year, b.quarter*3-2)::date as job_yr_q\n from ada_tdc_2020.cohort_2016 a\n left join in_dwd.wage_by_employer b\n on a.ssn = b.ssn\n where b.year = 2017 or (b.year = 2016 and b.quarter = 4) or (b.year=2018 and b.quarter=1)", "_____no_output_____" ] ], [ [ "# get earnings for our cohort from 2016Q4-2018Q1\nqry = '\nselect *\nfrom ada_tdc_2020.full_q_wages_2016\nlimit 5\n'\ndbGetQuery(con, qry)", "_____no_output_____" ] ], [ [ "Now that we have earnings for our cohort from 2016Q4-2018Q1, we can calculate full-quarter employment. To do so, we will use three copies of the same table, and then use a `WHERE` clause to make sure we are identifying the same individual and employer combination across three consecutive quarters.\n\nThe `\\'3 month\\'::interval` code can be used when working with dates (`job_yr_q` in this case), as it will match to exactly three months from the original date. 
Before or after the original date can be indicated with `+` or `-` signs.", "_____no_output_____" ] ], [ [ "# see if we can calculate full-quarter employment\nqry = '\nselect a.ssn, a.uiacct, a.job_yr_q, a.wages\nfrom ada_tdc_2020.full_q_wages_2016 a, ada_tdc_2020.full_q_wages_2016 b, ada_tdc_2020.full_q_wages_2016 c\nwhere a.ssn = b.ssn and a.uiacct=b.uiacct and\na.ssn = c.ssn and a.uiacct = c.uiacct and a.job_yr_q = (b.job_yr_q - \\'3 month\\'::interval)::date and \na.job_yr_q = (c.job_yr_q + \\'3 month\\'::interval)::date\norder by a.ssn, a.job_yr_q\nlimit 5\n'\ndbGetQuery(con, qry)", "_____no_output_____" ] ], [ [ "The query above will only select earnings for quarters where an individual experienced full-quarter employment with an employer, and due to the `WHERE` clause, it will only select full-quarter employment in 2017, and won't include those who experienced full quarter employment in 2016Q4 or 2018Q1.", "_____no_output_____" ] ], [ [ "# read full-quarter employment into r as cohort_2016_full\nqry = '\nselect a.ssn, a.uiacct, a.job_yr_q, a.wages\nfrom ada_tdc_2020.full_q_wages_2016 a, ada_tdc_2020.full_q_wages_2016 b, ada_tdc_2020.full_q_wages_2016 c\nwhere a.ssn = b.ssn and a.uiacct=b.uiacct and\na.ssn = c.ssn and a.uiacct = c.uiacct and a.job_yr_q = (b.job_yr_q - \\'3 month\\'::interval)::date and \na.job_yr_q = (c.job_yr_q + \\'3 month\\'::interval)::date\norder by a.ssn, a.job_yr_q\n'\ncohort_2016_full <- dbGetQuery(con, qry)", "_____no_output_____" ] ], [ [ "Now that we have all records of full-quarter employment, along with their earnings in the quarter, we can easily calculate the number of individuals in our cohort who experienced our second measure of stable employment in at least one quarter.", "_____no_output_____" ] ], [ [ "# calculate number of individuals in our cohort that experienced full-quarter employment\ncohort_2016_full %>%\n summarize(n=n_distinct(ssn))", "_____no_output_____" ], [ "# save number of individuals in our cohort that experienced full-quarter employment\nfull_n <- cohort_2016_full %>%\n summarize(n=n_distinct(ssn))", "_____no_output_____" ], [ "# calculate proportion of people in our cohort that experienced full-quarter employment\npercent((full_n$n/n_distinct(df_2016$ssn)), .01)", "_____no_output_____" ] ], [ [ "We can also calculate the percentage of individuals in our cohort that experienced full quarter employment with the same employer in all four quarters.", "_____no_output_____" ] ], [ [ "cohort_2016_full %>%\n group_by(ssn, uiacct) %>%\n summarize(n_quarters = n_distinct(job_yr_q)) %>%\n ungroup() %>%\n filter(n_quarters == 4) %>%\n summarize(n=n_distinct(ssn))", "_____no_output_____" ] ], [ [ "And then we can calculate this percentage.", "_____no_output_____" ] ], [ [ "# save as full_4\nfull_4 <- cohort_2016_full %>%\n group_by(ssn, uiacct) %>%\n summarize(n_quarters = n_distinct(job_yr_q)) %>%\n ungroup() %>%\n filter(n_quarters == 4) %>%\n summarize(n=n_distinct(ssn))", "_____no_output_____" ], [ "percent((full_4$n/n_distinct(df_2016$ssn)), .01)", "_____no_output_____" ] ], [ [ "If you're curious, we can see if anyone experienced full quarter employment all four quarters with multiple employers as well.", "_____no_output_____" ] ], [ [ "# save as full_4\ncohort_2016_full %>%\n group_by(ssn, uiacct) %>%\n summarize(n_quarters = n_distinct(job_yr_q)) %>%\n ungroup() %>%\n filter(n_quarters == 4) %>%\n group_by(ssn) %>%\n summarize(n_emps = n_distinct(uiacct)) %>%\n filter(n_emps > 1) %>%\n summarize(n=n_distinct(ssn))", 
"_____no_output_____" ] ], [ [ "Are you surprised at the difference in percentages for our two measures of stable employment?", "_____no_output_____" ], [ "<font color=red><h3> Checkpoint 1: Recreate for 2009Q1 </h3></font> \n\nFind the percentage of our 2009Q1 cohort that experienced stable employment using these two metrics. How do they compare? Does this surprise you?", "_____no_output_____" ] ], [ [ "# How many individuals satisfy stable employment measure #1?\n", "_____no_output_____" ], [ "# What percentage of our cohort satisfies stable employment measure #1?\n", "_____no_output_____" ], [ "# How many individuals satisfy stable employment measure #2?\n\n# Use table \"ada_tdc_2020.full_q_wages_2009\"\n", "_____no_output_____" ], [ "# What percentage of our cohort satisfies stable employment measure #2 for at least one quarter?\n", "_____no_output_____" ] ], [ [ "<font color=green><h3>Question 2: What were the average quarterly earnings within these stable jobs?</h3></font> \n\nLet's see if earnings differed for our cohort when comparing our two measures of stable employment. ", "_____no_output_____" ], [ "### Stable Employment Measure #1: Average Quarterly Earnings\n\nWe'll start with our first measure of those that had earnings with the same employer for all four quarters within our time frame. First, we will isolate all `ssn`/`uiacct` combinations that satisfied this stable employment measure, and then filter our original earnings data frame `df_2016_wages` to just include wages for these combinations.", "_____no_output_____" ] ], [ [ "# all ssn and uiacct values from stable employment measure #1 and save to stable_emp_1\nstable_emp_1 <- df_2016_wages %>%\n group_by(ssn, uiacct) %>%\n summarize(n_quarters = n_distinct(quarter)\n ) %>%\n ungroup() %>%\n filter(n_quarters==4) %>%\n select(-n_quarters)", "_____no_output_____" ] ], [ [ "> The code used to create `stable_emp_1` is copied from the code used earlier to isolate those who had earnings with the same employer for all four quarters within our time frame, with the addition of the last line so we don't store the number of quarters for which they were employed (which is always four in this case anyways).", "_____no_output_____" ] ], [ [ "# see stable_emp_1\nhead(stable_emp_1)", "_____no_output_____" ] ], [ [ "Now, we just need to `inner_join` rows in `df_2016_wages` for those with the same `uiacct` and `ssn` combinations as in `stable_emp_1`, and then we can find the average quarterly earnings.", "_____no_output_____" ] ], [ [ "# find average quarterly earnings for these individuals\ndf_2016_wages %>%\n inner_join(stable_emp_1, by = c('uiacct', 'ssn')) %>%\n summarize(mean_wages = mean(wages))", "_____no_output_____" ] ], [ [ "### Stable Employment Measure #2: Average Quarterly Earnings\n\nFor our second stable employment measure, we have already identified `ssn`/`uiacct`/`job_yr_q` combinations for full-quarter employment. 
We will use a similar strategy in joining `df_2016_wages` before finding the average quarterly earnings for quarters in which members of our cohort experienced full-quarter employment.", "_____no_output_____" ] ], [ [ "# see cohort_2016_full\nhead(cohort_2016_full)", "_____no_output_____" ], [ "# find average quarterly earnings for stable employment measure 2\ndf_2016_wages %>%\n    inner_join(cohort_2016_full, by = c('uiacct', 'ssn', 'job_yr_q')) %>%\n    summarize(mean_wages = mean(wages))", "_____no_output_____" ] ], [ [ "<font color=red><h3> Checkpoint 2: Wages in Stable Employment for the 2009Q1 Cohort</h3></font> \n\nFind the average quarterly wages for those in our 2009Q1 cohort that experienced stable employment using the two definitions above.", "_____no_output_____" ] ], [ [ "# average quarterly wages under stable employment measure #1\n\n", "_____no_output_____" ], [ "# average quarterly wages under stable employment measure #2\n\n", "_____no_output_____" ] ], [ [ "## Indiana's Employers\n\nIn this section, we'll look at the characteristics of Indiana's employers. First, let's load in and take a quick look at our employer-level characteristics file `employers_2017` (located in the `ada_tdc_2020` schema), which covers all employers in each quarter of 2017.", "_____no_output_____" ], [ "### Load the dataset", "_____no_output_____" ], [ "Before we get started answering these questions, let's load and then take a look at this file.", "_____no_output_____" ] ], [ [ "# look at employer-level characteristics table\nqry <- \"\nselect *\nfrom ada_tdc_2020.employers_2017\nlimit 5\n\"\ndbGetQuery(con, qry)", "_____no_output_____" ], [ "# read into R\nqry <- \"\nselect *\nfrom ada_tdc_2020.employers_2017\n\"\nemployers <- dbGetQuery(con, qry)", "_____no_output_____" ] ], [ [ "Let's see how many rows are in `employers`.", "_____no_output_____" ] ], [ [ "# number of rows\nnrow(employers)", "_____no_output_____" ] ], [ [ "Let's also see how many employers we have on file per quarter in 2017.", "_____no_output_____" ] ], [ [ "# number of employers by quarter\nemployers %>%\n    count(quarter)", "_____no_output_____" ] ], [ [ "Now that the `employers` data frame is ready for use, as in the [Data Exploration](Data_Exploration.ipynb) notebook, we will try to answer some broad questions about Indiana's labor market through some more direct questions:\n\n- What is the total number of jobs per quarter? What about the total number of full-quarter jobs?\n- What are the most popular industries by number of employees? What about by number of employers?\n- What is the distribution of both total and full-quarter employment of employers per quarter?\n- What is the distribution of total and average annual earnings by quarter of these employers?\n- Did average employment, hiring, and separation rates across all employers vary by quarter in 2017?", "_____no_output_____" ], [ "<font color=green><h3>Question 1: What is the total number of jobs per quarter? 
What about the total number of full-quarter jobs?</h3></font> \n\nThere are two columns in `employers` we will focus on to answer this set of questions: `num_employed`, which is the number of individuals employed, and `full_num_employed`, which is the number of full-quarter employees.", "_____no_output_____" ] ], [ [ "# find number of employees and full-quarter employees\nemployers %>%\n    summarize(total_jobs = sum(num_employed),\n             total_full_quarter_jobs = sum(full_num_employed, na.rm=T))", "_____no_output_____" ] ], [ [ "<font color=green><h3>Question 2: What are the most popular industries by number of employees? What about by number of employers?</h3></font> \n\nAgain, we will leverage the `num_employed` variable in `employers`, and this time, we will group by `naics_3_digit`.", "_____no_output_____" ] ], [ [ "# 10 most popular industries\nemployers %>%\n    group_by(naics_3_digit) %>%\n    summarize(num_employed = sum(num_employed)) %>%\n    arrange(desc(num_employed)) %>%\n    head(10)", "_____no_output_____" ] ], [ [ "Let's use our industry crosswalk to put some names to these NAICS codes. Like in the [Data Exploration](Data_Exploration.ipynb) notebook, we can use the `naics_2017` table in the `public` schema to act as a crosswalk.", "_____no_output_____" ] ], [ [ "# read naics_2017 table into R as naics\nqry = '\nselect *\nfrom public.naics_2017\n'\nnaics <- dbGetQuery(con, qry)", "_____no_output_____" ], [ "# save 10 most popular industries\npop_naics <- employers %>%\n    group_by(naics_3_digit) %>%\n    summarize(num_employed = sum(num_employed)) %>%\n    arrange(desc(num_employed)) %>%\n    # make naics_3_digit character type instead of numeric\n    mutate(naics_3_digit = as.character(naics_3_digit)) %>%\n    head(10)", "_____no_output_____" ] ], [ [ "Now that we have stored `pop_naics` as a data frame, we can `left_join()` it to `naics` to find the industries associated with each 3-digit NAICS code.", "_____no_output_____" ] ], [ [ "# get industry names of most popular naics\npop_naics %>% \n    left_join(naics, by=c('naics_3_digit' = 'naics_us_code')) %>%\n    # don't include the other columns\n    select(-c(seq_no,naics_3_digit)) %>%\n    # sort order of columns\n    select(naics_us_title, num_employed)", "_____no_output_____" ] ], [ [ "Do any of these industries surprise you? 
Now, let's move on to our most common industries by number of employers.\n> In the following code, `n_distinct()` is used to calculate the number of unique employers in 2017, whereas `n()` calculates the number of total employers for all four quarters in 2017.", "_____no_output_____" ] ], [ [ "# calculate number of distinct and total number of employers in all four quarters of 2017\nemployers %>%\n group_by(naics_3_digit) %>%\n summarize(distinct_emp = n_distinct(uiacct),\n num_emps = n()) %>%\n arrange(desc(distinct_emp)) %>%\n filter(!is.na(naics_3_digit)) %>%\n head(10)", "_____no_output_____" ] ], [ [ "Again, we can find the associated industry names with a quick join after saving the resulting data frame above.", "_____no_output_____" ] ], [ [ "# calculate number of distinct and total number of employers in all four quarters of 2017\n# save to pop_naics_emps\npop_naics_emps <- employers %>%\n group_by(naics_3_digit) %>%\n summarize(distinct_emp = n_distinct(uiacct),\n num_emps = n()) %>%\n arrange(desc(distinct_emp)) %>%\n filter(!is.na(naics_3_digit)) %>%\n # again make naics_3_digit character type\n mutate(naics_3_digit = as.character(naics_3_digit)) %>%\n head(10)", "_____no_output_____" ], [ "# get industry names of most popular naics\npop_naics_emps %>% \n left_join(naics, by=c('naics_3_digit' = 'naics_us_code')) %>%\n # don't include the other columns\n select(-c(seq_no,naics_3_digit)) %>%\n # sort order of columns\n select(naics_us_title, distinct_emp, num_emps)", "_____no_output_____" ] ], [ [ "How does this list compare to the one of the most popular industries by number of total employees?", "_____no_output_____" ], [ "<font color=green><h3>Question 3: What is the distribution of both total and full-quarter employment of employers per quarter?</h3></font> \n\nNow, instead of aggregating `num_employed` by quarter, we will simply look at the distribution of `num_employed` within each quarter. 
We will find the 1st, 10th, 25th, 50th, 75th, 90th, and 99th percentiles.", "_____no_output_____" ] ], [ [ "# find distribution of total employees by employer and quarter\nemployers %>%\n    summarize('.01' = quantile(num_employed, .01, na.rm=TRUE),\n              '.1' = quantile(num_employed, .1, na.rm=TRUE),\n              '.25' = quantile(num_employed, .25, na.rm=TRUE),\n             '.5' = quantile(num_employed, .5, na.rm=TRUE),\n             '.75' = quantile(num_employed, .75, na.rm=TRUE),\n             '.9' = quantile(num_employed, .9, na.rm=TRUE),\n             '.99' = quantile(num_employed, .99, na.rm=TRUE)\n             )", "_____no_output_____" ], [ "# find distribution of full-quarter employees by employer and quarter\nemployers %>%\n    summarize('.01' = quantile(full_num_employed, .01, na.rm=TRUE),\n              '.1' = quantile(full_num_employed, .1, na.rm=TRUE),\n              '.25' = quantile(full_num_employed, .25, na.rm=TRUE),\n             '.5' = quantile(full_num_employed, .5, na.rm=TRUE),\n             '.75' = quantile(full_num_employed, .75, na.rm=TRUE),\n             '.9' = quantile(full_num_employed, .9, na.rm=TRUE),\n             '.99' = quantile(full_num_employed, .99, na.rm=TRUE)\n             )", "_____no_output_____" ] ], [ [ "What does this tell you about the relative size of employers in Indiana?", "_____no_output_____" ], [ "<font color=green><h3>Question 4: What is the distribution of total and average annual earnings by quarter of these employers?\n</h3></font> ", "_____no_output_____" ] ], [ [ "# find distribution of total earnings by employer and quarter\nemployers %>%\n    summarize('.01' = quantile(total_earnings, .01, na.rm=TRUE),\n              '.1' = quantile(total_earnings, .1, na.rm=TRUE),\n              '.25' = quantile(total_earnings, .25, na.rm=TRUE),\n             '.5' = quantile(total_earnings, .5, na.rm=TRUE),\n             '.75' = quantile(total_earnings, .75, na.rm=TRUE),\n             '.9' = quantile(total_earnings, .9, na.rm=TRUE),\n             '.99' = quantile(total_earnings, .99, na.rm=TRUE)\n             )", "_____no_output_____" ], [ "# find distribution of average annual earnings by employer and quarter\nemployers %>%\n    summarize('.1' = quantile(avg_earnings, .1, na.rm=TRUE),\n              '.25' = quantile(avg_earnings, .25, na.rm=TRUE),\n             '.5' = quantile(avg_earnings, .5, na.rm=TRUE),\n             '.75' = quantile(avg_earnings, .75, na.rm=TRUE),\n             '.9' = quantile(avg_earnings, .9, na.rm=TRUE),\n             '.99' = quantile(avg_earnings, .99, na.rm=TRUE)\n             )", "_____no_output_____" ] ], [ [ "Is this what you were expecting to see? 
How do overall average earnings by employees compare to average earnings within our cohort?", "_____no_output_____" ], [ "<font color=green><h3>Question 5: Did average employment, hiring, and separation rates across all employers vary by quarter in 2017?</h3></font> \n\nHere, we will go back to using `group_by` and `summarize` to find our answers.", "_____no_output_____" ] ], [ [ "# find mean and standard deviation of employment rates by quarter\nemployers %>%\n    group_by(quarter) %>%\n    summarize(mean = mean(emp_rate, na.rm=TRUE),\n              sd = sd(emp_rate, na.rm=TRUE))", "_____no_output_____" ], [ "# find mean and standard deviation of hiring rates by quarter\nemployers %>%\n    group_by(quarter) %>%\n    summarize(mean = mean(hire_rate, na.rm=TRUE),\n              sd = sd(hire_rate, na.rm=T))", "_____no_output_____" ], [ "# find mean and standard deviation of separation rates by quarter\nemployers %>%\n    group_by(quarter) %>%\n    summarize(mean = mean(sep_rate, na.rm=T),\n              sd = sd(sep_rate, na.rm=T))", "_____no_output_____" ] ], [ [ "Based on your knowledge of employment patterns in 2017, are these results consistent with the overall trends in the United States at the time?", "_____no_output_____" ], [ "<font color=red><h3> Checkpoint 3: Understanding Our Cohort within the Labor Market </h3></font> \n\nIdeally, we would like to get a better sense of who is employing our 2016 cohort: are they larger employers with lots of turnover? Do they tend to pay their employees better? Please find the answers to the questions posed in \"Indiana's Employers\" for employers that employed members of our cohort. Filter the `employers` data frame based on `uiacct` and `quarter`.", "_____no_output_____" ] ], [ [ "# guiding question 1\n\n", "_____no_output_____" ], [ "# guiding question 2\n\n", "_____no_output_____" ], [ "# guiding question 3\n\n", "_____no_output_____" ], [ "# guiding question 4\n\n", "_____no_output_____" ], [ "# guiding question 5\n\n", "_____no_output_____" ] ], [ [ "In this notebook, you have explored two separate definitions of stable employment and how quarterly wages changed under the two definitions. Then, you switched over to looking at the demand side of the labor market, learning about all of Indiana's employers in 2017. \n\nAfter answering the final checkpoint, you will be able to compare employers of our cohort to the overall labor market in Indiana. Did you find that individuals in our cohort were not employed by certain types of employers? For your next assignment, you will repeat this analysis with our 2009Q1 cohort to better understand the labor market as it began to recover from the Great Recession.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ] ]
ec96703fa44fde72d8b77d5736f3cb8f9ae4b613
25,439
ipynb
Jupyter Notebook
superfit/SNID_DASH_SUPERPYFIT-Copy1.ipynb
Hallflower20/supernova-spectrum-analysis
e852f23b11677fdcd8c95f2df6e267bb7afd093c
[ "MIT" ]
null
null
null
superfit/SNID_DASH_SUPERPYFIT-Copy1.ipynb
Hallflower20/supernova-spectrum-analysis
e852f23b11677fdcd8c95f2df6e267bb7afd093c
[ "MIT" ]
null
null
null
superfit/SNID_DASH_SUPERPYFIT-Copy1.ipynb
Hallflower20/supernova-spectrum-analysis
e852f23b11677fdcd8c95f2df6e267bb7afd093c
[ "MIT" ]
2
2020-10-07T20:10:30.000Z
2021-05-09T23:16:36.000Z
72.065156
11,716
0.711742
[ [ [ "import os\nimport astropy\nimport numpy as np\nfrom astropy.table import Table\nfrom astropy.table import Column\nimport glob\nimport matplotlib.pyplot as plt\nimport pandas as pd\nfrom collections import Counter\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom astropy.io import ascii", "_____no_output_____" ], [ "dash = \"C:/users/20xha/Documents/Caltech/Research/DASH/\"", "_____no_output_____" ], [ "dashoutput = np.load(dash+\"output.npy\",allow_pickle=True)", "_____no_output_____" ], [ "snid_superfit_results = Table.read(\"results_2018_all_exact_z.txt\", format = \"ascii\")", "_____no_output_____" ], [ "final_rcf_table = Table.from_pandas(pd.read_hdf(\"C:/users/20xha/Documents/Caltech/Research/final_rcf_table.h5\"))", "_____no_output_____" ], [ "SEDM_ML_sample = Table.read(\"C:/Users/20xha/Documents/Caltech/Research/SEDM_ML_sample.ascii\", format = \"ascii\")\nSEDM_ML_sample.rename_column('col1', 'ZTF_Name')\nSEDM_ML_sample.rename_column('col2', \"Class\")\nSEDM_ML_sample.rename_column('col8', \"Version\")", "_____no_output_____" ], [ "ranges = np.linspace(0, 25, 6)\nrlap = 0\ncount = 0\nResultsTable = Table(\n names=(\"ZTF_Name\", \"c_dash\", \"confidence_dash\"\n ),\n meta={\"name\": \"Spectrum Results after SNID\"},\n dtype=(\"U64\", \"U64\", \"float64\",\n )\n )\n\nfor i in dashoutput:\n row = []\n row.append(i[-1])\n reliable = np.where(np.asarray(i[4]) != 'Unreliable matches')[0]\n if(len(reliable) != 0):\n best = np.asarray(i[2])[:,0][reliable]\n c = Counter(best)\n rlap_list = []\n for rlap_vals in np.asarray(i[3]):\n rlap_list.append(float(rlap_vals.split(\":\")[-1]))\n best_rlap = np.max(rlap_list)\n if(best_rlap > rlap):\n row.append(c.most_common()[0][0])\n row.append(best_rlap)\n ResultsTable.add_row(row)\n\n count += 1\nif(count % 50 == 0):\n print(count)", "_____no_output_____" ], [ "all_the_power = astropy.table.join(snid_superfit_results, ResultsTable)", "_____no_output_____" ], [ "snid_superfit_results[0]", "_____no_output_____" ], [ "ResultsTable[0]", "_____no_output_____" ], [ "all_the_power[0]", "_____no_output_____" ], [ "final_stuff = all_the_power[\"ZTF_Name\", \"classification\", \"c_rlap\", \"SF_fit_1\", \"c_snid\", \"c_dash\"]", "_____no_output_____" ], [ "negativeIa = 0\npositiveIa = 0\nfor j in final_stuff:\n correct_1a = \"Ia\" in j[\"classification\"]\n if(correct_1a):\n positiveIa += 1\n if(not(correct_1a)):\n negativeIa += 1", "_____no_output_____" ], [ "tprfpr = Table(\n names=(\"rlap\", \"tpr_snid\", \"tpr_dash\", \"tpr_superfit\", \"fpr_snid\", \"fpr_dash\", \"fpr_superfit\"\n ),\n meta={\"name\": \"Comparison of Three Programs\"},\n dtype=(\"float64\", \"float64\", \"float64\", \"float64\", \"float64\", \"float64\", \"float64\"\n )) \nfor rlap in np.linspace(0, 25, 251):\n truepositive = [0, 0, 0]\n falsepositive = [0, 0, 0]\n for i in final_stuff:\n if(i[\"c_rlap\"] > rlap):\n c_actual = \"Ia\" in i[\"classification\"]\n \n c_snid = \"Ia\" in i[\"c_snid\"]\n if(c_actual == True and c_snid == True):\n truepositive[0] += 1\n if(c_actual != True and c_snid == True):\n falsepositive[0] += 1\n \n c_dash = \"Ia\" in i[\"c_dash\"]\n if(c_actual == True and c_dash == True):\n truepositive[1] += 1\n if(c_actual != True and c_dash == True):\n falsepositive[1] += 1\n \n c_super = \"Ia\" in i[\"SF_fit_1\"]\n if(c_actual == True and c_super == True):\n truepositive[2] += 1\n if(c_actual != True and c_super == True):\n falsepositive[2] += 1\n row = [rlap, truepositive[0]/positiveIa, truepositive[1]/positiveIa, truepositive[2]/positiveIa, 
falsepositive[0]/negativeIa, falsepositive[1]/negativeIa, falsepositive[2]/negativeIa]\n tprfpr.add_row(row) \n \n ", "_____no_output_____" ], [ "plt.plot(tprfpr[\"fpr_dash\"], tprfpr[\"tpr_dash\"], color = \"red\")\nplt.plot(tprfpr[\"fpr_snid\"], tprfpr[\"tpr_snid\"], color = \"blue\")\nplt.plot(tprfpr[\"fpr_superfit\"], tprfpr[\"tpr_superfit\"], color = \"green\")", "_____no_output_____" ], [ "ascii.write(tprfpr, \"tprfpr.ascii\")", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ec9679dbbb8af0cd652d15f38fd07662d1fc7963
8,731
ipynb
Jupyter Notebook
notebooks/04.3.2.3 Lower Triangular Matrix Vector Multiply Routines.ipynb
vishwesh5/Linear-Algebra---LAFF
8d8e711628ff99c544b01c93072f35bf6cb06d1e
[ "MIT" ]
1
2019-12-27T21:12:29.000Z
2019-12-27T21:12:29.000Z
notebooks/04.3.2.3 Lower Triangular Matrix Vector Multiply Routines.ipynb
vishwesh5/Linear-Algebra---LAFF
8d8e711628ff99c544b01c93072f35bf6cb06d1e
[ "MIT" ]
null
null
null
notebooks/04.3.2.3 Lower Triangular Matrix Vector Multiply Routines.ipynb
vishwesh5/Linear-Algebra---LAFF
8d8e711628ff99c544b01c93072f35bf6cb06d1e
[ "MIT" ]
null
null
null
30.211073
391
0.516321
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
ec9685226ad85493bc5ecf2b6fb3a54cf7344f34
69,238
ipynb
Jupyter Notebook
Code/Q4.verified_account .ipynb
Rohit-Rawat24/twitter_data_analysis
d951f2ee7af3006f07826d7f53f4214f45be615b
[ "MIT" ]
null
null
null
Code/Q4.verified_account .ipynb
Rohit-Rawat24/twitter_data_analysis
d951f2ee7af3006f07826d7f53f4214f45be615b
[ "MIT" ]
null
null
null
Code/Q4.verified_account .ipynb
Rohit-Rawat24/twitter_data_analysis
d951f2ee7af3006f07826d7f53f4214f45be615b
[ "MIT" ]
null
null
null
70.149949
174
0.604292
[ [ [ "import pyspark\nfrom pyspark import SparkContext\nsc = SparkContext.getOrCreate();\nimport findspark\nfindspark.init()\nfrom pyspark.sql import SparkSession\nspark = SparkSession.builder.master(\"local[*]\").getOrCreate()\nspark.conf.set(\"spark.sql.repl.eagerEval.enabled\", True) # Property used to format output tables better\nspark\n", "_____no_output_____" ], [ "import os\nimport tweepy as tw", "_____no_output_____" ], [ "#sc.stop()", "_____no_output_____" ], [ "# API Keys and Tokens\nconsumer_key = 'TXq9cSiPgoZvcKFb2h1wQcnfF'\nconsumer_secret = 'Hf3jBUCLDd065ndxrPqhElNCdq4PUYuC0zh0MAyLOAc5vICA7C'\naccess_token = '1423514561668272128-Exw0A7NmUy6txQg2JIOCzmATbvZeJY'\naccess_secret = 'GyJnh9o1iCDbtDHtJXXuGPqjtDZ4OxRgoy7W7gC4sPeLx'\n", "_____no_output_____" ], [ "auth = tw.OAuthHandler(consumer_key,consumer_secret)\nauth.set_access_token(access_token,access_secret)\napi= tw.API(auth,wait_on_rate_limit =True)", "_____no_output_____" ], [ "search_words = \"#olympic\"\ndate_since = \"2021-08-07\"", "_____no_output_____" ], [ "tweets = tw.Cursor(api.search,q=search_words,lang=\"en\",since=date_since).items(200)\n#tweets = tw.Cursor(api.search, lang=\"en\",since=date_since).items(20)\n[tweet.text for tweet in tweets]", "_____no_output_____" ], [ "new_search= search_words \nnew_search", "_____no_output_____" ], [ "\ntweets = tw.Cursor(api.search, q=new_search,lang=\"en\",since=date_since).items(500)\nusers_details = [[tweet.user.screen_name,tweet.user.name,tweet.user.followers_count,tweet.user.verified]for tweet in tweets]\nusers_details", "_____no_output_____" ], [ "from pyspark.sql.types import StructType,StructField, StringType, IntegerType\nschema = StructType([ \\\n StructField(\"Username\",StringType(),True), \\\n StructField(\"Name\",StringType(),True), \\\n StructField(\"Followers_count\", IntegerType(), True), \\\n StructField(\"Verified\", StringType(), True) \\\n ])\n \ndf = spark.createDataFrame(data=users_details,schema=schema)\n#df.printSchema()\n#df.show(truncate=False)\ndf.where(df.Verified == \"true\").show()", "+---------------+--------------------+---------------+--------+\n| Username| Name|Followers_count|Verified|\n+---------------+--------------------+---------------+--------+\n| RudyCuzzetto| Rudy Cuzzetto, MPP| 9186| true|\n| bwfmedia| BWF| 197396| true|\n| BPCLimited| Bharat Petroleum| 315347| true|\n| editorji| editorji| 40753| true|\n| Outlookindia| Outlook Magazine| 233711| true|\n| latestly| LatestLY| 70626| true|\n| TheQuint| The Quint| 683754| true|\n| CNBCTV18News| CNBC-TV18| 957045| true|\n| GraziaIndia| Grazia India| 1587085| true|\n| Raghav_Bahl| Raghav Bahl| 112795| true|\n| ndtvvideos| NDTV Videos| 314308| true|\n| ndtv| NDTV| 16004088| true|\n| ChicagoCouncil| The Chicago Council| 21519| true|\n| BevVincent| Bev Vincent| 8233| true|\n| RusEmbIndia|Russia in India 🇷🇺| 35158| true|\n| TBTimes_Rays| Marc Topkin| 50997| true|\n| Jerusalem_Post| The Jerusalem Post| 533689| true|\n| muslimgirl| Muslim Girl| 41582| true|\n|BrandStoryboard| Storyboard| 14568| true|\n| ShibaniGharat| Shibani Gharat| 10550| true|\n+---------------+--------------------+---------------+--------+\nonly showing top 20 rows\n\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ec96a9d7ffa84eb6087a0888ce8aca1caeed91b6
4,210
ipynb
Jupyter Notebook
8-Labs/Lab24/.ipynb_checkpoints/Lab24-checkpoint.ipynb
dustykat/engr-1330-psuedo-course
3e7e31a32a1896fcb1fd82b573daa5248e465a36
[ "CC0-1.0" ]
null
null
null
8-Labs/Lab24/.ipynb_checkpoints/Lab24-checkpoint.ipynb
dustykat/engr-1330-psuedo-course
3e7e31a32a1896fcb1fd82b573daa5248e465a36
[ "CC0-1.0" ]
null
null
null
8-Labs/Lab24/.ipynb_checkpoints/Lab24-checkpoint.ipynb
dustykat/engr-1330-psuedo-course
3e7e31a32a1896fcb1fd82b573daa5248e465a36
[ "CC0-1.0" ]
null
null
null
26.815287
358
0.563183
[ [ [ "%%html\n<!--Script block to left align Markdown Tables-->\n<style>\n table {margin-left: 0 !important;}\n</style>", "_____no_output_____" ] ], [ [ "**Download** (right-click, save target as ...) this page as a jupyterlab notebook from: [Lab24](http://54.243.252.9/engr-1330-webroot/8-Labs/Lab24/Lab24.ipynb)\n\n___", "_____no_output_____" ], [ "# <font color=green>Laboratory 24: \"Predictor-Response Data Models\"</font>\n\nLAST NAME, FIRST NAME\n\nR00000000\n\nENGR 1330 Laboratory 24 ", "_____no_output_____" ], [ "## Exercise: Watershed Response Metrics \n\n### Background \nRainfall-Runoff response prediction is a vital step in engineering design for mitigating flood-induced infrastructure failure. One easy to measure characteristic of a watershed is its drainage area. Harder to quantify are its characteristic response time, and its conversion (of precipitation into runoff) factor.\n\n### Study Database\n\nThe [watersheds.csv](http://54.243.252.9/engr-1330-webroot/4-Databases/watersheds.csv) dataset contains (measured) drainage area for 92 study watersheds in Texas from [Cleveland, et. al., 2006](https://192.168.1.75/documents/about-me/MyWebPapers/journal_papers/ASCE_Irrigation_Drainage_IR-022737/2006_0602_IUHEvalTexas.pdf), and the associated data:\n\n|Columns|Info.|\n|:---|:---|\n|STATION_ID |USGS HUC-8 Station ID code|\n|TDA |Total drainage area (sq. miles) |\n|RCOEF|Runoff Ratio (Runoff Depth/Precipitation Depth)|\n|TPEAK|Characteristic Time (minutes)|\n|FPEAK|Peaking factor (same as NRCS factor)|\n|QP_OBS|Observed peak discharge (measured)|\n|QP_MOD|Modeled peak discharge (modeled)| \n\n### :\n\nUsing the following steps, build a predictor-response type data model. \n", "_____no_output_____" ], [ "<hr/><hr/> \n\n**Step 1:** \n\n<hr/>\n\nRead the \"watersheds.csv\" file as a dataframe. Explore the dataframe and in a markdown cell briefly describe the summarize the dataframe. <br>", "_____no_output_____" ] ], [ [ "# import packages\n# read data file\n# summarize contents + markdown cell as needed", "_____no_output_____" ] ], [ [ "<hr/><hr/> \n\n**Step 2:** <hr/>\n\nMake a data model using **TDA** as a predictor of **TPEAK** ($T_{peak} = \\beta_{0}+\\beta_{1}*TDA$) <br> Plot your model and the data on the same plot. Report your values of the parameters.", "_____no_output_____" ] ], [ [ "# ", "_____no_output_____" ] ], [ [ "<hr/><hr/> \n\n**Step 3:**\n\n<hr/>\n\nMake a data model using **log(TDA)** as a predictor of **TPEAK** ($T_{peak} = \\beta_{0}+\\beta_{1}*log(TDA)$)\n\nIn your opinion which mapping of **TDA** (arithmetic or logarithmic) produces a more useful graph? ", "_____no_output_____" ] ], [ [ "#", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
ec96a9f4c6d487bed6d28353215d3607400e70a5
18,019
ipynb
Jupyter Notebook
Excel-Chapter6.ipynb
nontechautomation/Excel
dedbc579108fa0e6c337098eb542062ee5ed4355
[ "MIT" ]
null
null
null
Excel-Chapter6.ipynb
nontechautomation/Excel
dedbc579108fa0e6c337098eb542062ee5ed4355
[ "MIT" ]
null
null
null
Excel-Chapter6.ipynb
nontechautomation/Excel
dedbc579108fa0e6c337098eb542062ee5ed4355
[ "MIT" ]
null
null
null
43.524155
1,499
0.431156
[ [ [ "## Basic Column Operations", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np", "_____no_output_____" ] ], [ [ "### Read csv file and load data ", "_____no_output_____" ] ], [ [ "df = pd.read_excel('Data/Excel_06_01.xlsx')", "_____no_output_____" ], [ "df", "_____no_output_____" ] ], [ [ "### Add Col A, Col B and store result in column named Sum", "_____no_output_____" ] ], [ [ "df['Sum'] = df['ColA'] + df['ColB']", "_____no_output_____" ], [ "df", "_____no_output_____" ] ], [ [ "### Subtraction", "_____no_output_____" ] ], [ [ "df['Sub'] = df['ColA'] - df['ColB'] ", "_____no_output_____" ], [ "df", "_____no_output_____" ] ], [ [ "### Multiplication", "_____no_output_____" ] ], [ [ "df['Mul'] = df['ColB'] * df['ColC'] ", "_____no_output_____" ], [ "df", "_____no_output_____" ] ], [ [ "### String operations", "_____no_output_____" ] ], [ [ "df = pd.read_excel('Data/Excel_06_02.xlsx')", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "df['string_append'] = df['Str1'] + df['Str2']", "_____no_output_____" ], [ "df", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
ec96aa135cb604f36b932a074d7b454beb056e4a
562,953
ipynb
Jupyter Notebook
learningPoles_network-Copy1.ipynb
sadjadasghari/ActivityRecognition
2ecd8fa9ec7c4a4d2128bbac1cf3cf48023c62bf
[ "Apache-2.0" ]
null
null
null
learningPoles_network-Copy1.ipynb
sadjadasghari/ActivityRecognition
2ecd8fa9ec7c4a4d2128bbac1cf3cf48023c62bf
[ "Apache-2.0" ]
null
null
null
learningPoles_network-Copy1.ipynb
sadjadasghari/ActivityRecognition
2ecd8fa9ec7c4a4d2128bbac1cf3cf48023c62bf
[ "Apache-2.0" ]
null
null
null
32.49368
1,415
0.427888
[ [ [ "from __future__ import print_function\nimport matplotlib.pyplot as plt\nimport numpy as np\n# import cv2\n# import csv\nimport os\nimport sys\nimport time\nimport struct\nimport h5py\nimport scipy.io as sio\n# from scipy import ndimage\nfrom numpy import linalg as LA\nfrom IPython.display import display, Image\nfrom six.moves.urllib.request import urlretrieve\nfrom six.moves import cPickle as pickle\nimport tensorflow as tf\n\n# Config the matplotlib backend as plotting inline in IPython\n%matplotlib inline", "_____no_output_____" ], [ "import scipy.io\n# Load synthetic dataset\nnum_classes = 3\n\n# 60 samples\n#X = h5py.File('/Users/angelsrates/Documents/PhD/Robust Systems Lab/Activity Recognition/Code/poles_data.mat')\n#y = h5py.File('/Users/angelsrates/Documents/PhD/Robust Systems Lab/Activity Recognition/Code/poles_y.mat')\n\n# 300 samples\nX = scipy.io.loadmat('/Users/angelsrates/Documents/activity-recog-synthetic/poles_data2.mat')\ny = scipy.io.loadmat('/Users/angelsrates/Documents/activity-recog-synthetic/poles_y2.mat')\n\nX = X['data']\nX = np.squeeze(np.transpose(X))\ny = y['label']\ny = y - 1\ny = np.squeeze(y)", "_____no_output_____" ], [ "np.random.seed(4294967295)\npermutation = np.random.permutation(len(X))\nX = [X[perm] for perm in permutation]\ny = [y[perm] for perm in permutation]", "_____no_output_____" ], [ "#Select training and testing (75% and 25%)\ny = [int(i) for i in y]\nX_train = np.asarray(X[:225])\ny_train = np.asarray(y[:225])\nX_test = np.asarray(X[225:])\ny_test = np.asarray(y[225:])\n\nprint(X_train.shape)\nprint(y_train.shape)\nprint(X_test.shape)\nprint(y_test.shape)", "(225, 50)\n(225,)\n(75, 50)\n(75,)\n" ], [ "sys_par = np.array([[-1, 0.823676337910219, -1], [-1,-1.93592782488463,-1]])", "_____no_output_____" ], [ "def extract_batch_size(_train, step, batch_size):\n # Function to fetch a \"batch_size\" amount of data from \"(X|y)_train\" data. 
\n \n shape = list(_train.shape)\n #shape = list((batch_size, 1843200))\n shape[0] = batch_size\n #shape[1] = 1843200\n batch_s = np.empty(shape)\n\n for i in range(batch_size):\n # Loop index\n index = ((step-1)*batch_size + i) % len(_train)\n batch_s[i] = _train[index]\n #batch_s[i] = np.reshape(load_video(_train[index]), (1,1843200))\n\n return batch_s\n\ndef one_hot(y_):\n # Function to encode output labels from number indexes \n # e.g.: [[5], [0], [3]] --> [[0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0]]\n \n y_ = y_.reshape(len(y_))\n n_values = np.max(y_) + 1\n return np.eye(n_values)[np.array(y_, dtype=np.int32)] # Returns FLOATS", "_____no_output_____" ], [ "from sklearn import linear_model\n\n#alpha = 0.1\ndef np_sparseLoss(y,p,alpha):\n #Assume p is real\n N = y.shape[0]\n k = p.shape[1]\n print(p)\n W = np.zeros((N,k))\n pw_idx = np.arange(1, N+1, 1)\n #print(pw_idx.shape)\n # Define vocabulary on set of poles\n for i in range(k):\n W[:,i] = np.power(np.squeeze(np.full((1, N), np.squeeze(p[0,i]))), pw_idx)\n # ADMM - Lasso\n clf = linear_model.Lasso(alpha=alpha)\n clf.fit(W, y)\n linear_model.Lasso(alpha=alpha, copy_X=True, fit_intercept=True, max_iter=1000,\n normalize=True, positive=False, precompute=False, random_state=None,\n selection='cyclic', tol=0.0001, warm_start=False)\n coeff = clf.coef_\n coeff = np.reshape(coeff, [k,1])\n print(coeff)\n return coeff\n\n#s = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8])\n#c = np_sparseLoss(s,p,alpha)", "_____no_output_____" ], [ "from tensorflow.python.framework import ops\nfrom tensorflow.python.ops import array_ops\nfrom tensorflow.python.ops import sparse_ops\n\ndef coeff_grad(y,p,c, grad):\n #y = op.inputs[0] \n #p = op.inputs[1]\n #c = op.outputs[0]\n #W_shape = W.get_shape().as_list()\n y_shape = y.get_shape().as_list()\n p_shape = p.get_shape().as_list()\n N = y_shape[0]\n K = p_shape[1]\n # W Calculation\n impulse = []\n idx = tf.cast(tf.stack(np.arange(1, N+1, 1)), tf.float64)\n for cc in range(K):\n impulse.append(tf.pow(tf.tile(p[:,cc], [50], name=None), idx , name=None))\n W = tf.cast(tf.reshape(tf.stack(impulse, axis=1), (N,K)), tf.float64)\n WW = tf.matrix_inverse(tf.matmul(tf.transpose(W), W))\n Wty = tf.matmul(tf.transpose(W), y)\n WWc = tf.matmul(WW, c)\n output_dW = []\n # Grad wrt W\n for i in range(N):\n for j in range(K):\n output_dWty = []\n output_dWWc = []\n for n in range(K):\n gr1 = tf.gradients(Wty[n,:], [W[i,j]])\n gr1 = [tf.constant(0, dtype=tf.float64) if t == None else t for t in gr1]\n gr2 = tf.gradients(WWc[n,:], [W[i,j]])\n gr2 = [tf.constant(0, dtype=tf.float64) if t == None else t for t in gr2]\n output_dWty.append(gr1)\n output_dWWc.append(gr2)\n gr = tf.matmul(WW, tf.subtract(tf.stack(output_dWty), tf.stack(output_dWWc)))\n output_dW.append(gr)\n dW = tf.reshape(tf.squeeze(tf.stack(output_dW)), [N, K, K])\n \n # Grad wrt p\n grp = []\n for k in range(K):\n output_dp = []\n for i in range(N):\n output_dp.append(tf.multiply(tf.reshape(tf.multiply(tf.cast(i, tf.float64), tf.pow(p[0, k],tf.cast(i-1, tf.float64))), [1]), tf.reshape(dW[i,k,:], [K,1])))\n grp.append(tf.add_n(output_dp))\n dp = tf.stack(grp)\n dp_list = []\n for j in range(K):\n dp_list.append(tf.reduce_sum(tf.multiply(dp[j,:,:], grad)))\n dp = tf.reshape(tf.stack(dp_list), [1, K])\n print('dc/dp size:', dp.get_shape())\n \n #dW = tf.reshape(dW, [N*K,K,1])\n #dW_list = []\n #for j in range(N*K):\n # dW_list.append(tf.reduce_sum(tf.multiply(dW[j,:,:], grad)))\n #dW = tf.reshape(tf.stack(dW_list), [N, K])\n #print('dc/dW size:', 
dW.get_shape())\n \n # Grad wrt y\n dy = tf.matmul(WW, tf.transpose(W))\n dy_list = []\n for j in range(N):\n dy_list.append(tf.reduce_sum(tf.multiply(dy[:,j], grad)))\n dy = tf.reshape(tf.stack(dy_list), [N, 1])\n print('dc/dy size:', dy.get_shape())\n \n # Grad wrt alpha \n dalpha = tf.matmul(tf.scalar_mul(tf.constant(-1, dtype=tf.float64), WW), tf.sign(c))\n dalpha = tf.reduce_sum(tf.multiply(dalpha, grad))\n print('dc/dalpha size:', dalpha.get_shape())\n \n return dy, dp, dalpha", "_____no_output_____" ], [ "def py_func(func, inp, Tout, stateful=True, name=None, grad=None):\n\n # Need to generate a unique name to avoid duplicates:\n rnd_name = 'PyFuncGrad' + str(np.random.randint(0, 1000000000000000))\n\n tf.RegisterGradient(rnd_name)(grad)\n g = tf.get_default_graph()\n with g.gradient_override_map({\"PyFunc\": rnd_name}):\n return tf.py_func(func, inp, Tout, stateful=stateful, name=name)", "_____no_output_____" ], [ "from tensorflow.python.framework import ops\n\ndef tf_coeff_grad(y,p,alpha, name=None):\n\n with ops.op_scope([y,p,alpha], name, \"CoeffGrad\") as name:\n z = py_func(np_sparseLoss,\n [y,p,alpha],\n [tf.double],\n name=name,\n grad=coeff_grad) # <-- here's the call to the gradient\n return z[0]", "_____no_output_____" ], [ "from scipy import signal\n#import control\nfrom scipy.signal import step2\nimport math\n\n# Parameters\nlearning_rate = 0.0015\n#training_iters = 45000\nbatch_size = 1\n#display_step = 10\n\n# Network Parameters\nn_input = 50\nn_classes = 3\nN = 50\n#dropout = 0.75 # Dropout, probability to keep units\n\n# tf Graph input\nx = tf.placeholder(tf.float64, [n_input, 1])\ny = tf.placeholder(tf.float64, [1, n_classes])\ngrad = tf.constant(0, dtype=tf.float64)\n#y = tf.placeholder(tf.int32, [1,1])\n\ndef index_along_every_row(array, index):\n N,_ = array.shape \n return array[np.arange(N), index]\n\ndef build_hankel_tensor(x, nr, nc, N, dim):\n cidx = np.arange(0, nc, 1)\n ridx = np.transpose(np.arange(1, nr+1, 1))\n Hidx = np.transpose(np.tile(ridx, (nc,1))) + dim*np.tile(cidx, (nr,1))\n Hidx = Hidx - 1\n arr = tf.reshape(x[:], (1,N))\n return tf.py_func(index_along_every_row, [arr, Hidx], [tf.float64])[0]\n\ndef build_hankel(x, nr, nc, N, dim):\n cidx = np.arange(0, nc, 1)\n ridx = np.transpose(np.arange(1, nr+1, 1))\n Hidx = np.transpose(np.tile(ridx, (nc,1))) + dim*np.tile(cidx, (nr,1))\n Hidx = Hidx - 1\n arr = x[:]\n return arr[Hidx]\n\n# Create model\ndef poles_net(x,grad):\n # Reshape input picture\n #x = tf.reshape(x, shape=[-1, , 50, 1])\n # Change accordingly\n dim = 1\n N = 50\n num_poles = 2\n # Complex poles\n #idx = tf.cast(tf.stack(np.arange(1, N+1, 1)), tf.complex128)\n #p11 = tf.multiply(tf.cast(tf.sqrt(weights['r11']), tf.complex128), tf.exp(tf.complex(tf.constant(0, tf.float64), weights['theta11'])))\n #p12 = tf.multiply(tf.cast(tf.sqrt(weights['r12']), tf.complex128), tf.exp(tf.complex(tf.constant(0, tf.float64), -weights['theta12'])))\n #p21 = tf.multiply(tf.cast(tf.sqrt(weights['r21']), tf.complex128), tf.exp(tf.complex(tf.constant(0, tf.float64), weights['theta21'])))\n #p22 = tf.multiply(tf.cast(tf.sqrt(weights['r22']), tf.complex128), tf.exp(tf.complex(tf.constant(0, tf.float64), -weights['theta22']))) \n #y11 = tf.pow(tf.tile(p11, [50], name=None), idx , name=None)\n #y12 = tf.pow(tf.tile(p12, [50], name=None), idx, name=None)\n #y21 = tf.pow(tf.tile(p21, [50], name=None), idx, name=None)\n #y22 = tf.pow(tf.tile(p22, [50], name=None), idx, name=None)\n #W = tf.cast(tf.reshape(tf.stack([y11, y21, y12, y22], 1), (N,4)), tf.float64)\n 
\n # Real poles\n idx = tf.cast(tf.stack(np.arange(1, N+1, 1)), tf.float64)\n p1 = weights['real1']\n p2 = weights['real2']\n p3 = weights['real3']\n p = tf.stack([p1, p2, p3], 1)\n \n #alpha = tf.constant([0.1])\n alpha = weights['alpha']\n #a = tf.matmul(W, W, adjoint_a=True)\n #c = tf.matrix_inverse(tf.cast(a, tf.float64), adjoint=False, name=None)\n #coeff = tf.matmul(tf.matmul(tf.matrix_inverse(tf.matmul(tf.transpose(W), W)), tf.transpose(W)), tf.reshape(x, (N,1)))\n #alpha_ind = tf.reshape(tf.stack([alpha_ind1, alpha_ind2]), (1,2))\n #coeff = tf.matrix_solve_ls(W, tf.reshape(x, (N,1)), fast=False, l2_regularizer=0.002, name=None)\n #x = tf.cast(tf.reshape(x, (N,1)), tf.complex128)\n #coeff = tf.transpose(coeff)\n coeff = tf_coeff_grad(x,p,alpha)\n print(coeff)\n out = tf.add(tf.matmul(tf.transpose(tf.cast(coeff, tf.float32)), weights['out']), biases['out'])\n diff = tf.subtract(tf.cast(out, tf.float64), y)\n grad = coeff_grad(x,p,coeff,grad)\n return [coeff, out]\n\nweights = {\n 'r11': tf.Variable(tf.random_uniform([1], minval=(0.02)**2, maxval=(1)**2, dtype=tf.float64)), # Complex poles\n 'r12': tf.Variable(tf.random_uniform([1], minval=(0.02)**2, maxval=(1)**2, dtype=tf.float64)),\n 'theta11': tf.Variable(tf.random_uniform([1], minval=0, maxval=math.pi, dtype=tf.float64)),\n 'theta12': tf.Variable(tf.random_uniform([1], minval=0, maxval=math.pi, dtype=tf.float64)),\n 'r21': tf.Variable(tf.random_uniform([1], minval=(0.02)**2, maxval=(1)**2, dtype=tf.float64)),\n 'r22': tf.Variable(tf.random_uniform([1], minval=(0.02)**2, maxval=(1)**2, dtype=tf.float64)),\n 'theta21': tf.Variable(tf.random_uniform([1], minval=0, maxval=math.pi, dtype=tf.float64)),\n 'theta22': tf.Variable(tf.random_uniform([1], minval=0, maxval=math.pi, dtype=tf.float64)),\n 'real1': tf.Variable(tf.random_uniform([1], minval=-1, maxval=1, dtype=tf.float64)), # Real poles\n 'real2': tf.Variable(tf.random_uniform([1], minval=-1, maxval=1, dtype=tf.float64)),\n 'real3': tf.Variable(tf.random_uniform([1], minval=-1, maxval=1, dtype=tf.float64)),\n 'alpha' : tf.Variable(tf.constant(0.1, dtype=tf.float64)),\n #'sys_par1': tf.Variable(tf.random_normal([1], dtype=tf.float64)),\n #'sys_par2': tf.Variable(tf.random_normal([1], dtype=tf.float64)),\n 'out': tf.Variable(tf.random_normal([2, n_classes]))\n}\n \nbiases = {\n 'out': tf.Variable(tf.random_normal([1, n_classes]))\n}\n\n[coeff, pred]= poles_net(x,grad)\n\n# Define loss and optimizer\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))\n#cost = tf.reduce_mean(tf.losses.softmax_cross_entropy(logits=pred, onehot_labels=y))\n#cost = tf.reduce_mean(tf.losses.mean_squared_error(predictions=pred, labels=y))\n#cost = tf.subtract(pred, labels)\n\noptimizer = tf.train.GradientDescentOptimizer(learning_rate=1.0).minimize(cost)\n\n# Evaluate model\ncorrect_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))\ninit = tf.global_variables_initializer()", "WARNING:tensorflow:tf.op_scope(values, name, default_name) is deprecated, use tf.name_scope(name, default_name, values)\nTensor(\"CoeffGrad_10:0\", dtype=float64)\ndc/dp size: (1, 3)\ndc/dy size: (50, 1)\ndc/dalpha size: ()\n" ], [ "y_test = one_hot(y_test)", "_____no_output_____" ], [ "# Launch the graph\ntraining_iters = X_train.shape[0]*10\ndisplay_step = 1\nwith tf.Session() as sess:\n sess.run(init)\n step = 1\n # Keep training until reach max iterations\n train_acc = 0\n while step * batch_size <= training_iters:\n batch_x = 
 ], [ "# Launch the graph\ntraining_iters = X_train.shape[0]*10\ndisplay_step = 1\nwith tf.Session() as sess:\n    sess.run(init)\n    step = 1\n    # Keep training until the maximum number of iterations is reached\n    train_acc = 0\n    while step * batch_size <= training_iters:\n        batch_x = np.reshape(extract_batch_size(X_train,step,batch_size), [50, 1])\n        batch_y = extract_batch_size(one_hot(y_train),step,batch_size)\n        #batch_y = np.reshape(extract_batch_size(y_train,step,batch_size), (1,1))\n        # Run optimization op (backprop)\n        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})\n        if step % display_step == 0:\n            # Calculate batch loss and accuracy\n            loss, acc = sess.run([cost, accuracy], feed_dict={x: batch_x,\n                                                              y: batch_y})\n            train_acc += acc\n            print(\"Iter \" + str(step*batch_size) + \", Minibatch Loss= \" + \\\n                  \"{:.6f}\".format(loss) + \", Training Accuracy= \" + \\\n                  \"{:.5f}\".format(acc))\n        step += 1\n    print('Final Training Accuracy:', train_acc/(X_train.shape[0]*10))\n    print(\"Optimization Finished!\")\n\n    acc = 0\n    for i in range(X_test.shape[0]):\n        test = np.reshape(X_test[i,:], [50,1])\n        print(test.shape)\n        label = np.reshape(y_test[i,:], (1,3))\n        #label = np.reshape(y_test[i], (1,1))\n        print(\"Trajectory:\", i, \\\n              sess.run([coeff], feed_dict={x: test, y: label}))\n        # NOTE: accuracy is evaluated twice per test sample -- once for the printout, once for the running total\n        print(\"Testing Accuracy:\", \\\n              sess.run(accuracy, feed_dict={x: test, y: label}))\n        acc += sess.run(accuracy, feed_dict={x: test, y: label})\n    print('Final Testing Accuracy:', acc/X_test.shape[0])", "[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1, Minibatch Loss= 0.205573, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2, Minibatch Loss= 0.750132, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 3, Minibatch Loss= 0.354754, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 4, Minibatch Loss= 0.223955, Training Accuracy= 1.00000\n
[... output truncated for readability: Iter 5 through Iter 429 repeat the same pattern as above -- the printed 1x2 row [[-0.66370539 0.23084568]] never changes, the 2x1 columns stay at (signed) zero, the minibatch loss oscillates between roughly 0.138 and 2.060 with no clear downward trend, and the per-sample training accuracy alternates between 0.00000 and 1.00000 ...]\nIter 430, Minibatch Loss= 0.349195, Training Accuracy= 
1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 431, Minibatch Loss= 1.013828, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 432, Minibatch Loss= 0.813803, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 433, Minibatch Loss= 0.434712, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 434, Minibatch Loss= 0.278223, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 435, Minibatch Loss= 0.924887, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 436, Minibatch Loss= 1.002613, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 437, Minibatch Loss= 0.553053, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 438, Minibatch Loss= 0.329453, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 439, Minibatch Loss= 0.919331, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 440, Minibatch Loss= 0.910682, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 441, Minibatch Loss= 0.609322, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 442, Minibatch Loss= 0.352506, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 443, Minibatch Loss= 0.239408, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 444, Minibatch Loss= 1.050478, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 445, Minibatch Loss= 0.972155, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 446, Minibatch Loss= 0.535332, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 447, Minibatch Loss= 0.321670, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 448, Minibatch Loss= 0.950056, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 449, Minibatch Loss= 0.467790, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 450, Minibatch Loss= 0.606129, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 451, Minibatch Loss= 1.122947, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 452, Minibatch Loss= 0.605506, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 453, Minibatch Loss= 0.351238, Training Accuracy= 
1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 454, Minibatch Loss= 0.238870, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 455, Minibatch Loss= 0.179110, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 456, Minibatch Loss= 1.164340, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 457, Minibatch Loss= 0.525217, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 458, Minibatch Loss= 0.521783, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 459, Minibatch Loss= 1.256133, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 460, Minibatch Loss= 0.598630, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 461, Minibatch Loss= 0.347642, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 462, Minibatch Loss= 0.809042, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 463, Minibatch Loss= 0.995930, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 464, Minibatch Loss= 0.583043, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 465, Minibatch Loss= 0.733878, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 466, Minibatch Loss= 0.812454, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 467, Minibatch Loss= 0.433446, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 468, Minibatch Loss= 0.844715, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 469, Minibatch Loss= 0.469972, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 470, Minibatch Loss= 0.985057, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 471, Minibatch Loss= 0.496273, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 472, Minibatch Loss= 0.878380, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 473, Minibatch Loss= 0.490699, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 474, Minibatch Loss= 0.640410, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 475, Minibatch Loss= 0.352934, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 476, Minibatch Loss= 1.152901, Training Accuracy= 
0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 477, Minibatch Loss= 0.413859, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 478, Minibatch Loss= 0.895269, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 479, Minibatch Loss= 0.458842, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 480, Minibatch Loss= 0.286018, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 481, Minibatch Loss= 0.807705, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 482, Minibatch Loss= 1.142260, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 483, Minibatch Loss= 0.542791, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 484, Minibatch Loss= 0.325344, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 485, Minibatch Loss= 0.901416, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 486, Minibatch Loss= 0.447791, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 487, Minibatch Loss= 1.066869, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 488, Minibatch Loss= 0.701378, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 489, Minibatch Loss= 0.389914, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 490, Minibatch Loss= 0.801688, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 491, Minibatch Loss= 0.453980, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 492, Minibatch Loss= 1.068545, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 493, Minibatch Loss= 0.530587, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 494, Minibatch Loss= 0.318506, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 495, Minibatch Loss= 0.811516, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 496, Minibatch Loss= 0.408691, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 497, Minibatch Loss= 0.259710, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 498, Minibatch Loss= 0.772164, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 499, Minibatch Loss= 0.380333, Training Accuracy= 
1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 500, Minibatch Loss= 0.242280, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 501, Minibatch Loss= 1.534414, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 502, Minibatch Loss= 0.718732, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 503, Minibatch Loss= 0.390763, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 504, Minibatch Loss= 0.256289, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 505, Minibatch Loss= 0.866846, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 506, Minibatch Loss= 0.420474, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 507, Minibatch Loss= 0.262519, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 508, Minibatch Loss= 0.741202, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 509, Minibatch Loss= 0.367340, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 510, Minibatch Loss= 0.572854, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 511, Minibatch Loss= 1.532541, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 512, Minibatch Loss= 0.437486, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 513, Minibatch Loss= 0.700543, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 514, Minibatch Loss= 0.997152, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 515, Minibatch Loss= 0.537863, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 516, Minibatch Loss= 0.687529, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 517, Minibatch Loss= 0.376168, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 518, Minibatch Loss= 0.697995, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 519, Minibatch Loss= 0.448405, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 520, Minibatch Loss= 1.223802, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 521, Minibatch Loss= 0.439478, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 522, Minibatch Loss= 0.770942, Training Accuracy= 
1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 523, Minibatch Loss= 0.476203, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 524, Minibatch Loss= 0.596116, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 525, Minibatch Loss= 1.131453, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 526, Minibatch Loss= 0.606501, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 527, Minibatch Loss= 0.729288, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 528, Minibatch Loss= 0.541123, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 529, Minibatch Loss= 0.937700, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 530, Minibatch Loss= 0.499688, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 531, Minibatch Loss= 0.664051, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 532, Minibatch Loss= 0.968324, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 533, Minibatch Loss= 0.637643, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 534, Minibatch Loss= 0.364441, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 535, Minibatch Loss= 0.245361, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 536, Minibatch Loss= 1.012528, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 537, Minibatch Loss= 0.480098, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 538, Minibatch Loss= 0.566854, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 539, Minibatch Loss= 1.202511, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 540, Minibatch Loss= 0.593502, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 541, Minibatch Loss= 0.655586, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 542, Minibatch Loss= 0.365175, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 543, Minibatch Loss= 1.031394, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 544, Minibatch Loss= 0.507601, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 545, Minibatch Loss= 0.929805, Training Accuracy= 
1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 546, Minibatch Loss= 0.729408, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 547, Minibatch Loss= 0.401369, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 548, Minibatch Loss= 0.824110, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 549, Minibatch Loss= 0.458002, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 550, Minibatch Loss= 0.613473, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 551, Minibatch Loss= 1.123911, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 552, Minibatch Loss= 0.560354, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 553, Minibatch Loss= 0.676812, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 554, Minibatch Loss= 0.526756, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 555, Minibatch Loss= 0.566558, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 556, Minibatch Loss= 0.322150, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 557, Minibatch Loss= 1.246871, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 558, Minibatch Loss= 0.393601, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 559, Minibatch Loss= 0.259008, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 560, Minibatch Loss= 0.925740, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 561, Minibatch Loss= 0.384994, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 562, Minibatch Loss= 1.178877, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 563, Minibatch Loss= 0.573313, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 564, Minibatch Loss= 0.847026, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 565, Minibatch Loss= 0.519651, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 566, Minibatch Loss= 0.631305, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 567, Minibatch Loss= 0.995188, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 568, Minibatch Loss= 0.519372, Training Accuracy= 
1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 569, Minibatch Loss= 0.802515, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 570, Minibatch Loss= 0.505113, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 571, Minibatch Loss= 0.612960, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 572, Minibatch Loss= 0.503616, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 573, Minibatch Loss= 0.297609, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 574, Minibatch Loss= 0.709856, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 575, Minibatch Loss= 0.360858, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 576, Minibatch Loss= 0.591626, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 577, Minibatch Loss= 0.317080, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 578, Minibatch Loss= 0.212381, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 579, Minibatch Loss= 0.766909, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 580, Minibatch Loss= 0.364641, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 581, Minibatch Loss= 1.730414, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 582, Minibatch Loss= 0.841510, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 583, Minibatch Loss= 0.700283, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 584, Minibatch Loss= 0.621098, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 585, Minibatch Loss= 0.845214, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 586, Minibatch Loss= 0.712721, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 587, Minibatch Loss= 0.620691, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 588, Minibatch Loss= 0.837456, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 589, Minibatch Loss= 0.535847, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 590, Minibatch Loss= 0.628698, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 591, Minibatch Loss= 0.979878, Training Accuracy= 
1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 592, Minibatch Loss= 0.646630, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 593, Minibatch Loss= 0.672554, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 594, Minibatch Loss= 0.558538, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 595, Minibatch Loss= 0.567798, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 596, Minibatch Loss= 0.522888, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 597, Minibatch Loss= 1.171270, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 598, Minibatch Loss= 0.578298, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 599, Minibatch Loss= 0.649505, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 600, Minibatch Loss= 0.533989, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 601, Minibatch Loss= 0.556526, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 602, Minibatch Loss= 0.511300, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 603, Minibatch Loss= 0.514347, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 604, Minibatch Loss= 1.317093, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 605, Minibatch Loss= 0.447752, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 606, Minibatch Loss= 0.284098, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 607, Minibatch Loss= 0.204326, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 608, Minibatch Loss= 1.102578, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 609, Minibatch Loss= 1.025318, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 610, Minibatch Loss= 0.518653, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 611, Minibatch Loss= 0.797113, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 612, Minibatch Loss= 0.505248, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 613, Minibatch Loss= 0.948709, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 614, Minibatch Loss= 0.482956, Training Accuracy= 
1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 615, Minibatch Loss= 0.670709, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 616, Minibatch Loss= 0.365613, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 617, Minibatch Loss= 1.116872, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 618, Minibatch Loss= 0.544328, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 619, Minibatch Loss= 0.611088, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 620, Minibatch Loss= 0.343410, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 621, Minibatch Loss= 0.697944, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 622, Minibatch Loss= 0.363228, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 623, Minibatch Loss= 1.310218, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 624, Minibatch Loss= 0.667564, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 625, Minibatch Loss= 0.589128, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 626, Minibatch Loss= 0.339262, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 627, Minibatch Loss= 0.231332, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 628, Minibatch Loss= 0.174054, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 629, Minibatch Loss= 1.313646, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 630, Minibatch Loss= 0.313633, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 631, Minibatch Loss= 1.099798, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 632, Minibatch Loss= 0.402145, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 633, Minibatch Loss= 0.957632, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 634, Minibatch Loss= 0.793866, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 635, Minibatch Loss= 0.427032, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 636, Minibatch Loss= 0.788435, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 637, Minibatch Loss= 0.902687, Training Accuracy= 
1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 638, Minibatch Loss= 0.470419, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 639, Minibatch Loss= 0.750038, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 640, Minibatch Loss= 0.398550, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 641, Minibatch Loss= 1.037090, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 642, Minibatch Loss= 0.441398, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 643, Minibatch Loss= 0.280056, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 644, Minibatch Loss= 0.201715, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 645, Minibatch Loss= 1.186231, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 646, Minibatch Loss= 0.338816, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 647, Minibatch Loss= 0.230389, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 648, Minibatch Loss= 0.173195, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 649, Minibatch Loss= 0.138348, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 650, Minibatch Loss= 1.131494, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 651, Minibatch Loss= 0.490664, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 652, Minibatch Loss= 1.420785, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 653, Minibatch Loss= 0.691482, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 654, Minibatch Loss= 0.610271, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 655, Minibatch Loss= 0.349195, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 656, Minibatch Loss= 1.013828, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 657, Minibatch Loss= 0.813803, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 658, Minibatch Loss= 0.434712, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 659, Minibatch Loss= 0.278223, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 660, Minibatch Loss= 0.924887, Training Accuracy= 
0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 661, Minibatch Loss= 1.002613, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 662, Minibatch Loss= 0.553053, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 663, Minibatch Loss= 0.329453, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 664, Minibatch Loss= 0.919331, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 665, Minibatch Loss= 0.910682, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 666, Minibatch Loss= 0.609322, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 667, Minibatch Loss= 0.352506, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 668, Minibatch Loss= 0.239408, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 669, Minibatch Loss= 1.050478, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 670, Minibatch Loss= 0.972155, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 671, Minibatch Loss= 0.535332, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 672, Minibatch Loss= 0.321670, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 673, Minibatch Loss= 0.950056, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 674, Minibatch Loss= 0.467790, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 675, Minibatch Loss= 0.606129, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 676, Minibatch Loss= 1.122947, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 677, Minibatch Loss= 0.605506, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 678, Minibatch Loss= 0.351238, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 679, Minibatch Loss= 0.238870, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 680, Minibatch Loss= 0.179110, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 681, Minibatch Loss= 1.164340, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 682, Minibatch Loss= 0.525217, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 683, Minibatch Loss= 0.521783, Training Accuracy= 
1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 684, Minibatch Loss= 1.256133, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 685, Minibatch Loss= 0.598630, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 686, Minibatch Loss= 0.347642, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 687, Minibatch Loss= 0.809042, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 688, Minibatch Loss= 0.995930, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 689, Minibatch Loss= 0.583043, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 690, Minibatch Loss= 0.733878, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 691, Minibatch Loss= 0.812454, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 692, Minibatch Loss= 0.433446, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 693, Minibatch Loss= 0.844715, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 694, Minibatch Loss= 0.469972, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 695, Minibatch Loss= 0.985057, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 696, Minibatch Loss= 0.496273, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 697, Minibatch Loss= 0.878380, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 698, Minibatch Loss= 0.490699, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 699, Minibatch Loss= 0.640410, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 700, Minibatch Loss= 0.352934, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 701, Minibatch Loss= 1.152901, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 702, Minibatch Loss= 0.413859, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 703, Minibatch Loss= 0.895269, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 704, Minibatch Loss= 0.458842, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 705, Minibatch Loss= 0.286018, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 706, Minibatch Loss= 0.807705, Training Accuracy= 
1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 707, Minibatch Loss= 1.142260, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 708, Minibatch Loss= 0.542791, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 709, Minibatch Loss= 0.325344, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 710, Minibatch Loss= 0.901416, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 711, Minibatch Loss= 0.447791, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 712, Minibatch Loss= 1.066869, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 713, Minibatch Loss= 0.701378, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 714, Minibatch Loss= 0.389914, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 715, Minibatch Loss= 0.801688, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 716, Minibatch Loss= 0.453980, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 717, Minibatch Loss= 1.068545, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 718, Minibatch Loss= 0.530587, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 719, Minibatch Loss= 0.318506, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 720, Minibatch Loss= 0.811516, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 721, Minibatch Loss= 0.408691, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 722, Minibatch Loss= 0.259710, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 723, Minibatch Loss= 0.772164, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 724, Minibatch Loss= 0.380333, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 725, Minibatch Loss= 0.242280, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 726, Minibatch Loss= 1.534414, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 727, Minibatch Loss= 0.718732, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 728, Minibatch Loss= 0.390763, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 729, Minibatch Loss= 0.256289, Training Accuracy= 
1.00000\n[output truncated: the training log from Iter 730 through Iter 1211 repeats verbatim with period 225 (e.g. Iter 730, 955, and 1180 all report Minibatch Loss= 0.866846). Each step prints "Iter N, Minibatch Loss= L, Training Accuracy= A", with loss ranging from 0.138348 to 1.730414 and accuracy always exactly 1.00000 or 0.00000, consistent with single-example minibatches. Every step also prints, twice, the matrix [[-0.66370539 0.23084568]] and a 2x1 array of signed zeros; these printed values never change across the entire span.]\nIter 1211, Minibatch Loss= 1.012528, Training 
Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1212, Minibatch Loss= 0.480098, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1213, Minibatch Loss= 0.566854, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1214, Minibatch Loss= 1.202511, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1215, Minibatch Loss= 0.593502, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1216, Minibatch Loss= 0.655586, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1217, Minibatch Loss= 0.365175, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1218, Minibatch Loss= 1.031394, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1219, Minibatch Loss= 0.507601, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1220, Minibatch Loss= 0.929805, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1221, Minibatch Loss= 0.729408, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1222, Minibatch Loss= 0.401369, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1223, Minibatch Loss= 0.824110, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1224, Minibatch Loss= 0.458002, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1225, Minibatch Loss= 0.613473, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1226, Minibatch Loss= 1.123911, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1227, Minibatch Loss= 0.560354, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1228, Minibatch Loss= 0.676812, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1229, Minibatch Loss= 0.526756, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1230, Minibatch Loss= 0.566558, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1231, Minibatch Loss= 0.322150, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1232, Minibatch Loss= 1.246871, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1233, Minibatch Loss= 0.393601, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1234, Minibatch Loss= 
0.259008, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1235, Minibatch Loss= 0.925740, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1236, Minibatch Loss= 0.384994, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1237, Minibatch Loss= 1.178877, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1238, Minibatch Loss= 0.573313, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1239, Minibatch Loss= 0.847026, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1240, Minibatch Loss= 0.519651, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1241, Minibatch Loss= 0.631305, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1242, Minibatch Loss= 0.995188, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1243, Minibatch Loss= 0.519372, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1244, Minibatch Loss= 0.802515, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1245, Minibatch Loss= 0.505113, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1246, Minibatch Loss= 0.612960, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1247, Minibatch Loss= 0.503616, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1248, Minibatch Loss= 0.297609, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1249, Minibatch Loss= 0.709856, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1250, Minibatch Loss= 0.360858, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1251, Minibatch Loss= 0.591626, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1252, Minibatch Loss= 0.317080, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1253, Minibatch Loss= 0.212381, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1254, Minibatch Loss= 0.766909, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1255, Minibatch Loss= 0.364641, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1256, Minibatch Loss= 1.730414, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1257, 
Minibatch Loss= 0.841510, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1258, Minibatch Loss= 0.700283, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1259, Minibatch Loss= 0.621098, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1260, Minibatch Loss= 0.845214, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1261, Minibatch Loss= 0.712721, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1262, Minibatch Loss= 0.620691, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1263, Minibatch Loss= 0.837456, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1264, Minibatch Loss= 0.535847, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1265, Minibatch Loss= 0.628698, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1266, Minibatch Loss= 0.979878, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1267, Minibatch Loss= 0.646630, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1268, Minibatch Loss= 0.672554, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1269, Minibatch Loss= 0.558538, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1270, Minibatch Loss= 0.567798, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1271, Minibatch Loss= 0.522888, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1272, Minibatch Loss= 1.171270, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1273, Minibatch Loss= 0.578298, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1274, Minibatch Loss= 0.649505, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1275, Minibatch Loss= 0.533989, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1276, Minibatch Loss= 0.556526, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1277, Minibatch Loss= 0.511300, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1278, Minibatch Loss= 0.514347, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1279, Minibatch Loss= 1.317093, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 
0.]]\nIter 1280, Minibatch Loss= 0.447752, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1281, Minibatch Loss= 0.284098, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1282, Minibatch Loss= 0.204326, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1283, Minibatch Loss= 1.102578, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1284, Minibatch Loss= 1.025318, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1285, Minibatch Loss= 0.518653, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1286, Minibatch Loss= 0.797113, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1287, Minibatch Loss= 0.505248, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1288, Minibatch Loss= 0.948709, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1289, Minibatch Loss= 0.482956, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1290, Minibatch Loss= 0.670709, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1291, Minibatch Loss= 0.365613, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1292, Minibatch Loss= 1.116872, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1293, Minibatch Loss= 0.544328, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1294, Minibatch Loss= 0.611088, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1295, Minibatch Loss= 0.343410, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1296, Minibatch Loss= 0.697944, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1297, Minibatch Loss= 0.363228, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1298, Minibatch Loss= 1.310218, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1299, Minibatch Loss= 0.667564, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1300, Minibatch Loss= 0.589128, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1301, Minibatch Loss= 0.339262, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1302, Minibatch Loss= 0.231332, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 
0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1303, Minibatch Loss= 0.174054, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1304, Minibatch Loss= 1.313646, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1305, Minibatch Loss= 0.313633, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1306, Minibatch Loss= 1.099798, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1307, Minibatch Loss= 0.402145, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1308, Minibatch Loss= 0.957632, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1309, Minibatch Loss= 0.793866, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1310, Minibatch Loss= 0.427032, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1311, Minibatch Loss= 0.788435, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1312, Minibatch Loss= 0.902687, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1313, Minibatch Loss= 0.470419, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1314, Minibatch Loss= 0.750038, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1315, Minibatch Loss= 0.398550, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1316, Minibatch Loss= 1.037090, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1317, Minibatch Loss= 0.441398, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1318, Minibatch Loss= 0.280056, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1319, Minibatch Loss= 0.201715, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1320, Minibatch Loss= 1.186231, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1321, Minibatch Loss= 0.338816, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1322, Minibatch Loss= 0.230389, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1323, Minibatch Loss= 0.173195, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1324, Minibatch Loss= 0.138348, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1325, Minibatch Loss= 1.131494, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n 
[-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1326, Minibatch Loss= 0.490664, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1327, Minibatch Loss= 1.420785, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1328, Minibatch Loss= 0.691482, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1329, Minibatch Loss= 0.610271, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1330, Minibatch Loss= 0.349195, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1331, Minibatch Loss= 1.013828, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1332, Minibatch Loss= 0.813803, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1333, Minibatch Loss= 0.434712, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1334, Minibatch Loss= 0.278223, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1335, Minibatch Loss= 0.924887, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1336, Minibatch Loss= 1.002613, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1337, Minibatch Loss= 0.553053, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1338, Minibatch Loss= 0.329453, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1339, Minibatch Loss= 0.919331, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1340, Minibatch Loss= 0.910682, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1341, Minibatch Loss= 0.609322, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1342, Minibatch Loss= 0.352506, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1343, Minibatch Loss= 0.239408, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1344, Minibatch Loss= 1.050478, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1345, Minibatch Loss= 0.972155, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1346, Minibatch Loss= 0.535332, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1347, Minibatch Loss= 0.321670, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1348, Minibatch Loss= 0.950056, Training Accuracy= 0.00000\n[[-0.66370539 
0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1349, Minibatch Loss= 0.467790, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1350, Minibatch Loss= 0.606129, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1351, Minibatch Loss= 1.122947, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1352, Minibatch Loss= 0.605506, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1353, Minibatch Loss= 0.351238, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1354, Minibatch Loss= 0.238870, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1355, Minibatch Loss= 0.179110, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1356, Minibatch Loss= 1.164340, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1357, Minibatch Loss= 0.525217, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1358, Minibatch Loss= 0.521783, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1359, Minibatch Loss= 1.256133, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1360, Minibatch Loss= 0.598630, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1361, Minibatch Loss= 0.347642, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1362, Minibatch Loss= 0.809042, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1363, Minibatch Loss= 0.995930, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1364, Minibatch Loss= 0.583043, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1365, Minibatch Loss= 0.733878, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1366, Minibatch Loss= 0.812454, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1367, Minibatch Loss= 0.433446, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1368, Minibatch Loss= 0.844715, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1369, Minibatch Loss= 0.469972, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1370, Minibatch Loss= 0.985057, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1371, Minibatch Loss= 0.496273, Training Accuracy= 
1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1372, Minibatch Loss= 0.878380, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1373, Minibatch Loss= 0.490699, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1374, Minibatch Loss= 0.640410, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1375, Minibatch Loss= 0.352934, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1376, Minibatch Loss= 1.152901, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1377, Minibatch Loss= 0.413859, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1378, Minibatch Loss= 0.895269, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1379, Minibatch Loss= 0.458842, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1380, Minibatch Loss= 0.286018, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1381, Minibatch Loss= 0.807705, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1382, Minibatch Loss= 1.142260, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1383, Minibatch Loss= 0.542791, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1384, Minibatch Loss= 0.325344, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1385, Minibatch Loss= 0.901416, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1386, Minibatch Loss= 0.447791, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1387, Minibatch Loss= 1.066869, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1388, Minibatch Loss= 0.701378, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1389, Minibatch Loss= 0.389914, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1390, Minibatch Loss= 0.801688, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1391, Minibatch Loss= 0.453980, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1392, Minibatch Loss= 1.068545, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1393, Minibatch Loss= 0.530587, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1394, Minibatch Loss= 0.318506, Training 
Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1395, Minibatch Loss= 0.811516, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1396, Minibatch Loss= 0.408691, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1397, Minibatch Loss= 0.259710, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1398, Minibatch Loss= 0.772164, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1399, Minibatch Loss= 0.380333, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1400, Minibatch Loss= 0.242280, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1401, Minibatch Loss= 1.534414, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1402, Minibatch Loss= 0.718732, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1403, Minibatch Loss= 0.390763, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1404, Minibatch Loss= 0.256289, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1405, Minibatch Loss= 0.866846, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1406, Minibatch Loss= 0.420474, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1407, Minibatch Loss= 0.262519, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1408, Minibatch Loss= 0.741202, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1409, Minibatch Loss= 0.367340, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1410, Minibatch Loss= 0.572854, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1411, Minibatch Loss= 1.532541, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1412, Minibatch Loss= 0.437486, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1413, Minibatch Loss= 0.700543, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1414, Minibatch Loss= 0.997152, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1415, Minibatch Loss= 0.537863, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1416, Minibatch Loss= 0.687529, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1417, Minibatch Loss= 
0.376168, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1418, Minibatch Loss= 0.697995, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1419, Minibatch Loss= 0.448405, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1420, Minibatch Loss= 1.223802, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1421, Minibatch Loss= 0.439478, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1422, Minibatch Loss= 0.770942, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1423, Minibatch Loss= 0.476203, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1424, Minibatch Loss= 0.596116, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1425, Minibatch Loss= 1.131453, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1426, Minibatch Loss= 0.606501, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1427, Minibatch Loss= 0.729288, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1428, Minibatch Loss= 0.541123, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1429, Minibatch Loss= 0.937700, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1430, Minibatch Loss= 0.499688, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1431, Minibatch Loss= 0.664051, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1432, Minibatch Loss= 0.968324, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1433, Minibatch Loss= 0.637643, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1434, Minibatch Loss= 0.364441, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1435, Minibatch Loss= 0.245361, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1436, Minibatch Loss= 1.012528, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1437, Minibatch Loss= 0.480098, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1438, Minibatch Loss= 0.566854, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1439, Minibatch Loss= 1.202511, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1440, 
Minibatch Loss= 0.593502, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1441, Minibatch Loss= 0.655586, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1442, Minibatch Loss= 0.365175, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1443, Minibatch Loss= 1.031394, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1444, Minibatch Loss= 0.507601, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1445, Minibatch Loss= 0.929805, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1446, Minibatch Loss= 0.729408, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1447, Minibatch Loss= 0.401369, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1448, Minibatch Loss= 0.824110, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1449, Minibatch Loss= 0.458002, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1450, Minibatch Loss= 0.613473, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1451, Minibatch Loss= 1.123911, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1452, Minibatch Loss= 0.560354, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1453, Minibatch Loss= 0.676812, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1454, Minibatch Loss= 0.526756, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1455, Minibatch Loss= 0.566558, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1456, Minibatch Loss= 0.322150, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1457, Minibatch Loss= 1.246871, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1458, Minibatch Loss= 0.393601, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1459, Minibatch Loss= 0.259008, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1460, Minibatch Loss= 0.925740, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1461, Minibatch Loss= 0.384994, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1462, Minibatch Loss= 1.178877, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n 
[-0.]]\nIter 1463, Minibatch Loss= 0.573313, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1464, Minibatch Loss= 0.847026, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1465, Minibatch Loss= 0.519651, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1466, Minibatch Loss= 0.631305, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1467, Minibatch Loss= 0.995188, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1468, Minibatch Loss= 0.519372, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1469, Minibatch Loss= 0.802515, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1470, Minibatch Loss= 0.505113, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1471, Minibatch Loss= 0.612960, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1472, Minibatch Loss= 0.503616, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1473, Minibatch Loss= 0.297609, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1474, Minibatch Loss= 0.709856, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1475, Minibatch Loss= 0.360858, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1476, Minibatch Loss= 0.591626, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1477, Minibatch Loss= 0.317080, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1478, Minibatch Loss= 0.212381, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1479, Minibatch Loss= 0.766909, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1480, Minibatch Loss= 0.364641, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1481, Minibatch Loss= 1.730414, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1482, Minibatch Loss= 0.841510, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1483, Minibatch Loss= 0.700283, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1484, Minibatch Loss= 0.621098, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1485, Minibatch Loss= 0.845214, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 
0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1486, Minibatch Loss= 0.712721, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1487, Minibatch Loss= 0.620691, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1488, Minibatch Loss= 0.837456, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1489, Minibatch Loss= 0.535847, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1490, Minibatch Loss= 0.628698, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1491, Minibatch Loss= 0.979878, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1492, Minibatch Loss= 0.646630, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1493, Minibatch Loss= 0.672554, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1494, Minibatch Loss= 0.558538, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1495, Minibatch Loss= 0.567798, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1496, Minibatch Loss= 0.522888, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1497, Minibatch Loss= 1.171270, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1498, Minibatch Loss= 0.578298, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1499, Minibatch Loss= 0.649505, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1500, Minibatch Loss= 0.533989, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1501, Minibatch Loss= 0.556526, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1502, Minibatch Loss= 0.511300, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1503, Minibatch Loss= 0.514347, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1504, Minibatch Loss= 1.317093, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1505, Minibatch Loss= 0.447752, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1506, Minibatch Loss= 0.284098, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1507, Minibatch Loss= 0.204326, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1508, Minibatch Loss= 1.102578, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 
0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 1509, Minibatch Loss= 1.025318, Training Accuracy= 0.00000\n[... training log truncated: Iter 1510-1988 repeat the same 225-iteration cycle of loss values verbatim (min 0.138348, max 1.730414), training accuracy alternates between 0.00000 and 1.00000, and every step prints the unchanged weight row [[-0.66370539 0.23084568]] followed by all-zero gradient vectors, i.e. the weights never update ...]\nIter 1989, 
Minibatch Loss= 0.750038, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1990, Minibatch Loss= 0.398550, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1991, Minibatch Loss= 1.037090, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1992, Minibatch Loss= 0.441398, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1993, Minibatch Loss= 0.280056, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1994, Minibatch Loss= 0.201715, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 1995, Minibatch Loss= 1.186231, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 1996, Minibatch Loss= 0.338816, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1997, Minibatch Loss= 0.230389, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1998, Minibatch Loss= 0.173195, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 1999, Minibatch Loss= 0.138348, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2000, Minibatch Loss= 1.131494, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2001, Minibatch Loss= 0.490664, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2002, Minibatch Loss= 1.420785, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2003, Minibatch Loss= 0.691482, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2004, Minibatch Loss= 0.610271, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2005, Minibatch Loss= 0.349195, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2006, Minibatch Loss= 1.013828, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2007, Minibatch Loss= 0.813803, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2008, Minibatch Loss= 0.434712, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2009, Minibatch Loss= 0.278223, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2010, Minibatch Loss= 0.924887, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2011, Minibatch Loss= 1.002613, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 
0.]]\nIter 2012, Minibatch Loss= 0.553053, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2013, Minibatch Loss= 0.329453, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2014, Minibatch Loss= 0.919331, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2015, Minibatch Loss= 0.910682, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2016, Minibatch Loss= 0.609322, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2017, Minibatch Loss= 0.352506, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2018, Minibatch Loss= 0.239408, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2019, Minibatch Loss= 1.050478, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2020, Minibatch Loss= 0.972155, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2021, Minibatch Loss= 0.535332, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2022, Minibatch Loss= 0.321670, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2023, Minibatch Loss= 0.950056, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2024, Minibatch Loss= 0.467790, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2025, Minibatch Loss= 0.606129, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2026, Minibatch Loss= 1.122947, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2027, Minibatch Loss= 0.605506, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2028, Minibatch Loss= 0.351238, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2029, Minibatch Loss= 0.238870, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2030, Minibatch Loss= 0.179110, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2031, Minibatch Loss= 1.164340, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2032, Minibatch Loss= 0.525217, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2033, Minibatch Loss= 0.521783, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2034, Minibatch Loss= 1.256133, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 
0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2035, Minibatch Loss= 0.598630, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2036, Minibatch Loss= 0.347642, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2037, Minibatch Loss= 0.809042, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2038, Minibatch Loss= 0.995930, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2039, Minibatch Loss= 0.583043, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2040, Minibatch Loss= 0.733878, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2041, Minibatch Loss= 0.812454, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2042, Minibatch Loss= 0.433446, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2043, Minibatch Loss= 0.844715, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2044, Minibatch Loss= 0.469972, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2045, Minibatch Loss= 0.985057, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2046, Minibatch Loss= 0.496273, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2047, Minibatch Loss= 0.878380, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2048, Minibatch Loss= 0.490699, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2049, Minibatch Loss= 0.640410, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2050, Minibatch Loss= 0.352934, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2051, Minibatch Loss= 1.152901, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2052, Minibatch Loss= 0.413859, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2053, Minibatch Loss= 0.895269, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2054, Minibatch Loss= 0.458842, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2055, Minibatch Loss= 0.286018, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2056, Minibatch Loss= 0.807705, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2057, Minibatch Loss= 1.142260, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n 
[-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2058, Minibatch Loss= 0.542791, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2059, Minibatch Loss= 0.325344, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2060, Minibatch Loss= 0.901416, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2061, Minibatch Loss= 0.447791, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2062, Minibatch Loss= 1.066869, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2063, Minibatch Loss= 0.701378, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2064, Minibatch Loss= 0.389914, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2065, Minibatch Loss= 0.801688, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2066, Minibatch Loss= 0.453980, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2067, Minibatch Loss= 1.068545, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2068, Minibatch Loss= 0.530587, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2069, Minibatch Loss= 0.318506, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2070, Minibatch Loss= 0.811516, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2071, Minibatch Loss= 0.408691, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2072, Minibatch Loss= 0.259710, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2073, Minibatch Loss= 0.772164, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2074, Minibatch Loss= 0.380333, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2075, Minibatch Loss= 0.242280, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2076, Minibatch Loss= 1.534414, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2077, Minibatch Loss= 0.718732, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2078, Minibatch Loss= 0.390763, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2079, Minibatch Loss= 0.256289, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2080, Minibatch Loss= 0.866846, Training Accuracy= 0.00000\n[[-0.66370539 
0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2081, Minibatch Loss= 0.420474, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2082, Minibatch Loss= 0.262519, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2083, Minibatch Loss= 0.741202, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2084, Minibatch Loss= 0.367340, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2085, Minibatch Loss= 0.572854, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2086, Minibatch Loss= 1.532541, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2087, Minibatch Loss= 0.437486, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2088, Minibatch Loss= 0.700543, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2089, Minibatch Loss= 0.997152, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2090, Minibatch Loss= 0.537863, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2091, Minibatch Loss= 0.687529, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2092, Minibatch Loss= 0.376168, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2093, Minibatch Loss= 0.697995, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2094, Minibatch Loss= 0.448405, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2095, Minibatch Loss= 1.223802, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2096, Minibatch Loss= 0.439478, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2097, Minibatch Loss= 0.770942, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2098, Minibatch Loss= 0.476203, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2099, Minibatch Loss= 0.596116, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2100, Minibatch Loss= 1.131453, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2101, Minibatch Loss= 0.606501, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2102, Minibatch Loss= 0.729288, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2103, Minibatch Loss= 0.541123, Training Accuracy= 
1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2104, Minibatch Loss= 0.937700, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2105, Minibatch Loss= 0.499688, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2106, Minibatch Loss= 0.664051, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2107, Minibatch Loss= 0.968324, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2108, Minibatch Loss= 0.637643, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2109, Minibatch Loss= 0.364441, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2110, Minibatch Loss= 0.245361, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2111, Minibatch Loss= 1.012528, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2112, Minibatch Loss= 0.480098, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2113, Minibatch Loss= 0.566854, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2114, Minibatch Loss= 1.202511, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2115, Minibatch Loss= 0.593502, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2116, Minibatch Loss= 0.655586, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2117, Minibatch Loss= 0.365175, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2118, Minibatch Loss= 1.031394, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2119, Minibatch Loss= 0.507601, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2120, Minibatch Loss= 0.929805, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2121, Minibatch Loss= 0.729408, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2122, Minibatch Loss= 0.401369, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2123, Minibatch Loss= 0.824110, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2124, Minibatch Loss= 0.458002, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2125, Minibatch Loss= 0.613473, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2126, Minibatch Loss= 1.123911, Training 
Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2127, Minibatch Loss= 0.560354, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2128, Minibatch Loss= 0.676812, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2129, Minibatch Loss= 0.526756, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2130, Minibatch Loss= 0.566558, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2131, Minibatch Loss= 0.322150, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2132, Minibatch Loss= 1.246871, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2133, Minibatch Loss= 0.393601, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2134, Minibatch Loss= 0.259008, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2135, Minibatch Loss= 0.925740, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2136, Minibatch Loss= 0.384994, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2137, Minibatch Loss= 1.178877, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2138, Minibatch Loss= 0.573313, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2139, Minibatch Loss= 0.847026, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2140, Minibatch Loss= 0.519651, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2141, Minibatch Loss= 0.631305, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2142, Minibatch Loss= 0.995188, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2143, Minibatch Loss= 0.519372, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2144, Minibatch Loss= 0.802515, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2145, Minibatch Loss= 0.505113, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2146, Minibatch Loss= 0.612960, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2147, Minibatch Loss= 0.503616, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2148, Minibatch Loss= 0.297609, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2149, Minibatch Loss= 
0.709856, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2150, Minibatch Loss= 0.360858, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2151, Minibatch Loss= 0.591626, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2152, Minibatch Loss= 0.317080, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2153, Minibatch Loss= 0.212381, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2154, Minibatch Loss= 0.766909, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2155, Minibatch Loss= 0.364641, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2156, Minibatch Loss= 1.730414, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2157, Minibatch Loss= 0.841510, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2158, Minibatch Loss= 0.700283, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2159, Minibatch Loss= 0.621098, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2160, Minibatch Loss= 0.845214, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2161, Minibatch Loss= 0.712721, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2162, Minibatch Loss= 0.620691, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2163, Minibatch Loss= 0.837456, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2164, Minibatch Loss= 0.535847, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2165, Minibatch Loss= 0.628698, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2166, Minibatch Loss= 0.979878, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2167, Minibatch Loss= 0.646630, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2168, Minibatch Loss= 0.672554, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2169, Minibatch Loss= 0.558538, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2170, Minibatch Loss= 0.567798, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2171, Minibatch Loss= 0.522888, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2172, 
Minibatch Loss= 1.171270, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2173, Minibatch Loss= 0.578298, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2174, Minibatch Loss= 0.649505, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2175, Minibatch Loss= 0.533989, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2176, Minibatch Loss= 0.556526, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2177, Minibatch Loss= 0.511300, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2178, Minibatch Loss= 0.514347, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2179, Minibatch Loss= 1.317093, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2180, Minibatch Loss= 0.447752, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2181, Minibatch Loss= 0.284098, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2182, Minibatch Loss= 0.204326, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2183, Minibatch Loss= 1.102578, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2184, Minibatch Loss= 1.025318, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2185, Minibatch Loss= 0.518653, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2186, Minibatch Loss= 0.797113, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2187, Minibatch Loss= 0.505248, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2188, Minibatch Loss= 0.948709, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2189, Minibatch Loss= 0.482956, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2190, Minibatch Loss= 0.670709, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2191, Minibatch Loss= 0.365613, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2192, Minibatch Loss= 1.116872, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2193, Minibatch Loss= 0.544328, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2194, Minibatch Loss= 0.611088, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 
0.]]\nIter 2195, Minibatch Loss= 0.343410, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2196, Minibatch Loss= 0.697944, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2197, Minibatch Loss= 0.363228, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2198, Minibatch Loss= 1.310218, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2199, Minibatch Loss= 0.667564, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2200, Minibatch Loss= 0.589128, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2201, Minibatch Loss= 0.339262, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2202, Minibatch Loss= 0.231332, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2203, Minibatch Loss= 0.174054, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2204, Minibatch Loss= 1.313646, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2205, Minibatch Loss= 0.313633, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2206, Minibatch Loss= 1.099798, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2207, Minibatch Loss= 0.402145, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2208, Minibatch Loss= 0.957632, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2209, Minibatch Loss= 0.793866, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2210, Minibatch Loss= 0.427032, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2211, Minibatch Loss= 0.788435, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2212, Minibatch Loss= 0.902687, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2213, Minibatch Loss= 0.470419, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2214, Minibatch Loss= 0.750038, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2215, Minibatch Loss= 0.398550, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2216, Minibatch Loss= 1.037090, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2217, Minibatch Loss= 0.441398, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 
0.23084568]]\n[[-0.]\n [-0.]]\nIter 2218, Minibatch Loss= 0.280056, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2219, Minibatch Loss= 0.201715, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2220, Minibatch Loss= 1.186231, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2221, Minibatch Loss= 0.338816, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2222, Minibatch Loss= 0.230389, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2223, Minibatch Loss= 0.173195, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2224, Minibatch Loss= 0.138348, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2225, Minibatch Loss= 1.131494, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2226, Minibatch Loss= 0.490664, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2227, Minibatch Loss= 1.420785, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2228, Minibatch Loss= 0.691482, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2229, Minibatch Loss= 0.610271, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2230, Minibatch Loss= 0.349195, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nIter 2231, Minibatch Loss= 1.013828, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2232, Minibatch Loss= 0.813803, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2233, Minibatch Loss= 0.434712, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2234, Minibatch Loss= 0.278223, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2235, Minibatch Loss= 0.924887, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2236, Minibatch Loss= 1.002613, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2237, Minibatch Loss= 0.553053, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2238, Minibatch Loss= 0.329453, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2239, Minibatch Loss= 0.919331, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2240, Minibatch Loss= 0.910682, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n 
[-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2241, Minibatch Loss= 0.609322, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2242, Minibatch Loss= 0.352506, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2243, Minibatch Loss= 0.239408, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2244, Minibatch Loss= 1.050478, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2245, Minibatch Loss= 0.972155, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2246, Minibatch Loss= 0.535332, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nIter 2247, Minibatch Loss= 0.321670, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nIter 2248, Minibatch Loss= 0.950056, Training Accuracy= 0.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2249, Minibatch Loss= 0.467790, Training Accuracy= 1.00000\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nIter 2250, Minibatch Loss= 0.606129, Training Accuracy= 1.00000\nFinal Training Accuracy: 0.817333333333\nOptimization Finished!\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nTrajectory: 0 [array([[ 0.],\n [ 0.]])]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nTesting Accuracy: 0.0\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nTrajectory: 1 [array([[-0.],\n [-0.]])]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nTesting Accuracy: 1.0\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nTrajectory: 2 [array([[ 0.],\n [ 0.]])]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nTesting Accuracy: 1.0\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nTrajectory: 3 [array([[-0.],\n [-0.]])]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nTesting Accuracy: 0.0\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nTrajectory: 4 [array([[ 0.],\n [-0.]])]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nTesting Accuracy: 0.0\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nTrajectory: 5 [array([[-0.],\n [-0.]])]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nTesting Accuracy: 1.0\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nTrajectory: 6 [array([[ 0.],\n [-0.]])]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nTesting Accuracy: 0.0\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nTrajectory: 7 [array([[ 0.],\n [-0.]])]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nTesting Accuracy: 0.0\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nTrajectory: 8 [array([[-0.],\n [ 0.]])]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nTesting Accuracy: 0.0\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nTrajectory: 9 [array([[-0.],\n 
[-0.]])]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nTesting Accuracy: 0.0\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nTrajectory: 10 [array([[ 0.],\n [ 0.]])]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nTesting Accuracy: 0.0\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nTrajectory: 11 [array([[ 0.],\n [-0.]])]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nTesting Accuracy: 0.0\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nTrajectory: 12 [array([[ 0.],\n [ 0.]])]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\nTesting Accuracy: 0.0\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [ 0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nTrajectory: 13 [array([[-0.],\n [-0.]])]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nTesting Accuracy: 1.0\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nTrajectory: 14 [array([[ 0.],\n [-0.]])]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nTesting Accuracy: 0.0\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nTrajectory: 15 [array([[-0.],\n [ 0.]])]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nTesting Accuracy: 1.0\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nTrajectory: 16 [array([[ 0.],\n [-0.]])]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nTesting Accuracy: 1.0\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nTrajectory: 17 [array([[-0.],\n [ 0.]])]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nTesting Accuracy: 0.0\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nTrajectory: 18 [array([[-0.],\n [-0.]])]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nTesting Accuracy: 1.0\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nTrajectory: 19 [array([[ 0.],\n [-0.]])]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nTesting Accuracy: 0.0\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nTrajectory: 20 [array([[-0.],\n [ 0.]])]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\nTesting Accuracy: 1.0\n[[-0.66370539 0.23084568]]\n[[-0.]\n [ 0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nTrajectory: 21 [array([[ 0.],\n [-0.]])]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nTesting Accuracy: 0.0\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nTrajectory: 22 [array([[-0.],\n [-0.]])]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nTesting Accuracy: 0.0\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nTrajectory: 23 [array([[-0.],\n [-0.]])]\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\nTesting Accuracy: 0.0\n[[-0.66370539 0.23084568]]\n[[-0.]\n [-0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nTrajectory: 24 [array([[ 0.],\n [-0.]])]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nTesting Accuracy: 0.0\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nTrajectory: 25 [array([[ 0.],\n [-0.]])]\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nTesting Accuracy: 0.0\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\n(50, 1)\n[[-0.66370539 0.23084568]]\n[[ 0.]\n [-0.]]\nTrajectory: 26 [array([[ 0.],\n 
[-0.]])]\n(output trimmed: for trajectories 27 through 74 the cell repeatedly printed the same learned weights [[-0.66370539  0.23084568]] and a near-zero 2x1 prediction before and after each test, with a per-trajectory Testing Accuracy of either 0.0 or 1.0)\nFinal Testing Accuracy: 0.413333333333\n" ], [ "# Confusion matrix code\n# Relies on names defined in earlier cells: multilayer_perceptron, x, y,\n# weights, biases, cost, train_step, total_batch, display_step and the\n# train/test arrays, plus `import numpy as np` and `import sklearn as sk`.\n\npred = multilayer_perceptron(x, weights, biases)\ncorrect_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, \"float\"))\n\nwith tf.Session() as sess:\n    init = tf.initialize_all_variables()\n    sess.run(init)\n    for epoch in xrange(150):\n        avg_cost = 0.  # reset the running average every epoch\n        for i in xrange(total_batch):\n            train_step.run(feed_dict={x: train_arrays, y: train_labels})\n            avg_cost += sess.run(cost, feed_dict={x: train_arrays, y: train_labels}) / total_batch\n        if epoch % display_step == 0:\n            print \"Epoch:\", '%04d' % (epoch+1), \"cost=\", \"{:.9f}\".format(avg_cost)\n\n    # metrics\n    y_p = tf.argmax(pred, 1)\n    val_accuracy, y_pred = sess.run([accuracy, y_p], feed_dict={x: test_arrays, y: test_label})\n\n    print \"validation accuracy:\", val_accuracy\n    y_true = np.argmax(test_label, 1)\n    print \"Precision\", sk.metrics.precision_score(y_true, y_pred)\n    print \"Recall\", sk.metrics.recall_score(y_true, y_pred)\n    print \"f1_score\", sk.metrics.f1_score(y_true, y_pred)\n    print \"confusion_matrix\"\n    print sk.metrics.confusion_matrix(y_true, y_pred)\n    fpr, tpr, thresholds = sk.metrics.roc_curve(y_true, y_pred)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ec96ab23ef31e1c155880b91b8314580a47f0153
4,840
ipynb
Jupyter Notebook
research/VOT.ipynb
d61h6k4/vot
179bfc4cb7c41d48c115efb5bb3112d0ba860d89
[ "MIT" ]
null
null
null
research/VOT.ipynb
d61h6k4/vot
179bfc4cb7c41d48c115efb5bb3112d0ba860d89
[ "MIT" ]
null
null
null
research/VOT.ipynb
d61h6k4/vot
179bfc4cb7c41d48c115efb5bb3112d0ba860d89
[ "MIT" ]
null
null
null
39.349593
222
0.589669
[ [ [ "<a href=\"https://colab.research.google.com/github/d61h6k4/vot/blob/master/research/VOT.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# VOT - Visual object tracking\n\n## Introduction\n[Introduction to Motion Estimation with Optical Flow](https://nanonets.com/blog/optical-flow/)\n\nIntroduction blog post where you can find some definition of optical flow and \ntypes of optical flows (dense and sparse). Here you cand find some classical algorithm for solve this problems with open cv.\n\n\n[DL for Optical Flow](https://habr.com/ru/company/ods/blog/446726/) (in russian)\n\nBlog post from ODS with history of the problem and examles of algorithms with short description. Here I read that classical approach to this problem is better than DL one.\n\n[Optical Flow (GPU)](https://www.compression.ru/video/seminar/slides/2008_OpticalFlowOnGPU.pdf) (in russian)\n\nSlides where collected different classical computer vision algos. Here I found that best one is \"Phase-based method\".\n\n[Real time phase-based Optical Flow on the GPU](https://gbiomed.kuleuven.be/english/research/50000666/50000669/50488669/neuro_research/neuro_research_mvanhulle/comp_pdf/GPU.pdf)\n\n48 FPS for 640x512 images\n\n\n[Benchmarks](https://github.com/foolwood/benchmark_results)\n\nRepository where collected papers about VOT.\nHere you can find some tree of differrent algos.\n\nFirst recommended paper is [VOT2019](http://openaccess.thecvf.com/content_ICCVW_2019/papers/VOT/Kristan_The_Seventh_Visual_Object_Tracking_VOT2019_Challenge_Results_ICCVW_2019_paper.pdf)\nHere you can find information about VOT challenge, that means *dataset*, *metrics* and *soltions*\n\n[Fast Online Object Tracking and Segmentation: A Unifying Approach](http://www.robots.ox.ac.uk/~qwang/SiamMask/)\n\n35 FPS and with tracking you get real-time segmentation.\n\nYou can find that in VOT2019-RT (real time) won SiamMargin, I didn't find their paper but from slides I think SiamMargin is modifed SiamRPN++\n\nSiamMargin, SiamRPN++ was made by research from SenseTime. I found his repo with all algos https://github.com/STVIR/pysot\n\n\n## Implementation\nMany papers implemented on Python and when it DL part it's ok, cause we have framework which can convert DAG in efficient program (e.g. Tensorflow serving)\nbut what we can do with preprocessing.\n\nI found Google's project [MediaPipe](https://mediapipe.readthedocs.io/en/latest/index.html), here you can find how they use ffmpeg and other stuff.\n\n\n", "_____no_output_____" ] ], [ [ "### ", "_____no_output_____" ] ], [ [ "[Papers with Code](https://paperswithcode.com/task/visual-object-tracking) collections of papers about VOT with code\n\n[Blog post](https://towardsdatascience.com/how-i-sort-of-learned-computer-vision-in-a-month-c3faec83b3d6) where you can find links to videos with explanation of some classical algorithms\n\n[What is mAP?](https://towardsdatascience.com/evaluating-performance-of-an-object-detection-model-137a349c517b)", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
ec96b0fc11591fec48e57ec1af29e671ecfc7811
5,660
ipynb
Jupyter Notebook
Python/Python_Consolidate_Excel_files.ipynb
krajai/testt
3aaf5fd7fe85e712c8c1615852b50f9ccb6737e5
[ "BSD-3-Clause" ]
1
2022-03-24T07:46:45.000Z
2022-03-24T07:46:45.000Z
Python/Python_Consolidate_Excel_files.ipynb
PZawieja/awesome-notebooks
8ae86e5689749716e1315301cecdad6f8843dcf8
[ "BSD-3-Clause" ]
null
null
null
Python/Python_Consolidate_Excel_files.ipynb
PZawieja/awesome-notebooks
8ae86e5689749716e1315301cecdad6f8843dcf8
[ "BSD-3-Clause" ]
null
null
null
23.881857
291
0.559541
[ [ [ "<img width=\"10%\" alt=\"Naas\" src=\"https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160\"/>", "_____no_output_____" ], [ "# Python - Consolidate Excel files\n<a href=\"https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Python/Python_Consolidate_Excel_files.ipynb\" target=\"_parent\"><img src=\"https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg\"/></a>", "_____no_output_____" ], [ "**Tags:** #python #consolidate #files #productivity #snippet #operations #excel", "_____no_output_____" ], [ "**Author:** [Florent Ravenel](https://www.linkedin.com/in/ACoAABCNSioBW3YZHc2lBHVG0E_TXYWitQkmwog/)", "_____no_output_____" ], [ "The objective of this notebook is to consolidate multiple Excel files (.xlsx) into one. ", "_____no_output_____" ], [ "## Input ", "_____no_output_____" ], [ "### Import library\nImport the necessary libraries: os and pandas ", "_____no_output_____" ] ], [ [ "import os\nimport pandas as pd", "_____no_output_____" ] ], [ [ "### Variables", "_____no_output_____" ] ], [ [ "# Output\nexcel_output = 'concatenate.xlsx'", "_____no_output_____" ] ], [ [ "## Model\nUse a for loop to \n- List all the files in the current directory with os.listdir().\n- Filter files with the .endswith(‘.xlsx’) method.\n- Make sure the files will be stored into a list called my_list and then combined with pd.concat()\n\nThen\n- Return a dataframe and name it df_concat. ", "_____no_output_____" ] ], [ [ "files = os.listdir()\nmy_list = []\nfor file in files:\n if file.endswith('.xlsx'):\n df = pd.read_excel(file)\n my_list.append(df)\n\ndf_concat = pd.concat(my_list, axis=0)", "_____no_output_____" ] ], [ [ "## Output\nExport your dataframe to an Excel file.", "_____no_output_____" ] ], [ [ "df_concat.to_excel(excel_output, index=False)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
ec96cb747206fc1fd871af1d99468dd63568f9d8
16,061
ipynb
Jupyter Notebook
notebooks/process/process_citysummary.ipynb
aschwa/happy_city_tweets
80a240492860a6cda48f75b825cb0b54cc369c8b
[ "MIT" ]
null
null
null
notebooks/process/process_citysummary.ipynb
aschwa/happy_city_tweets
80a240492860a6cda48f75b825cb0b54cc369c8b
[ "MIT" ]
null
null
null
notebooks/process/process_citysummary.ipynb
aschwa/happy_city_tweets
80a240492860a6cda48f75b825cb0b54cc369c8b
[ "MIT" ]
null
null
null
56.552817
2,127
0.51678
[ [ [ "%load_ext autoreload\n%autoreload 2", "_____no_output_____" ], [ "import json\nimport os\nimport pandas as pd\nfrom datetime import datetime\nfrom pathlib import Path\nimport numpy as np\nfrom collections import Counter", "_____no_output_____" ], [ "tweet_dir= Path(\"../data/processed/park_user_tweets_0530/\")\n#city = Path(\"../data/processed/tweets/CO_Denver_0820000.json\")", "_____no_output_____" ], [ "# Load tweets every tweets is labeled with a park name or nan\ndef build_info_table(tweet_dir, old_index_values = []):\n city_info = []\n for city in tweet_dir.glob(\"*.json\"):\n city_name = city.stem\n #city_summary.index.values \n if city_name not in old_index_values:\n print(\"{}\".format(city_name))\n with open(city, 'r') as f:\n park_user_tweets = json.load(f)\n n_park_users = len(park_user_tweets)\n total_tweets = 0\n park_tweets = 0\n parks_visisted = []\n for user_tweets in park_user_tweets.values():\n total_tweets += len(user_tweets)\n for tweet in user_tweets:\n if not pd.isnull(tweet['ParkID']):\n park_tweets+=1\n parks_visited.append(tweet['Park_Name'])\n users = list(set([tweet['user']['id_str'] for tweet in tweets]))\n n_users = len(users)\n n_park_tweets = len(park_tweets)\n n_parks_w_tweets = len(set(park_names))\n park_users = list(set([tweet['user']['id_str'] for tweet in park_tweets]))\n n_park_users = len(park_users)\n summary = {\"city_file\":city,\n \"city\": city_name,\n \"tweets\":n_tweets,\n \"users\":n_users,\n \"park_tweets\":n_park_tweets,\n \"park_users\":n_park_users,\n \"parks_visited\":n_parks_w_tweets}\n city_info.append(summary)\n return city_table\n\ndef process_city_table(new_table, old_table):\n new_table = pd.DataFrame.from_records(new_table, index = 'city').sort_values('tweets',ascending=False)\n city_table = pd.concat([old_table, new_table])\n #city_table.to_csv('./results/city_summary.csv')\n return city_table\n\ndef build_user_table(tweet_dir):\n city_info = {}\n for city in tweet_dir.glob(\"*.json\"):\n city_name = city.stem\n print(\"{}\".format(city_name))\n with open(city, 'r') as f:\n park_user_tweets = json.load(f)\n n_park_users = len(park_user_tweets)\n total_tweets = 0\n park_tweets = 0\n parks_visited = []\n for user_tweets in park_user_tweets.values():\n total_tweets += len(user_tweets)\n for tweet in user_tweets:\n if not pd.isnull(tweet['ParkID']):\n park_tweets+=1\n parks_visited.append(tweet['Park_Name'])\n n_parks_visisted = len(set(parks_visited))\n city_info[city_name] = {\"Park Visitors\":n_park_users,\"Total Tweets\":total_tweets,\"Park Tweets\":park_tweets,\"Parks Visited\":n_parks_visisted}\n return city_info", "_____no_output_____" ], [ "tweet_dir= Path(\"../data/processed/park_user_tweets_0530/\")\ncity_info = build_user_table(tweet_dir)", "_____no_output_____" ], [ "city_info", "_____no_output_____" ], [ "order = ['Total Tweets','Park Tweets','% Tweets in Park','Park Visitors', 'Parks Visited']\ndf = pd.DataFrame.from_dict(city_info, orient='index').sort_values(by='Total Tweets', ascending=False)\ndf['% Tweets in Park'] = np.round(df['Park Tweets']/df['Total Tweets'],3)*100\ndf[order].head()", "_____no_output_____" ], [ "df['city'] = [x[3:-8].replace(\"_\",\" \") for x in df.index.values]", "_____no_output_____" ], [ "pop = pd.read_csv(\"../data/processed/city_info.csv\")\npop.rename({\"NAME\":\"city\"}, axis=1, inplace=True)\nsummary = pd.merge(df,pop[['city','POP2012']],on='city').set_index('city')\nsummary['Tweets per capita'] = np.round(summary['Total Tweets'] / summary['POP2012'],2)\nsummary = 
summary.drop('POP2012',axis=1)\norder = ['Total Tweets','Park Tweets','% Tweets in Park','Park Visitors', 'Parks Visited', 'Tweets per capita']\nsummary['Park Visitors'] = summary['Park Visitors'].apply(lambda x : \"{:,}\".format(x))\nsummary['Total Tweets'] = summary['Total Tweets'].apply(lambda x : \"{:,}\".format(x))\nsummary['Parks Visited'] = summary['Parks Visited'].apply(lambda x : \"{:,}\".format(x))\nsummary['Park Tweets'] = summary['Park Tweets'].apply(lambda x : \"{:,}\".format(x))\nsummary = summary[order]\n", "_____no_output_____" ], [ "print(summary.to_latex())", "\\begin{tabular}{lllrllr}\n\\toprule\n{} & Total Tweets & Park Tweets & \\% Tweets in Park & Park Visitors & Parks Visited & Tweets per capita \\\\\ncity & & & & & & \\\\\n\\midrule\nNew York & 2,892,512 & 213,813 & 7.4 & 113,702 & 1,880 & 0.35 \\\\\nLos Angeles & 1,215,288 & 53,988 & 4.4 & 36,271 & 540 & 0.32 \\\\\nPhiladelphia & 1,166,125 & 64,857 & 5.6 & 26,287 & 482 & 0.76 \\\\\nChicago & 1,130,611 & 66,100 & 5.8 & 36,919 & 872 & 0.41 \\\\\nHouston & 821,433 & 39,581 & 4.8 & 13,464 & 501 & 0.38 \\\\\nSan Antonio & 589,595 & 23,566 & 4.0 & 12,763 & 268 & 0.43 \\\\\nWashington & 570,157 & 74,937 & 13.1 & 41,062 & 370 & 0.92 \\\\\nBoston & 547,625 & 52,689 & 9.6 & 23,479 & 682 & 0.87 \\\\\nSan Diego & 491,219 & 36,080 & 7.3 & 22,269 & 406 & 0.37 \\\\\nDallas & 490,918 & 21,787 & 4.4 & 12,211 & 346 & 0.40 \\\\\nSan Francisco & 486,782 & 59,412 & 12.2 & 36,175 & 407 & 0.59 \\\\\nAustin & 449,853 & 23,547 & 5.2 & 14,689 & 289 & 0.55 \\\\\nBaltimore & 333,734 & 12,965 & 3.9 & 5,135 & 260 & 0.53 \\\\\nFort Worth & 320,178 & 9,664 & 3.0 & 4,278 & 239 & 0.42 \\\\\nPhoenix & 268,455 & 12,041 & 4.5 & 7,566 & 189 & 0.18 \\\\\nColumbus & 251,573 & 8,884 & 3.5 & 4,340 & 328 & 0.31 \\\\\nSan Jose & 234,234 & 8,263 & 3.5 & 4,517 & 314 & 0.24 \\\\\nIndianapolis & 225,931 & 11,560 & 5.1 & 5,660 & 183 & 0.27 \\\\\nCharlotte & 218,310 & 8,039 & 3.7 & 3,868 & 190 & 0.29 \\\\\nSeattle & 201,533 & 12,758 & 6.3 & 7,739 & 373 & 0.32 \\\\\nDetroit & 195,572 & 7,885 & 4.0 & 3,819 & 234 & 0.28 \\\\\nJacksonville & 194,777 & 6,219 & 3.2 & 3,218 & 261 & 0.23 \\\\\nMemphis & 137,222 & 5,614 & 4.1 & 3,112 & 163 & 0.21 \\\\\nDenver & 131,240 & 6,243 & 4.8 & 3,902 & 279 & 0.21 \\\\\nEl Paso & 96,015 & 2,722 & 2.8 & 1,397 & 180 & 0.14 \\\\\n\\bottomrule\n\\end{tabular}\n\n" ], [ "summary.to_csv('./results/summary_tables/summary_0605.csv')\n#df.to_html('table.html')", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ec96ce31d251292341c35e9e061486212c94b00c
3,891
ipynb
Jupyter Notebook
jupyter-notebooks/0-test-DAGMC-OpenMC/0_testing_dagmc_openmc.ipynb
Hazunanafaru/iter-tritium-breeding-xgboost
917fa046eb2364e300ac19a873d6ebd57e349312
[ "MIT" ]
2
2021-08-06T12:50:17.000Z
2021-10-16T08:34:16.000Z
jupyter-notebooks/0-test-DAGMC-OpenMC/0_testing_dagmc_openmc.ipynb
Hazunanafaru/iter-tritium-breeding-xgboost
917fa046eb2364e300ac19a873d6ebd57e349312
[ "MIT" ]
1
2021-09-25T18:02:16.000Z
2021-12-19T07:58:51.000Z
jupyter-notebooks/0-test-DAGMC-OpenMC/0_testing_dagmc_openmc.ipynb
Hazunanafaru/iter-tritium-breeding-xgboost
917fa046eb2364e300ac19a873d6ebd57e349312
[ "MIT" ]
null
null
null
31.379032
210
0.584683
[ [ [ "# Testing DAGMC and OpenMC", "_____no_output_____" ] ], [ [ "import openmc\nimport json\nimport os\n\n#MATERIALS#\n\nbreeder_material = openmc.Material(name=\"li4sio4\") #Pb84.2Li15.8 with enrichment of Li6\nenrichment_fraction = 0.90\nbreeder_material.add_element('Pb', 84.2,'ao')\nbreeder_material.add_nuclide('Li6', enrichment_fraction*15.8, 'ao')\nbreeder_material.add_nuclide('Li7', (1.0-enrichment_fraction)*15.8, 'ao')\nbreeder_material.set_density('g/cm3', 11.)\n\ncopper = openmc.Material(name='copper')\ncopper.set_density('g/cm3', 8.5)\ncopper.add_element('Cu', 1.0)\n\neurofer = openmc.Material(name='eurofer')\neurofer.set_density('g/cm3', 7.75)\neurofer.add_element('Fe', 89.067, percent_type='wo')\neurofer.add_element('C', 0.11, percent_type='wo')\neurofer.add_element('Mn', 0.4, percent_type='wo')\neurofer.add_element('Cr', 9.0, percent_type='wo')\neurofer.add_element('Ta', 0.12, percent_type='wo')\neurofer.add_element('W', 1.1, percent_type='wo')\neurofer.add_element('N', 0.003, percent_type='wo')\neurofer.add_element('V', 0.2, percent_type='wo')\n\nmats = openmc.Materials([breeder_material, eurofer, copper])\n\n\n#GEOMETRY#\n\ndag_universe = openmc.DAGMCUniverse(filename=\"dagmc.h5m\")\ngeom = openmc.Geometry(root=dag_universe)\n\n\n\n#SIMULATION SETTINGS#\n\n# Instantiate a Settings object\nsett = openmc.Settings()\nbatches = 10\nsett.batches = batches\nsett.inactive = 0\nsett.particles = 500\nsett.run_mode = 'fixed source'\nsett.dagmc = True\n\n# Create a DT point source\nsource = openmc.Source()\nsource.space = openmc.stats.Point((0,0,0))\nsource.angle = openmc.stats.Isotropic()\nsource.energy = openmc.stats.Discrete([14e6], [1])\nsett.source = source\n\n\ntallies = openmc.Tallies()\n\n#added a cell tally for tritium production\ncell_filter = openmc.CellFilter(1) #breeder_material is in cell number 1\ntbr_tally = openmc.Tally(2,name='TBR')\ntbr_tally.filters = [cell_filter]\ntbr_tally.scores = ['205'] # MT 205 is the (n,Xt) reaction where X is a wildcard, if MT 105 or (n,t) then some tritium production will be missed, for example (n,nt) which happens in Li7 would be missed\ntallies.append(tbr_tally)\n\n# Run OpenMC!\nmodel = openmc.model.Model(geom, mats, sett, tallies)\nmodel.run()\n\n# open the results file\nsp = openmc.StatePoint('statepoint.'+str(batches)+'.h5')\n\n# access the tally\ntbr_tally = sp.get_tally(name='TBR')\ndf = tbr_tally.get_pandas_dataframe()\n\ntbr_tally_result = df['mean'].sum()\n\n# print result\nprint('The tritium breeding ratio was found, TBR = ',tbr_tally_result)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code" ] ]
ec96d5c1320709ff8c3b3c078aa77ee958a89a7e
8,104
ipynb
Jupyter Notebook
code/model_zoo/tensorflow_ipynb/softmax-regression.ipynb
wpsliu123/Sebastian_Raschka-Deep-Learning-Book
fc57a58b46921f057248bd8fd0f258e952a3cddb
[ "MIT" ]
3
2019-02-19T16:42:28.000Z
2020-10-11T05:16:12.000Z
code/model_zoo/tensorflow_ipynb/softmax-regression.ipynb
bharat3012/deep-learning-book
839e076c5098084512c947a38878a9a545d9a87d
[ "MIT" ]
null
null
null
code/model_zoo/tensorflow_ipynb/softmax-regression.ipynb
bharat3012/deep-learning-book
839e076c5098084512c947a38878a9a545d9a87d
[ "MIT" ]
2
2020-09-07T12:43:33.000Z
2021-06-11T12:10:09.000Z
38.961538
479
0.528875
[ [ [ "*Accompanying code examples of the book \"Introduction to Artificial Neural Networks and Deep Learning: A Practical Guide with Applications in Python\" by [Sebastian Raschka](https://sebastianraschka.com). All code examples are released under the [MIT license](https://github.com/rasbt/deep-learning-book/blob/master/LICENSE). If you find this content useful, please consider supporting the work by buying a [copy of the book](https://leanpub.com/ann-and-deeplearning).*\n \nOther code examples and content are available on [GitHub](https://github.com/rasbt/deep-learning-book). The PDF and ebook versions of the book are available through [Leanpub](https://leanpub.com/ann-and-deeplearning).", "_____no_output_____" ] ], [ [ "%load_ext watermark\n%watermark -a 'Sebastian Raschka' -v -p tensorflow", "Sebastian Raschka \n\nCPython 3.6.1\nIPython 6.0.0\n\ntensorflow 1.2.0\n" ] ], [ [ "# Model Zoo -- Softmax Regression", "_____no_output_____" ], [ "Implementation of softmax regression (multinomial logistic regression).", "_____no_output_____" ] ], [ [ "import tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\n\n\n##########################\n### DATASET\n##########################\n\nmnist = input_data.read_data_sets(\"./\", one_hot=True)\n\n\n##########################\n### SETTINGS\n##########################\n\n# Hyperparameters\nlearning_rate = 0.5\ntraining_epochs = 30\nbatch_size = 256\n\n# Architecture\nn_features = 784\nn_classes = 10\n\n\n##########################\n### GRAPH DEFINITION\n##########################\n\ng = tf.Graph()\nwith g.as_default():\n\n # Input data\n tf_x = tf.placeholder(tf.float32, [None, n_features])\n tf_y = tf.placeholder(tf.float32, [None, n_classes])\n\n # Model parameters\n params = {\n 'weights': tf.Variable(tf.zeros(shape=[n_features, n_classes],\n dtype=tf.float32), name='weights'),\n 'bias': tf.Variable([[n_classes]], dtype=tf.float32, name='bias')}\n\n # Softmax regression\n linear = tf.matmul(tf_x, params['weights']) + params['bias']\n pred_proba = tf.nn.softmax(linear, name='predict_probas')\n \n # Loss and optimizer\n cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(\n logits=linear, labels=tf_y), name='cost')\n optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)\n train = optimizer.minimize(cost, name='train')\n\n # Class prediction\n pred_labels = tf.argmax(pred_proba, 1, name='predict_labels')\n correct_prediction = tf.equal(tf.argmax(tf_y, 1), pred_labels)\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name='accuracy')\n\n \n##########################\n### TRAINING & EVALUATION\n##########################\n\nwith tf.Session(graph=g) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch in range(training_epochs):\n avg_cost = 0.\n total_batch = mnist.train.num_examples // batch_size\n\n for i in range(total_batch):\n batch_x, batch_y = mnist.train.next_batch(batch_size)\n _, c = sess.run(['train', 'cost:0'], feed_dict={tf_x: batch_x,\n tf_y: batch_y})\n avg_cost += c\n \n train_acc = sess.run('accuracy:0', feed_dict={tf_x: mnist.train.images,\n tf_y: mnist.train.labels})\n valid_acc = sess.run('accuracy:0', feed_dict={tf_x: mnist.validation.images,\n tf_y: mnist.validation.labels}) \n \n print(\"Epoch: %03d | AvgCost: %.3f\" % (epoch + 1, avg_cost / (i + 1)), end=\"\")\n print(\" | Train/Valid ACC: %.3f/%.3f\" % (train_acc, valid_acc))\n \n test_acc = sess.run(accuracy, feed_dict={tf_x: mnist.test.images,\n tf_y: mnist.test.labels})\n print('Test 
ACC: %.3f' % test_acc)", "Extracting ./train-images-idx3-ubyte.gz\nExtracting ./train-labels-idx1-ubyte.gz\nExtracting ./t10k-images-idx3-ubyte.gz\nExtracting ./t10k-labels-idx1-ubyte.gz\nEpoch: 001 | AvgCost: 0.476 | Train/Valid ACC: 0.903/0.909\nEpoch: 002 | AvgCost: 0.339 | Train/Valid ACC: 0.911/0.918\nEpoch: 003 | AvgCost: 0.320 | Train/Valid ACC: 0.915/0.922\nEpoch: 004 | AvgCost: 0.309 | Train/Valid ACC: 0.918/0.923\nEpoch: 005 | AvgCost: 0.301 | Train/Valid ACC: 0.918/0.922\nEpoch: 006 | AvgCost: 0.296 | Train/Valid ACC: 0.919/0.922\nEpoch: 007 | AvgCost: 0.291 | Train/Valid ACC: 0.921/0.925\nEpoch: 008 | AvgCost: 0.287 | Train/Valid ACC: 0.922/0.925\nEpoch: 009 | AvgCost: 0.286 | Train/Valid ACC: 0.922/0.926\nEpoch: 010 | AvgCost: 0.283 | Train/Valid ACC: 0.923/0.926\nEpoch: 011 | AvgCost: 0.282 | Train/Valid ACC: 0.923/0.924\nEpoch: 012 | AvgCost: 0.278 | Train/Valid ACC: 0.925/0.927\nEpoch: 013 | AvgCost: 0.278 | Train/Valid ACC: 0.925/0.928\nEpoch: 014 | AvgCost: 0.276 | Train/Valid ACC: 0.925/0.925\nEpoch: 015 | AvgCost: 0.276 | Train/Valid ACC: 0.926/0.928\nEpoch: 016 | AvgCost: 0.274 | Train/Valid ACC: 0.927/0.927\nEpoch: 017 | AvgCost: 0.270 | Train/Valid ACC: 0.927/0.925\nEpoch: 018 | AvgCost: 0.273 | Train/Valid ACC: 0.927/0.930\nEpoch: 019 | AvgCost: 0.270 | Train/Valid ACC: 0.927/0.929\nEpoch: 020 | AvgCost: 0.268 | Train/Valid ACC: 0.927/0.927\nEpoch: 021 | AvgCost: 0.268 | Train/Valid ACC: 0.927/0.926\nEpoch: 022 | AvgCost: 0.270 | Train/Valid ACC: 0.928/0.926\nEpoch: 023 | AvgCost: 0.268 | Train/Valid ACC: 0.927/0.926\nEpoch: 024 | AvgCost: 0.266 | Train/Valid ACC: 0.929/0.926\nEpoch: 025 | AvgCost: 0.261 | Train/Valid ACC: 0.927/0.926\nEpoch: 026 | AvgCost: 0.269 | Train/Valid ACC: 0.929/0.927\nEpoch: 027 | AvgCost: 0.265 | Train/Valid ACC: 0.928/0.928\nEpoch: 028 | AvgCost: 0.261 | Train/Valid ACC: 0.929/0.928\nEpoch: 029 | AvgCost: 0.266 | Train/Valid ACC: 0.930/0.926\nEpoch: 030 | AvgCost: 0.261 | Train/Valid ACC: 0.929/0.924\nTest ACC: 0.925\n" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ] ]
ec96d691e06448a45fc6e27687ad28ccbd47c2db
352,997
ipynb
Jupyter Notebook
Notebooks/11.KernelRidgeRegression.ipynb
MatSciEd/Machine-Learning
3f781f177f972bc56f0cad0e63bd8502d26352fe
[ "Apache-2.0" ]
1
2022-01-14T01:53:46.000Z
2022-01-14T01:53:46.000Z
Notebooks/11.KernelRidgeRegression.ipynb
MatSciEd/Machine-Learning
3f781f177f972bc56f0cad0e63bd8502d26352fe
[ "Apache-2.0" ]
null
null
null
Notebooks/11.KernelRidgeRegression.ipynb
MatSciEd/Machine-Learning
3f781f177f972bc56f0cad0e63bd8502d26352fe
[ "Apache-2.0" ]
4
2022-01-18T15:15:01.000Z
2022-02-24T18:34:50.000Z
663.528195
122,532
0.9433
[ [ [ "# 11. Kernel Ridge Regression\n\n[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/rhennig/EMA6938/blob/main/Notebooks/11.KernelRidgeRegression.ipynb)\n\n(Source: https://github.com/marcosdelcueto/Tutorial_KRR)\n\nThe previous notebooks on Gaussian Kernel Regression applied the kernel trick for regression together with a radial basis function (RBF) kernel. The kernel trick enable kernel regression to learn a linear function in the space corresponding to the kernel. For non-linear kernels, this corresponds to a non-linear function in the original space of the data.\n\nIn this notebook, we explore **Kernel Ridge Regression (KRR)**, which adds $L_2$ regularization to kernel regression.\n\nKRR is a powerful machine learning technique because it results in a convex optimization problem that is straightforward to train and only requires few hyperparameters.", "_____no_output_____" ], [ "In this tutorial, we will use Python’s scikit-learn library, which provides easy access to kernel ridge regression. We will cover:\n\n- Linear regression\n- When linear regression fails\n- Kernel Ridge Regression to the rescue\n- Regularization parameter\n- Optimized KRR", "_____no_output_____" ], [ "## Linear Regression\n\nA large number of processes in nature follow linear relationships, and it is a good place to start when trying to fit data to a function. If we have a target property $y$ (dependent variable) that depends on the values of a feature $x$ (independent variable). They follow a linear relationship if $y$ can be approximated as\n$$\ny = a + b x,\n$$\nwhere $a$ represents the $y$-intercept and $b$ is the slope of the line. Note that $x$ and $y$ can be vectors of any dimensionality. For simplicity, we will consider here that both $x$ and $y$ are one-dimensional. 
This means that we have a single target property that depends on the value of just one feature.", "_____no_output_____" ] ], [ [ "# Import libraries\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn import linear_model\nfrom sklearn.metrics import mean_squared_error, r2_score\n\nplt.rc('xtick', labelsize=16) \nplt.rc('ytick', labelsize=16)", "_____no_output_____" ], [ "# Initialize random seed\nnp.random.seed(19)\n\n# Create dataset with 11 points following a quasi-linear relation in interval [-5,5]\nx = np.linspace(-5, 5, 11, endpoint=True)\ny = x + (np.random.rand(11)-0.5)*4\nfor i in range(len(x)):\n    print(x[i], '\\t ', y[i])\n\n# Create list with 101 points in interval x:[-5,5]\nx_pred = np.linspace(-5, 5, 101, endpoint=True)", "-5.0 \t -6.609865593002194\n-4.0 \t -2.955001133300575\n-3.0 \t -4.012248107334692\n-2.0 \t -3.447473250094722\n-1.0 \t -1.6742137469162741\n0.0 \t -1.6680017399724285\n1.0 \t 1.6879083251218665\n2.0 \t 3.22637519264663\n3.0 \t 4.930967658176856\n4.0 \t 4.54264293918741\n5.0 \t 3.8636930241887213\n" ], [ "# Transform lists to np arrays\nx = np.array(x).reshape(-1, 1)\nx_pred = np.array(x_pred).reshape(-1, 1)\n\n# Do linear regression using database with 11 points\nregr = linear_model.LinearRegression()\nregr.fit(x, y)\n\n# Calculate value of linear regressor at 101 points in interval x:[-5,5]\ny_pred = regr.predict(x_pred)\n\n# Calculate value of linear regressor at 11 points in interval x:[-5,5]\nnew_y = regr.predict(x)\n\n# Calculate rmse value\nrmse = np.sqrt(mean_squared_error(new_y, y))\n\n# Set axes and labels\nfig = plt.figure(figsize = (8,6))\nax = fig.add_subplot()\nax.set_xlabel('x', fontsize=18)\nax.set_ylabel('y', fontsize=18)\nax.annotate(u'$RMSE$ = %.2f' % rmse, xy=(0.15,0.85), xycoords='axes fraction', fontsize=20)\nplt.plot(x_pred, y_pred, color='orange', linestyle='solid', linewidth=2)\nplt.scatter(x, y, color='purple')\nplt.show()", "_____no_output_____" ] ] ], [ [ "## When Linear Regression Fails\n\nLinear regression is ubiquitous and it should be a first go-to when trying to fit data. 
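\n\nFor reference, the straight-line fit minimizes $\sum_i (y_i - a - b x_i)^2$, whose closed-form solution is $b = \mathrm{cov}(x,y)/\mathrm{var}(x)$ and $a = \bar{y} - b\bar{x}$; this standard least-squares result is effectively what scikit-learn's `LinearRegression.fit` computed above.\n\n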
For the next example, we have generated a larger database of 300 points, in which $y$ follows a non-linear trigonometric relation plus Gaussian noise:\n$$\ny = \cos(x) - 2\sin(x) + 3\cos(2x) + \epsilon, \qquad \epsilon \sim \mathcal{N}(0,1).\n$$", "_____no_output_____" ] ], [ [ "# Import libraries\nfrom sklearn.model_selection import train_test_split\n\n# Initialize random seed\nnp.random.seed(2020)\n\n# Generate a data set for machine learning\nnp.random.seed(seed=5)\nx = np.linspace(-2, 2, 300)\nx = x + np.random.normal(0, .3, x.shape)\ny = np.cos(x) - 2*np.sin(x) + 3*np.cos(x*2) + np.random.normal(0, 1, x.shape)\n\n# Create list with 1001 points in interval of x data\nx_pred = np.linspace(np.amin(x), np.amax(x), 1001, endpoint=True)\nx_pred = np.array(x_pred).reshape(-1, 1)\n\n# Split the dataset into 80% for training and 20% for testing\nx = x.reshape((x.size,1))\nx_train,x_test,y_train,y_test = train_test_split(x, y, train_size=0.8, shuffle=True)\n\n# Plot the training and testing dataset\nfig,ax=plt.subplots(figsize=(8,8))\nax.scatter(x_train, y_train, color='blue', label='Training')\nax.scatter(x_test, y_test, color='orange', label='Testing')\nax.set_xlabel('X Values',fontsize=20)\nax.set_ylabel('cos(x)-2sin(x)+3cos(2x)',fontsize=20)\nax.set_title('Training and testing data',fontsize=25)\nplt.legend(fontsize=20)\nplt.show()", "_____no_output_____" ], [ "# Do linear regression using the training set\nregr = linear_model.LinearRegression()\nregr.fit(x_train, y_train)\n\n# Calculate value of linear regressor at 1001 points across the data interval\ny_pred = regr.predict(x_pred)\n\n# Calculate training and testing errors\ntraining_predictions = regr.predict(x_train)\ntraining_rmse = np.sqrt(mean_squared_error(y_train, training_predictions))\n\ntesting_predictions = regr.predict(x_test)\ntesting_rmse = np.sqrt(mean_squared_error(y_test, testing_predictions))\n\n# Set axes and labels\nfig = plt.figure(figsize = (12,8))\nax = fig.add_subplot()\nax.set_xlabel('x', fontsize=18)\nax.set_ylabel('y', fontsize=18)\nax.annotate(u'$Training RMSE$ = %.2f' % training_rmse, xy=(0.2,0.2), xycoords='axes fraction', fontsize=20)\nax.annotate(u'$Testing RMSE$ = %.2f' % testing_rmse, xy=(0.2,0.1), xycoords='axes fraction', fontsize=20)\n\nplt.plot(x_pred, y_pred, color='black', linestyle='solid', linewidth=2)\nplt.scatter(x_train, y_train, color='blue', label='Training')\nplt.scatter(x_test, y_test, color='orange', label='Testing')\nplt.show()", "_____no_output_____" ] ], [ [ "We clearly observe that the linear regression line (black) fails to describe the trend followed by the data points. This also results in much larger RMSE values.", "_____no_output_____" ], [ "## Kernel Ridge Regression to the rescue\n\nLinear regression is often insufficient to model complex data. When linear regression fails, we should use non-linear regression methods that allow greater flexibility.\n\nKRR uses the kernel trick to transform our dataset to the kernel space and then performs a linear regression in kernel-space. Therefore, one should always **choose the appropriate kernel for a problem**.\n\nIn this case, we will be using a polynomial kernel. The polynomial kernel for two vectors (two points in our one-dimensional example) ${\bf x}_1$ and ${\bf x}_2$ is:\n$$\nK({\bf x}_1, {\bf x}_2) = \left ( \gamma ({\bf x}_1^\mathrm{T} {\bf x}_2) + c \right )^d,\n$$\nwhere $\gamma$ is the kernel coefficient, $c$ is the independent term and $d$ is the degree of the polynomial. 
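\n\nAs a quick sanity check of this formula, we can evaluate the kernel by hand and compare it against scikit-learn (a minimal sketch; the two example vectors are made up for illustration):\n\n```python\nimport numpy as np\nfrom sklearn.metrics.pairwise import polynomial_kernel\n\nx1 = np.array([[1.0, 2.0]])\nx2 = np.array([[3.0, 0.5]])\ngamma, c, d = 1.0, 1.0, 3\n\nmanual = (gamma * (x1 @ x2.T) + c) ** d   # (1*4 + 1)**3 = 125\nauto = polynomial_kernel(x1, x2, degree=d, gamma=gamma, coef0=c)\nprint(manual, auto)                       # both give [[125.]]\n```\n\n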
In this case, $\gamma$ and $c$ play a minor role, and their default value of 1.0 is adequate, so we will only focus on optimizing the polynomial degree $d$.\n\nNext, we perform KRR regression using polynomial kernels of different degrees.", "_____no_output_____" ] ], [ [ "# Import libraries\nfrom sklearn.kernel_ridge import KernelRidge", "_____no_output_____" ], [ "# Create lists to store results\ny_pred = []\ntraining_predictions = []\ntraining_rmse = []\ntesting_predictions = []\ntesting_rmse = []\n\n# For each of the tested polynomial degree values\ndegrees = [1, 2, 3, 4]\nfor degree_value in degrees:\n    krr = KernelRidge(alpha=1.0,kernel='polynomial',degree=degree_value)\n    krr.fit(x_train, y_train)\n    y_pred.append(krr.predict(x_pred))\n    pred_y_train = krr.predict(x_train)\n    pred_y_test = krr.predict(x_test)\n    \n    # Calculate training and testing errors\n    training_predictions.append(pred_y_train)\n    training_rmse.append(np.sqrt(mean_squared_error(y_train, pred_y_train)))\n\n    testing_predictions.append(pred_y_test)\n    testing_rmse.append(np.sqrt(mean_squared_error(y_test, pred_y_test)))\n\n# Set axes and labels\nfig, axs = plt.subplots(2, 2, figsize = (16,12))\nfor ax in axs.flat:\n    ax.set_xlabel('x', fontsize = 18)\n    ax.set_ylabel('y', fontsize = 18)\n    ax.scatter(x_train, y_train, color='blue', label='Training')\n    ax.scatter(x_test, y_test, color='orange', label='Testing')\n\n# Hide x labels and tick labels for top plots and y ticks for right plots.\nfor ax in axs.flat:\n    ax.label_outer()\n\n# Subplot top-left\naxs[0, 0].plot(x_pred, y_pred[0],color='black')\naxs[0, 0].set_title(r'$d$ = %2d' % degrees[0], fontsize = 22)\naxs[0, 0].annotate(u'$Training RMSE$ = %.2f' % training_rmse[0], xy=(0.2,0.2), xycoords='axes fraction', fontsize=16)\naxs[0, 0].annotate(u'$Testing RMSE$ = %.2f' % testing_rmse[0], xy=(0.2,0.1), xycoords='axes fraction', fontsize=16)\n# Subplot top-right\naxs[0, 1].plot(x_pred, y_pred[1], color='black')\naxs[0, 1].set_title(r'$d$ = %d' % degrees[1], fontsize = 22)\naxs[0, 1].annotate(u'$Training RMSE$ = %.2f' % training_rmse[1], xy=(0.2,0.2), xycoords='axes fraction', fontsize=16)\naxs[0, 1].annotate(u'$Testing RMSE$ = %.2f' % testing_rmse[1], xy=(0.2,0.1), xycoords='axes fraction', fontsize=16)\n# Subplot bottom-left\naxs[1, 0].plot(x_pred, y_pred[2], color='black')\naxs[1, 0].set_title(r'$d$ = %d' % degrees[2], fontsize = 22)\naxs[1, 0].annotate(u'$Training RMSE$ = %.2f' % training_rmse[2], xy=(0.2,0.2), xycoords='axes fraction', fontsize=16)\naxs[1, 0].annotate(u'$Testing RMSE$ = %.2f' % testing_rmse[2], xy=(0.2,0.1), xycoords='axes fraction', fontsize=16)\n# Subplot bottom-right\naxs[1, 1].plot(x_pred, y_pred[3], color='black')\naxs[1, 1].set_title(r'$d$ = %d' % degrees[3], fontsize = 22)\naxs[1, 1].annotate(u'$Training RMSE$ = %.2f' % training_rmse[3], xy=(0.2,0.2), xycoords='axes fraction', fontsize=16)\naxs[1, 1].annotate(u'$Testing RMSE$ = %.2f' % testing_rmse[3], xy=(0.2,0.1), xycoords='axes fraction', fontsize=16)\nplt.show()", "_____no_output_____" ] ], [ [ "## Question\n\nWhat happens to the RMSE in training and testing if we further increase the degree of the polynomials?", "_____no_output_____" ], [ "## Regularization parameter\n\nThe regularization parameter, $\alpha$, should also be optimized. It controls the conditioning of the problem, and larger $\alpha$ values produce fits that are more “general” and ignore the peculiarities of the problem. 
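\n\nFormally, for a training kernel matrix $K$ and target vector $\mathbf{y}$, KRR fits dual coefficients $\mathbf{c} = (K + \alpha I)^{-1}\mathbf{y}$ (the standard closed-form solution, stated here for orientation), so the $\alpha I$ term stabilizes the matrix inversion and shrinks the coefficients.\n\n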
Larger values of $\alpha$ make it possible to ignore noise in the system, but they might also leave the model blind to actual trends in the data.\n\nIf we perform our kernel ridge regression for different $\alpha$ values, we can see its effect, as shown below.", "_____no_output_____" ] ], [ [ "# Create lists to store results\ny_pred = []\ntraining_predictions = []\ntraining_rmse = []\ntesting_predictions = []\ntesting_rmse = []\n\n# For each of the tested regularization values\nalphas = [1E-4, 1E0, 1E2, 1E4]\nfor alpha_value in alphas:\n    krr = KernelRidge(alpha=alpha_value, kernel='polynomial', degree=4)\n    krr.fit(x_train, y_train)\n    y_pred.append(krr.predict(x_pred))\n    pred_y_train = krr.predict(x_train)\n    pred_y_test = krr.predict(x_test)\n    \n    # Calculate training and testing errors\n    training_predictions.append(pred_y_train)\n    training_rmse.append(np.sqrt(mean_squared_error(y_train, pred_y_train)))\n\n    testing_predictions.append(pred_y_test)\n    testing_rmse.append(np.sqrt(mean_squared_error(y_test, pred_y_test)))\n\n \n# Set axes and labels\nfig, axs = plt.subplots(2, 2, figsize = (16,12))\nfor ax in axs.flat:\n    ax.set_xlabel('x', fontsize = 18)\n    ax.set_ylabel('y', fontsize = 18)\n    ax.scatter(x_train, y_train, color='blue', label='Training')\n    ax.scatter(x_test, y_test, color='orange', label='Testing')\n\n# Hide x labels and tick labels for top plots and y ticks for right plots.\nfor ax in axs.flat:\n    ax.label_outer()\n\n# Subplot top-left\naxs[0, 0].plot(x_pred, y_pred[0],color='black')\naxs[0, 0].set_title(r'$\alpha = %.0e$' % alphas[0], fontsize = 22)\naxs[0, 0].annotate(u'$Training RMSE$ = %.2f' % training_rmse[0], xy=(0.2,0.2), xycoords='axes fraction', fontsize=16)\naxs[0, 0].annotate(u'$Testing RMSE$ = %.2f' % testing_rmse[0], xy=(0.2,0.1), xycoords='axes fraction', fontsize=16)\n# Subplot top-right\naxs[0, 1].plot(x_pred, y_pred[1], color='black')\naxs[0, 1].set_title(r'$\alpha = %.0e$' % alphas[1], fontsize = 22)\naxs[0, 1].annotate(u'$Training RMSE$ = %.2f' % training_rmse[1], xy=(0.2,0.2), xycoords='axes fraction', fontsize=16)\naxs[0, 1].annotate(u'$Testing RMSE$ = %.2f' % testing_rmse[1], xy=(0.2,0.1), xycoords='axes fraction', fontsize=16)\n# Subplot bottom-left\naxs[1, 0].plot(x_pred, y_pred[2], color='black')\naxs[1, 0].set_title(r'$\alpha = %.0e$' % alphas[2], fontsize = 22)\naxs[1, 0].annotate(u'$Training RMSE$ = %.2f' % training_rmse[2], xy=(0.2,0.2), xycoords='axes fraction', fontsize=16)\naxs[1, 0].annotate(u'$Testing RMSE$ = %.2f' % testing_rmse[2], xy=(0.2,0.1), xycoords='axes fraction', fontsize=16)\n# Subplot bottom-right\naxs[1, 1].plot(x_pred, y_pred[3], color='black')\naxs[1, 1].set_title(r'$\alpha = %.0e$' % alphas[3], fontsize = 22)\naxs[1, 1].annotate(u'$Training RMSE$ = %.2f' % training_rmse[3], xy=(0.2,0.2), xycoords='axes fraction', fontsize=16)\naxs[1, 1].annotate(u'$Testing RMSE$ = %.2f' % testing_rmse[3], xy=(0.2,0.1), xycoords='axes fraction', fontsize=16)\nplt.show()", "_____no_output_____" ] ], [ [ "## Question\n\nHow do the testing RMSEs change in KRR for different regularization parameters when we modify the degree of the polynomials?", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
ec96db415fe4b957b5dfa02c276fcb057c833cf9
406,708
ipynb
Jupyter Notebook
Sonify_Atomic_Hydrogen_in_Galaxies.ipynb
rdzudzar/Sonify
799622841f7b829c468df7f35ee4925c3f577fba
[ "MIT" ]
1
2020-12-05T13:21:00.000Z
2020-12-05T13:21:00.000Z
Sonify_Atomic_Hydrogen_in_Galaxies.ipynb
rdzudzar/Sonify
799622841f7b829c468df7f35ee4925c3f577fba
[ "MIT" ]
null
null
null
Sonify_Atomic_Hydrogen_in_Galaxies.ipynb
rdzudzar/Sonify
799622841f7b829c468df7f35ee4925c3f577fba
[ "MIT" ]
null
null
null
271.500668
152,284
0.898431
[ [ [ "**Author: Robert Džudžar** <br>\n[GitHub](https://github.com/rdzudzar) <br>\n[Website](https://rdzudzar.github.io/)<br>\n[Twitter](https://twitter.com/robertdzudzar)\n\n**Proofreader/editor/tester: Niko Šarčević**<br>\n[GitHub](https://github.com/nikosarcevic)<br>\n[Website](https://nikosarcevic.com)<br>\n[Twitter](https://twitter.com/NikoSarcevic)", "_____no_output_____" ], [ "<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Introduction\" data-toc-modified-id=\"Introduction-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Introduction</a></span></li><li><span><a href=\"#Imports-and-setup\" data-toc-modified-id=\"Imports-and-setup-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Imports and setup</a></span></li><li><span><a href=\"#Import-files\" data-toc-modified-id=\"Import-files-3\"><span class=\"toc-item-num\">3&nbsp;&nbsp;</span>Import files</a></span></li><li><span><a href=\"#What-is-Atomic-Hydrogen-&amp;-How-it-looks-when-we-observe-it?\" data-toc-modified-id=\"What-is-Atomic-Hydrogen-&amp;-How-it-looks-when-we-observe-it?-4\"><span class=\"toc-item-num\">4&nbsp;&nbsp;</span>What is Atomic Hydrogen &amp; How it looks when we observe it?</a></span><ul class=\"toc-item\"><li><span><a href=\"#Example-for-galaxy-ESO153-G017\" data-toc-modified-id=\"Example-for-galaxy-ESO153-G017-4.1\"><span class=\"toc-item-num\">4.1&nbsp;&nbsp;</span>Example for galaxy ESO153-G017</a></span></li><li><span><a href=\"#Let's-hear-it!\" data-toc-modified-id=\"Let's-hear-it!-4.2\"><span class=\"toc-item-num\">4.2&nbsp;&nbsp;</span>Let's hear it!</a></span></li></ul></li><li><span><a href=\"#Assymetry?\" data-toc-modified-id=\"Assymetry?-5\"><span class=\"toc-item-num\">5&nbsp;&nbsp;</span>Assymetry?</a></span><ul class=\"toc-item\"><li><span><a href=\"#Let's-hear-it!\" data-toc-modified-id=\"Let's-hear-it!-5.1\"><span class=\"toc-item-num\">5.1&nbsp;&nbsp;</span>Let's hear it!</a></span></li></ul></li><li><span><a href=\"#Galaxy-groups\" data-toc-modified-id=\"Galaxy-groups-6\"><span class=\"toc-item-num\">6&nbsp;&nbsp;</span>Galaxy groups</a></span><ul class=\"toc-item\"><li><span><a href=\"#Atomic-Hydrogen-spectra-for-each-detected-galaxy-in-the-group\" data-toc-modified-id=\"Atomic-Hydrogen-spectra-for-each-detected-galaxy-in-the-group-6.1\"><span class=\"toc-item-num\">6.1&nbsp;&nbsp;</span>Atomic Hydrogen spectra for each detected galaxy in the group</a></span></li><li><span><a href=\"#Let's-hear-it!\" data-toc-modified-id=\"Let's-hear-it!-6.2\"><span class=\"toc-item-num\">6.2&nbsp;&nbsp;</span>Let's hear it!</a></span></li></ul></li><li><span><a href=\"#Tweaking\" data-toc-modified-id=\"Tweaking-7\"><span class=\"toc-item-num\">7&nbsp;&nbsp;</span>Tweaking</a></span><ul class=\"toc-item\"><li><span><a href=\"#Sound-parameters-within-Astronify\" data-toc-modified-id=\"Sound-parameters-within-Astronify-7.1\"><span class=\"toc-item-num\">7.1&nbsp;&nbsp;</span>Sound parameters within Astronify</a></span></li><li><span><a href=\"#We-can-roughly-animate-the-plot-and-play-the-sound\" data-toc-modified-id=\"We-can-roughly-animate-the-plot-and-play-the-sound-7.2\"><span class=\"toc-item-num\">7.2&nbsp;&nbsp;</span>We can roughly animate the plot and play the sound</a></span></li></ul></li></ul></div>", "_____no_output_____" ], [ "# Introduction\n\nHearing about Astronify package from [Nikolina Šarčević](https://twitter.com/NikoSarcevic), and after exploring it, I realise that: <br>\n\n**i)** 
[Astronify](https://astronify.readthedocs.io/en/latest/astronify/index.html#) is a great tool to make data accessible to people with impaired vision, <br>\n\n**ii)** I can make a contribution by sonifying radio astronomy data - in particular, the 21 cm emission from Atomic Hydrogen,\n\n**iii)** I can have a great time exploring how data sounds.\n\nThe main idea is to build a small project within a Jupyter Notebook to sonify examples of the Atomic Hydrogen spectrum - for an individual galaxy and for a galaxy group - and here it is :)\n", "_____no_output_____" ], [ "# Imports and setup", "_____no_output_____" ] ], [ [ "# Astronify import\nfrom astronify.series import SoniSeries\nfrom astropy.table import Table\n\n# General import\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# In case you don't have some of the packages:\n# !pip install package_name", "\nWxPython is not found for the current python version.\nPyo will use a minimal GUI toolkit written with Tkinter (if available).\nThis toolkit has limited functionnalities and is no more\nmaintained or updated. If you want to use all of pyo's\nGUI features, you should install WxPython, available here:\nhttp://www.wxpython.org/\n\n" ], [ "# Define matplotlib parameters\nimport matplotlib\nfsize = 20\nmatplotlib.rcParams.update(\n    {'font.size': fsize, 'xtick.major.size': 10, 'ytick.major.size': 10, 'xtick.major.width': 1, \n     'ytick.major.width': 1, 'ytick.minor.size': 5, 'xtick.minor.size': 5, 'xtick.direction': 'in', \n     'ytick.direction': 'in', 'axes.linewidth': 1, 'text.usetex': False, 'font.family': 'serif',\n     'legend.numpoints': 1, 'legend.columnspacing': 1,\n     'legend.fontsize': fsize-4, 'xtick.top': True, 'ytick.right': True,\n     'axes.grid': False, 'grid.color': 'lightgrey', 'grid.linestyle': ':','grid.linewidth': 3})", "_____no_output_____" ] ], [ [ "# Import files\nData are obtained with the Australia Telescope Compact Array (an array of six 22 m antennas used for radio astronomy). Each file contains galaxy spectrum information: Channel, Velocity and Flux of the Atomic Hydrogen. \n\nThese galaxies belong to the galaxy group called `HIPASS J0205-55` - composed of two subgroups `HIPASS J0205-55a` and `HIPASS J0205-55b` (more information available in [Dzudzar et al. 2020](https://arxiv.org/abs/2011.01438), page 25 and page 30) ", "_____no_output_____" ] ], [ [ "G1_channel, G1_velocity, G1_flux = np.loadtxt('g1', unpack=True)\nG2_channel, G2_velocity, G2_flux = np.loadtxt('g2', unpack=True)\nG3_channel, G3_velocity, G3_flux = np.loadtxt('g3', unpack=True)\nG4_channel, G4_velocity, G4_flux = np.loadtxt('g4', unpack=True)\nG5_channel, G5_velocity, G5_flux = np.loadtxt('g5', unpack=True)", "_____no_output_____" ] ], [ [ "# What is Atomic Hydrogen \\& How it looks when we observe it?\nWithout going into too many details, we can say that a galaxy is composed of stars, dust, gas and dark matter. The gaseous component in spiral galaxies is most often **Atomic Hydrogen (HI)**. This gas represents the galaxy's reservoir from which it can form new stars in the future.\n\nWe observe Atomic Hydrogen through its **~21 cm emission** line, using radio telescopes. The important thing is that, using these observations, we can study how much gas a galaxy has, how the gas moves, whether the galaxy is disturbed, etc. 
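\n\nA couple of back-of-the-envelope numbers behind these statements (a minimal sketch; the Hubble constant and the round systemic velocity are illustrative assumptions, not values taken from the data files):\n\n```python\nc_m_s = 299792458.0          # speed of light [m/s]\nnu0 = 1420.40575e6           # HI hyperfine rest frequency [Hz]\nlam = c_m_s / nu0            # rest wavelength [m] -> ~0.211 m, the ~21 cm line\n\nH0 = 70.0                    # assumed Hubble constant [km/s/Mpc]\nv_sys = 6500.0               # a systemic velocity similar to the galaxy below [km/s]\nd_mpc = v_sys / H0           # ~93 Mpc via Hubble's Law\nprint(lam, d_mpc, d_mpc * 3.26e6)  # ~0.21 m, ~93 Mpc, ~3e8 light years\n```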
\n\nThe overall shape of 21 cm spectral line indicates motion and distribution of the gas, and the width of the line represents Doppler broadening (check [Doppler effect](https://en.wikipedia.org/wiki/Doppler_effect)) due to the rotation of the galaxy. Therefore, we can roughly say that we are observing **how gas in the galaxy rotates** (ignoring e.g. projection effects due to inclination)! Moreover, we will also show here not just how it looks like, but also, **how we can hear it!**", "_____no_output_____" ], [ "## Example for galaxy ESO153-G017\nWe have a flux on the *y*-axis and velocity on the *x*-axis. The <span style=\"color:blue\">blue</span> side of the spectrum is the part of the galaxy that moves towards us - so we say it's <span style=\"color:blue\">*the approaching side*</span>, while the <span style=\"color:red\">red</span> part of the galaxy is the one that moves away from us - and we call it <span style=\"color:red\">*the receeding side*</span>. The vertical line in the middle shows the systemic velocity of the galaxy - which can be used to calculate how far away the galaxy is. In this case it is roughly 300 million light years (check [Hubble's Law](https://en.wikipedia.org/wiki/Hubble%27s_law) for details).\n\n\nBecause this galaxy has a nice rotation (unfortunately, not all do!) -- we see a 21 cm line as so called **`Double horn`** profile. ", "_____no_output_____" ] ], [ [ "fig = plt.figure(figsize=(8,8)) \nax = fig.add_subplot(1,1,1) \n\n# Full spectrum\nplt.plot(G1_velocity, G1_flux, 'grey', linewidth=8, label='ESO153-G017')\n\n# Spectrum where Atomic Hydrogen emission is\nax.plot(G1_velocity[(G1_velocity>6200) & (G1_velocity<6525)],G1_flux[(G1_velocity>6200) & (G1_velocity<6525)], 'b-', color='blue', linewidth=2, label='')\nax.plot(G1_velocity[(G1_velocity>6525) & (G1_velocity<7525)],G1_flux[(G1_velocity>6525) & (G1_velocity<7525)], 'b-', color='red', linewidth=2, label='')\n\n# Fill out the emission part\nax.fill_between(G1_velocity[(G1_velocity>6410) & (G1_velocity<6650)],G1_flux[(G1_velocity>6410) & (G1_velocity<6650)], 0, color='#f0f0f0' )\n\nplt.axhline(0, color = 'k', linestyle = '-', linewidth = 1) \n\nplt.axvline(6525, color='k', linestyle='--')\n\nplt.text(6355, 0.110, 'Approaching',fontsize=20)\nplt.text(6400, 0.100, 'Side',fontsize=20)\n\nplt.text(6570, 0.110, 'Receeding',fontsize=20)\nplt.text(6600, 0.100, 'Side',fontsize=20)\n\n\nplt.ylabel(\"Total Flux Density [mJy]\", fontsize=22)\nplt.xlabel(\"Velocity [km s$^{-1}$]\", fontsize = 22)\n\nplt.tight_layout()\nplt.xlim(6320,6710)\nplt.ylim(-0.015, 0.120)\n\nplt.show()", "_____no_output_____" ] ], [ [ "## Let's hear it!\nWe place our data in the input format, and select a narrow velocity range (6320 - 6710 km/s) of our emission line. \n\nWe use `Data Series Sonification` from the [Astronify](https://astronify.readthedocs.io/en/latest/astronify/index.html#) to sonify our spectrum. 
", "_____no_output_____" ] ], [ [ "# Placing data into input Table\n# We will place our Velocity information as a 'time' input\ndata_table = Table({\"time\": G1_velocity[(G1_velocity>6320) & (G1_velocity<6710)], \n \"flux\": G1_flux[(G1_velocity>6320) & (G1_velocity<6710)]})\n\n# Sonify the data\nsoni_obj = SoniSeries(data_table)\nsoni_obj.note_spacing = 0.07 # Speed of the played sound, increase number to slow it down\nsoni_obj.sonify()\n# Play\nsoni_obj.play()\n# If you want to save it, uncomment the line below, it will save `Galaxy.mp3` file into your working directory\n#soni_obj.write('Galaxy.mp3')", "_____no_output_____" ] ], [ [ "**How cool was that?! <br>**\n\nWe just represented the gas movement in the galaxy with sound!\n\nWe can hear that double-peaked signal: first we hear the noise - the deep sound. The pitch quickly rises for the first horn (the approaching side). After that, the frequency decreases a bit when it approaches the middle of the peak - as it approaches the galaxy's systemic velocity. The pitch rises again for the receeding horn and finally falls quickly.", "_____no_output_____" ], [ "# Assymetry?\n\nSometimes galaxies have assymetric spectrum. <br>\nFor example, we show the spectrum of ESO153-IG016, which is very asymmetric -- there is no clear `double horn` profile.", "_____no_output_____" ] ], [ [ "fig = plt.figure(figsize=(8,8)) \nax = fig.add_subplot(1,1,1) \n\n# Full spectrum\nplt.plot(G2_velocity, G2_flux, 'grey', linewidth=8, label='ESO153-IG016')\n\n# Spectrum where Atomic Hydrogen emission is\nax.plot(G2_velocity[(G2_velocity>5880) & (G2_velocity<=5961)],G2_flux[(G2_velocity>5880) & (G2_velocity<=5961)], 'b-', color='blue', linewidth=2, label='')\nax.plot(G2_velocity[(G2_velocity>5960) & (G2_velocity<6055)],G2_flux[(G2_velocity>5960) & (G2_velocity<6055)], 'b-', color='red', linewidth=2, label='')\n\n# Fill out the emission part\nax.fill_between(G2_velocity[(G2_velocity>5880) & (G2_velocity<6055)], G2_flux[(G2_velocity>5880) & (G2_velocity<6055)], 0, color='#f0f0f0' )\n\nplt.axhline(0, color = 'k', linestyle = '-', linewidth = 1) \n\nplt.axvline(5960, color='k', linestyle='--')\n\nplt.text(5840, 0.055, 'Approaching',fontsize=20)\nplt.text(5870, 0.052, 'Side',fontsize=20)\n\nplt.text(6000, 0.055, 'Receeding',fontsize=20)\nplt.text(6020, 0.052, 'Side',fontsize=20)\n\n\nplt.ylabel(\"Total Flux Density [Jy]\", fontsize=22)\nplt.xlabel(\"Velocity [km s$^{-1}$]\", fontsize = 22)\n\nplt.tight_layout()\nplt.xlim(5820,6100)\nplt.ylim(-0.008, 0.060)\n\nplt.show()", "_____no_output_____" ] ], [ [ "## Let's hear it!\nWe place our data in the input format, and select only a narrow velocity range (5820 - 6100 km/s) of our emission line. \n\nWe use `Data Series Sonification` from the [Astronify](https://astronify.readthedocs.io/en/latest/astronify/index.html#) to sonify our spectrum. \n", "_____no_output_____" ] ], [ [ "# Placing data into input Table\n# We will place our Velocity information as a 'time' input\ndata_table = Table({\"time\": G2_velocity[(G2_velocity>5820) & (G2_velocity<6100)], \n \"flux\": G2_flux[(G2_velocity>5820) & (G2_velocity<6100)]})\n\n# Sonify the data\nsoni_obj = SoniSeries(data_table)\nsoni_obj.note_spacing = 0.1 # Speed of the played sound, increase number to slow it down\nsoni_obj.sonify()\n# Play\nsoni_obj.play()\n# If you want to save it, uncomment line below, it will save `Galaxy.mp3` file into your working directory\n#soni_obj.write('Galaxy.mp3')", "_____no_output_____" ] ], [ [ "Awesome! 
<br>\nWe also hear that there is no double-peaked sound, and can barely resolve any increase in the second peak.", "_____no_output_____" ], [ "# Galaxy groups\n\nGalaxies tend to group together in pairs, triplets, or groups with more than 3 members. We can also find clusters with thousands of gravitationally bound galaxies. <br>\n\nIn this example, we are sonifying the galaxy group `HIPASS J0205-55` and its sub-groups. Mapping this group with the Australia Telescope Compact Array, we found Atomic Hydrogen emission in 5 galaxies.", "_____no_output_____" ], [ "## Atomic Hydrogen spectra for each detected galaxy in the group\nReproduced `Figure I1 b)` from [Dzudzar et al. 2020](https://arxiv.org/abs/2011.01438)", "_____no_output_____" ] ], [ [ "# Subplots\n%matplotlib inline\n\nnum_rows = 5\nnum_cols = 1\nfig, ax = plt.subplots(nrows=num_rows, ncols=num_cols, sharex='col', sharey='row', figsize=(9,10))\n\n# Data in mJy -- full spectra\nax[0].plot(G1_velocity, G1_flux, 'b-', color='grey', linewidth=1, label='')\nax[1].plot(G2_velocity, G2_flux, 'c-', color='grey', linewidth=1, label='')\nax[2].plot(G3_velocity, G3_flux, 'c-', color='grey', linewidth=1, label='')\nax[3].plot(G4_velocity, G4_flux, 'c-', color='grey', linewidth=1, label='')\nax[4].plot(G5_velocity, G5_flux, 'c-', color='grey', linewidth=1, label='')\n\n# Data in mJy -- highlighted Atomic Hydrogen emission lines\nax[0].plot(G1_velocity[(G1_velocity>6400) & (G1_velocity<6650)], G1_flux[(G1_velocity>6400) & (G1_velocity<6650)], 'b-', color='#225ea8', linewidth=2, label='G1: ESO153-G017')\nax[1].plot(G2_velocity[(G2_velocity>5880) & (G2_velocity<6055)], G2_flux[(G2_velocity>5880) & (G2_velocity<6055)], 'c-', color='#225ea8', linewidth=2, label='G2: ESO153-IG016')\nax[2].plot(G3_velocity[(G3_velocity>6075) & (G3_velocity<6175)], G3_flux[(G3_velocity>6075) & (G3_velocity<6175)], 'c-', color='#225ea8', linewidth=2, label='G3: ESO153-G015')\nax[3].plot(G4_velocity[(G4_velocity>5665) & (G4_velocity<6135)], G4_flux[(G4_velocity>5665) & (G4_velocity<6135)], 'c-', color='#225ea8', linewidth=2, label='G4: ESO153-G013')\nax[4].plot(G5_velocity[(G5_velocity>5735) & (G5_velocity<6120)], G5_flux[(G5_velocity>5735) & (G5_velocity<6120)], 'c-', color='#225ea8', linewidth=2, label='G5: ESO153-G020')\n\n# Add legend information\nfont_label_size = 22\nfor i in np.arange(num_rows):\n ax[i].axhline(0, color = 'k', linestyle = '-.',linewidth = 1)\n ax[i].legend(loc=0,frameon=True,fancybox=True, framealpha=0.6, fontsize=16)\n\n# Plot details\nplt.xlabel(\"Velocity [km s$^{-1}$]\", fontsize = font_label_size)\nplt.tight_layout()\n\nplt.xlim(5600,6700)\nax[2].set_ylim(-0.0025, 0.005)\nax[2].set_ylabel(\"Total Flux Density [Jy]\", fontsize=22)\n\nplt.subplots_adjust(wspace = 0.0, hspace = 0.0)\n\nplt.show()", "_____no_output_____" ] ], [ [ "## Let's hear it!\nWe can play all files simultaneously and hear how this galaxy group sounds! 
:)\n", "_____no_output_____" ] ], [ [ "# We build the data table for each galaxy\ndata_table_hi_1 = Table({\"time\": G1_velocity, #setting as velocity\n \"flux\": G1_flux})\n\ndata_table_hi_2 = Table({\"time\": G2_velocity, #setting as velocity\n \"flux\": G2_flux})\n\ndata_table_hi_3 = Table({\"time\": G3_velocity, #setting as velocity\n \"flux\": G3_flux})\n\ndata_table_hi_4 = Table({\"time\": G4_velocity, #setting as velocity\n \"flux\": G4_flux})\n\ndata_table_hi_5 = Table({\"time\": G5_velocity, #setting as velocity\n \"flux\": G5_flux})", "_____no_output_____" ], [ "# We make each Sonify object and play it simultaneously, with the same note_spacing (speed)\n\nnote_spacing = 0.07\n\nsoni_obj_1 = SoniSeries(data_table_hi_1)\n\nsoni_obj_1.note_spacing = note_spacing\nsoni_obj_1.sonify()\nsoni_obj_1.play()\n\nsoni_obj_2 = SoniSeries(data_table_hi_2)\nsoni_obj_2.note_spacing = note_spacing\nsoni_obj_2.sonify()\nsoni_obj_2.play()\n\nsoni_obj_3 = SoniSeries(data_table_hi_3)\nsoni_obj_3.note_spacing = note_spacing\nsoni_obj_3.sonify()\nsoni_obj_3.play()\n\nsoni_obj_4 = SoniSeries(data_table_hi_4)\nsoni_obj_4.note_spacing = note_spacing\nsoni_obj_4.sonify()\nsoni_obj_4.play()\n\nsoni_obj_5 = SoniSeries(data_table_hi_5)\nsoni_obj_5.note_spacing = note_spacing\nsoni_obj_5.sonify()\nsoni_obj_5.play()\n", "_____no_output_____" ] ], [ [ "**JUST WOW!**\n\nWe notice (HEAR!) that the signal from the galaxies does not appear at the same time - that is because every galaxy is at a different distance in a group. In other words, the point where the signal from the galaxy appears is both visualised and sonified! \n\nAt first, we can hear two galaxies (G4 and G5) rougly at the same time. Next, as the sound progresses we hear one high pitch noise - signaling a new galaxy (G2) and then yet another high-pitch signal - another galaxy (G3)! We can then hear the background noise for a bit, as the next galaxy is a bit further away. Finally we get to hear the highest-pitch sound which comes from the galaxy G! in the plot. \n\nAs you probably noticed, noise has a low-pitch sound, and with the increase of the Atomic Hydrogen Flux in a galaxy (within this galaxy group), galaxy will produce a higher-pitch tone. ", "_____no_output_____" ], [ "# Tweaking ", "_____no_output_____" ], [ "## Sound parameters within Astronify\n\nThere are possibilities to adjust the sounds for a better output, see details [here].(https://astronify.readthedocs.io/en/latest/astronify/Intro_Astronify_Series.html) <br>\n\nIn the example below, we are defining a `zero_point` at 0.001 and the `stretch` of the sound, from default 'linear' to 'log'. There are more options available to be explored and the plan is to investigate this in the near future. 
, [ [ "# We make each Sonify object and play them at the same time, with the same note_spacing (speed)\n\nnote_spacing = 0.07\nzero_point = 0.001\nstretch = 'log'\n \nsoni_obj_1 = SoniSeries(data_table_hi_1)\n\nsoni_obj_1.note_spacing = note_spacing\n# update the pitch-mapping arguments for this object\nsoni_obj_1.pitch_mapper.pitch_map_args.update(\n {\"zero_point\": zero_point,\n \"stretch\": stretch})\n\nsoni_obj_1.sonify()\nsoni_obj_1.play()\n\nsoni_obj_2 = SoniSeries(data_table_hi_2)\nsoni_obj_2.note_spacing = note_spacing\nsoni_obj_2.pitch_mapper.pitch_map_args.update(\n {\"zero_point\": zero_point,\n \"stretch\": stretch})\n\nsoni_obj_2.sonify()\nsoni_obj_2.play()\n\nsoni_obj_3 = SoniSeries(data_table_hi_3)\nsoni_obj_3.note_spacing = note_spacing\nsoni_obj_3.pitch_mapper.pitch_map_args.update(\n {\"zero_point\": zero_point,\n \"stretch\": stretch})\n\nsoni_obj_3.sonify()\nsoni_obj_3.play()\n\nsoni_obj_4 = SoniSeries(data_table_hi_4)\nsoni_obj_4.note_spacing = note_spacing\nsoni_obj_4.pitch_mapper.pitch_map_args.update(\n {\"zero_point\": zero_point,\n \"stretch\": stretch})\n\nsoni_obj_4.sonify()\nsoni_obj_4.play()\n\nsoni_obj_5 = SoniSeries(data_table_hi_5)\nsoni_obj_5.note_spacing = note_spacing\nsoni_obj_5.pitch_mapper.pitch_map_args.update(\n {\"zero_point\": zero_point,\n \"stretch\": stretch})\n\nsoni_obj_5.sonify()\nsoni_obj_5.play()\n", "_____no_output_____" ] ], [ [ "## We can roughly animate the plot and play the sound\nAlthough the matching is not 100% exact, here is a nice animation linking the visual and sound properties. <br>\n\nSometimes the animation lags behind the sound, since here we have to use `%matplotlib notebook` in order for it to work. Try re-running the cell below multiple times if there is an issue.", "_____no_output_____" ] ], [ [ "#from matplotlib.animation import FuncAnimation\nimport matplotlib.animation as animation\n%matplotlib notebook\n\nduration = 7000 # in ms\nrefreshPeriod = 100 # in ms\n\nfig,ax = plt.subplots(figsize=(6,6))\n\nplt.plot(G1_velocity[(G1_velocity>6320) & (G1_velocity<6710)], G1_flux[(G1_velocity>6320) & (G1_velocity<6710)], 'grey', linewidth=8, label='ESO153-G017')\n# Spectrum where Atomic Hydrogen emission is\nax.plot(G1_velocity[(G1_velocity>6320) & (G1_velocity<6525)],G1_flux[(G1_velocity>6320) & (G1_velocity<6525)], 'b-', color='blue', linewidth=2, label='')\nax.plot(G1_velocity[(G1_velocity>6525) & (G1_velocity<7525)],G1_flux[(G1_velocity>6525) & (G1_velocity<7525)], 'b-', color='red', linewidth=2, label='')\nplt.ylabel(\"Total Flux Density [Jy]\", fontsize=22)\nplt.xlabel(\"Velocity [km s$^{-1}$]\", fontsize = 22)\n\nplt.tight_layout()\nplt.xlim(6320,6710)\nplt.ylim(-0.008, 0.12)\n\n# Line Animation\nvl = ax.axvline(6320, ls='-', color='k', lw=3, zorder=10)\n\ndef animate(i, vl, period):\n t = i*period/14\n vl.set_xdata([t+6320,t+6320])\n return vl\n\nani = animation.FuncAnimation(fig, animate, frames=int(duration/(refreshPeriod/1)), fargs=(vl,refreshPeriod), interval=refreshPeriod)\nplt.show()\n\n# SOUND\n\n# Placing data into input Table\n# We will place our Velocity information as a 'time' input\ndata_table = Table({\"time\": G1_velocity[(G1_velocity>6320) & (G1_velocity<6710)], \n \"flux\": G1_flux[(G1_velocity>6320) & (G1_velocity<6710)]})\n\n# Sonify the data\nsoni_obj = SoniSeries(data_table)\nsoni_obj.note_spacing = 0.07 # Speed of the played sound, increase number to slow it down\nsoni_obj.sonify()\n# Play\nsoni_obj.play()\n# If you want to save it, uncomment the line below, it will save a `Galaxy.mp3` file into your working 
directory\n#soni_obj.write('Galaxy.mp3')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
ec96fbcfefd6da0c377159d78bd080fb2f41f377
23,441
ipynb
Jupyter Notebook
nbs/41_tabular.data.ipynb
RogerS49/fastai2
1f631c9943418d3d0175ce73aa646d8b82fa7fd1
[ "Apache-2.0" ]
null
null
null
nbs/41_tabular.data.ipynb
RogerS49/fastai2
1f631c9943418d3d0175ce73aa646d8b82fa7fd1
[ "Apache-2.0" ]
null
null
null
nbs/41_tabular.data.ipynb
RogerS49/fastai2
1f631c9943418d3d0175ce73aa646d8b82fa7fd1
[ "Apache-2.0" ]
null
null
null
34.421439
395
0.452284
[ [ [ "#default_exp tabular.data", "_____no_output_____" ], [ "#export\nfrom fastai2.torch_basics import *\nfrom fastai2.data.all import *\nfrom fastai2.tabular.core import *", "_____no_output_____" ], [ "#hide\nfrom nbdev.showdoc import *", "_____no_output_____" ] ], [ [ "# Tabular data\n\n> Helper functions to get data in a `DataLoaders` in the tabular application and higher class `TabularDataLoaders`", "_____no_output_____" ], [ "## TabularDataLoaders -", "_____no_output_____" ] ], [ [ "#export\nclass TabularDataLoaders(DataLoaders):\n \"Basic wrapper around several `DataLoader`s with factory methods for tabular data\"\n @classmethod\n @delegates(Tabular.dataloaders)\n def from_df(cls, df, path='.', procs=None, cat_names=None, cont_names=None, y_names=None, y_block=None,\n valid_idx=None, **kwargs):\n \"Create from `df` in `path` using `procs`\"\n if cat_names is None: cat_names = []\n if cont_names is None: cont_names = list(set(df)-set(cat_names)-set(y_names))\n splits = RandomSplitter()(df) if valid_idx is None else IndexSplitter(valid_idx)(df)\n to = TabularPandas(df, procs, cat_names, cont_names, y_names, splits=splits, y_block=y_block)\n return to.dataloaders(path=path, **kwargs)\n\n @classmethod\n @delegates(Tabular.dataloaders)\n def from_csv(cls, csv, path='.', procs=None, cat_names=None, cont_names=None, y_names=None, y_block=None,\n valid_idx=None, **kwargs):\n \"Create from `csv` file in `path` using `procs`\"\n return cls.from_df(pd.read_csv(csv), path, procs, cat_names=cat_names, cont_names=cont_names, y_names=y_names,\n y_block=y_block, valid_idx=valid_idx, **kwargs)\n \n @delegates(TabDataLoader.__init__)\n def test_dl(self, test_items, rm_type_tfms=None, **kwargs):\n to = self.train_ds.new(test_items)\n to.process()\n return self.valid.new(to, **kwargs)\n \nTabular._dbunch_type = TabularDataLoaders", "_____no_output_____" ], [ "show_doc(TabularDataLoaders.from_df)", "_____no_output_____" ] ], [ [ "This class should not be used directly, one of the factory methods should be prefered instead. 
All those factory methods accept as arguments:\n\n- `cat_names`: the names of the categorical variables\n- `cont_names`: the names of the continuous variables\n- `y_names`: the names of the dependent variables\n- `y_block`: the `TransformBlock` to use for the target\n- `valid_idx`: the indices to use for the validation set (defaults to a random split otherwise)\n- `bs`: the batch size\n- `val_bs`: the batch size for the validation `DataLoader` (defaults to `bs`)\n- `shuffle_train`: whether we shuffle the training `DataLoader` or not\n- `n`: overrides the number of elements in the dataset\n- `device`: the PyTorch device to use (defaults to `default_device()`)", "_____no_output_____" ] ], [ [ "show_doc(TabularDataLoaders.from_df)", "_____no_output_____" ] ], [ [ "Let's have a look at an example with the adult dataset:", "_____no_output_____" ] ], [ [ "path = untar_data(URLs.ADULT_SAMPLE)\ndf = pd.read_csv(path/'adult.csv')\ndf.head()", "_____no_output_____" ], [ "cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']\ncont_names = ['age', 'fnlwgt', 'education-num']\nprocs = [Categorify, FillMissing, Normalize]", "_____no_output_____" ], [ "dls = TabularDataLoaders.from_df(df, path, procs, cat_names, cont_names, y_names=\"salary\",\n valid_idx=list(range(800,1000)), bs=64)", "_____no_output_____" ], [ "dls.show_batch()", "_____no_output_____" ], [ "show_doc(TabularDataLoaders.from_csv)", "_____no_output_____" ], [ "cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']\ncont_names = ['age', 'fnlwgt', 'education-num']\nprocs = [Categorify, FillMissing, Normalize]\ndls = TabularDataLoaders.from_csv(path/'adult.csv', path, procs, cat_names, cont_names, y_names=\"salary\",\n valid_idx=list(range(800,1000)), bs=64)", "_____no_output_____" ] ], [ [ "## Export -", "_____no_output_____" ] ], [ [ "#hide\nfrom nbdev.export import notebook2script\nnotebook2script()", "Converted 00_torch_core.ipynb.\nConverted 01_layers.ipynb.\nConverted 02_data.load.ipynb.\nConverted 03_data.core.ipynb.\nConverted 04_data.external.ipynb.\nConverted 05_data.transforms.ipynb.\nConverted 06_data.block.ipynb.\nConverted 07_vision.core.ipynb.\nConverted 08_vision.data.ipynb.\nConverted 09_vision.augment.ipynb.\nConverted 09b_vision.utils.ipynb.\nConverted 09c_vision.widgets.ipynb.\nConverted 10_tutorial.pets.ipynb.\nConverted 11_vision.models.xresnet.ipynb.\nConverted 12_optimizer.ipynb.\nConverted 13_callback.core.ipynb.\nConverted 13a_learner.ipynb.\nConverted 13b_metrics.ipynb.\nConverted 14_callback.schedule.ipynb.\nConverted 14a_callback.data.ipynb.\nConverted 15_callback.hook.ipynb.\nConverted 15a_vision.models.unet.ipynb.\nConverted 16_callback.progress.ipynb.\nConverted 17_callback.tracker.ipynb.\nConverted 18_callback.fp16.ipynb.\nConverted 19_callback.mixup.ipynb.\nConverted 20_interpret.ipynb.\nConverted 20a_distributed.ipynb.\nConverted 21_vision.learner.ipynb.\nConverted 22_tutorial.imagenette.ipynb.\nConverted 23_tutorial.vision.ipynb.\nConverted 24_tutorial.siamese.ipynb.\nConverted 30_text.core.ipynb.\nConverted 31_text.data.ipynb.\nConverted 32_text.models.awdlstm.ipynb.\nConverted 33_text.models.core.ipynb.\nConverted 34_callback.rnn.ipynb.\nConverted 35_tutorial.wikitext.ipynb.\nConverted 36_text.models.qrnn.ipynb.\nConverted 37_text.learner.ipynb.\nConverted 38_tutorial.text.ipynb.\nConverted 40_tabular.core.ipynb.\nConverted 41_tabular.data.ipynb.\nConverted 42_tabular.model.ipynb.\nConverted 43_tabular.learner.ipynb.\nConverted 
44_tutorial.tabular.ipynb.\nConverted 45_collab.ipynb.\nConverted 50_tutorial.datablock.ipynb.\nConverted 60_medical.imaging.ipynb.\nConverted 61_tutorial.medical_imaging.ipynb.\nConverted 65_medical.text.ipynb.\nConverted 70_callback.wandb.ipynb.\nConverted 71_callback.tensorboard.ipynb.\nConverted 72_callback.neptune.ipynb.\nConverted 97_test_utils.ipynb.\nConverted 99_pytorch_doc.ipynb.\nConverted index.ipynb.\nConverted tutorial.ipynb.\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
ec970f52138762854149b6b4ba19bba3f9e2bb4b
33,374
ipynb
Jupyter Notebook
_notebooks/2022-02-10-dl.ipynb
rhkrehtjd/INTROdl
f758700454f0d373c1403bb20fc5a37c9560eace
[ "Apache-2.0" ]
null
null
null
_notebooks/2022-02-10-dl.ipynb
rhkrehtjd/INTROdl
f758700454f0d373c1403bb20fc5a37c9560eace
[ "Apache-2.0" ]
null
null
null
_notebooks/2022-02-10-dl.ipynb
rhkrehtjd/INTROdl
f758700454f0d373c1403bb20fc5a37c9560eace
[ "Apache-2.0" ]
null
null
null
62.498127
14,928
0.735393
[ [ [ "# 클래스로 신경망 구현, 오차역 전파법", "_____no_output_____" ], [ "- 학습 알고리즘 구현하기\n - 신경망 학습의 순서를 확인해보자\n - 전제 : 신경망에는 적응 가능한 가중치와 편향이 이 있고, 이 가중치와 편향을 훈련 데이터에 적응하도록 조정하는 과정을 학습이라고 한다. 신경망 학습은 다음과 같이 4단계로 수행된다.\n - 1) 미니배치 : 훈련 데이터 중 일부를 무작위로 가져온다. 이렇게 선별한 데이터를 미니배치라 하며, 그 미니배치의 손실 함수 값을 줄이는 것이 목표이다.\n - 2) 기울기 산출 : 미니배치의 손실 함수 값을 줄이기 위해 각 가중치 매개변수의 기울기를 구한다. 기울기는 손실함수의 값을 가장 작게 하는 방향을 제시한다. \n - 3) 매개변수 갱신 : 가중치 매개변수를 기울기 방향으로 아주 조금 갱신한다.\n - 4) 반복 : 1~3단계를 반복한다.\n - 이것이 신경망 학습이 이루어지는 순서이다. 이는 경사 하강법으로 매개변수를 갱신하는 방법이며, 이때 데이터를 미니배치로 무작위로 선정하기 때문에 확률적 경사 하강법이라고 부른다. (이하 SGD) 확률적으로 무작위로 골라낸 데이터에 대해 수행하는 경사 하강법이라 의미이다.\n- 실제로 손글씨 숫자를 학습하는 신경망을 구현해보자\n- 여기에서는 2층 신경망(은닉층이 1개인 네트워크)을 대상으로 MNIST 데이터셋을 사용하여 학습을 수행한다.\n- 처음에는 2층 신경망을 하나의 클래스로 구현하는 것부처 시작한다. \n- 이 클래스의 이름은 TwoLayerNet이다.", "_____no_output_____" ] ], [ [ "import sys, os\nsys.path.append(os.pardir) # 부모 디렉터리의 파일을 가져올 수 있도록 설정\nfrom common.functions import *\nfrom common.gradient import numerical_gradient\n\n\nclass TwoLayerNet:\n\n def __init__(self, input_size, hidden_size, output_size, weight_init_std=0.01):\n # 가중치 초기화\n self.params = {}\n self.params['W1'] = weight_init_std * np.random.randn(input_size, hidden_size)\n self.params['b1'] = np.zeros(hidden_size)\n self.params['W2'] = weight_init_std * np.random.randn(hidden_size, output_size)\n self.params['b2'] = np.zeros(output_size)\n\n def predict(self, x):\n W1, W2 = self.params['W1'], self.params['W2']\n b1, b2 = self.params['b1'], self.params['b2']\n \n a1 = np.dot(x, W1) + b1\n z1 = sigmoid(a1)\n a2 = np.dot(z1, W2) + b2\n y = softmax(a2)\n \n return y\n \n # x : 입력 데이터, t : 정답 레이블\n def loss(self, x, t):\n y = self.predict(x)\n \n return cross_entropy_error(y, t)\n \n def accuracy(self, x, t):\n y = self.predict(x)\n y = np.argmax(y, axis=1)\n t = np.argmax(t, axis=1)\n \n accuracy = np.sum(y == t) / float(x.shape[0])\n return accuracy\n \n # x : 입력 데이터, t : 정답 레이블\n def numerical_gradient(self, x, t):\n loss_W = lambda W: self.loss(x, t)\n \n grads = {}\n grads['W1'] = numerical_gradient(loss_W, self.params['W1'])\n grads['b1'] = numerical_gradient(loss_W, self.params['b1'])\n grads['W2'] = numerical_gradient(loss_W, self.params['W2'])\n grads['b2'] = numerical_gradient(loss_W, self.params['b2'])\n \n return grads", "_____no_output_____" ] ], [ [ "- 앞에서 다룬 신경망의 순전파 처리 구현과 공통되는 부분이 많고, 새로운 내용은 딱히 없다.\n- 우선 이 클래스가 사용하는 변수와 메서드를 정리해보자\n - 중요해보이는 것 일부만 작성하였으며 그 외의 것은 139p를 참고하자\n - params : 신경망의 매개변수를 보관하는 딕셔너리 변수(인스턴스 변수)\n - grads : 기울기 보관하는 딕셔너리 변수 (numerical_gradient() 메서드의 반환 값)\n - TwoLayerNet 클래스는 딕셔너리인 params와 grads를 인스턴스 변수로 갖는다. \n - 자세한 내용은 해당 교재를 참고하자.\n - 예를 하나 살펴보자", "_____no_output_____" ] ], [ [ "net = TwoLayerNet(input_size = 784, hidden_size= 100, output_size= 10)\nprint(net.params['W1'].shape)\nprint(net.params['b1'].shape)\nprint(net.params['W2'].shape)\nprint(net.params['b2'].shape)", "(784, 100)\n(100,)\n(100, 10)\n(10,)\n" ] ], [ [ "- 이와 같이 params 변수에는 이 신경망에 필요한 매개변수가 모두 저장된다. 그리고 params 변수에 저장된 가중치 매개변수가 예측 처리(순방향 처리)에서 사용된다. 참고로 예측 처리는 다음과 같이 실행할 수 있다.", "_____no_output_____" ] ], [ [ "x = np.random.rand(100,784) # 더미 입력 데이터 100장 분량\ny = net.predict(x)", "_____no_output_____" ] ], [ [ "- grads 변수에는 params 변수에 대응하는 각 매개변수의 기울기가 저장된다. 예를 들어 다음과 같이 numericla_gradient() 메서드를 사용해 기울기를 계산하면 grads 변수에 기울기 정보가 저장된다. 
", "_____no_output_____" ] ], [ [ "x = np.random.rand(100,784) # 더미 입력 데이터 (100장 분량)\nt = np.random.rand(100,10) # 더미 정답 레이블 (100장 분량)\n\ngrads = net.numerical_gradient(x,t)\n\nprint(grads['W1'].shape)\nprint(grads['b1'].shape)\nprint(grads['W2'].shape)\nprint(grads['b2'].shape)", "(784, 100)\n(100,)\n(100, 10)\n(10,)\n" ] ], [ [ "- 이어서 TwoLayerNet 메서드를 살펴보자\n - 우선 __init__ : 메서드는 클래스를 초기화한다. (이 초기화 메서드는 TwoLayerNet을 생성할 때 불리는 메서드이다.)\n - 추가 : 신경망 학습은 시간이 오래 걸리니, 시간을 절약하려면 같은 결과를 훨씬 빠르게 얻을 수 있는 오차역전파법으로 각 매개변수의 손실 함수에 대한 기울기를 계산할 수 있다. 이는 다음장에서 학습할 것이다.", "_____no_output_____" ], [ "- 미니배치 학습 구현하기\n - 신경망 학습 구현에는 앞에서 설명한 미니배치 학습을 활용한다. 미니배치 학습이란 훈련 데이터 중 일부를 무작위로 꺼내고, 그 미니배치에 대해서 경사법으로 매개변수를 갱신한다. ", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nfrom dataset.mnist import load_mnist\nfrom two_layer_net import TwoLayerNet\n\n# 데이터 읽기\n(x_train, t_train), (x_test, t_test) = load_mnist(normalize=True, one_hot_label=True)\n\nnetwork = TwoLayerNet(input_size=784, hidden_size=50, output_size=10)\n\n# 하이퍼파라미터\niters_num = 10000 # 반복 횟수를 적절히 설정한다.\ntrain_size = x_train.shape[0]\nbatch_size = 100 # 미니배치 크기\nlearning_rate = 0.1\n\ntrain_loss_list = []\n\nfor i in range(iters_num):\n # 미니배치 획득\n batch_mask = np.random.choice(train_size, batch_size)\n x_batch = x_train[batch_mask]\n t_batch = t_train[batch_mask]\n \n # 기울기 계산\n #grad = network.numerical_gradient(x_batch, t_batch)\n grad = network.gradient(x_batch, t_batch)\n \n # 매개변수 갱신\n for key in ('W1', 'b1', 'W2', 'b2'):\n network.params[key] -= learning_rate * grad[key]\n \n # 학습 경과 기록\n loss = network.loss(x_batch, t_batch)\n train_loss_list.append(loss)", "_____no_output_____" ] ], [ [ "- 여기서는 미니배치 크기를 100으로 했다. 즉, 매번 60000개의 훈련 데이터에서 임의로 100개의 데이터(이미지 데이터와 정답 레이블 데이터)를 추려낸다. 그리고 그 100개의 미니배치를 대상으로 확률적 경사 하강법을 수행해 매개변수를 갱신한다. 경사법에 의한 갱신 횟수(반복 횟수)를 10000번으로 설정하고 갱신할 때마다 훈련 데이터에 대한 손실함수를 계산하고 그 값을 배열에 추가한다.\n- 학습 횟수가 늘어가면서 손실 함수의 값이 줄어들고 이는 학습이 잘 이루어지고 있다는 뜻으로 신경망의 가중치 매개변수가 서서히 데이터에 적응하고 있음을 의미한다. 신경망이 학습하고 있는 것이다. 다시 말해 데이터를 반복해서 학습함으로써 최적 가중치 매개변수로 서서히 다가가고 있는 것이다.", "_____no_output_____" ], [ "- 하지만 정확히는 훈련 데이터의 미니배치에 대한 손실 함수의 값이다. 훈련 데이터의 손실 함수 값이 작아지는 것은 잘 학습하고 있다는 방증이지만 이 결과만으로는 다른 데이터셋에서도 비슷한 실력을 발휘할지는 확실하지 않다. \n- 신경망 학습에서는 훈련 데이터 외의 데이터를 올바르게 인식하는지를 확인해여 한다. 다른 말로 오버피팅을 일으키지 않는지 확인해야 한다. 오비피팅 되었다는 것은 예를 들어 훈련 데이터 포함된 이미지만 제대로 구분하고 그렇지 않은 이미지는 식별할 수 없다는 뜻이다.\n- 범용적인 능력의 평가를 위해, 훈련 데이터에 포함되지 않은 데이터를 사용해 평가해봐야 한다.\n - 이를 위해 다음 구현에서는 학습 도중 정기적으로 훈련 데이터와 시험 데이터를 대상으로 정확도를 기록한다. 여기에서는 1에폭별로 훈련 데이터와 시험 데이터에 대한 정확도를 기록한다. \n - 에폭은 하나의 단위이다. 1에폭은 학습에서 훈련 데이터를 모두 소진했을 때의 횟수에 해당한다. 예컨대 훈련 데이터 10000개를 100개의 미니배치로 학습할 경우, 확률적 경사 하강법을 100회 반복하면 모든 훈련 데이터를 소진한게 된다. 
, [ [ "from dataset.mnist import load_mnist\nfrom two_layer_net import TwoLayerNet\n\n# load the data\n(x_train, t_train), (x_test, t_test) = load_mnist(normalize=True, one_hot_label=True)\n\nnetwork = TwoLayerNet(input_size=784, hidden_size=50, output_size=10)\n\n# hyperparameters\niters_num = 10000 # set an appropriate number of iterations\ntrain_size = x_train.shape[0]\nbatch_size = 100 # mini-batch size\nlearning_rate = 0.1\n\ntrain_loss_list = []\ntrain_acc_list = []\ntest_acc_list = []\n\n# iterations per epoch\niter_per_epoch = max(train_size / batch_size, 1)\n\nfor i in range(iters_num):\n # draw a mini-batch\n batch_mask = np.random.choice(train_size, batch_size)\n x_batch = x_train[batch_mask]\n t_batch = t_train[batch_mask]\n \n # compute the gradient\n #grad = network.numerical_gradient(x_batch, t_batch)\n grad = network.gradient(x_batch, t_batch)\n \n # update the parameters\n for key in ('W1', 'b1', 'W2', 'b2'):\n network.params[key] -= learning_rate * grad[key]\n \n # record training progress\n loss = network.loss(x_batch, t_batch)\n train_loss_list.append(loss)\n \n # compute accuracy once per epoch\n if i % iter_per_epoch == 0:\n train_acc = network.accuracy(x_train, t_train)\n test_acc = network.accuracy(x_test, t_test)\n train_acc_list.append(train_acc)\n test_acc_list.append(test_acc)\n print(\"train acc, test acc | \" + str(train_acc) + \", \" + str(test_acc))", "train acc, test acc | 0.09915, 0.1009\ntrain acc, test acc | 0.7825666666666666, 0.7853\ntrain acc, test acc | 0.8771333333333333, 0.8807\ntrain acc, test acc | 0.8981166666666667, 0.9014\ntrain acc, test acc | 0.9082833333333333, 0.9104\ntrain acc, test acc | 0.9147, 0.9171\ntrain acc, test acc | 0.9193, 0.9215\ntrain acc, test acc | 0.92455, 0.9265\ntrain acc, test acc | 0.9277166666666666, 0.9291\ntrain acc, test acc | 0.9313833333333333, 0.9322\ntrain acc, test acc | 0.93445, 0.9348\ntrain acc, test acc | 0.9368333333333333, 0.9367\ntrain acc, test acc | 0.939, 0.9385\ntrain acc, test acc | 0.9403666666666667, 0.9408\ntrain acc, test acc | 0.9431666666666667, 0.942\ntrain acc, test acc | 0.9448, 0.9435\ntrain acc, test acc | 0.94635, 0.9451\n" ] ], [ [ "- In this example, the accuracy over all the training data and all the test data is computed and recorded once every epoch.\n- The reason for computing the accuracy only once per epoch is that doing it on every pass of the for loop would take too long, and there is no need to record it that often.\n- Let's plot the results obtained with the code above.", "_____no_output_____" ] ], [ [ "# draw the graph\nmarkers = {'train': 'o', 'test': 's'}\nx = np.arange(len(train_acc_list))\nplt.plot(x, train_acc_list, label='train acc')\nplt.plot(x, test_acc_list, label='test acc', linestyle='--')\nplt.xlabel(\"epochs\")\nplt.ylabel(\"accuracy\")\nplt.ylim(0, 1.0)\nplt.legend(loc='lower right')\nplt.show()", "_____no_output_____" ] ], [ [ "- The accuracy on the training data is drawn as a solid line and the accuracy on the test data as a dashed line. As you can see, as the epochs - that is, the training - progress, the accuracy evaluated on both the training data and the test data improves, and there is essentially no gap between the two. In other words, no overfitting occurred in this run.", "_____no_output_____", "- What if overfitting did occur?\n - Training pushes the network to raise its accuracy on the training data, so that accuracy keeps climbing as the epochs go by. But if the network adapts too closely to the training data - that is, overfits - it starts making wrong judgments on data different from the training set: from some point on, the accuracy on the test data gradually starts to fall. That moment is when overfitting begins. Here is the key insight! $\\to$ If we catch that moment and stop training there, we can effectively prevent overfitting. This technique is called early stopping, and together with weight decay and dropout it is one of the standard ways to prevent overfitting.", "_____no_output_____" ] ]
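, [ [ "To make the early-stopping idea concrete, here is a minimal sketch of a patience-based check. It is illustrative only: it assumes a list of per-epoch test accuracies like the `test_acc_list` recorded above, and the `patience` value of 3 is an arbitrary choice.", "_____no_output_____" ] ], [ [ "# minimal patience-based early-stopping check (illustrative sketch)\n# assumes a list of per-epoch test accuracies, e.g. test_acc_list from above\ndef should_stop(acc_history, patience=3):\n    # stop if the last `patience` epochs brought no improvement\n    if len(acc_history) <= patience:\n        return False\n    best_before = max(acc_history[:-patience])\n    recent_best = max(acc_history[-patience:])\n    return recent_best <= best_before\n\nprint(should_stop(test_acc_list))", "_____no_output_____" ] ]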
, [ [ "- Conclusions \n - The dataset used in machine learning is split into training data and test data.\n - The generalization ability of a model trained on the training data is evaluated on the test data.\n - Neural network training uses the loss function as its yardstick and updates the weight parameters in the direction that makes the loss smaller.\n - Updating the weight parameters uses their gradients, repeatedly nudging the weight values in the gradient direction.\n - Differentiation by finite differences with a very small step is called numerical differentiation.\n - Numerical differentiation lets us compute the gradients of the weight parameters.\n - Computation with numerical differentiation takes time, but its implementation is simple. The somewhat more involved backpropagation, implemented in the next chapter, computes the gradients fast.", "_____no_output_____", "---", "_____no_output_____", "- Backpropagation\n - Let's learn backpropagation, which efficiently computes the gradients of the weight parameters - more precisely, the gradient of the loss function with respect to the weight parameters.\n - Understanding backpropagation\n - Backpropagation can also be understood through equations, but in this chapter we will understand it visually, using computational graphs.", "_____no_output_____", "- Computational graphs\n - A computational graph represents a computation as a graph.\n - It is expressed with multiple nodes and edges; the straight lines between nodes are called edges.\n - Let's start by solving a simple problem.", "_____no_output_____", "- Problem 1) Hyunbin buys 2 apples that cost 100 won each at the supermarket. Find the amount he pays, given that a 10% consumption tax is applied.\n - A computational graph expresses the computation with nodes and arrows (edges).\n - The operation is written inside a circle and the result on the arrow above it, so that each node's result flows from left to right.\n - Here we use parentheses in place of the circles.\n - (apple) $\\to$ 100 $\\to$ (x2) $\\to$ 200 $\\to$ (x1.1) $\\to$ 220\n - First the apple's 100 won flows into the x2 node, becomes 200 won, and is passed on to the next node (x1.1). The 200 won then passes through the x1.1 node and becomes 220 won.\n - So according to this computational graph, the final answer is 220 won.\n - Here x2 and x1.1 were each treated as a single operation written inside a circle, but we can also regard only the multiplication x as the operation. Then 2 and 1.1 become the apple-count and consumption-tax variables respectively and are written outside the circles:\n - apple $\\to$ 100 $\\to$ (x) $\\to$ 200 $\\to$ (x) $\\to$ 220 $\\to$\n - Now 2 feeds into the first node and 1.1 into the next.", "_____no_output_____", "- Problem solving with a computational graph, as we have seen, proceeds in the following flow:\n - 1) Build the computational graph.\n - 2) Carry the computation through the graph from left to right.\n - ***This second step, 'carrying the computation from left to right', is called forward propagation.***\n - Its opposite, 'backward propagation', also exists; backward propagation plays an important role later, when computing derivatives.", "_____no_output_____", "- A key feature of computational graphs: they reach the final result by propagating local computations (computations involving only a small scope directly related to each node).\n - What local computation comes down to is that each node can produce its output from only the information directly related to it, no matter what happens in the rest of the graph.\n - Local computation?\n - Suppose you buy several groceries at the supermarket, including the 2 apples, and the other items come to 3000 won in total. The key point is that the computation at each node is local: in the addition 3000 + 200, the two numbers (3000 and 200) are simply added, regardless of how the 3000 was computed. Each node needs to care about nothing beyond its own computation.\n - In this way a computational graph concentrates on local computations. However complicated the overall computation, each step is just the local computation of its node; by passing these simple local results along, the graph carries out the complex computation as a whole.\n - A computational graph can also keep all the intermediate results.\n - And, through backward propagation, it lets us compute derivatives efficiently.\n - Consider Problem 1 above again: suppose we want to know how the final amount changes when the apple price rises.\n - This amounts to finding the derivative of the amount paid with respect to the apple price; writing x for the apple price and L for the amount paid, we want $\\frac{\\partial L}{\\partial x}$.\n - This derivative tells us how much the amount paid increases when the apple price rises a tiny bit.\n - In short, quantities like the derivative of the amount paid with respect to the apple price can be obtained by running backward propagation on the computational graph.\n - How backward propagation actually works will be explained later.", "_____no_output_____", "- The advantage of computational graphs, then, is that by using forward propagation and backward propagation we can compute the derivative of each variable efficiently. ", "_____no_output_____" ] ]
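, [ [ "To tie the apple example to code, here is a minimal sketch of the multiply node with a forward and a backward pass (the classic MulLayer). The backward pass should reproduce the derivative discussed above: d(price)/d(apple price) = 2 x 1.1 = 2.2.", "_____no_output_____" ] ], [ [ "# the apple example of Problem 1 as a computational graph (minimal sketch)\nclass MulLayer:\n    def __init__(self):\n        self.x = None\n        self.y = None\n\n    def forward(self, x, y):\n        # remember the inputs so backward() can swap them\n        self.x, self.y = x, y\n        return x * y\n\n    def backward(self, dout):\n        # local derivative of x*y: the upstream gradient times the swapped input\n        return dout * self.y, dout * self.x\n\napple_price, apple_num, tax = 100, 2, 1.1\nmul_apple = MulLayer()\nmul_tax = MulLayer()\n\n# forward pass: (apple) -> (x2) -> (x1.1)\nprice = mul_tax.forward(mul_apple.forward(apple_price, apple_num), tax)\n\n# backward pass: start from dL/d(price) = 1 and move right to left\ndapple_total, dtax = mul_tax.backward(1)\ndapple_price, dapple_num = mul_apple.backward(dapple_total)\n\nprint(price)        # about 220 (floating point)\nprint(dapple_price) # 2.2: pay 2.2 won more per 1 won increase in the apple price", "_____no_output_____" ] ] ]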
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
ec97106c405fc0156948ee609ead4e47cf2c168d
416,045
ipynb
Jupyter Notebook
chessprimes_analysis.ipynb
kennethgoodman/lichess_own_games_analysis
126f16d193586d79778b028a5b9015a2f1070499
[ "MIT" ]
null
null
null
chessprimes_analysis.ipynb
kennethgoodman/lichess_own_games_analysis
126f16d193586d79778b028a5b9015a2f1070499
[ "MIT" ]
null
null
null
chessprimes_analysis.ipynb
kennethgoodman/lichess_own_games_analysis
126f16d193586d79778b028a5b9015a2f1070499
[ "MIT" ]
2
2021-01-11T18:40:54.000Z
2021-01-11T19:33:32.000Z
449.292657
48,020
0.885695
[ [ [ "import io\nfrom collections import defaultdict\nimport random\n\nimport chess\nimport chess.pgn\nimport chess.engine\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pandas as pd", "_____no_output_____" ], [ "from analysis_utils import get_first_move_with_bad_move, get_all_losses_for_my_moves, get_move\nfrom local_data_manager import get_all_games", "_____no_output_____" ], [ "games = get_all_games('chessprimes', None, download=False, parse=False, analysis_time=0.25)", "_____no_output_____" ], [ "def get_total_number_moves(game):\n i = 0\n while len(game.variations):\n i += 1\n game = game.variations[0]\n return i", "_____no_output_____" ], [ "num_moves = list(map(get_total_number_moves, games))", "_____no_output_____" ], [ "print(f\"on average total half moves is {sum(num_moves)/len(num_moves)}\")", "on average total half moves is 71.10741510741511\n" ], [ "games[31].headers", "_____no_output_____" ], [ "game = games[31]\nmove_num = 11\nmove = get_move(game, move_num)\nprint(move.eval())\nmove.board()", "-301\n" ], [ "moves = []\ndiffs = []\ncheckmate_moves = []\nno_data_moves = []\nfor i, game in enumerate(games):\n move_num, diff = get_first_move_with_bad_move(game, 'chessprimes', min_rating=-175, max_rating=175)\n if diff is None:\n no_data_moves.append(i)\n continue\n if diff == float('inf'):\n checkmate_moves.append(i)\n continue\n moves.append(move_num // 2 if move_num % 2 == 0 else move_num // 2 + 1)\n diffs.append(diff)", "_____no_output_____" ], [ "print(f\"On average I lose {sum(diffs)/len(diffs)} to go into a bad position on my {sum(moves)/len(moves)}'th move'\")", "On average I lose 241.9083728278041 to go into a bad position on my 13.902053712480253'th move'\n" ], [ "plt.hist(moves, density=True, bins=40) # density=False would make counts\nplt.ylabel('Percent Of Time')\nplt.xlabel('Move Num')\nplt.title(\"What move do I get into a losing position?\")\nplt.savefig('percent_move_losing.png')", "_____no_output_____" ], [ "plt.hist(list(filter(lambda m: m < 20, moves)), density=True, bins=10) # density=False would make counts\nplt.ylabel('Percent Of Time')\nplt.xlabel('Move Num')\nplt.title(\"Before 20th move, what move do I get into a losing position\");", "_____no_output_____" ], [ "d_set = defaultdict(list)\nfor move_num, diff in zip(moves, diffs):\n d_set[move_num].append(diff)\ndatas = []\nfor move_num, idv_diffs in d_set.items():\n datas.append(\n {\n 'move_num': move_num,\n 'avg_loss': sum(idv_diffs)/len(idv_diffs),\n 'total_moves': len(idv_diffs)\n }\n )", "_____no_output_____" ], [ "data = pd.DataFrame(datas).sort_values(by='move_num')", "_____no_output_____" ], [ "sns.set_theme(style=\"whitegrid\")\n\ncmap = sns.cubehelix_palette(rot=-.2, as_cmap=True)\ng = sns.relplot(\n data=data,\n x=\"move_num\", y=\"total_moves\",\n size=\"avg_loss\",\n palette=cmap, sizes=(25, 250),\n)\n# g.set(xscale=\"log\", yscale=\"log\")\ng.ax.xaxis.grid(True, \"minor\", linewidth=.25)\ng.ax.yaxis.grid(True, \"minor\", linewidth=.25)\ng.despine(left=True, bottom=True)\nplt.title('Total Moves versus Move Num, size is avg loss')\nplt.savefig('images/replotavg_total_moves_versus_move_num.png');", "_____no_output_____" ], [ "sns.set_theme(style=\"whitegrid\")\n\ncmap = sns.cubehelix_palette(rot=-.2, as_cmap=True)\ng = sns.relplot(\n data=data,\n x=\"move_num\", y=\"avg_loss\",\n size=\"total_moves\",\n palette=cmap, sizes=(25, 250),\n)\n# g.set(xscale=\"log\", yscale=\"log\")\ng.ax.xaxis.grid(True, \"minor\", linewidth=.25)\ng.ax.yaxis.grid(True, \"minor\", 
linewidth=.25)\ng.despine(left=True, bottom=True)\nplt.title('Avg loss versus Move Num, size is total moves')\nplt.savefig('images/replotavg_loss_versus_move_num.png');", "_____no_output_____" ], [ "game.headers", "_____no_output_____" ], [ "for i, game in enumerate(games):\n for move_num, loss in get_all_losses_for_my_moves(game, 'chessprimes'):\n move_num = move_num // 2 if move_num % 2 == 0 else move_num // 2 + 1\n if loss < -8000:\n print(i, move_num, loss)\n break", "263 62 -8723\n281 42 -8149\n1386 39 -8333\n" ], [ "games[263].headers", "_____no_output_____" ], [ "get_move(games[3], 63).board()", "_____no_output_____" ], [ "data_buckets = defaultdict(list)\ndata_buckets_inf = defaultdict(int)\nfor game in games:\n for move_num, loss in get_all_losses_for_my_moves(game, 'chessprimes'):\n move_num = move_num // 2 if move_num % 2 == 0 else move_num // 2 + 1\n if loss == float('inf') or loss < -2500:\n data_buckets_inf[move_num] += 1\n else:\n data_buckets[move_num].append(loss if loss >= 20 else 0) # appending zero for near-perfect moves, especially because analysis was only 0.25 seconds\n \ndatas = []\nfor move_num, losses in data_buckets.items():\n datas.append({\n 'move_num': move_num,\n 'avg_loss': sum(losses)/len(losses),\n 'total_moves': len(losses)\n })\ndatas = pd.DataFrame(datas)\ndatas_inf = []\nfor move_num, num_losses in data_buckets_inf.items():\n datas_inf.append({\n 'move_num': move_num,\n 'avg_loss': float('inf'),\n 'total_moves': num_losses\n })\ndatas_inf = pd.DataFrame(datas_inf)", "_____no_output_____" ], [ "sns.set_theme(style=\"whitegrid\")\n\ncmap = sns.cubehelix_palette(rot=-.2, as_cmap=True)\ng = sns.relplot(\n data=datas,\n x=\"move_num\", y=\"total_moves\",\n size=\"avg_loss\",\n palette=cmap, sizes=(20, 250),\n)\n# g.set(xscale=\"log\", yscale=\"log\")\ng.ax.xaxis.grid(True, \"minor\", linewidth=.25)\ng.ax.yaxis.grid(True, \"minor\", linewidth=.25)\ng.despine(left=True, bottom=True)\nplt.title('Total Moves versus Move Num, size is avg loss, Across all moves')\nplt.savefig('images/replotavg_total_moves_versus_move_num_all_moves_loss.png');", "_____no_output_____" ], [ "sns.set_theme(style=\"whitegrid\")\n\ncmap = sns.cubehelix_palette(rot=-.2, as_cmap=True)\ng = sns.relplot(\n data=datas,\n x=\"move_num\", y=\"avg_loss\",\n size=\"total_moves\",\n palette=cmap, sizes=(20, 250),\n)\n# g.set(xscale=\"log\", yscale=\"log\")\ng.ax.xaxis.grid(True, \"minor\", linewidth=.25)\ng.ax.yaxis.grid(True, \"minor\", linewidth=.25)\ng.despine(left=True, bottom=True)\nplt.title('Avg loss versus Move Num, size is total moves, Across all moves')\nplt.savefig('images/replotavg_avg_loss_versus_move_num_all_moves.png');", "_____no_output_____" ], [ "sns.set_theme(style=\"whitegrid\")\n\ncmap = sns.cubehelix_palette(rot=-.2, as_cmap=True)\ng = sns.relplot(\n data=datas_inf,\n x=\"move_num\", y=\"total_moves\",\n# size=\"total_moves\",\n palette=cmap, sizes=(20, 250),\n)\n# g.set(xscale=\"log\", yscale=\"log\")\ng.ax.xaxis.grid(True, \"minor\", linewidth=.25)\ng.ax.yaxis.grid(True, \"minor\", linewidth=.25)\ng.despine(left=True, bottom=True)\nplt.title('Total Moves versus move num, blundering mate (either not taking advantage or giving opponent)')\nplt.savefig('images/blundering_mate.png'); ", "_____no_output_____" ], [ "games[0].headers._others", "_____no_output_____" ], [ "df['loss'].describe()", "_____no_output_____" ], [ "def group_it_by(df, colname, aggs, f):\n tmp = df.copy()\n tmp[colname] = df.apply(f, axis=1)\n tmp2 = tmp[['move_num', 'loss', colname]].groupby(colname, 
as_index=False).apply(lambda x: x.reset_index(drop=True)).reset_index()[['move_num', 'loss', colname]]\n tmp3 = tmp2.groupby('move_num').agg({\n 'loss': ['count', 'mean'],\n colname: aggs\n }).reset_index()\n tmp3.columns = ['_'.join(filter(None, col)) for col in tmp3.columns]\n return tmp3", "_____no_output_____" ], [ "colname = 'playing_against_higher_rated'\nf = lambda x: int(int(x['BlackElo']) > int(x['WhiteElo'])) if x['playing_white'] else int(int(x['WhiteElo']) > int(x['BlackElo']))\ntmp = df[df['loss'] != float('inf')].copy()\ntmp[colname] = df.apply(f, axis=1)", "_____no_output_____" ], [ "def get_data1(df, colname, f, apply_loss_n=0, remove_loss_zero=False):\n tmp = df[df['loss'] != float('inf')].copy()\n tmp['loss'] = tmp.apply(lambda x: x['loss'] if abs(x['loss']) > apply_loss_n else 0, axis=1)\n if remove_loss_zero:\n tmp = tmp[tmp['loss'] != 0]\n tmp[colname] = df.apply(f, axis=1)\n tmp2 = tmp[['move_num', 'loss', colname]].groupby([colname, 'move_num'], as_index=False).agg({\n 'loss': ['count', 'mean'],\n })\n tmp2.columns = ['_'.join(filter(None, col)) for col in tmp2.columns]\n return tmp2", "_____no_output_____" ], [ "def get_data2(df, colname, f, apply_loss_n=0, remove_loss_zero=False):\n tmp = df[df['loss'] != float('inf')].copy()\n tmp['loss'] = tmp.apply(lambda x: x['loss'] if abs(x['loss']) > apply_loss_n else 0, axis=1)\n if remove_loss_zero:\n tmp = tmp[tmp['loss'] != 0]\n tmp[colname] = df.apply(f, axis=1)\n tmp2 = tmp[['move_num', 'loss', colname]].groupby([colname, 'move_num'], as_index=False).agg({\n 'loss': ['count', 'mean'],\n })\n tmp2.columns = ['_'.join(filter(None, col)) for col in tmp2.columns]\n return tmp2", "_____no_output_____" ], [ "pd.set_option('display.float_format', lambda x: '%.5f' % x)", "_____no_output_____" ], [ "tmp = df[df['loss'] != float('inf')]\ntmp = tmp[tmp['loss'] < -100]\ntmp['loss'].describe()", "_____no_output_____" ], [ "col = 'playing_against_higher_rated'\nf = lambda x: int(int(x['BlackElo']) > int(x['WhiteElo'])) if x['playing_white'] else int(int(x['WhiteElo']) > int(x['BlackElo']))\ndata = get_data1(df, col, f, apply_loss_n=20)\n\nsns.set_theme(style=\"whitegrid\")\n\n\ncmap = sns.cubehelix_palette(rot=-.2, as_cmap=True)\nhue = 'playing_against_higher_rated'\ntotal_num_hue = len(set(data[hue].values))\ng = sns.relplot(\n data=data,\n x=\"move_num\", y=\"loss_count\",\n size=\"loss_mean\", hue=\"playing_against_higher_rated\",\n palette=cmap.colors[random.sample(range(256), total_num_hue)], sizes=(10, 200),\n)\n# g.set(xscale=\"log\", yscale=\"log\")\ng.ax.xaxis.grid(True, \"minor\", linewidth=.25)\ng.ax.yaxis.grid(True, \"minor\", linewidth=.25)\ng.despine(left=True, bottom=True)\nplt.title('Total Moves versus move num, Average loss is size')\nplt.savefig('images/total_move_versus_move_num_with_hue_for_higher_lower_rated.png'); ", "/Users/kennethgoodman/venv/lib/python3.6/site-packages/seaborn/_core.py:163: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison\n if palette in QUAL_PALETTES:\n" ], [ "col = 'playing_white'\nf = lambda x: int(x['playing_white'])\ndata = get_data1(df, col, f, apply_loss_n=20)\n\nsns.set_theme(style=\"whitegrid\")\n\n\ncmap = sns.cubehelix_palette(rot=-.2, as_cmap=True)\nhue = 'playing_white'\ntotal_num_hue = len(set(data[hue].values))\ng = sns.relplot(\n data=data,\n x=\"move_num\", y=\"loss_count\",\n size=\"loss_mean\", hue=\"playing_white\",\n palette=cmap.colors[random.sample(range(256), total_num_hue)], sizes=(10, 
200),\n)\n# g.set(xscale=\"log\", yscale=\"log\")\ng.ax.xaxis.grid(True, \"minor\", linewidth=.25)\ng.ax.yaxis.grid(True, \"minor\", linewidth=.25)\ng.despine(left=True, bottom=True)\nplt.title('Total Moves versus move num, Average loss is size')\nplt.savefig('images/total_move_versus_move_num_with_hue_playing_white.png'); ", "/Users/kennethgoodman/venv/lib/python3.6/site-packages/seaborn/_core.py:163: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison\n if palette in QUAL_PALETTES:\n" ], [ "col = 'game_mode'\nf = lambda x: x['Speed']\ndata = get_data1(df, col, f, apply_loss_n=20)\n\nsns.set_theme(style=\"whitegrid\")\ng = data.groupby(['game_mode']).agg({'loss_count': 'max'}).reset_index()\nd = dict(zip(g['game_mode'],g['loss_count'])) \ndata['loss_count'] = data.apply(lambda x: x['loss_count'] / d[x['game_mode']], axis=1)\n\ncmap = sns.cubehelix_palette(rot=-.2, as_cmap=True)\nhue = 'game_mode'\ntotal_num_hue = len(set(data[hue].values))\ng = sns.relplot(\n data=data,\n x=\"move_num\", y=\"loss_count\",\n size=\"loss_mean\", hue=\"game_mode\",\n palette=['black', 'red', 'green', 'blue'], sizes=(10, 200),\n)\n# g.set(xscale=\"log\", yscale=\"log\")\ng.ax.xaxis.grid(True, \"minor\", linewidth=.25)\ng.ax.yaxis.grid(True, \"minor\", linewidth=.25)\ng.despine(left=True, bottom=True)\nplt.title('Total Moves versus move num, Average loss is size')\n# plt.savefig('images/total_move_versus_move_num_with_hue_playing_white.png'); ", "_____no_output_____" ], [ "# hue = win/loss", "_____no_output_____" ], [ "# hue = inaccuracy/mistake/blunder", "_____no_output_____" ], [ "# hue = clock", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ec9716d7ca64ae4e6332e65d3f31673f3a484999
29,182
ipynb
Jupyter Notebook
ACPC_distance_calculations.ipynb
ataha24/AFIDs-Data
fb4469e45dd0d4a816d15f97ba8b0cd135f346e3
[ "MIT" ]
null
null
null
ACPC_distance_calculations.ipynb
ataha24/AFIDs-Data
fb4469e45dd0d4a816d15f97ba8b0cd135f346e3
[ "MIT" ]
null
null
null
ACPC_distance_calculations.ipynb
ataha24/AFIDs-Data
fb4469e45dd0d4a816d15f97ba8b0cd135f346e3
[ "MIT" ]
1
2022-01-26T21:49:50.000Z
2022-01-26T21:49:50.000Z
32.388457
120
0.447399
[ [ [ "# ACPC distance calculations\nSandy Wong 2022-01-26", "_____no_output_____" ], [ "### Imports", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport csv\nimport math\nimport os", "_____no_output_____" ] ], [ [ "### Functions", "_____no_output_____" ], [ "#### Get data from fcsv and put in dictionary", "_____no_output_____" ] ], [ [ "def getDataFromFcsv(inFile): #inFile = fcsv with annotation data \n\n version = 0\n coordSys = \"\"\n cols = []\n data = []\n\n with open(inFile, newline='') as csvfile:\n csvreader = csv.reader(csvfile, delimiter=' ')\n for row in csvreader:\n if row[0] == \"#\": #header in fcsv\n if row[1] == \"Markups\":\n version = float(row[-1])\n if row[1] == \"CoordinateSystem\":\n coordSys = str(row[-1])\n if row[1] == \"columns\":\n cols = row[-1].split(\",\")\n\n else: #data in fcsv\n rawdata = row[0].split(\",\")\n data.append(rawdata)\n \n df = pd.DataFrame(data, columns = cols )\n \n data_dict = {\"version\":version,\n \"coordinateSystem\": coordSys,\n \"data\":df}\n \n return data_dict", "_____no_output_____" ] ], [ [ "#### Get distance between 2 points ", "_____no_output_____" ] ], [ [ "def getDistance(label1, label2, col, df):\n \n row1 = df.loc[df[col]==label1]\n row2 = df.loc[df[col]==label2]\n \n dist = math.sqrt((float(row1[\"x\"])-float(row2[\"x\"]))**2 + \n (float(row1[\"y\"])-float(row2[\"y\"]))**2 +\n (float(row1[\"z\"])-float(row2[\"z\"]))**2)\n return(dist)", "_____no_output_____" ] ], [ [ "### Loop through filenames and get ACPC distance for all", "_____no_output_____" ], [ "#### Filenames", "_____no_output_____" ] ], [ [ "path_AFIDs_Clinical = \"C:\\\\Users\\\\Sandy\\\\Documents\\\\AFIDs-Data\\\\AFIDs-Clinical\"\npath_OASIS_DATASET = \"C:\\\\Users\\\\Sandy\\\\Documents\\\\AFIDs-Data\\\\OASIS-DATASET\"\n#HCP_DATASET\n#SNSX-DATASET\n\nfiles_AFIDs_Clinical = os.listdir(path_AFIDs_Clinical)\nfiles_OASIS_DATASET = os.listdir(path_OASIS_DATASET)", "_____no_output_____" ] ], [ [ "#### Looping through all the files and calculating ACPC dist", "_____no_output_____" ], [ "##### AFIDs-Clinical", "_____no_output_____" ] ], [ [ "ACPC_dists_AFIDs_Clinical = []\nfor folder in files_AFIDs_Clinical:\n file = \"%s/%s/%s_FID32_T1w_mean.fcsv\" % (\"AFIDs-Clinical\",folder,folder)\n data = getDataFromFcsv(file)\n ACPC_dist = getDistance(\"AC\",\"PC\",\"label\",data[\"data\"])\n \n fileshort = file.split(\"/\")[-1]\n ACPC_dists_AFIDs_Clinical.append([\"AFIDs-Clinical\",fileshort,ACPC_dist])\n \nAFIDs_Clinical_ACPC = pd.DataFrame(ACPC_dists_AFIDs_Clinical, columns = [\"dataset\",\"filename\",\"ACPC_dist\"] )", "_____no_output_____" ], [ "AFIDs_Clinical_ACPC", "_____no_output_____" ] ], [ [ "###### Group calculations", "_____no_output_____" ] ], [ [ "ACPC_dists = np.array(AFIDs_Clinical_ACPC[\"ACPC_dist\"].tolist())", "_____no_output_____" ], [ "mean = np.mean(ACPC_dists)\nstddev = np.std(ACPC_dists)\nprint(\"AFIDs-Clincical:\\nmean: %0.10f\\nstandard deviation: %0.10f\" % (mean, stddev))", "AFIDs-Clincical:\nmean: 26.8001437626\nstandard deviation: 1.6761420771\n" ] ], [ [ "##### OASIS-DATASET", "_____no_output_____" ] ], [ [ "ACPC_dists_OASIS_DATASET = []\nfor file in files_OASIS_DATASET:\n filePath = \"%s/%s\" % (\"OASIS-DATASET\",file)\n data = getDataFromFcsv(filePath)\n ACPC_dist = getDistance(\"AC\",\"PC\",\"desc\",data[\"data\"])\n ACPC_dists_OASIS_DATASET.append([\"OASIS-DATASET\",file,ACPC_dist])\n \nOASIS_DATASET_ACPC = pd.DataFrame(ACPC_dists_OASIS_DATASET, columns = [\"dataset\",\"filename\",\"ACPC_dist\"] )", 
"_____no_output_____" ], [ "OASIS_DATASET_ACPC", "_____no_output_____" ] ], [ [ "###### Group calculations", "_____no_output_____" ] ], [ [ "ACPC_dists = np.array(OASIS_DATASET_ACPC[\"ACPC_dist\"].tolist())", "_____no_output_____" ], [ "mean = np.mean(ACPC_dists)\nstddev = np.std(ACPC_dists)\nprint(\"OASIS-DATASET:\\nmean: %0.10f\\nstandard deviation: %0.10f\" % (mean, stddev))", "OASIS-DATASET:\nmean: 26.3963861472\nstandard deviation: 1.3845644721\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
ec9728d8217ae462c902f4fc7d41dda6817d2a09
26,752
ipynb
Jupyter Notebook
Crawler-ppo2.ipynb
1jsingh/rl_crawler
d5be751863e39cb78350958134d5515e08547dad
[ "MIT" ]
1
2020-05-07T08:38:30.000Z
2020-05-07T08:38:30.000Z
Crawler-ppo2.ipynb
1jsingh/rl_crawler
d5be751863e39cb78350958134d5515e08547dad
[ "MIT" ]
null
null
null
Crawler-ppo2.ipynb
1jsingh/rl_crawler
d5be751863e39cb78350958134d5515e08547dad
[ "MIT" ]
null
null
null
38.492086
227
0.533082
[ [ [ "### 1. Start the Environment", "_____no_output_____" ] ], [ [ "from mlagents.envs import UnityEnvironment\nimport numpy as np\n\n# Some more magic so that the notebook will reload external python modules;\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2", "_____no_output_____" ] ], [ [ "**_Before running the code cell below_**, change the `file_name` parameter to match the location of the Reacher Unity environment.\n\nFor instance, if you are using a Mac, then you downloaded `Reacher.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:\n```\nenv = UnityEnvironment(file_name=\"Reacher.app\")\n```", "_____no_output_____" ] ], [ [ "env = UnityEnvironment(file_name='unity_envs/Crawler_StaticTarget')", "INFO:mlagents.envs:\n'Academy' started successfully!\nUnity Academy name: Academy\n Number of Brains: 1\n Number of Training Brains : 1\n Reset Parameters :\n\t\t\nUnity brain name: CrawlerStaticLearning\n Number of Visual Observations (per agent): 0\n Vector Observation space size (per agent): 129\n Number of stacked Vector Observation: 1\n Vector Action space type: continuous\n Vector Action space size (per agent): [20]\n Vector Action descriptions: , , , , , , , , , , , , , , , , , , , \n" ] ], [ [ "Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.", "_____no_output_____" ] ], [ [ "# get the default brain\nbrain_name = env.brain_names[0]\nbrain = env.brains[brain_name]", "_____no_output_____" ] ], [ [ "### 2. Examine the State and Action Spaces\n\n* Set-up: A creature with 4 arms and 4 forearms.\n* Goal: The agents must move its body toward the goal direction without falling.\n* CrawlerStaticTarget - Goal direction is always forward.\n* CrawlerDynamicTarget- Goal direction is randomized.\n* Agents: The environment contains 3 agent linked to a single Brain.\n* Agent Reward Function (independent):\n* +0.03 times body velocity in the goal direction.\n* +0.01 times body direction alignment with goal direction.\n* Brains: One Brain with the following observation/action space.\n* Vector Observation space: 117 variables corresponding to position, rotation, velocity, and angular velocities of each limb plus the acceleration and angular acceleration of the body.\n* Vector Action space: (Continuous) Size of 20, corresponding to target rotations for joints.\n* Visual Observations: None.\n* Reset Parameters: None\n* Benchmark Mean Reward for CrawlerStaticTarget: 2000\n* Benchmark Mean Reward for CrawlerDynamicTarget: 400\n\nLets print some information about the environment.", "_____no_output_____" ] ], [ [ "# reset the environment\nenv_info = env.reset(train_mode=True)[brain_name]\n\n# number of agents\nnum_agents = len(env_info.agents)\nprint('Number of agents:', num_agents)\n\n# size of each action\naction_size = brain.vector_action_space_size[0]\nprint('Size of each action:', action_size)\n\n# examine the state space \nstates = env_info.vector_observations\nstate_size = states.shape[1]\nprint('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))\nprint('The state for the first agent looks like:', states[0])", "Number of agents: 12\nSize of each action: 20\nThere are 12 agents. 
Each observes a state with length: 129\nThe state for the first agent looks like: [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 2.25000000e+00\n 1.00000000e+00 0.00000000e+00 1.78813934e-07 0.00000000e+00\n 1.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 6.06093168e-01 -1.42857209e-01 -6.06078804e-01 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 1.33339906e+00 -1.42857209e-01\n -1.33341408e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n -6.06093347e-01 -1.42857209e-01 -6.06078625e-01 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 -1.33339953e+00 -1.42857209e-01\n -1.33341372e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n -6.06093168e-01 -1.42857209e-01 6.06078804e-01 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 -1.33339906e+00 -1.42857209e-01\n 1.33341408e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 6.06093347e-01 -1.42857209e-01 6.06078625e-01 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00 0.00000000e+00 1.33339953e+00 -1.42857209e-01\n 1.33341372e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00\n 0.00000000e+00]\n" ] ], [ [ "### 3. Take Random Actions in the Environment", "_____no_output_____" ] ], [ [ "env_info = env.reset(train_mode=False)[brain_name] # reset the environment \nstates = env_info.vector_observations # get the current state (for each agent)\nscores = np.zeros(num_agents) # initialize the score (for each agent)\nstep=0\nwhile True:\n actions = np.random.randn(num_agents, action_size) # select an action (for each agent)\n actions = np.clip(actions, -1, 1) # all actions between -1 and 1\n env_info = env.step(actions)[brain_name] # send all actions to the environment\n next_states = env_info.vector_observations # get next state (for each agent)\n rewards = env_info.rewards # get reward (for each agent)\n dones = env_info.local_done # see if episode finished\n scores += env_info.rewards # update the score (for each agent)\n states = next_states # roll over states to next time step\n step+=1\n if np.any(dones): # exit loop if episode finished\n break\nprint('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))", "Total score (averaged over agents) this episode: 0.20592984704611203\n" ] ], [ [ "### 4. Training the agent!\n\nNow it's time to train an agent to solve the environment! 
When training, we have to set `train_mode=True`, so that the line for resetting the environment looks like the following:\n```python\nenv_info = env.reset(train_mode=True)[brain_name]\n```", "_____no_output_____" ] ], [ [ "import random\nimport datetime\nimport torch\nimport numpy as np\nfrom collections import deque\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n#pytorch\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom torch.distributions import Normal\n\n# imports for rendering outputs in Jupyter.\nfrom JSAnimation.IPython_display import display_animation\nfrom matplotlib import animation\nfrom IPython.display import display\n\n# Some more magic so that the notebook will reload external python modules;\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2", "The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n" ], [ "# defining the device\ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\nprint (\"using\",device)", "using cpu\n" ] ], [ [ "### 5. Define the policy network (actor-critic style)", "_____no_output_____" ] ], [ [ "state_size = brain.vector_observation_space_size\naction_size = brain.vector_action_space_size\naction_low = -1\naction_high = 1\n\n# define actor critic network\nclass ActorCritic(nn.Module):\n \n def __init__(self,state_size,action_size,action_high,action_low,hidden_size=32):\n super(ActorCritic, self).__init__()\n \n # action range\n self.action_high = torch.tensor(action_high).to(device)\n self.action_low = torch.tensor(action_low).to(device)\n \n self.std = nn.Parameter(torch.zeros(action_size))\n \n # common network\n self.fc1 = nn.Linear(state_size,512)\n \n # actor network\n self.fc2_actor = nn.Linear(512,256)\n self.fc3_action = nn.Linear(256,action_size)\n #self.fc3_std = nn.Linear(64,action_size)\n \n # critic network\n self.fc2_critic = nn.Linear(512,64)\n self.fc3_critic = nn.Linear(64,1)\n \n def forward(self,state):\n # common network\n x = F.relu(self.fc1(state))\n \n # actor network\n x_actor = F.relu(self.fc2_actor(x))\n action_mean = F.sigmoid(self.fc3_action(x_actor))\n ## rescale action mean\n action_mean_ = (self.action_high-self.action_low)*action_mean + self.action_low\n #action_std = F.sigmoid(self.fc3_std(x_actor))\n \n # critic network\n x_critic = F.relu(self.fc2_critic(x))\n v = self.fc3_critic(x_critic)\n return action_mean_,v\n \n def act(self,state,action=None):\n # converting state from numpy array to pytorch tensor on the \"device\"\n state = torch.from_numpy(state).float().to(device)\n action_mean,v = self.forward(state)\n prob_dist = Normal(action_mean,F.softplus(self.std))\n if action is None:\n action = prob_dist.sample()\n log_prob = prob_dist.log_prob(action).sum(-1).unsqueeze(-1)\n entropy = prob_dist.entropy().sum(-1).unsqueeze(-1)\n return {'a': action,\n 'log_pi_a': log_prob,\n 'ent': entropy,\n 'mean': action_mean,\n 'v': v}", "_____no_output_____" ] ]
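, [ [ "As a quick sanity check, we can instantiate the network and run `act` on a batch of random states to confirm the output shapes before wiring it into the training loop. This is a minimal sketch that reuses the sizes defined above; the batch size of 12 simply matches the number of agents.", "_____no_output_____" ] ], [ [ "# smoke-test the actor-critic on a batch of random states\n# (assumes the cell above has run, so ActorCritic, state_size, etc. exist)\ntest_net = ActorCritic(state_size=state_size, action_size=action_size[0],\n                       action_high=action_high, action_low=action_low).to(device)\ntest_states = np.random.randn(12, state_size).astype(np.float32)\nout = test_net.act(test_states)\nprint(out['a'].shape, out['log_pi_a'].shape, out['v'].shape)\n# expected: torch.Size([12, 20]) torch.Size([12, 1]) torch.Size([12, 1])", "_____no_output_____" ] ]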
Storage class", "_____no_output_____" ] ], [ [ "class Storage:\n def __init__(self, size, keys=None):\n if keys is None:\n keys = []\n keys = keys + ['s', 'a', 'r', 'm',\n 'v', 'q', 'pi', 'log_pi', 'ent',\n 'adv', 'ret', 'q_a', 'log_pi_a',\n 'mean']\n self.keys = keys\n self.size = size\n self.reset()\n\n def add(self, data):\n for k, v in data.items():\n assert k in self.keys\n getattr(self, k).append(v)\n\n def placeholder(self):\n for k in self.keys:\n v = getattr(self, k)\n if len(v) == 0:\n setattr(self, k, [None] * self.size)\n\n def reset(self):\n for key in self.keys:\n setattr(self, key, [])\n\n def cat(self, keys):\n data = [getattr(self, k)[:self.size] for k in keys]\n return map(lambda x: torch.cat(x, dim=0), data)", "_____no_output_____" ] ], [ [ "### 4. PPO agent", "_____no_output_____" ] ], [ [ "from collections import deque\nfrom itertools import accumulate\nimport torch.tensor as tensor\n\ndef random_sample(indices, batch_size):\n indices = np.asarray(np.random.permutation(indices))\n batches = indices[:len(indices) // batch_size * batch_size].reshape(-1, batch_size)\n for batch in batches:\n yield batch\n r = len(indices) % batch_size\n if r:\n yield indices[-r:]\n \nclass Agent:\n \n def __init__(self,env,learning_rate=1e-3):\n self.env = env\n nS = brain.vector_observation_space_size\n nA = brain.vector_action_space_size[0]\n self.policy = ActorCritic(state_size=nS,hidden_size=128,action_size=nA,\n action_low=action_low,action_high=action_high).to(device)\n self.optimizer = optim.RMSprop(self.policy.parameters(), lr=learning_rate)\n \n # reset the environment\n env_info = self.env.reset(train_mode=True)[brain_name]\n self.states = env_info.vector_observations\n \n self.episode_rewards_window = deque(maxlen=100)\n self.episode_rewards = []\n num_trajectories = 12\n self.online_rewards = np.zeros(num_trajectories)\n \n \n def train(self,max_opt_steps=1000,num_trajectories=12,rollout_length=2048,mini_batch_size=64,gamma=.99,\n target_score=-250,use_gae=False,gae_tau=0.95,PRINT_EVERY=100):\n \n for opt_step in range(max_opt_steps):\n \n storage = Storage(rollout_length)\n states = self.states\n for _ in range(rollout_length):\n prediction = self.policy.act(states)\n \n # send all actions to tne environment\n env_info = self.env.step((prediction['a']).cpu().numpy())[brain_name]\n \n next_states = np.array(env_info.vector_observations) # get next state (for each agent)\n rewards = np.array(env_info.rewards) # get reward (for each agent)\n terminals = np.array(env_info.local_done) # see if episode finished\n \n self.online_rewards += rewards\n for i, terminal in enumerate(terminals):\n if terminals[i]:\n self.episode_rewards.append(self.online_rewards[i])\n self.episode_rewards_window.append(self.online_rewards[i])\n self.online_rewards[i] = 0\n \n storage.add(prediction)\n storage.add({'r': tensor(rewards).unsqueeze(-1).float().to(device),\n 'm': tensor(1 - terminals).unsqueeze(-1).float().to(device),\n 's': tensor(states).to(device)})\n states = next_states\n\n self.states = states\n prediction = self.policy.act(states)\n storage.add(prediction)\n storage.placeholder()\n\n advantages = tensor(np.zeros((num_trajectories, 1))).float().to(device)\n returns = prediction['v'].detach()\n for i in reversed(range(rollout_length)):\n returns = storage.r[i] + gamma * storage.m[i] * returns\n if not use_gae:\n advantages = returns - storage.v[i].detach()\n else:\n td_error = storage.r[i] + gamma * storage.m[i] * storage.v[i + 1] - storage.v[i]\n advantages = advantages * gae_tau * 
gamma * storage.m[i] + td_error\n storage.adv[i] = advantages.detach()\n storage.ret[i] = returns.detach()\n\n states, actions, log_probs_old, returns, advantages = storage.cat(['s', 'a', 'log_pi_a', 'ret', 'adv'])\n actions = actions.detach()\n log_probs_old = log_probs_old.detach()\n advantages = (advantages - advantages.mean()) / advantages.std()\n \n ppo_ratio_clip = 0.2\n gradient_clip = 0.5\n entropy_weight = 0.0\n \n for _ in range(10):\n sampler = random_sample(np.arange(states.size(0)), mini_batch_size)\n for batch_indices in sampler:\n batch_indices = tensor(batch_indices).long()\n sampled_states = states[batch_indices]\n sampled_actions = actions[batch_indices]\n sampled_log_probs_old = log_probs_old[batch_indices]\n sampled_returns = returns[batch_indices]\n sampled_advantages = advantages[batch_indices]\n\n prediction = self.policy.act(sampled_states.cpu().numpy(), sampled_actions)\n ratio = (prediction['log_pi_a'] - sampled_log_probs_old).exp()\n obj = ratio * sampled_advantages\n obj_clipped = ratio.clamp(1.0 - ppo_ratio_clip,\n 1.0 + ppo_ratio_clip) * sampled_advantages\n policy_loss = -torch.min(obj, obj_clipped).mean() - entropy_weight * prediction['ent'].mean()\n\n value_loss = 0.5 * (sampled_returns - prediction['v']).pow(2).mean()\n\n self.optimizer.zero_grad()\n (policy_loss + value_loss).backward()\n nn.utils.clip_grad_norm_(self.policy.parameters(), gradient_clip)\n self.optimizer.step()\n \n # printing progress\n if opt_step % PRINT_EVERY == 0:\n print (\"Opt step: {}\\t Avg reward: {:.2f}\\t std: {}\".format(opt_step,np.mean(self.episode_rewards_window),\n self.policy.std))\n # save the policy\n torch.save(self.policy, 'ppo-crawler.policy')\n \n if np.mean(self.episode_rewards_window)>= target_score:\n print (\"Environment solved in {} optimization steps! ... Avg reward : {:.2f}\".format(opt_step-100,\n np.mean(self.episode_rewards_window)))\n # save the policy\n torch.save(self.policy, 'ppo-crawler.policy')\n break\n \n return self.episode_rewards", "_____no_output_____" ] ], [ [ "### 6. 
Train the agent", "_____no_output_____" ] ], [ [ "# let's define and train our agent\nagent = Agent(env=env,learning_rate=1e-4)", "_____no_output_____" ], [ "scores = agent.train(max_opt_steps=2000,gamma=0.98,target_score=600,use_gae=True,PRINT_EVERY=1)", "_____no_output_____" ], [ "# run this cell to load a previously trained policy for the Crawler environment\n# load policy\npolicy = torch.load('ppo-crawler.policy',map_location='cpu')\nagent = Agent(env)\nagent.policy = policy", "_____no_output_____" ], [ "frames = []\ntotal_rewards = np.zeros(12)\n\n# reset the environment\nenv_info = env.reset(train_mode=False)[brain_name]\nstates = np.array(env_info.vector_observations)\nvalue = []\nr = []\nfor t in range(2000):\n prediction = agent.policy.act(states)\n action = prediction['a'].cpu().numpy()\n v = prediction['v'].detach().cpu().numpy()\n #frames.append(env.render(mode='rgb_array')) \n \n # send all actions to the environment\n env_info = env.step(action)[brain_name]\n\n next_states = np.array(env_info.vector_observations) # get next state (for each agent)\n rewards = np.array(env_info.rewards) # get reward (for each agent)\n terminals = np.array(env_info.local_done) # see if episode finished\n \n #value.append(v.squeeze())\n #r.append(reward)\n states=next_states\n total_rewards+= rewards\n \n if np.any(terminals):\n for i,terminal in enumerate(terminals):\n if terminal:\n eps_reward = total_rewards[i]\n break\n break\n\nprint (\"Total reward:\",eps_reward)\n#animate_frames(frames)", "/Users/jsingh/anaconda3/envs/cv3/lib/python3.6/site-packages/torch/nn/functional.py:1332: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.\n warnings.warn(\"nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.\")\n" ] ]
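, [ [ "As an optional extra appended to this rendering (not part of the original notebook): the list of episode rewards returned by `train()` can be plotted to inspect learning progress. The sketch below assumes `scores` holds the value returned by the training cell above.", "_____no_output_____" ] ], [ [ "# optional sketch (added commentary, not from the original notebook):\n# plot raw episode rewards and a moving average to eyeball learning progress\nimport numpy as np\nimport matplotlib.pyplot as plt\nscores_arr = np.array(scores)\nwindow = min(100, max(1, len(scores_arr)))\nsmoothed = np.convolve(scores_arr, np.ones(window)/window, mode='valid')\nplt.plot(scores_arr, alpha=0.3, label='episode reward')\nplt.plot(smoothed, label='moving average ({} episodes)'.format(window))\nplt.xlabel('episode')\nplt.ylabel('reward')\nplt.legend()\nplt.show()", "_____no_output_____" ] ] ]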
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
ec972c8fe7d5c4b7c2e71ec2ac9a3f9d2356a755
916,349
ipynb
Jupyter Notebook
examples/PCR/PCR_on_sensory_and_fluorescence_data.ipynb
Mohamed0gad/hoggorm
4debdb49a8d1d8858abb783be2ad67ffc96fd3ab
[ "BSD-2-Clause" ]
null
null
null
examples/PCR/PCR_on_sensory_and_fluorescence_data.ipynb
Mohamed0gad/hoggorm
4debdb49a8d1d8858abb783be2ad67ffc96fd3ab
[ "BSD-2-Clause" ]
null
null
null
examples/PCR/PCR_on_sensory_and_fluorescence_data.ipynb
Mohamed0gad/hoggorm
4debdb49a8d1d8858abb783be2ad67ffc96fd3ab
[ "BSD-2-Clause" ]
null
null
null
137.507353
62,756
0.825249
[ [ [ "# Principal Component Regression (PCR) on Sensory and Fluorescence data", "_____no_output_____" ], [ "This notebook illustrates how to use the **hoggorm** package to carry out principal component regression (PCR) on multivariate data. Furthermore, we will learn how to visualise the results of the PCR using the **hoggormPlot** package.", "_____no_output_____" ], [ "---", "_____no_output_____" ], [ "### Import packages and prepare data", "_____no_output_____" ], [ "First import **hoggorm** for analysis of the data and **hoggormPlot** for plotting of the analysis results. We'll also import **pandas** such that we can read the data into a data frame. **numpy** is needed for checking dimensions of the data.", "_____no_output_____" ] ], [ [ "import hoggorm as ho\nimport hoggormplot as hop\nimport pandas as pd\nimport numpy as np", "_____no_output_____" ] ], [ [ "Next, load the data that we are going to analyse using **hoggorm**. After the data has been loaded into the pandas data frame, we'll display it in the notebook.", "_____no_output_____" ] ], [ [ "# Load fluorescence data\nX_df = pd.read_csv('cheese_fluorescence.txt', index_col=0, sep='\\t')\nX_df", "_____no_output_____" ], [ "# Load sensory data\nY_df = pd.read_csv('cheese_sensory.txt', index_col=0, sep='\\t')\nY_df", "_____no_output_____" ] ], [ [ "The ``nipalsPCR`` class in hoggorm accepts only **numpy** arrays with numerical values and not pandas data frames. Therefore, the pandas data frames holding the imported data need to be \"taken apart\" into three parts: \n* two numpy array holding the numeric values\n* two Python list holding variable (column) names\n* two Python list holding object (row) names. \n\nThe numpy arrays with values will be used as input for the ``nipalsPCR`` class for analysis. The Python lists holding the variable and row names will be used later in the plotting function from the **hoggormPlot** package when visualising the results of the analysis. Below is the code needed to access both data, variable names and object names.", "_____no_output_____" ] ], [ [ "# Get the values from the data frame\nX = X_df.values\nY = Y_df.values\n\n# Get the variable or columns names\nX_varNames = list(X_df.columns)\nY_varNames = list(Y_df.columns)\n\n# Get the object or row names\nX_objNames = list(X_df.index)\nY_objNames = list(Y_df.index)", "_____no_output_____" ] ], [ [ "---", "_____no_output_____" ], [ "### Apply PCR to our data", "_____no_output_____" ], [ "Now, let's run PCR on the data using the ``nipalsPCR`` class. The documentation provides a [description of the input parameters](https://hoggorm.readthedocs.io/en/latest/pcr.html). Using input paramter ``arrX`` and ``arrY`` we define which numpy array we would like to analyse. ``arrY`` is what typically is considered to be the response matrix, while the measurements are typically defined as ``arrX``. By setting input parameter ``Xstand=False`` and ``Ystand=False`` we make sure that the variables are only mean centered, not scaled to unit variance, if this is what you want. This is the default setting and actually doesn't need to expressed explicitly. Setting paramter ``cvType=[\"loo\"]`` we make sure that we compute the PCR model using full cross validation. ``\"loo\"`` means \"Leave One Out\". 
Finally, by setting the parameter ``numComp=4`` we ask for four components to be computed.", "_____no_output_____" ] ], [ [ "model = ho.pcr.nipalsPCR(arrX=X, Xstand=False, \n arrY=Y, Ystand=False,\n cvType=[\"loo\"], \n numComp=4)", "loo\nloo\n" ] ], [ [ "That's it, the PCR model has been computed. Now we would like to inspect the results by visualising them. We can do this using plotting functions of the separate [**hoggormPlot** package](https://hoggormplot.readthedocs.io/en/latest/). If we wish to plot the results for component 1 and component 2, we can do this by setting the input argument ``comp=[1, 2]``. The input argument ``plots=[1, 2, 3, 4, 6]`` lets the user define which plots are to be plotted. If this list, for example, contains the value ``1``, the function will generate the scores plot for the model. If the list contains the value ``2``, then the loadings plot will be plotted. Value ``3`` stands for the correlation loadings plot, value ``4`` for the bi-plot and value ``6`` for the explained variance plot. The hoggormPlot documentation provides a [description of input parameters](https://hoggormplot.readthedocs.io/en/latest/mainPlot.html).", "_____no_output_____" ] ], [ [ "hop.plot(model, comp=[1, 2], \n plots=[1, 2, 3, 4, 6], \n objNames=X_objNames, \n XvarNames=X_varNames,\n YvarNames=Y_varNames)", "_____no_output_____" ] ], [ [ "Plots can also be called separately.", "_____no_output_____" ] ], [ [ "# Plot cumulative explained variance (both calibrated and validated) using a specific function for that.\nhop.explainedVariance(model)", "_____no_output_____" ], [ "# Plot cumulative validated explained variance for each variable in Y\nhop.explainedVariance(model, individual = True)", "_____no_output_____" ], [ "# Plot cumulative validated explained variance in X.\nhop.explainedVariance(model, which='X')", "_____no_output_____" ], [ "hop.scores(model)", "_____no_output_____" ], [ "hop.correlationLoadings(model)", "_____no_output_____" ], [ "# Plot X loadings in line plot\nhop.loadings(model, weights=False, line=True)", "_____no_output_____" ], [ "# Plot regression coefficients\nhop.coefficients(model, comp=3)", "_____no_output_____" ] ], [ [ "---", "_____no_output_____" ], [ "### Accessing numerical results", "_____no_output_____" ], [ "Now that we have visualised the PCR results, we may also want to access the numerical results. Below are some examples. For a complete list of accessible results, please see the hoggorm documentation. ", "_____no_output_____" ] ], [ [ "# Get X scores and store in numpy array\nX_scores = model.X_scores()\n\n# Get scores and store in pandas dataframe with row and column names\nX_scores_df = pd.DataFrame(model.X_scores())\nX_scores_df.index = X_objNames\nX_scores_df.columns = ['PC{0}'.format(x+1) for x in range(model.X_scores().shape[1])]\nX_scores_df", "_____no_output_____" ], [ "help(ho.pcr.nipalsPCR.X_scores)", "Help on function X_scores in module hoggorm.pcr:\n\nX_scores(self)\n Returns array holding scores of array X. 
First column holds scores\n for component 1, second column holds scores for component 2, etc.\n\n" ], [ "# Dimension of the X_scores\nnp.shape(model.X_scores())", "_____no_output_____" ] ], [ [ "We see that the numpy array holds the scores of all objects (the cheese samples) across the four components requested when computing the PCR model.", "_____no_output_____" ] ], [ [ "# Get X loadings and store in numpy array\nX_loadings = model.X_loadings()\n\n# Get X loadings and store in pandas dataframe with row and column names\nX_loadings_df = pd.DataFrame(model.X_loadings())\nX_loadings_df.index = X_varNames\nX_loadings_df.columns = ['PC{0}'.format(x+1) for x in range(model.X_loadings().shape[1])]\nX_loadings_df", "_____no_output_____" ], [ "help(ho.pcr.nipalsPCR.X_loadings)", "Help on function X_loadings in module hoggorm.pcr:\n\nX_loadings(self)\n Returns array holding loadings of array X. Rows represent variables\n and columns represent components. First column holds loadings for\n component 1, second column holds scores for component 2, etc.\n\n" ], [ "np.shape(model.X_loadings())", "_____no_output_____" ] ], [ [ "Here we see that the array holds the loadings of the variables in X across the four components.", "_____no_output_____" ] ], [ [ "# Get Y loadings and store in numpy array\nY_loadings = model.Y_loadings()\n\n# Get Y loadings and store in pandas dataframe with row and column names\nY_loadings_df = pd.DataFrame(model.Y_loadings())\nY_loadings_df.index = Y_varNames\nY_loadings_df.columns = ['PC{0}'.format(x+1) for x in range(model.Y_loadings().shape[1])]\nY_loadings_df", "_____no_output_____" ], [ "# Get X correlation loadings and store in numpy array\nX_corrloadings = model.X_corrLoadings()\n\n# Get X correlation loadings and store in pandas dataframe with row and column names\nX_corrloadings_df = pd.DataFrame(model.X_corrLoadings())\nX_corrloadings_df.index = X_varNames\nX_corrloadings_df.columns = ['PC{0}'.format(x+1) for x in range(model.X_corrLoadings().shape[1])]\nX_corrloadings_df", "_____no_output_____" ], [ "help(ho.pcr.nipalsPCR.X_corrLoadings)", "Help on function X_corrLoadings in module hoggorm.pcr:\n\nX_corrLoadings(self)\n Returns array holding correlation loadings of array X. First column\n holds correlation loadings for component 1, second column holds\n correlation loadings for component 2, etc.\n\n" ], [ "# Get Y correlation loadings and store in numpy array\nY_corrloadings = model.Y_corrLoadings()\n\n# Get Y correlation loadings and store in pandas dataframe with row and column names\nY_corrloadings_df = pd.DataFrame(model.Y_corrLoadings())\nY_corrloadings_df.index = Y_varNames\nY_corrloadings_df.columns = ['PC{0}'.format(x+1) for x in range(model.Y_corrLoadings().shape[1])]\nY_corrloadings_df", "_____no_output_____" ], [ "help(ho.pcr.nipalsPCR.Y_corrLoadings)", "Help on function Y_corrLoadings in module hoggorm.pcr:\n\nY_corrLoadings(self)\n Returns array holding correlation loadings of array X. 
First column\n holds correlation loadings for component 1, second column holds\n correlation loadings for component 2, etc.\n\n" ], [ "# Get calibrated explained variance of each component in X\nX_calExplVar = model.X_calExplVar()\n\n# Get calibrated explained variance in X and store in pandas dataframe with row and column names\nX_calExplVar_df = pd.DataFrame(model.X_calExplVar())\nX_calExplVar_df.columns = ['calibrated explained variance in X']\nX_calExplVar_df.index = ['PC{0}'.format(x+1) for x in range(model.X_loadings().shape[1])]\nX_calExplVar_df", "_____no_output_____" ], [ "help(ho.pcr.nipalsPCR.X_calExplVar)", "Help on function X_calExplVar in module hoggorm.pcr:\n\nX_calExplVar(self)\n Returns a list holding the calibrated explained variance for\n each component. First number in list is for component 1, second number\n for component 2, etc.\n\n" ], [ "# Get calibrated explained variance of each component in Y\nY_calExplVar = model.Y_calExplVar()\n\n# Get calibrated explained variance in Y and store in pandas dataframe with row and column names\nY_calExplVar_df = pd.DataFrame(model.Y_calExplVar())\nY_calExplVar_df.columns = ['calibrated explained variance in Y']\nY_calExplVar_df.index = ['PC{0}'.format(x+1) for x in range(model.Y_loadings().shape[1])]\nY_calExplVar_df", "_____no_output_____" ], [ "help(ho.pcr.nipalsPCR.Y_calExplVar)", "Help on function Y_calExplVar in module hoggorm.pcr:\n\nY_calExplVar(self)\n Returns a list holding the calibrated explained variance for each\n component. First number in list is for component 1, second number for\n component 2, etc.\n\n" ], [ "# Get cumulative calibrated explained variance in X\nX_cumCalExplVar = model.X_cumCalExplVar()\n\n# Get cumulative calibrated explained variance in X and store in pandas dataframe with row and column names\nX_cumCalExplVar_df = pd.DataFrame(model.X_cumCalExplVar())\nX_cumCalExplVar_df.columns = ['cumulative calibrated explained variance in X']\nX_cumCalExplVar_df.index = ['PC{0}'.format(x) for x in range(model.X_loadings().shape[1] + 1)]\nX_cumCalExplVar_df", "_____no_output_____" ], [ "help(ho.pcr.nipalsPCR.X_cumCalExplVar)", "Help on function X_cumCalExplVar in module hoggorm.pcr:\n\nX_cumCalExplVar(self)\n Returns a list holding the cumulative calibrated explained variance\n for array X after each component.\n\n" ], [ "# Get cumulative calibrated explained variance in Y\nY_cumCalExplVar = model.Y_cumCalExplVar()\n\n# Get cumulative calibrated explained variance in Y and store in pandas dataframe with row and column names\nY_cumCalExplVar_df = pd.DataFrame(model.Y_cumCalExplVar())\nY_cumCalExplVar_df.columns = ['cumulative calibrated explained variance in Y']\nY_cumCalExplVar_df.index = ['PC{0}'.format(x) for x in range(model.Y_loadings().shape[1] + 1)]\nY_cumCalExplVar_df", "_____no_output_____" ], [ "help(ho.pcr.nipalsPCR.Y_cumCalExplVar)", "Help on function Y_cumCalExplVar in module hoggorm.pcr:\n\nY_cumCalExplVar(self)\n Returns a list holding the cumulative calibrated explained variance\n for array X after each component. 
First number represents zero\n components, second number represents component 1, etc.\n\n" ], [ "# Get cumulative calibrated explained variance for each variable in X\nX_cumCalExplVar_ind = model.X_cumCalExplVar_indVar()\n\n# Get cumulative calibrated explained variance for each variable in X and store in pandas dataframe with row and column names\nX_cumCalExplVar_ind_df = pd.DataFrame(model.X_cumCalExplVar_indVar())\nX_cumCalExplVar_ind_df.columns = X_varNames\nX_cumCalExplVar_ind_df.index = ['PC{0}'.format(x) for x in range(model.X_loadings().shape[1] + 1)]\nX_cumCalExplVar_ind_df", "_____no_output_____" ], [ "help(ho.pcr.nipalsPCR.X_cumCalExplVar_indVar)", "Help on function X_cumCalExplVar_indVar in module hoggorm.pcr:\n\nX_cumCalExplVar_indVar(self)\n Returns an array holding the cumulative calibrated explained variance\n for each variable in X after each component. First row represents zero\n components, second row represents one component, third row represents\n two components, etc. Columns represent variables.\n\n" ], [ "# Get cumulative calibrated explained variance for each variable in Y\nY_cumCalExplVar_ind = model.Y_cumCalExplVar_indVar()\n\n# Get cumulative calibrated explained variance for each variable in Y and store in pandas dataframe with row and column names\nY_cumCalExplVar_ind_df = pd.DataFrame(model.Y_cumCalExplVar_indVar())\nY_cumCalExplVar_ind_df.columns = Y_varNames\nY_cumCalExplVar_ind_df.index = ['PC{0}'.format(x) for x in range(model.Y_loadings().shape[1] + 1)]\nY_cumCalExplVar_ind_df", "_____no_output_____" ], [ "help(ho.pcr.nipalsPCR.Y_cumCalExplVar_indVar)", "Help on function Y_cumCalExplVar_indVar in module hoggorm.pcr:\n\nY_cumCalExplVar_indVar(self)\n Returns an array holding the cumulative calibrated explained variance\n for each variable in Y after each component. First row represents zero\n components, second row represents one component, third row represents\n two components, etc. Columns represent variables.\n\n" ], [ "# Get calibrated predicted Y for a given number of components\n\n# Predicted Y from calibration using 1 component\nY_from_1_component = model.Y_predCal()[1]\n\n# Predicted Y from calibration using 1 component stored in pandas data frame with row and column names\nY_from_1_component_df = pd.DataFrame(model.Y_predCal()[1])\nY_from_1_component_df.index = Y_objNames\nY_from_1_component_df.columns = Y_varNames\nY_from_1_component_df", "_____no_output_____" ], [ "# Get calibrated predicted Y for a given number of components\n\n# Predicted Y from calibration using 4 components\nY_from_4_component = model.Y_predCal()[4]\n\n# Predicted Y from calibration using 4 components stored in pandas data frame with row and column names\nY_from_4_component_df = pd.DataFrame(model.Y_predCal()[4])\nY_from_4_component_df.index = Y_objNames\nY_from_4_component_df.columns = Y_varNames\nY_from_4_component_df", "_____no_output_____" ], [ "help(ho.pcr.nipalsPCR.X_predCal)", "Help on function X_predCal in module hoggorm.pcr:\n\nX_predCal(self)\n Returns a dictionary holding the predicted arrays Xhat from\n calibration after each computed component. 
Dictionary key represents\n order of component.\n\n" ], [ "# Get validated explained variance of each component in X\nX_valExplVar = model.X_valExplVar()\n\n# Get validated explained variance in X and store in pandas dataframe with row and column names\nX_valExplVar_df = pd.DataFrame(model.X_valExplVar())\nX_valExplVar_df.columns = ['validated explained variance in X']\nX_valExplVar_df.index = ['PC{0}'.format(x+1) for x in range(model.X_loadings().shape[1])]\nX_valExplVar_df", "_____no_output_____" ], [ "help(ho.pcr.nipalsPCR.X_valExplVar)", "Help on function X_valExplVar in module hoggorm.pcr:\n\nX_valExplVar(self)\n Returns a list holding the validated explained variance for X after\n each component. First number in list is for component 1, second number\n for component 2, third number for component 3, etc.\n\n" ], [ "# Get validated explained variance of each component in Y\nY_valExplVar = model.Y_valExplVar()\n\n# Get validated explained variance in Y and store in pandas dataframe with row and column names\nY_valExplVar_df = pd.DataFrame(model.Y_valExplVar())\nY_valExplVar_df.columns = ['validated explained variance in Y']\nY_valExplVar_df.index = ['PC{0}'.format(x+1) for x in range(model.Y_loadings().shape[1])]\nY_valExplVar_df", "_____no_output_____" ], [ "help(ho.pcr.nipalsPCR.Y_valExplVar)", "Help on function Y_valExplVar in module hoggorm.pcr:\n\nY_valExplVar(self)\n Returns a list holding the validated explained variance for Y after\n each component. First number in list is for component 1, second number\n for component 2, third number for component 3, etc.\n\n" ], [ "# Get cumulative validated explained variance in X\nX_cumValExplVar = model.X_cumValExplVar()\n\n# Get cumulative validated explained variance in X and store in pandas dataframe with row and column names\nX_cumValExplVar_df = pd.DataFrame(model.X_cumValExplVar())\nX_cumValExplVar_df.columns = ['cumulative validated explained variance in X']\nX_cumValExplVar_df.index = ['PC{0}'.format(x) for x in range(model.X_loadings().shape[1] + 1)]\nX_cumValExplVar_df", "_____no_output_____" ], [ "help(ho.pcr.nipalsPCR.X_cumValExplVar)", "Help on function X_cumValExplVar in module hoggorm.pcr:\n\nX_cumValExplVar(self)\n Returns a list holding the cumulative validated explained variance\n for array X after each component. First number represents zero\n components, second number represents component 1, etc.\n\n" ], [ "# Get cumulative validated explained variance in Y\nY_cumValExplVar = model.Y_cumValExplVar()\n\n# Get cumulative validated explained variance in Y and store in pandas dataframe with row and column names\nY_cumValExplVar_df = pd.DataFrame(model.Y_cumValExplVar())\nY_cumValExplVar_df.columns = ['cumulative validated explained variance in Y']\nY_cumValExplVar_df.index = ['PC{0}'.format(x) for x in range(model.Y_loadings().shape[1] + 1)]\nY_cumValExplVar_df", "_____no_output_____" ], [ "help(ho.pcr.nipalsPCR.Y_cumValExplVar)", "Help on function Y_cumValExplVar in module hoggorm.pcr:\n\nY_cumValExplVar(self)\n Returns a list holding the cumulative validated explained variance\n for array X after each component. 
First number represents zero\n components, second number represents component 1, etc.\n\n" ], [ "# Get cumulative validated explained variance for each variable in Y\nY_cumValExplVar_ind = model.Y_cumValExplVar_indVar()\n\n# Get cumulative validated explained variance for each variable in Y and store in pandas dataframe with row and column names\nY_cumValExplVar_ind_df = pd.DataFrame(model.Y_cumValExplVar_indVar())\nY_cumValExplVar_ind_df.columns = Y_varNames\nY_cumValExplVar_ind_df.index = ['PC{0}'.format(x) for x in range(model.Y_loadings().shape[1] + 1)]\nY_cumValExplVar_ind_df", "_____no_output_____" ], [ "help(ho.pcr.nipalsPCR.X_cumValExplVar_indVar)", "Help on function X_cumValExplVar_indVar in module hoggorm.pcr:\n\nX_cumValExplVar_indVar(self)\n Returns an array holding the cumulative validated explained variance\n for each variable in X after each component. First row represents\n zero components, second row represents component 1, third row for\n compnent 2, etc. Columns represent variables.\n\n" ], [ "# Get validated predicted Y for a given number of components\n\n# Predicted Y from validation using 1 component\nY_from_1_component_val = model.Y_predVal()[1]\n\n# Predicted Y from validation using 1 component stored in pandas data frame with row and column names\nY_from_1_component_val_df = pd.DataFrame(model.Y_predVal()[1])\nY_from_1_component_val_df.index = Y_objNames\nY_from_1_component_val_df.columns = Y_varNames\nY_from_1_component_val_df", "_____no_output_____" ], [ "# Get validated predicted Y for a given number of components\n\n# Predicted Y from validation using 3 components\nY_from_3_component_val = model.Y_predVal()[3]\n\n# Predicted Y from validation using 3 components stored in pandas data frame with row and column names\nY_from_3_component_val_df = pd.DataFrame(model.Y_predVal()[3])\nY_from_3_component_val_df.index = Y_objNames\nY_from_3_component_val_df.columns = Y_varNames\nY_from_3_component_val_df", "_____no_output_____" ], [ "help(ho.pcr.nipalsPCR.Y_predVal)", "Help on function Y_predVal in module hoggorm.pcr:\n\nY_predVal(self)\n Returns dictionary holding arrays of predicted Yhat after each\n component from validation. 
Dictionary key represents order of\n component.\n\n" ], [ "# Get predicted scores for new measurements (objects) of X\n\n# First pretend that we acquired new X data by using part of the existing data and overlaying some noise\nimport numpy.random as npr\nnew_X = X[0:4, :] + npr.rand(4, np.shape(X)[1])\nnp.shape(X)\n\n# Now insert the new data into the existing model and compute scores for two components (numComp=2)\npred_X_scores = model.X_scores_predict(new_X, numComp=2)\n\n# Same as above, but results stored in a pandas dataframe with row names and column names\npred_X_scores_df = pd.DataFrame(model.X_scores_predict(new_X, numComp=2))\npred_X_scores_df.columns = ['PC{0}'.format(x+1) for x in range(2)]\npred_X_scores_df.index = ['new object {0}'.format(x+1) for x in range(np.shape(new_X)[0])]\npred_X_scores_df", "_____no_output_____" ], [ "help(ho.pcr.nipalsPCR.X_scores_predict)", "Help on function X_scores_predict in module hoggorm.pcr:\n\nX_scores_predict(self, Xnew, numComp=None)\n Returns array of X scores from new X data using the exsisting model.\n Rows represent objects and columns represent components.\n\n" ], [ "# Predict Y from new X data\npred_Y = model.Y_predict(new_X, numComp=2)\n\n# Predict Y from nex X data and store results in a pandas dataframe with row names and column names\npred_Y_df = pd.DataFrame(model.Y_predict(new_X, numComp=2))\npred_Y_df.columns = Y_varNames\npred_Y_df.index = ['new object {0}'.format(x+1) for x in range(np.shape(new_X)[0])]\npred_Y_df", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ec972f494d4ab224bfdb727bdbb0dbc27fdfc826
27,289
ipynb
Jupyter Notebook
Tutorial-RK_Butcher_Table_Validation.ipynb
KAClough/nrpytutorial
2cc3b22cb1092bc10890237dd8ee3b6881c36b52
[ "BSD-2-Clause" ]
1
2019-12-23T05:31:25.000Z
2019-12-23T05:31:25.000Z
Tutorial-RK_Butcher_Table_Validation.ipynb
Yancheng-Li-PHYS/nrpytutorial
73b706c7f7e80ba22dd563735c0a7452c82c5245
[ "BSD-2-Clause" ]
null
null
null
Tutorial-RK_Butcher_Table_Validation.ipynb
Yancheng-Li-PHYS/nrpytutorial
73b706c7f7e80ba22dd563735c0a7452c82c5245
[ "BSD-2-Clause" ]
null
null
null
48.992819
636
0.577705
[ [ [ "<script async src=\"https://www.googletagmanager.com/gtag/js?id=UA-59152712-8\"></script>\n<script>\n window.dataLayer = window.dataLayer || [];\n function gtag(){dataLayer.push(arguments);}\n gtag('js', new Date());\n\n gtag('config', 'UA-59152712-8');\n</script>\n\n# Validating Runge Kutta Butcher tables using Truncated Taylor Series\n## Authors: Zach Etienne & Brandon Clark\n\n\n## This tutorial notebook is designed to validate the Butcher tables contained within the Butcher dictionary constructed in the [RK Butcher Table Dictionary](Tutorial-RK_Butcher_Table_Dictionary.ipynb) NRPy+ module. \n\n### NRPy+ Source Code for this module: \n* [MoLtimestepping/RK_Butcher_Table_Validation.py](../edit/MoLtimestepping/RK_Butcher_Table_Validation.py) Stores the `Validate` function for calidating convergence orders for Runge Kutta methods\n* [MoLtimestepping/RK_Butcher_Table_Dictionary.py](../edit/MoLtimestepping/RK_Butcher_Table_Dictionary.py) [\\[**tutorial**\\]](Tutorial-RK_Butcher_Table_Dictionary.ipynb) Accesses the Butcher table dictionary `Butcher_dict` for known explicit Runge Kutta methods\n\n## Introduction:\n\nStarting with the ODE (ordinary differential equation) initial value problem:\n$$\ny'(t) = f(y,t)\\ \\ \\ y\\left(t=0\\right)=y_0,\n$$\nfor various choices of $f(y,t)$, this module validates the Runge Kutta (RK) methods coded in [RK_Butcher_Table_Dictionary.py](../edit/MoLtimestepping/RK_Butcher_Table_Dictionary.py) [**tutorial notebook**](Tutorial-RK_Butcher_Table_Dictionary.ipynb) as follows:\n\nGiven $y_0$, and a smooth $f(y,t)$, all explicit RK methods provide an estimate for $y_1 = y\\left(\\Delta t\\right)$, with an error term that is proportional to $\\left(\\Delta t\\right)^m$, where $m$ is an integer typically greater than zero. This error term corresponds to the *local* truncation error. For RK4, for example, while the *total accumulated truncation error* (i.e., the accumulated error at a fixed final time $t_f$) is proportional to $\\left(\\Delta t\\right)^4$, the *local* truncation error (i.e., the error after one arbitrarily chosen timestep $\\Delta t$) is proportional to $\\left(\\Delta t\\right)^5$.\n\nIf the exact solution $y(t)$ is known as a closed-form expression, then $y\\left(\\Delta t\\right)$ can be *separately* written as a Taylor expansion about $y(t=0)$:\n\n$$\ny\\left(\\Delta t\\right) = \\sum_{n=0}^\\infty \\frac{y^{(n)}(t=0)}{n!} \\left(\\Delta t\\right)^n,\n$$\nwhere $y^{(n)}(t=0)$ is the $n$th derivative of $y(t)$ evaluated at $t=0$.\n\nThe above expression will be known exactly. Further if one chooses a numerical value for $y_0$ *and leaves $\\Delta t$ unspecified*, any explicit RK method will provide an estimate for $y\\left(\\Delta t\\right)$ of the form\n\n$$\ny\\left(\\Delta t\\right) = \\sum_{n=0}^\\infty a_n \\left(\\Delta t\\right)^n,\n$$\nwhere $a_n$ *must* match the Taylor expansion of the *exact* solution at least up to and including terms proportional to $\\left(\\Delta t\\right)^m$, where $m$ is the order of the local truncation error. If this is *not* the case, then the Butcher table was almost certainly *not* typed correctly.\n\nTherefore, comparing the numerical result with unspecified $\\Delta t$ against the exact Taylor series provides a convenient (though not perfectly robust) means to verify that the Butcher table for a given RK method was typed correctly. Multiple typos in the Butcher tables were found using this approach.\n\n**Example from Z. 
Etienne's MATH 521 (Numerical Analysis) lecture notes:**\n\nConsider the ODE\n$$\ny' = y - 2 t e^{-2t},\\quad y(0)=y(t_0)=0.\n$$\n\n* Solve this ODE exactly, then Taylor expand the solution about $t=0$ to\napproximate the solution at $y(t=\\Delta t)$ to fifth order in $\\Delta\nt$.\n* Next solve this ODE using Heun's method (second order in total accumulated truncation error, third order in local truncation error) *by hand* with a step size of\n$\\Delta t$ to find $y(\\Delta t)$. Confirm that the solution obtained\nwhen using Heun's method has an error term that is at worst\n$\\mathcal{O}\\left((\\Delta t)^3\\right)$. If the dominant error is\nproportional to a higher power of $\\Delta t$, explain the discrepancy.\n\n* Finally solve this ODE using the Ralston method *by hand*\n with a step size of $\\Delta t$ to find $y(\\Delta t)$. Is the\n coefficient on the dominant error term closer to the exact solution\n than Heun's method?\n\nWe can solve this equation via the method of integrating factors,\nwhich states that ODEs of the form:\n$$\ny'(t) + p(t) y(t) = g(t)\n$$\nare solved via \n$$\ny(t) = \\frac{1}{\\mu(t)} \\left[ \\int \\mu(s) g(s) ds + c \\right],\n$$\nwhere the integrating factor $\\mu(t)$ is given by\n$$\n\\mu(t) = \\exp\\left(\\int p(t) dt\\right)\n$$\n\nHere, $p(t)=-1$ and $g(t) = - 2 t e^{-2t}$. Then\n$$\n\\mu(t) = \\exp\\left(-\\int dt\\right) = e^{-t+c} = k e^{-t}\n$$\nand\n\\begin{align}\ny(t) &= e^t/k \\left[ \\int k e^{-s} (- 2 s e^{-2s}) ds + c \\right] = -2 e^t \\left[ \\int s e^{-3s} ds + c' \\right] \\\\\n&= -2 e^t \\left[ e^{-3 t} \\left(-\\frac{t}{3}-\\frac{1}{9}\\right) + c' \\right] = -2 e^{-2t} \\left(-\\frac{t}{3}-\\frac{1}{9}\\right) -2 c' e^t \\\\\n&= e^{-2t} \\left(\\frac{2t}{3}+\\frac{2}{9}\\right) + c'' e^t \\\\\n\\end{align}\n\nIf $y(0)=0$ then we can compute the integration constant $c''$, and\n$y(t)$ becomes\n$$\ny(t) = \\frac{2}{9} e^{-2 t} \\left(3 t + 1 - e^{3 t}\\right).\n$$\n\nThe Taylor Series expansion of the exact solution about $t=0$\nevaluated at $y(\\Delta t)$ yields\n$$\ny(\\Delta t) = -(\\Delta t)^2+(\\Delta t)^3-\\frac{3 (\\Delta t)^4}{4}+\\frac{23 (\\Delta\n t)^5}{60}-\\frac{19 (\\Delta t)^6}{120}+O\\left((\\Delta t)^7\\right).\n$$\n\nNext we evaluate $y(\\Delta t)$ using Heun's method. We know $y(0)=y_0=0$ and\n$f(y,t)=y - 2 t e^{-2t}$, so\n\\begin{align}\nk_1 &= \\Delta t f(y(0),0) \\\\\n &= \\Delta t \\times 0 \\\\\n &= 0 \\\\\nk_2 &= \\Delta t f(y(0)+k_1,0+\\Delta t) \\\\\n &= \\Delta t f(y(0)+0,0+\\Delta t) \\\\\n &= \\Delta t (-2 \\Delta t e^{-2\\Delta t}) \\\\\n &= -2 (\\Delta t)^2 e^{-2\\Delta t} \\\\\ny(\\Delta t) &= y_0 + \\frac{1}{2} (k_1 + k_2) + \\mathcal{O}\\left((\\Delta t)^3\\right) \\\\\n&= 0 - (\\Delta t)^2 e^{-2\\Delta t} \\\\\n&= - (\\Delta t)^2 ( 1 - 2 \\Delta t + 2 (\\Delta t)^2 + ...) \\\\\n&= - (\\Delta t)^2 + 2 (\\Delta t)^3 + \\mathcal{O}\\left((\\Delta t)^4\\right).\n\\end{align}\n\nThus the coefficient on the $(\\Delta t)^3$ term is wrong, but\nthis is completely consistent with the fact that our stepping\nscheme is only third-order accurate in $\\Delta t$.\n\nIn the below approach, the RK result is subtracted from the exact Taylor series result, as a check to determine whether the RK Butcher table was coded correctly; if it was not, then the odds are good that the RK results will not match to the expected local truncation error order. 
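\n\nAs an aside (an illustrative addition to this rendering, not part of the original notes), the Heun computation above is easy to reproduce symbolically; a minimal sympy sketch:\n\n```python\nimport sympy as sp\ndt = sp.symbols('dt')\nf = lambda y, t: y - 2*t*sp.exp(-2*t) # RHS from the worked example\nk1 = dt*f(0, 0) # y0 = 0, t0 = 0\nk2 = dt*f(0 + k1, 0 + dt)\ny_heun = 0 + (k1 + k2)/2\nprint(sp.series(y_heun, dt, 0, 4)) # -dt**2 + 2*dt**3 + O(dt**4)\n```\n\n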
Multiple $f(y,t)$ are coded below to improve the robustness of this test.", "_____no_output_____" ], [ "<a id='toc'></a>\n\n# Table of Contents\n$$\\label{toc}$$\n\nThis notebook is organized as follows:\n\n1. [Step 1](#initializenrpy): Initialize needed Python/NRPy+ modules\n1. [Step 2](#table_validate): Validate Convergence Order of Butcher Tables\n 1. [Step 2.a](#rhs): Defining the right-hand side of the ODE\n 1. [Step 2.b](#validfunc): Defining a Validation Function\n 1. [Step 2.c](#rkvalid): Validating RK Methods against ODEs\n1. [Step 3](#latex_pdf_output): Output this notebook to $\\LaTeX$-formatted PDF file", "_____no_output_____" ], [ "<a id='initializenrpy'></a>\n\n# Step 1: Initialize needed Python/NRPy+ modules [Back to [top](#toc)\\]\n$$\\label{initializenrpy}$$\n\nLet's start by importing all the needed modules from Python/NRPy+:", "_____no_output_____" ] ], [ [ "import sympy as sp\nimport NRPy_param_funcs as par\nimport numpy as np\nfrom MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict", "_____no_output_____" ] ], [ [ "<a id='table_validate'></a>\n\n# Step 2: Validate Convergence Order of Butcher Tables [Back to [top](#toc)\\]\n$$\\label{table_validate}$$\n\n\nEach Butcher table/Runge Kutta method is tested by solving an ODE. Comparing the Taylor series expansions of the exact solution and the numerical solution as discussed in the **Introduction** above will confirm whether the method converges to the appropriate order. ", "_____no_output_____" ], [ "<a id='rhs'></a>\n\n## Step 2.a: Defining the right-hand side of the ODE [Back to [top](#toc)\\]\n$$\\label{rhs}$$\n\nConsider the form of ODE $y'=f(y,t)$. The following begins to construct a dictionary `rhs_dict` of right-hand side functions for us to validate explicit Runge Kutta methods. The most up-to-date catalog of functions stored in `rhs_dict` can be found in the [RK_Butcher_Table_Validation.py](../edit/MoLtimestepping/RK_Butcher_Table_Validation.py) module. ", "_____no_output_____" ] ], [ [ "def fypt(y,t): # Yields expected convergence order for all cases\n # except DP6, which converges to higher order (7)\n return y+t\n\ndef fy(y,t): # Yields expected convergence order for all cases\n return y\n\ndef feypt(y,t): # Yields expected convergence order for all cases\n return sp.exp(1.0*(y+t))\n\ndef ftpoly6(y,t): # Yields expected convergence order for all cases, L6 has 0 error\n return 2*t**6-389*t**5+15*t**4-22*t**3+81*t**2-t+42\nrhs_dict = {'ypt':fypt, 'y':fy, 'eypt':feypt, 'tpoly6':ftpoly6}", "_____no_output_____" ] ], [ [ "<a id='validfunc'></a>\n\n## Step 2.b: Defining a Validation Function [Back to [top](#toc)\\]\n$$\\label{validfunc}$$\n\nTo validate each Butcher table we compare the exact solutions to ODEs with the numerical solutions using the Runge Kutta scheme built into each Butcher table. The following is a function that\n\n1. Solves the ODE exactly,\n2. Solves the ODE numerically for a given Butcher table, and\n3. Compares the two solutions and checks for the order of convergence by returning their difference.\n\nThe `Validate()` function takes as input a specified `Butcher_key`, the starting solution and time `y_n`, `t_n`, and the right-hand side of the ODE corresponding to a specified initial value problem, `rhs_key`.\n\n\n", "_____no_output_____" ] ], [ [ "from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict\ndef Validate(Butcher_key, yn, tn, rhs_key):\n # 1. 
First we solve the ODE exactly\n y = sp.Function('y')\n sol = sp.dsolve(sp.Eq(y(t).diff(t), rhs_dict[rhs_key](y(t), t)), y(t)).rhs\n constants = sp.solve([sol.subs(t,tn)-yn])\n exact = sol.subs(constants)\n \n # 2. Now we solve the ODE numerically using the specified Butcher table\n\n # Access the requested Butcher table\n Butcher = Butcher_dict[Butcher_key][0]\n # Determine number of predictor-corrector steps\n L = len(Butcher)-1\n # Set a temporary array for update values\n k = np.zeros(L, dtype=object) \n # Initialize intermediate variable\n yhat = 0\n # Initialize the updated solution\n ynp1 = 0\n for i in range(L):\n # Initialize the approximate update for the solution\n yhat = yn\n for j in range(i):\n # Update yhat for solution using a_ij Butcher table coefficients\n yhat += Butcher[i][j+1]*k[j]\n if Butcher_key == \"DP8\" or Butcher_key == \"L6\":\n yhat = 1.0*sp.N(yhat,20) # Otherwise the adding of fractions kills performance.\n # Determine the next corrector variable k_i using c_i Butcher table coefficients \n k[i] = dt*rhs_dict[rhs_key](yhat, tn + Butcher[i][0]*dt) \n # Update the solution at the next iteration ynp1 using Butcher table coefficients\n ynp1 += Butcher[L][i+1]*k[i]\n # Finish determining the solution for the next iteration\n ynp1 += yn\n \n # Determine the order of the RK method\n order = Butcher_dict[Butcher_key][1]+2\n # Produce the Taylor series of the exact solution evaluated at t = dt, expanded about dt = 0, to the specified order\n exact_series = sp.series(exact.subs(t, dt),dt, 0, order)\n num_series = sp.series(ynp1, dt, 0, order)\n diff = exact_series-num_series\n return diff", "_____no_output_____" ] ], [ [ "<a id='rkvalid'></a>\n\n## Step 2.c: Validating RK Methods against ODEs [Back to [top](#toc)\\]\n$$\\label{rkvalid}$$\n\nThe following makes use of the `Validate()` function above to demonstrate that each method within the Butcher table dictionary converges to the expected order for the given right-hand side expression.
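\n\nFor a quick spot check of a single method before running the full loop, `Validate()` can also be called directly. A minimal sketch (an addition to this rendering, assuming the cells above have been executed so that `Butcher_dict`, `rhs_dict` and `Validate()` are defined):\n\n```python\nimport sympy as sp\nt, dt = sp.symbols('t dt') # Validate() references these module-level symbols\n# RK4 on y' = y, y(0) = 1: the first surviving term should be O(dt^5)\nprint(Validate('RK4', 1, 0, 'y'))\n```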
", "_____no_output_____" ] ], [ [ "t, dt = sp.symbols('t dt')\n# Set intial conditions\nt0 = 0\ny0 = 1\n# Set RHS of ODE\nfunction = 'ypt'# This can be changed, just be careful that the initial conditions are satisfied\nfor key,value in Butcher_dict.items():\n print(\"RK method: \\\"\"+str(key)+\"\\\".\")\n y = sp.Function('y')\n print(\" When solving y'(t) = \"+str(rhs_dict[function](y(t),t))+\", y(\"+str(t0)+\")=\"+str(y0)+\",\")\n local_truncation_order = list(value)[1]+1\n print(\" the first nonzero term should have local truncation error proportional to O(dt^\"+str(local_truncation_order)+\") or a higher power of dt.\")\n print(\"Subtracting the numerical result from the exact Taylor expansion, we find a local truncation error of:\")\n sp.pretty_print(Validate(key, y0, t0, function))\n# print(\"\\n\")\n print(\" (Coefficients of order 1e-15 or less may generally be ignored, as these are at roundoff error.)\\n\")", "RK method: \"Euler\".\n When solving y'(t) = t + y(t), y(0)=1,\n the first nonzero term should have local truncation error proportional to O(dt^2) or a higher power of dt.\nSubtracting the numerical result from the exact Taylor expansion, we find a local truncation error of:\n 2 ⎛ 3⎞\ndt + O⎝dt ⎠\n (Coefficients of order 1e-15 or less may generally be ignored, as these are at roundoff error.)\n\nRK method: \"RK2 Heun\".\n When solving y'(t) = t + y(t), y(0)=1,\n the first nonzero term should have local truncation error proportional to O(dt^3) or a higher power of dt.\nSubtracting the numerical result from the exact Taylor expansion, we find a local truncation error of:\n 3 \ndt ⎛ 4⎞\n─── + O⎝dt ⎠\n 3 \n (Coefficients of order 1e-15 or less may generally be ignored, as these are at roundoff error.)\n\nRK method: \"RK2 MP\".\n When solving y'(t) = t + y(t), y(0)=1,\n the first nonzero term should have local truncation error proportional to O(dt^3) or a higher power of dt.\nSubtracting the numerical result from the exact Taylor expansion, we find a local truncation error of:\n 3 \ndt ⎛ 4⎞\n─── + O⎝dt ⎠\n 3 \n (Coefficients of order 1e-15 or less may generally be ignored, as these are at roundoff error.)\n\nRK method: \"RK2 Ralston\".\n When solving y'(t) = t + y(t), y(0)=1,\n the first nonzero term should have local truncation error proportional to O(dt^3) or a higher power of dt.\nSubtracting the numerical result from the exact Taylor expansion, we find a local truncation error of:\n 3 \ndt ⎛ 4⎞\n─── + O⎝dt ⎠\n 3 \n (Coefficients of order 1e-15 or less may generally be ignored, as these are at roundoff error.)\n\nRK method: \"RK3\".\n When solving y'(t) = t + y(t), y(0)=1,\n the first nonzero term should have local truncation error proportional to O(dt^4) or a higher power of dt.\nSubtracting the numerical result from the exact Taylor expansion, we find a local truncation error of:\n 4 \ndt ⎛ 5⎞\n─── + O⎝dt ⎠\n 12 \n (Coefficients of order 1e-15 or less may generally be ignored, as these are at roundoff error.)\n\nRK method: \"RK3 Heun\".\n When solving y'(t) = t + y(t), y(0)=1,\n the first nonzero term should have local truncation error proportional to O(dt^4) or a higher power of dt.\nSubtracting the numerical result from the exact Taylor expansion, we find a local truncation error of:\n 4 \ndt ⎛ 5⎞\n─── + O⎝dt ⎠\n 12 \n (Coefficients of order 1e-15 or less may generally be ignored, as these are at roundoff error.)\n\nRK method: \"RK3 Ralston\".\n When solving y'(t) = t + y(t), y(0)=1,\n the first nonzero term should have local truncation error proportional to O(dt^4) or a 
higher power of dt.\nSubtracting the numerical result from the exact Taylor expansion, we find a local truncation error of:\n 4 \ndt ⎛ 5⎞\n─── + O⎝dt ⎠\n 12 \n (Coefficients of order 1e-15 or less may generally be ignored, as these are at roundoff error.)\n\nRK method: \"SSPRK3\".\n When solving y'(t) = t + y(t), y(0)=1,\n the first nonzero term should have local truncation error proportional to O(dt^4) or a higher power of dt.\nSubtracting the numerical result from the exact Taylor expansion, we find a local truncation error of:\n 4 \ndt ⎛ 5⎞\n─── + O⎝dt ⎠\n 12 \n (Coefficients of order 1e-15 or less may generally be ignored, as these are at roundoff error.)\n\nRK method: \"RK4\".\n When solving y'(t) = t + y(t), y(0)=1,\n the first nonzero term should have local truncation error proportional to O(dt^5) or a higher power of dt.\nSubtracting the numerical result from the exact Taylor expansion, we find a local truncation error of:\n 5 \ndt ⎛ 6⎞\n─── + O⎝dt ⎠\n 60 \n (Coefficients of order 1e-15 or less may generally be ignored, as these are at roundoff error.)\n\nRK method: \"DP5\".\n When solving y'(t) = t + y(t), y(0)=1,\n the first nonzero term should have local truncation error proportional to O(dt^6) or a higher power of dt.\nSubtracting the numerical result from the exact Taylor expansion, we find a local truncation error of:\n 6 \n dt ⎛ 7⎞\n- ──── + O⎝dt ⎠\n 1800 \n (Coefficients of order 1e-15 or less may generally be ignored, as these are at roundoff error.)\n\nRK method: \"DP5alt\".\n When solving y'(t) = t + y(t), y(0)=1,\n the first nonzero term should have local truncation error proportional to O(dt^6) or a higher power of dt.\nSubtracting the numerical result from the exact Taylor expansion, we find a local truncation error of:\n 6 \n13⋅dt ⎛ 7⎞\n────── + O⎝dt ⎠\n231000 \n (Coefficients of order 1e-15 or less may generally be ignored, as these are at roundoff error.)\n\nRK method: \"CK5\".\n When solving y'(t) = t + y(t), y(0)=1,\n the first nonzero term should have local truncation error proportional to O(dt^6) or a higher power of dt.\nSubtracting the numerical result from the exact Taylor expansion, we find a local truncation error of:\n 6 \ndt ⎛ 7⎞\n──── + O⎝dt ⎠\n3600 \n (Coefficients of order 1e-15 or less may generally be ignored, as these are at roundoff error.)\n\nRK method: \"DP6\".\n When solving y'(t) = t + y(t), y(0)=1,\n the first nonzero term should have local truncation error proportional to O(dt^7) or a higher power of dt.\nSubtracting the numerical result from the exact Taylor expansion, we find a local truncation error of:\n ⎛ 8⎞\nO⎝dt ⎠\n (Coefficients of order 1e-15 or less may generally be ignored, as these are at roundoff error.)\n\nRK method: \"L6\".\n When solving y'(t) = t + y(t), y(0)=1,\n the first nonzero term should have local truncation error proportional to O(dt^7) or a higher power of dt.\nSubtracting the numerical result from the exact Taylor expansion, we find a local truncation error of:\n 5 6 \n- 1.0587911840678754238e-22⋅dt - 2.6469779601696885596e-23⋅dt + 0.0013227513\n\n 7 ⎛ 8⎞\n227513227513⋅dt + O⎝dt ⎠\n (Coefficients of order 1e-15 or less may generally be ignored, as these are at roundoff error.)\n\nRK method: \"DP8\".\n When solving y'(t) = t + y(t), y(0)=1,\n the first nonzero term should have local truncation error proportional to O(dt^9) or a higher power of dt.\nSubtracting the numerical result from the exact Taylor expansion, we find a local truncation error of:\n 2 \n3.6854403535034607753e-18⋅dt + 5.8394451383711465375e-18⋅dt + 
3.7764963953332\n\n 3 4 5 \n980617e-18⋅dt + 9.542884942003761195e-19⋅dt + 1.2718729098615353529e-19⋅dt \n\n 6 7 \n+ 3.9082629581905451582e-20⋅dt + 4.8075737201581968464e-21⋅dt + 5.1688448526\n\n 8 9 ⎛ 10⎞\n907316834e-22⋅dt + 7.2078645877627939543e-9⋅dt + O⎝dt ⎠\n (Coefficients of order 1e-15 or less may generally be ignored, as these are at roundoff error.)\n\n" ] ], [ [ "<a id='latex_pdf_output'></a>\n\n# Step 3: Output this notebook to $\\LaTeX$-formatted PDF file \\[Back to [top](#toc)\\]\n$$\\label{latex_pdf_output}$$\n\nThe following code cell converts this Jupyter notebook into a proper, clickable $\\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename\n[Tutorial-RK_Butcher_Table_Validation.pdf](Tutorial-RK_Butcher_Table_Validation.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)", "_____no_output_____" ] ], [ [ "!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-RK_Butcher_Table_Validation.ipynb\n!pdflatex -interaction=batchmode Tutorial-RK_Butcher_Table_Validation.tex\n!pdflatex -interaction=batchmode Tutorial-RK_Butcher_Table_Validation.tex\n!pdflatex -interaction=batchmode Tutorial-RK_Butcher_Table_Validation.tex\n!rm -f Tut*.out Tut*.aux Tut*.log", "This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)\r\n restricted \\write18 enabled.\r\nentering extended mode\r\nThis is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)\r\n restricted \\write18 enabled.\r\nentering extended mode\r\nThis is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)\r\n restricted \\write18 enabled.\r\nentering extended mode\r\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
ec973ad3fbd1e62e80bf76b3a6e4145684636dfc
63,458
ipynb
Jupyter Notebook
dengue-virus.ipynb
Krish-sysadmin/ReverseEngineerVirus
d7baa9536fdba756d9bfc8fa3748553646bce6ad
[ "MIT" ]
4
2022-01-30T17:46:02.000Z
2022-01-31T22:31:57.000Z
dengue-virus.ipynb
Krish-sysadmin/ReverseEngineerVirus
d7baa9536fdba756d9bfc8fa3748553646bce6ad
[ "MIT" ]
null
null
null
dengue-virus.ipynb
Krish-sysadmin/ReverseEngineerVirus
d7baa9536fdba756d9bfc8fa3748553646bce6ad
[ "MIT" ]
null
null
null
90.654286
10,746
0.817423
[ [ [ "dengue = \"\"\"\n 1 agttgttagt ctacgtggac cgacaagaac agtttcgaat cggaagcttg cttaacgtag\n 61 ttctaacagt tttttattag agagcagatc tctgatgaac aaccaacgga aaaagacggg\n 121 tcgaccgtct ttcaatatgc tgaaacgcgc gagaaaccgc gtgtcaactg tttcacagtt\n 181 ggcgaagaga ttctcaaaag gattgctttc aggccaagga cccatgaaat tggtgatggc\n 241 ttttatagca ttcctaagat ttctagccat acctccaaca gcaggaattt tggctagatg\n 301 gggctcattc aagaagaatg gagcgatcaa agtgttacgg ggtttcaaga aagaaatctc\n 361 aaacatgttg aacataatga acaggaggaa aagatctgtg accatgctcc tcatgctgct\n 421 gcccacagcc ctggcgttcc atctgaccac ccgaggggga gagccgcaca tgatagttag\n 481 caagcaggaa agaggaaaat cacttttgtt taagacctct gcaggtgtca acatgtgcac\n 541 ccttattgca atggatttgg gagagttatg tgaggacaca atgacctaca aatgcccccg\n 601 gatcactgag acggaaccag atgacgttga ctgttggtgc aatgccacgg agacatgggt\n 661 gacctatgga acatgttctc aaactggtga acaccgacga gacaaacgtt ccgtcgcact\n 721 ggcaccacac gtagggcttg gtctagaaac aagaaccgaa acgtggatgt cctctgaagg\n 781 cgcttggaaa caaatacaaa aagtggagac ctgggctctg agacacccag gattcacggt\n 841 gatagccctt tttctagcac atgccatagg aacatccatc acccagaaag ggatcatttt\n 901 tattttgctg atgctggtaa ctccatccat ggccatgcgg tgcgtgggaa taggcaacag\n 961 agacttcgtg gaaggactgt caggagctac gtgggtggat gtggtactgg agcatggaag\n 1021 ttgcgtcact accatggcaa aagacaaacc aacactggac attgaactct tgaagacgga\n 1081 ggtcacaaac cctgccgtcc tgcgcaaact gtgcattgaa gctaaaatat caaacaccac\n 1141 caccgattcg agatgtccaa cacaaggaga agccacgctg gtggaagaac aggacacgaa\n 1201 ctttgtgtgt cgacgaacgt tcgtggacag aggctggggc aatggttgtg ggctattcgg\n 1261 aaaaggtagc ttaataacgt gtgctaagtt taagtgtgtg acaaaactgg aaggaaagat\n 1321 agtccaatat gaaaacttaa aatattcagt gatagtcacc gtacacactg gagaccagca\n 1381 ccaagttgga aatgagacca cagaacatgg aacaactgca accataacac ctcaagctcc\n 1441 cacgtcggaa atacagctga cagactacgg agctctaaca ttggattgtt cacctagaac\n 1501 agggctagac tttaatgaga tggtgttgtt gacaatgaaa aaaaaatcat ggctcgtcca\n 1561 caaacaatgg tttctagact taccactgcc ttggacctcg ggggcttcaa catcccaaga\n 1621 gacttggaat agacaagact tgctggtcac atttaagaca gctcatgcaa aaaagcagga\n 1681 agtagtcgta ctaggatcac aagaaggagc aatgcacact gcgttgactg gagcgacaga\n 1741 aatccaaacg tctggaacga caacaatttt tgcaggacac ctgaaatgca gattaaaaat\n 1801 ggataaactg attttaaaag ggatgtcata tgtaatgtgc acagggtcat tcaagttaga\n 1861 gaaggaagtg gctgagaccc agcatggaac tgttctagtg caggttaaat acgaaggaac\n 1921 agatgcacca tgcaagatcc ccttctcgtc ccaagatgag aagggagtaa cccagaatgg\n 1981 gagattgata acagccaacc ccatagtcac tgacaaagaa aaaccagtca acattgaagc\n 2041 ggagccacct tttggtgaga gctacattgt ggtaggagca ggtgaaaaag ctttgaaact\n 2101 aagctggttc aagaagggaa gcagtatagg gaaaatgttt gaagcaactg cccgtggagc\n 2161 acgaaggatg gccatcctgg gagacactgc atgggacttc ggttctatag gaggggtgtt\n 2221 cacgtctgtg ggaaaactga tacaccagat ttttgggact gcgtatggag ttttgttcag\n 2281 cggtgtttct tggaccatga agataggaat agggattctg ctgacatggc taggattaaa\n 2341 ctcaaggagc acgtcccttt caatgacgtg tatcgcagtt ggcatggtca cactgtacct\n 2401 aggagtcatg gttcaggcgg actcgggatg tgtaatcaac tggaaaggca gagaactcaa\n 2461 atgtggaagc ggcatttttg tcaccaatga agtccacacc tggacagagc aatataaatt\n 2521 ccaggccgac tcccctaaga gactatcagc ggccattggg aaggcatggg aggagggtgt\n 2581 gtgtggaatt cgatcagcca ctcgtctcga gaacatcatg tggaagcaaa tatcaaatga\n 2641 attaaaccac atcttacttg aaaatgacat gaaatttaca gtggtcgtag gagacgttag\n 2701 tggaatcttg gcccaaggaa agaaaatgat taggccacaa cccatggaac acaaatactc\n 2761 gtggaaaagc tggggaaaag ccaaaatcat aggagcagat gtacagaata ccaccttcat\n 2821 catcgacggc ccaaacaccc cagaatgccc tgataaccaa agagcatgga acatttggga\n 2881 agttgaagac tatggatttg gaattttcac 
gacaaacata tggttgaaat tgcgtgactc\n 2941 ctacactcaa gtgtgtgacc accggctaat gtcagctgcc atcaaggata gcaaagcagt\n 3001 ccatgctgac atggggtact ggatagaaag tgaaaagaac gagacttgga agttggcaag\n 3061 agcctccttc atagaagtta agacatgcat ctggccaaaa tcccacactc tatggagcaa\n 3121 tggagtcctg gaaagtgaga tgataatccc aaagatatat ggaggaccaa tatctcagca\n 3181 caactacaga ccaggatatt tcacacaaac agcagggccg tggcacttgg gcaagttaga\n 3241 actagatttt gatttatgtg aaggtaccac tgttgttgtg gatgaacatt gtggaaatcg\n 3301 aggaccatct cttagaacca caacagtcac aggaaagaca atccatgaat ggtgctgtag\n 3361 atcttgcacg ttaccccccc tacgtttcaa aggagaagac gggtgctggt acggcatgga\n 3421 aatcagacca gtcaaggaga aggaagagaa cctagttaag tcaatggtct ctgcagggtc\n 3481 aggagaagtg gacagttttt cactaggact gctatgcata tcaataatga tcgaagaggt\n 3541 aatgagatcc agatggagca gaaaaatgct gatgactgga acattggctg tgttcctcct\n 3601 tctcacaatg ggacaattga catggaatga tctgatcagg ctatgtatca tggttggagc\n 3661 caacgcttca gacaagatgg ggatgggaac aacgtaccta gctttgatgg ccactttcag\n 3721 aatgagacca atgttcgcag tcgggctact gtttcgcaga ttaacatcta gagaagttct\n 3781 tcttcttaca gttggattga gtctggtggc atctgtagaa ctaccaaatt ccttagagga\n 3841 gctaggggat ggacttgcaa tgggcatcat gatgttgaaa ttactgactg attttcagtc\n 3901 acatcagcta tgggctacct tgctgtcttt aacatttgtc aaaacaactt tttcattgca\n 3961 ctatgcatgg aagacaatgg ctatgatact gtcaattgta tctctcttcc ctttatgcct\n 4021 gtccacgact tctcaaaaaa caacatggct tccggtgttg ctgggatctc ttggatgcaa\n 4081 accactaacc atgtttctta taacagaaaa caaaatctgg ggaaggaaaa gctggcctct\n 4141 caatgaagga attatggctg ttggaatagt tagcattctt ctaagttcac ttctcaagaa\n 4201 tgatgtgcca ctagctggcc cactaatagc tggaggcatg ctaatagcat gttatgtcat\n 4261 atctggaagc tcggccgatt tatcactgga gaaagcggct gaggtctcct gggaagaaga\n 4321 agcagaacac tctggtgcct cacacaacat actagtggag gtccaagatg atggaaccat\n 4381 gaagataaag gatgaagaga gagatgacac actcaccatt ctcctcaaag caactctgct\n 4441 agcaatctca ggggtatacc caatgtcaat accggcgacc ctctttgtgt ggtatttttg\n 4501 gcagaaaaag aaacagagat caggagtgct atgggacaca cccagccctc cagaagtgga\n 4561 aagagcagtc cttgatgatg gcatttatag aattctccaa agaggattgt tgggcaggtc\n 4621 tcaagtagga gtaggagttt ttcaagaagg cgtgttccac acaatgtggc acgtcaccag\n 4681 gggagctgtc ctcatgtacc aagggaagag actggaacca agttgggcca gtgtcaaaaa\n 4741 agacttgatc tcatatggag gaggttggag gtttcaagga tcctggaacg cgggagaaga\n 4801 agtgcaggtg attgctgttg aaccggggaa gaaccccaaa aatgtacaga cagcgccggg\n 4861 taccttcaag acccctgaag gcgaagttgg agccatagct ctagacttta aacccggcac\n 4921 atctggatct cctatcgtga acagagaggg aaaaatagta ggtctttatg gaaatggagt\n 4981 ggtgacaaca agtggtacct acgtcagtgc catagctcaa gctaaagcat cacaagaagg\n 5041 gcctctacca gagattgagg acgaggtgtt taggaaaaga aacttaacaa taatggacct\n 5101 acatccagga tcgggaaaaa caagaagata ccttccagcc atagtccgtg aggccataaa\n 5161 aagaaagctg cgcacgctag tcttagctcc cacaagagtt gtcgcttctg aaatggcaga\n 5221 ggcgctcaag ggaatgccaa taaggtatca gacaacagca gtgaagagtg aacacacggg\n 5281 aaaggagata gttgacctta tgtgtcacgc cactttcact atgcgtctcc tgtctcctgt\n 5341 gagagttccc aattataata tgattatcat ggatgaagca cattttaccg atccagccag\n 5401 catagcagcc agagggtata tctcaacccg agtgggtatg ggtgaagcag ctgcgatttt\n 5461 catgacagcc actccccccg gatcggtgga ggcctttcca cagagcaatg cagttatcca\n 5521 agatgaggaa agagacattc ctgaaagatc atggaactca ggctatgact ggatcactga\n 5581 tttcccaggt aaaacagtct ggtttgttcc aagcatcaaa tcaggaaatg acattgccaa\n 5641 ctgtttaaga aagaatggga aacgggtggt ccaattgagc agaaaaactt ttgacactga\n 5701 gtaccagaaa acaaaaaata acgactggga ctatgttgtc acaacagaca tatccgaaat\n 5761 gggagcaaac ttccgagccg acagggtaat agacccgagg cggtgcctga aaccggtaat\n 5821 actaaaagat 
ggcccagagc gtgtcattct agccggaccg atgccagtga ctgtggctag\n 5881 cgccgcccag aggagaggaa gaattggaag gaaccaaaat aaggaaggcg atcagtatat\n 5941 ttacatggga cagcctctaa acaatgatga ggaccacgcc cattggacag aagcaaaaat\n 6001 gctccttgac aacataaaca caccagaagg gattatccca gccctctttg agccggagag\n 6061 agaaaagagt gcagcaatag acggggaata cagactacgg ggtgaagcga ggaaaacgtt\n 6121 cgtggagctc atgagaagag gagatctacc tgtctggcta tcctacaaag ttgcctcaga\n 6181 aggcttccag tactccgaca gaaggtggtg ctttgatggg gaaaggaaca accaggtgtt\n 6241 ggaggagaac atggacgtgg agatctggac aaaagaagga gaaagaaaga aactacgacc\n 6301 ccgctggctg gatgccagaa catactctga cccactggct ctgcgcgaat tcaaagagtt\n 6361 cgcagcagga agaagaagcg tctcaggtga cctaatatta gaaataggga aacttccaca\n 6421 acatttaacg caaagggccc agaacgcctt ggacaatctg gttatgttgc acaactctga\n 6481 acaaggagga aaagcctata gacacgccat ggaagaacta ccagacacca tagaaacgtt\n 6541 aatgctccta gctttgatag ctgtgctgac tggtggagtg acgttgttct tcctatcagg\n 6601 aaggggtcta ggaaaaacat ccattggcct actctgcgtg attgcctcaa gtgcactgtt\n 6661 atggatggcc agtgtggaac cccattggat agcggcctct atcatactgg agttctttct\n 6721 gatggtgttg cttattccag agccggacag acagcgcact ccacaagaca accagctagc\n 6781 atacgtggtg ataggtctgt tattcatgat attgacagtg gcagccaatg agatgggatt\n 6841 actggaaacc acaaagaagg acctggggat tggtcatgca gctgctgaaa accaccatca\n 6901 tgctgcaatg ctggacgtag acctacatcc agcttcagcc tggactctct atgcagtggc\n 6961 cacaacaatt atcactccca tgatgagaca cacaattgaa aacacaacgg caaatatttc\n 7021 cctgacagct attgcaaacc aggcagctat attgatggga cttgacaagg gatggccaat\n 7081 atcaaagatg gacataggag ttccacttct cgccttgggg tgctattctc aggtgaaccc\n 7141 gctgacgctg acagcggcgg tattgatgct agtggctcat tatgccataa ttggacccgg\n 7201 actgcaagca aaagctacta gagaagctca aaaaaggaca gcagccggaa taatgaaaaa\n 7261 cccaactgtc gacgggatcg ttgcaataga tttggaccct gtggtttacg atgcaaaatt\n 7321 tgaaaaacag ctaggccaaa taatgttgtt gatactttgc acatcacaga tcctcctgat\n 7381 gcggaccaca tgggccttgt gtgaatccat cacactagcc actggacctc tgactacgct\n 7441 ttgggaggga tctccaggaa aattctggaa caccacgata gcggtgtcca tggcaaacat\n 7501 ttttagggga agttatctag caggagcagg tctggccttt tcattaatga aatctctagg\n 7561 aggaggtagg agaggcacgg gagcccaagg ggaaacactg ggagaaaaat ggaaaagaca\n 7621 gctaaaccaa ttgagcaagt cagaattcaa cacttacaaa aggagtggga ttatagaggt\n 7681 ggatagatct gaagccaaag aggggttaaa aagaggagaa acgactaaac acgcagtgtc\n 7741 gagaggaacg gccaaactga ggtggtttgt ggagaggaac cttgtgaaac cagaagggaa\n 7801 agtcatagac ctcggttgtg gaagaggtgg ctggtcatat tattgcgctg ggctgaagaa\n 7861 agtcacagaa gtgaaaggat acacgaaagg aggacctgga catgaggaac caatcccaat\n 7921 ggcaacctat ggatggaacc tagtaaagct atactccggg aaagatgtat tctttacacc\n 7981 acctgagaaa tgtgacaccc tcttgtgtga tattggtgag tcctctccga acccaactat\n 8041 agaagaagga agaacgttac gtgttctaaa gatggtggaa ccatggctca gaggaaacca\n 8101 attttgcata aaaattctaa atccctatat gccgagtgtg gtagaaactt tggagcaaat\n 8161 gcaaagaaaa catggaggaa tgctagtgcg aaatccactc tcaagaaact ccactcatga\n 8221 aatgtactgg gtttcatgtg gaacaggaaa cattgtgtca gcagtaaaca tgacatctag\n 8281 aatgctgcta aatcgattca caatggctca caggaagcca acatatgaaa gagacgtgga\n 8341 cttaggcgct ggaacaagac atgtggcagt agaaccagag gtggccaacc tagatatcat\n 8401 tggccagagg atagagaata taaaaaatga acacaaatca acatggcatt atgatgagga\n 8461 caatccatac aaaacatggg cctatcatgg atcatatgag gtcaagccat caggatcagc\n 8521 ctcatccatg gtcaatggtg tggtgagact gctaaccaaa ccatgggatg tcattcccat\n 8581 ggtcacacaa atagccatga ctgacaccac accctttgga caacagaggg tgtttaaaga\n 8641 gaaagttgac acgcgtacac caaaagcgaa acgaggcaca gcacaaatta tggaggtgac\n 8701 agccaggtgg ttatggggtt ttctctctag aaacaaaaaa cccagaatct 
gcacaagaga\n 8761 ggagttcaca agaaaagtca ggtcaaacgc agctattgga gcagtgttcg ttgatgaaaa\n 8821 tcaatggaac tcagcaaaag aggcagtgga agatgaacgg ttctgggacc ttgtgcacag\n 8881 agagagggag cttcataaac aaggaaaatg tgccacgtgt gtctacaaca tgatgggaaa\n 8941 gagagagaaa aaattaggag agttcggaaa ggcaaaagga agtcgcgcaa tatggtacat\n 9001 gtggttggga gcgcgctttt tagagtttga agcccttggt ttcatgaatg aagatcactg\n 9061 gttcagcaga gagaattcac tcagtggagt ggaaggagaa ggactccaca aacttggata\n 9121 catactcaga gacatatcaa agattccagg gggaaatatg tatgcagatg acacagccgg\n 9181 atgggacaca agaataacag aggatgatct tcagaatgag gccaaaatca ctgacatcat\n 9241 ggaacctgaa catgccctat tggccacgtc aatctttaag ctaacctacc aaaacaaggt\n 9301 agtaagggtg cagagaccag cgaaaaatgg aaccgtgatg gatgtcatat ccagacgtga\n 9361 ccagagagga agtggacagg ttggaaccta tggcttaaac accttcacca acatggaggc\n 9421 ccaactaata agacaaatgg agtctgaggg aatcttttca cccagcgaat tggaaacccc\n 9481 aaatctagcc gaaagagtcc tcgactggtt gaaaaaacat ggcaccgaga ggctgaaaag\n 9541 aatggcaatc agtggagatg actgtgtggt gaaaccaatc gatgacagat ttgcaacagc\n 9601 cttaacagct ttgaatgaca tgggaaaggt aagaaaagac ataccgcaat gggaaccttc\n 9661 aaaaggatgg aatgattggc aacaagtgcc tttctgttca caccatttcc accagctgat\n 9721 tatgaaggat gggagggaga tagtggtgcc atgccgcaac caagatgaac ttgtaggtag\n 9781 ggccagagta tcacaaggcg ccggatggag cttgagagaa actgcatgcc taggcaagtc\n 9841 atatgcacaa atgtggcagc tgatgtactt ccacaggaga gacttgagat tagcggctaa\n 9901 tgctatctgt tcagccgttc cagttgattg ggtcccaacc agccgcacca cctggtcgat\n 9961 ccatgcccac catcaatgga tgacaacaga agacatgttg tcagtgtgga atagggtttg\n 10021 gatagaggaa aacccatgga tggaggacaa gactcatgtg tccagttggg aagacgttcc\n 10081 atacctagga aaaagggaag atcaatggtg tggttcccta ataggcttaa cagcacgagc\n 10141 cacctgggcc accaacatac aagtggccat aaaccaagtg agaaggctca ttgggaatga\n 10201 gaattatcta gacttcatga catcaatgaa gagattcaaa aacgagagtg atcccgaagg\n 10261 ggcactctgg taagccaact cattcacaaa ataaaggaaa ataaaaaatc aaacaaggca\n 10321 agaagtcagg ccggattaag ccatagcacg gtaagagcta tgctgcctgt gagccccgtc\n 10381 caaggacgta aaatgaagtc aggccgaaag ccacggttcg agcaagccgt gctgcctgta\n 10441 gctccatcgt ggggatgtaa aaacccggga ggctgcaaac catggaagct gtacgcatgg\n 10501 ggtagcagac tagtggttag aggagacccc tcccaagaca caacgcagca gcggggccca\n 10561 acaccagggg aagctgtacc ctggtggtaa ggactagagg ttagaggaga ccccccgcac\n 10621 aacaacaaac agcatattga cgctgggaga gaccagagat cctgctgtct ctacagcatc\n 10681 attccaggca cagaacgcca aaaaatggaa tggtgctgtt gaatcaacag gttct\n\"\"\"\n\n\nfor s in \" \\n0123456789\":\n dengue = dengue.replace(s, \"\")\n\ndengue\n", "_____no_output_____" ], [ "import zlib, lzma\nlzc = lzma.LZMACompressor()\nit = lzc.compress(dengue.encode('utf-8'))\nit += lzc.flush()\nlen(it)\nlen(dengue) # compressed size helps us gauge the Kolmogorov complexity - to find length of shortest computer program to produce object as output, how much information in dengue virus? 
~ 3.3 kb; a fuller comparison sketch follows this record\n", "_____no_output_____" ], [ "# Asn or Asp / B\tAAU, AAC; GAU, GAC\n# Gln or Glu / Z\tCAA, CAG; GAA, GAG\n# START\tAUG - stupid as this is just a MET\n\ncodonTable = \"\"\"Ala / A\tGCU, GCC, GCA, GCG\nIle / I\tAUU, AUC, AUA\nArg / R\tCGU, CGC, CGA, CGG; AGA, AGG\nLeu / L\tCUU, CUC, CUA, CUG; UUA, UUG\nAsn / N\tAAU, AAC\nLys / K\tAAA, AAG\nAsp / D\tGAU, GAC\nMet / M\tAUG\nPhe / F\tUUU, UUC\nCys / C\tUGU, UGC\nPro / P\tCCU, CCC, CCA, CCG\nGln / Q\tCAA, CAG\nSer / S\tUCU, UCC, UCA, UCG; AGU, AGC\nGlu / E\tGAA, GAG\nThr / T\tACU, ACC, ACA, ACG\nTrp / W\tUGG\nGly / G\tGGU, GGC, GGA, GGG\nTyr / Y\tUAU, UAC\nHis / H\tCAU, CAC\nVal / V\tGUU, GUC, GUA, GUG\nSTOP\tUAA, UGA, UAG\"\"\" # you can refer here: https://en.wikipedia.org/wiki/DNA_and_RNA_codon_tables\n# note both Met and START are AUG - deal with it, Krish\n\ndec = {} #dictionary to use to decode DNA\nfor i in codonTable.split('\\n'):\n    acid,codons = i.split(\"\\x09\") # split based on tabs\n    if '/' in acid :\n        acid = acid.split(\"/\")[-1].strip()\n    acid = acid.replace(\"STOP\", \"#\")\n    codons = codons.replace(\",\", \"\").replace(\";\", \"\").lower() # lower case it after removing the , and ;\n    codons = codons.replace(\"u\", \"t\").split(\" \") # convert RNA codons to DNA and split into a list like ['gct', 'gcc', ...] \n    for codon in codons:\n        if codon in dec:\n            print(\"duplicate\", codon)\n        dec[codon] = acid\n\n    \n#dec ", "_____no_output_____" ], [ "def decodeIt(x):\n    polypeptides = []\n    for i in range(0, len(x)-2, 3): # the reading frame is chosen by slicing the input, so always start at 0\n        polypeptides.append(dec[x[i:i+3]])\n    polypeptides = ''.join(polypeptides)\n    return polypeptides\n    \npolypeptides = decodeIt(dengue[0:]) + decodeIt(dengue[1:]) + decodeIt(dengue[2:]) \n\n", "_____no_output_____" ], [ "polypeptides", "_____no_output_____" ], [ "polypeptides.find(\"TWALRHPGF\") # test random sequence from the translation section of https://www.ncbi.nlm.nih.gov/nuccore/9626685", "_____no_output_____" ], [ "list(filter(lambda x: len(x) > 7, polypeptides.split(\"#\"))) #find proteins", "_____no_output_____" ], [ "decodeIt(dengue[95-1:10273])", "_____no_output_____" ], [ "decodeIt(dengue[95-1:10273])", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ec976008e3f928ae74e2ddd22b8ef5535fc74a89
49,233
ipynb
Jupyter Notebook
Docs/Ref/.ipynb_checkpoints/PythonViz-checkpoint.ipynb
heenalsapovadia/ml_practices_2018
77f30c14cd585ab71667e2a2c7e9e69c0de2b97c
[ "Apache-2.0" ]
1
2020-04-23T04:05:43.000Z
2020-04-23T04:05:43.000Z
Docs/Ref/.ipynb_checkpoints/PythonViz-checkpoint.ipynb
heenalsapovadia/ml_practices_2018
77f30c14cd585ab71667e2a2c7e9e69c0de2b97c
[ "Apache-2.0" ]
1
2019-03-20T12:13:42.000Z
2019-03-20T12:13:42.000Z
Docs/Ref/.ipynb_checkpoints/PythonViz-checkpoint.ipynb
heenalsapovadia/ml_practices_2018
77f30c14cd585ab71667e2a2c7e9e69c0de2b97c
[ "Apache-2.0" ]
2
2018-12-21T07:22:17.000Z
2018-12-27T12:14:55.000Z
222.773756
20,496
0.914935
[ [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.animation as animation\n%matplotlib inline\n# %matplotlib nbagg\n# %matplotlib notebook", "_____no_output_____" ], [ "from matplotlib import style\nstyle.use(\"ggplot\")", "_____no_output_____" ], [ "t = np.linspace(0,10)\nv0,g = 50, 9.81\ny = v0*t - 0.5*g*t**2\nv0 = 20\ny2 = v0*t - 0.5*g*t**2\nplt.plot(t,y,\"-\",label='v0 = 50 m/s')\nplt.plot(t,y2,\".\",label='v0 = 20 m/s')\nplt.ylim(0,)\nplt.xlabel(\"t [s]\")\nplt.ylabel(\"height [m]\")\nplt.legend()\nplt.show()", "_____no_output_____" ], [ "import ipywidgets as widgets\nimport liveplot", "_____no_output_____" ], [ "# @widgets.interact #Real-time update\[email protected]_manual #Manual update\ndef projectile(v0=50,v1=10, option=True, string='20'):\n t = np.linspace(0,10)\n g = 9.81\n y = v0*t - 0.5*g*t**2\n y1 = v1*t - 0.5*g*t**2\n plt.plot(t,y,\"-\",label='v0 = {} $m/s$'.format(v0))\n plt.plot(t,y1,\".\",label='v1 = {} m/s'.format(string))\n plt.ylim(0,)\n plt.xlabel(\"t [s]\")\n if option:\n plt.ylabel(\"height [m]\")\n plt.legend()\n plt.show()", "_____no_output_____" ], [ "fig = plt.figure()\nax1 = fig.add_subplot(1,1,1)\n\ndef animate(i):\n ax1.clear()\n #Your Plotting code that will get updated through some file\n # data = open('file.txt','r').read()\n #Create list for x and y that appends new data\n ax1.plot()\n \nani = animation.FuncAnimation(fig, animate, frames=range(num_of_data), interval=1000)\n\n#now Display\nplt.show()\n#Or\n#Alternatively for creating movie\nplt.close()\nani.save('animation.gif', writer='imagemagick', fps=2)\nfilename = 'animation.gif'\nvideo = io.open(filename, 'r+b').read()\nencoded = base64.b64encode(video)\nHTML(data='''<img src=\"data:image/gif;base64,{0}\" type=\"gif\" />'''.format(encoded.decode('ascii')))", "_____no_output_____" ], [ "import mpl_toolkits.mplot3d\nfig = plt.figure(figsize = (12, 10))\nax = fig.add_subplot(111, projection = '3d')\nax.scatter(df.x, df.y, df.z,c=df.Tcol.map({True:'green', False:'red'}))\nax.set_xlabel()\nax.set_ylabel()\nax.set_zlabel()\nplt.show()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
ec9768701a6f5de942f291ea35510441646df366
22,870
ipynb
Jupyter Notebook
auto_create_quiz.py.ipynb
Akashdu/auto_create_quiz.py
c6953cedca7d8c1e46f0f73e6478ed2d0087de04
[ "Apache-2.0" ]
6
2020-10-11T06:31:39.000Z
2021-03-08T13:04:35.000Z
auto_create_quiz.py.ipynb
Akashdu/auto_create_quiz.py
c6953cedca7d8c1e46f0f73e6478ed2d0087de04
[ "Apache-2.0" ]
null
null
null
auto_create_quiz.py.ipynb
Akashdu/auto_create_quiz.py
c6953cedca7d8c1e46f0f73e6478ed2d0087de04
[ "Apache-2.0" ]
null
null
null
49.717391
1,428
0.60481
[ [ [ "# With ! we can run the unix commands from the jupyter notebook\n#nltk is a great natual language processing library in Python\n!pip install -U nltk\n\n# Lets install textblob\n# textblob is a simple wrapper over NLTK\n!pip install -U textblob\n!python -m textblob.download_corpora", "Collecting nltk\n Downloading https://files.pythonhosted.org/packages/92/75/ce35194d8e3022203cca0d2f896dbb88689f9b3fce8e9f9cff942913519d/nltk-3.5.zip (1.4MB)\n\u001b[K 100% |████████████████████████████████| 1.4MB 351kB/s eta 0:00:01\n\u001b[?25hCollecting click (from nltk)\n Downloading https://files.pythonhosted.org/packages/d2/3d/fa76db83bf75c4f8d338c2fd15c8d33fdd7ad23a9b5e57eb6c5de26b430e/click-7.1.2-py2.py3-none-any.whl (82kB)\n\u001b[K 100% |████████████████████████████████| 92kB 1.0MB/s eta 0:00:01\n\u001b[?25hCollecting joblib (from nltk)\n Downloading https://files.pythonhosted.org/packages/57/d2/45a038246a0596fb73af64c07e95578764d0fd115ce67f6b41eb457eed39/joblib-0.15.1.tar.gz (347kB)\n\u001b[K 100% |████████████████████████████████| 348kB 822kB/s eta 0:00:01\n\u001b[?25h Complete output from command python setup.py egg_info:\n Traceback (most recent call last):\n File \"<string>\", line 1, in <module>\n File \"/tmp/pip-build-StboMT/joblib/setup.py\", line 6, in <module>\n import joblib\n File \"joblib/__init__.py\", line 113, in <module>\n from .memory import Memory, MemorizedResult, register_store_backend\n File \"joblib/memory.py\", line 274\n raise new_exc from exc\n ^\n SyntaxError: invalid syntax\n \n ----------------------------------------\n\u001b[31mCommand \"python setup.py egg_info\" failed with error code 1 in /tmp/pip-build-StboMT/joblib/\u001b[0m\n\u001b[33mYou are using pip version 19.1.1, however version 20.1.1 is available.\nYou should consider upgrading via the 'pip install --upgrade pip' command.\u001b[0m\nCollecting textblob\n Downloading https://files.pythonhosted.org/packages/60/f0/1d9bfcc8ee6b83472ec571406bd0dd51c0e6330ff1a51b2d29861d389e85/textblob-0.15.3-py2.py3-none-any.whl (636kB)\n\u001b[K 100% |████████████████████████████████| 645kB 119kB/s ta 0:00:011\n\u001b[?25hCollecting nltk>=3.1 (from textblob)\n Using cached https://files.pythonhosted.org/packages/92/75/ce35194d8e3022203cca0d2f896dbb88689f9b3fce8e9f9cff942913519d/nltk-3.5.zip\nCollecting click (from nltk>=3.1->textblob)\n Using cached https://files.pythonhosted.org/packages/d2/3d/fa76db83bf75c4f8d338c2fd15c8d33fdd7ad23a9b5e57eb6c5de26b430e/click-7.1.2-py2.py3-none-any.whl\nCollecting joblib (from nltk>=3.1->textblob)\n Using cached https://files.pythonhosted.org/packages/57/d2/45a038246a0596fb73af64c07e95578764d0fd115ce67f6b41eb457eed39/joblib-0.15.1.tar.gz\n Complete output from command python setup.py egg_info:\n Traceback (most recent call last):\n File \"<string>\", line 1, in <module>\n File \"/tmp/pip-build-vzwnkL/joblib/setup.py\", line 6, in <module>\n import joblib\n File \"joblib/__init__.py\", line 113, in <module>\n from .memory import Memory, MemorizedResult, register_store_backend\n File \"joblib/memory.py\", line 274\n raise new_exc from exc\n ^\n SyntaxError: invalid syntax\n \n ----------------------------------------\n\u001b[31mCommand \"python setup.py egg_info\" failed with error code 1 in /tmp/pip-build-vzwnkL/joblib/\u001b[0m\n\u001b[33mYou are using pip version 19.1.1, however version 20.1.1 is available.\nYou should consider upgrading via the 'pip install --upgrade pip' command.\u001b[0m\n/usr/bin/python: No module named textblob\n" ], [ "# Import TextBlob module\nfrom textblob import 
TextBlob", "_____no_output_____" ], [ "# This is the text that we are going to use. \n# This text is from wikipedia on World War 2 - https://en.wikipedia.org/wiki/World_War_II\n# Note: triple quotes are used for defining multi line string\nww2 = '''\nWorld War II (often abbreviated to WWII or WW2), also known as the Second World War, was a global war that lasted from 1939 to 1945, although related conflicts began earlier. It involved the vast majority of the world's countries—including all of the great powers—eventually forming two opposing military alliances: the Allies and the Axis. It was the most widespread war in history, and directly involved more than 100 million people from over 30 countries. In a state of total war, the major participants threw their entire economic, industrial, and scientific capabilities behind the war effort, erasing the distinction between civilian and military resources.\n\nWorld War II was the deadliest conflict in human history, marked by 50 million to 85 million fatalities, most of which were civilians in the Soviet Union and China. It included massacres, the deliberate genocide of the Holocaust, strategic bombing, starvation, disease and the first use of nuclear weapons in history.[1][2][3][4]\n\nThe Empire of Japan aimed to dominate Asia and the Pacific and was already at war with the Republic of China in 1937,[5] but the world war is generally said to have begun on 1 September 1939[6] with the invasion of Poland by Nazi Germany and subsequent declarations of war on Germany by France and the United Kingdom. Supplied by the Soviet Union, from late 1939 to early 1941, in a series of campaigns and treaties, Germany conquered or controlled much of continental Europe, and formed the Axis alliance with Italy and Japan. Under the Molotov–Ribbentrop Pact of August 1939, Germany and the Soviet Union partitioned and annexed territories of their European neighbours, Poland, Finland, Romania and the Baltic states. The war continued primarily between the European Axis powers and the coalition of the United Kingdom and the British Commonwealth, with campaigns including the North Africa and East Africa campaigns, the aerial Battle of Britain, the Blitz bombing campaign, and the Balkan Campaign, as well as the long-running Battle of the Atlantic. On 22 June 1941, the European Axis powers launched an invasion of the Soviet Union, opening the largest land theatre of war in history, which trapped the major part of the Axis military forces into a war of attrition. In December 1941, Japan attacked the United States and European colonies in the Pacific Ocean, and quickly conquered much of the Western Pacific.\n\nThe Axis advance halted in 1942 when Japan lost the critical Battle of Midway, and Germany and Italy were defeated in North Africa and then, decisively, at Stalingrad in the Soviet Union. In 1943, with a series of German defeats on the Eastern Front, the Allied invasion of Sicily and the Allied invasion of Italy which brought about Italian surrender, and Allied victories in the Pacific, the Axis lost the initiative and undertook strategic retreat on all fronts. In 1944, the Western Allies invaded German-occupied France, while the Soviet Union regained all of its territorial losses and invaded Germany and its allies. 
During 1944 and 1945 the Japanese suffered major reverses in mainland Asia in South Central China and Burma, while the Allies crippled the Japanese Navy and captured key Western Pacific islands.\n\nThe war in Europe concluded with an invasion of Germany by the Western Allies and the Soviet Union, culminating in the capture of Berlin by Soviet troops, the suicide of Adolf Hitler and the subsequent German unconditional surrender on 8 May 1945. Following the Potsdam Declaration by the Allies on 26 July 1945 and the refusal of Japan to surrender under its terms, the United States dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki on 6 August and 9 August respectively. With an invasion of the Japanese archipelago imminent, the possibility of additional atomic bombings and the Soviet invasion of Manchuria, Japan formally surrendered on 2 September 1945. Thus ended the war in Asia, cementing the total victory of the Allies.\n\nWorld War II changed the political alignment and social structure of the world. The United Nations (UN) was established to foster international co-operation and prevent future conflicts. The victorious great powers—China, France, the Soviet Union, the United Kingdom, and the United States—became the permanent members of the United Nations Security Council.[7] The Soviet Union and the United States emerged as rival superpowers, setting the stage for the Cold War, which lasted for the next 46 years. Meanwhile, the influence of European great powers waned, while the decolonisation of Africa and Asia began. Most countries whose industries had been damaged moved towards economic recovery. Political integration, especially in Europe, emerged as an effort to end pre-war enmities and to create a common identity.[8]\n'''\n\n\n# Uncomment the code below and run it if you are using Python 2\n# ww2 = unicode(ww2, 'utf-8')", "_____no_output_____" ], [ "ww2b = TextBlob(ww2)\nsposs = {}\nfor sentence in ww2b.sentences:\n    \n    # We are going to prepare the dictionary of parts-of-speech as the key and value is a list of words:\n    # {part-of-speech: [word1, word2]}\n    # We are basically grouping the words based on the parts-of-speech\n    poss = {}\n    sposs[sentence.string] = poss;\n    for t in sentence.tags:\n        tag = t[1]\n        if tag not in poss:\n            poss[tag] = []\n        poss[tag].append(t[0])\n\n\n# Uncomment the code below and run it if you are using Python 2\n# ww2b = TextBlob(ww2)\n# sposs = {}\n# for sentence in ww2b.sentences:\n    \n#     # We are going to prepare the dictionary of parts-of-speech as the key and value is a list of words:\n#     # {part-of-speech: [word1, word2]}\n#     # We are basically grouping the words based on the parts-of-speech\n    \n#     poss = {}\n#     sposs[sentence.string] = poss;\n#     for t in sentence.tags:\n#         tag = t[1].encode('utf-8')\n#         if tag not in poss:\n#             poss[tag] = []\n#         poss[tag].append(t[0].encode('utf-8'))\n", "_____no_output_____" ], [ "import random\nimport re\n\n# Create the blank in the string\ndef replaceIC(word, sentence):\n    insensitive_hippo = re.compile(re.escape(word), re.IGNORECASE)\n    return insensitive_hippo.sub('__________________', sentence)\n\n# For a sentence create a blank space.\n# It first tries to randomly select a proper noun \n# and if the proper noun is not found, it selects a noun randomly.\ndef removeWord(sentence, poss):\n    words = None\n    if 'NNP' in poss:\n        words = poss['NNP']\n    elif 'NN' in poss:\n        words = poss['NN']\n    else:\n        print(\"NN and NNP not found\")\n        return (None, sentence, None)\n    if len(words) > 0:\n        word = random.choice(words)\n        
replaced = replaceIC(word, sentence)\n return (word, sentence, replaced)\n else:\n print(\"words are empty\")\n return (None, sentence, None)", "_____no_output_____" ], [ "# Iterate over the sentenses \nfor sentence in sposs.keys():\n poss = sposs[sentence]\n (word, osentence, replaced) = removeWord(sentence, poss)\n if replaced is None:\n print (\"Founded none for \")\n print(sentence)\n else:\n print(replaced)\n print (\"\\n===============\")\n print(word)\n print (\"===============\")\n print(\"\\n\")", "\nWorld War __________________ (often abbreviated to WW__________________ or WW2), also known as the Second World War, was a global war that lasted from 1939 to 1945, although related conflicts began earlier.\n\n===============\nII\n===============\n\n\nIt involved the vast majority of the world's countries—including all of the great powers—eventually forming two opposing military alliances: the Allies and the __________________.\n\n===============\nAxis\n===============\n\n\nIt was the most widespread __________________ in history, and directly involved more than 100 million people from over 30 countries.\n\n===============\nwar\n===============\n\n\nIn a state of total war, the major participants threw their entire economic, industrial, and scientific capabilities behind the war effort, erasing the __________________ between civilian and military resources.\n\n===============\ndistinction\n===============\n\n\n__________________ War II was the deadliest conflict in human history, marked by 50 million to 85 million fatalities, most of which were civilians in the Soviet Union and China.\n\n===============\nWorld\n===============\n\n\nIt included massacres, the deliberate genocide of the __________________, strategic bombing, starvation, disease and the first use of nuclear weapons in history.\n\n===============\nHolocaust\n===============\n\n\n[1][2][3][4]\n\nThe Empire of Japan aimed to dominate Asia and the Pacific and was already at war with the Republic of China in 1937,[5] but the world war is generally said to have begun on 1 September 1939[6] with the invasion of __________________ by Nazi Germany and subsequent declarations of war on Germany by France and the United Kingdom.\n\n===============\nPoland\n===============\n\n\nSupplied by the Soviet Union, from late 1939 to early 1941, in a series of campaigns and treaties, Germany conquered or controlled much of continental __________________, and formed the Axis alliance with Italy and Japan.\n\n===============\nEurope\n===============\n\n\nUnder the Molotov–Ribbentrop Pact of August 1939, Germany and the __________________ Union partitioned and annexed territories of their European neighbours, Poland, Finland, Romania and the Baltic states.\n\n===============\nSoviet\n===============\n\n\nThe war continued primarily between the European Axis powers and the coalition of the United Kingdom and the British Commonwealth, with campaigns including the North Africa and East Africa campaigns, the aerial __________________ of Britain, the Blitz bombing campaign, and the Balkan Campaign, as well as the long-running __________________ of the Atlantic.\n\n===============\nBattle\n===============\n\n\nOn 22 June 1941, the European Axis powers launched an invasion of the __________________ Union, opening the largest land theatre of war in history, which trapped the major part of the Axis military forces into a war of attrition.\n\n===============\nSoviet\n===============\n\n\nIn __________________ 1941, Japan attacked the United States and 
European colonies in the Pacific Ocean, and quickly conquered much of the Western Pacific.\n\n===============\nDecember\n===============\n\n\nThe __________________ advance halted in 1942 when Japan lost the critical Battle of Midway, and Germany and Italy were defeated in North Africa and then, decisively, at Stalingrad in the Soviet Union.\n\n===============\nAxis\n===============\n\n\nIn 1943, with a series of German defeats on the Eastern __________________, the Allied invasion of Sicily and the Allied invasion of Italy which brought about Italian surrender, and Allied victories in the Pacific, the Axis lost the initiative and undertook strategic retreat on all __________________s.\n\n===============\nFront\n===============\n\n\nIn 1944, the Western Allies invaded German-occupied France, while the Soviet Union regained all of its territorial losses and invaded __________________ and its allies.\n\n===============\nGermany\n===============\n\n\nDuring 1944 and 1945 the Japanese suffered major reverses in mainland __________________ in South Central China and Burma, while the Allies crippled the Japanese Navy and captured key Western Pacific islands.\n\n===============\nAsia\n===============\n\n\nThe war in Europe concluded with an invasion of Germany by the Western Allies and the __________________ Union, culminating in the capture of Berlin by __________________ troops, the suicide of Adolf Hitler and the subsequent German unconditional surrender on 8 May 1945.\n\n===============\nSoviet\n===============\n\n\nFollowing the Potsdam Declaration by the Allies on 26 July 1945 and the refusal of Japan to surrender under its terms, the United States dropped atomic bombs on the Japanese cities of __________________ and Nagasaki on 6 August and 9 August respectively.\n\n===============\nHiroshima\n===============\n\n\nWith an invasion of the Japanese archipelago imminent, the possibility of additional atomic bombings and the Soviet invasion of __________________, Japan formally surrendered on 2 September 1945.\n\n===============\nManchuria\n===============\n\n\nThus ended the war in __________________, cementing the total victory of the Allies.\n\n===============\nAsia\n===============\n\n\nWorld War __________________ changed the political alignment and social structure of the world.\n\n===============\nII\n===============\n\n\nThe __________________ited Nations (__________________) was established to foster international co-operation and prevent future conflicts.\n\n===============\nUN\n===============\n\n\nThe victorious great powers—China, France, the Soviet Union, the __________________ Kingdom, and the __________________ States—became the permanent members of the __________________ Nations Security Council.\n\n===============\nUnited\n===============\n\n\n[7] The Soviet Union and the __________________ States emerged as rival superpowers, setting the stage for the Cold War, which lasted for the next 46 years.\n\n===============\nUnited\n===============\n\n\nMeanwhile, the influence of European great powers waned, while the decolonisation of Africa and __________________ began.\n\n===============\nAsia\n===============\n\n\nMost countries whose industries had been damaged moved towards economic __________________.\n\n===============\nrecovery\n===============\n\n\nPolitical integration, especially in __________________, emerged as an effort to end pre-war enmities and to create a common identity.\n\n===============\nEurope\n===============\n\n\nNN and NNP not found\nFounded none for 
\n[8]\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]
ec97778bd2929f87ef1a043bf70d5417f00a46d8
7,197
ipynb
Jupyter Notebook
CHAPTER 7 exercises.ipynb
AbhishekMali21/PYTHON-FOR-EVERYBODY
7565acda8f694a9cd9e0ac23f98e83ec0b62f9ae
[ "MIT" ]
17
2019-05-01T02:25:57.000Z
2021-06-16T17:09:16.000Z
CHAPTER 7 exercises.ipynb
AbhishekMali21/PYTHON-FOR-EVERYBODY
7565acda8f694a9cd9e0ac23f98e83ec0b62f9ae
[ "MIT" ]
null
null
null
CHAPTER 7 exercises.ipynb
AbhishekMali21/PYTHON-FOR-EVERYBODY
7565acda8f694a9cd9e0ac23f98e83ec0b62f9ae
[ "MIT" ]
3
2019-10-04T17:29:49.000Z
2022-03-07T05:17:03.000Z
26.955056
316
0.545922
[ [ [ "# CHAPTER 7 - Exercises", "_____no_output_____" ], [ "**Exercise 1: Write a program to read through a file and print the contents of the\nfile (line by line) all in upper case. Executing the program will look as follows**", "_____no_output_____" ], [ "python shout.py\nEnter a file name: mbox-short.txt\nFROM STEPHEN.MARQUARDUCT.AC.ZA SAT JAN 5 09:14:16 2008\nRETURN-PATH: <POSTMASTERCOLLABSAKAIPROJECTORG>\nRECEIVED: FROM MURDER (MAIL.UMICH.EDU [141.211.14.90])\nBY FRANKENSTEIN.MAIL.UMICH.EDU (CYRUS V2.3.8) WITH LMTPA;\nSAT, 05 JAN 2008 09:14:16 -0500", "_____no_output_____" ] ], [ [ "fhand = open('word.txt')\nfor line in fhand:\n stripped = line.rstrip()\n print(stripped.upper())", "1 WHY SHOULD YOU LEARN TO WRITE PROGRAMS? 2 VARIABLES, EXPRESSIONS, AND STATEMENTS 3 CONDITIONAL EXECUTION 4 FUNCTIONS 5 ITERATION 6 STRINGS 7 FILES 8 LISTS 9 DICTIONARIES 10 TUPLES 11 REGULAR EXPRESSIONS 12 NETWORKED PROGRAMS 13 USING WEB SERVICES 14 OBJECT-ORIENTED PROGRAMMING 15 USING DATABASES AND SQL\n" ] ], [ [ "**Exercise 2: Write a program to prompt for a file name, and then read through the\nfile and look for lines of the form:**\n\nX-DSPAM-Confidence:0.8475\n\n**When you encounter a line that starts with “X-DSPAM-Confidence:” pull apart\nthe line to extract the floating-point number on the line. Count these lines and\nthen compute the total of the spam confidence values from these lines. When you\nreach the end of the file, print out the average spam confidence.**", "_____no_output_____" ], [ "Enter the file name: mbox.txt\nAverage spam confidence: 0.894128046745\n\nEnter the file name: mbox-short.txt\nAverage spam confidence: 0.750718518519\n\nTest your file on the mbox.txt and mbox-short.txt files", "_____no_output_____" ] ], [ [ "fname = input('Enter the file name: ')\ntry:\n fhand = open(fname)\n count = 0\n spam_running = 0\n for line in fhand:\n line = line.rstrip()\n if line.startswith('X-DSPAM-Confidence:'):\n count = count + 1\n pos = line.find(':')\n float_line = float(line[pos+1:])\n spam_running = spam_running + float_line\n spam_average = spam_running/count\n print('Average spam confidence: ', spam_average)\nexcept:\n print('File cannot be opened:', fname)", "Enter the file name: hi\nFile cannot be opened: hi\n" ] ], [ [ "**Exercise 3: Sometimes when programmers get bored or want to have a bit of fun,\nthey add a harmless Easter Egg to their program Modify the program that prompts\nthe user for the file name so that it prints a funny message when the user types in\nthe exact file name “na na boo boo”. The program should behave normally for all\nother files which exist and don’t exist. 
Here is a sample execution of the program:**", "_____no_output_____" ], [ "python egg.py\nEnter the file name: mbox.txt\nThere were 1797 subject lines in mbox.txt\n\npython egg.py\nEnter the file name: missing.tyxt\nFile cannot be opened: missing.tyxt\n\npython egg.py\nEnter the file name: na na boo boo\nNA NA BOO BOO TO YOU - You have been punk'd!", "_____no_output_____" ] ], [ [ "fname = input('Enter the file name: ')\nif fname == ('na na boo boo'):\n print(\"NA NA BOO BOO TO YOU - You have been punk'd!\")\nelse: \n try:\n fhand = open(fname)\n except:\n print('File cannot be opened:', fname)\n exit()\n count = 0\n spam_running = 0\n for line in fhand:\n line = line.rstrip()\n if line.startswith('X-DSPAM-Confidence:'):\n count = count + 1\n pos = line.find(':')\n float_line = float(line[pos+1:])\n spam_running = spam_running + float_line\n spam_average = spam_running/count\n print('You have ', count, ' files')\n print('The average spam confidence is ', spam_average)", "Enter the file name: na na boo boo\nNA NA BOO BOO TO YOU - You have been punk'd!\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ] ]
ec97885ad9ebbb962ecab7ed4ecab9627bd314e0
248,823
ipynb
Jupyter Notebook
LR_adaptive_Hint.ipynb
FLHonker/AMTML-KD-code
926ab3c65e995cd97dc98bfcdb8c1bc62994329c
[ "MIT" ]
19
2020-12-02T06:33:03.000Z
2022-03-28T03:39:33.000Z
LR_adaptive_Hint.ipynb
FLHonker/AMTML-KD-code
926ab3c65e995cd97dc98bfcdb8c1bc62994329c
[ "MIT" ]
null
null
null
LR_adaptive_Hint.ipynb
FLHonker/AMTML-KD-code
926ab3c65e995cd97dc98bfcdb8c1bc62994329c
[ "MIT" ]
4
2021-08-14T02:23:40.000Z
2022-03-24T08:21:55.000Z
81.394504
372
0.581208
[ [ [ "from __future__ import print_function\nimport os\nimport time\nimport logging\nimport argparse\nimport numpy as np\nfrom visdom import Visdom\nfrom PIL import Image\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom torch.autograd import Variable\nfrom torch.utils.data import DataLoader\nfrom torchvision import datasets, transforms\nfrom utils import *\nfrom metric.loss import FitNet, AttentionTransfer, RKdAngle, RkdDistance\n\n# Teacher models:\n# VGG11/VGG13/VGG16/VGG19, GoogLeNet, AlxNet, ResNet18, ResNet34, \n# ResNet50, ResNet101, ResNet152, ResNeXt29_2x64d, ResNeXt29_4x64d, \n# ResNeXt29_8x64d, ResNeXt29_32x64d, PreActResNet18, PreActResNet34, \n# PreActhttps://www.bing.com/?mkt=zh-CNResNet50, PreActResNet101, PreActResNet152, \n# DenseNet121, DenseNet161, DenseNet169, DenseNet201, \nimport models\n\n# Student models:\n# myNet, LeNet, FitNet\n\nstart_time = time.time()\n# os.makedirs('./checkpoint', exist_ok=True)\n\n# Training settings\nparser = argparse.ArgumentParser(description='PyTorch ada. FitNet')\n\nparser.add_argument('--dataset',\n choices=['CIFAR10',\n 'CIFAR100'\n ],\n default='CIFAR10')\nparser.add_argument('--teachers',\n choices=['ResNet32',\n 'ResNet50',\n 'ResNet56',\n 'ResNet110',\n 'DenseNet121'\n ],\n default=['ResNet32', 'ResNet56', 'ResNet110'],\n nargs='+')\nparser.add_argument('--student',\n choices=['ResNet8',\n 'ResNet15',\n 'ResNet16',\n 'ResNet20',\n 'myNet'\n ],\n default='ResNet20')\n\nparser.add_argument('--kd_ratio', default=0.7, type=float)\nparser.add_argument('--n_class', type=int, default=10, metavar='N', help='num of classes')\nparser.add_argument('--T', type=float, default=20.0, metavar='Temputure', help='Temputure for distillation')\nparser.add_argument('--batch_size', type=int, default=128, metavar='N', help='input batch size for training')\nparser.add_argument('--test_batch_size', type=int, default=128, metavar='N', help='input test batch size for training')\nparser.add_argument('--epochs', type=int, default=20, metavar='N', help='number of epochs to train (default: 20)')\nparser.add_argument('--lr', type=float, default=0.1, metavar='LR', help='learning rate (default: 0.01)')\nparser.add_argument('--momentum', type=float, default=0.9, metavar='M', help='SGD momentum (default: 0.5)')\nparser.add_argument('--device', default='cuda:0', type=str, help='device: cuda or cpu')\nparser.add_argument('--print_freq', type=int, default=10, metavar='N', help='how many batches to wait before logging training status')\n\nconfig = ['--epochs', '200', '--T', '5.0', '--device', 'cuda:0']\nargs = parser.parse_args(config)\n\ndevice = args.device if torch.cuda.is_available() else 'cpu'\nload_dir = './checkpoint/' + args.dataset + '/'\n\n# teachers model\nteacher_models = []\nfor te in args.teachers:\n te_model = getattr(models, te)(num_classes=args.n_class)\n# print(te_model)\n te_model.load_state_dict(torch.load(load_dir + te_model.model_name + '.pth'))\n te_model.to(device)\n teacher_models.append(te_model)\n\nst_model = getattr(models, args.student)(num_classes=args.n_class) # args.student()\nst_model.to(device)\n\n# logging\nlogfile = load_dir + 'ada_fitnet_' + st_model.model_name + '.log'\nif os.path.exists(logfile):\n os.remove(logfile)\ndef log_out(info):\n f = open(logfile, mode='a')\n f.write(info)\n f.write('\\n')\n f.close()\n print(info)\n \n# visualizer\nvis = Visdom(env='distill')\nloss_win = vis.line(\n X=np.array([0]),\n Y=np.array([0]),\n opts=dict(\n title='FitNet ada. 
loss',\n xtickmin=0,\n# xtickmax=1,\n# xtickstep=5,\n ytickmin=0,\n# ytickmax=1,\n ytickstep=0.5,\n# markers=True,\n# markersymbol='dot',\n# markersize=5,\n ),\n name=\"loss\"\n)\n\nacc_win = vis.line(\n X=np.column_stack((0, 0)),\n Y=np.column_stack((0, 0)),\n opts=dict(\n title='FitNet ada. ACC',\n xtickmin=0,\n# xtickstep=5,\n ytickmin=0,\n ytickmax=100,\n# markers=True,\n# markersymbol='dot',\n# markersize=5,\n legend=['train_acc', 'test_acc']\n ),\n name=\"acc\"\n)\n\n\n# adapter model\nclass Adapter():\n def __init__(self, in_models, pool_size):\n # representations of teachers\n pool_ch = pool_size[1] # 64\n pool_w = pool_size[2] # 8\n LR_list = []\n torch.manual_seed(1)\n self.theta = torch.randn(len(in_models), pool_ch).to(device) # [3, 64]\n self.theta.requires_grad_(True)\n \n self.max_feat = nn.MaxPool2d(kernel_size=(pool_w, pool_w), stride=pool_w).to(device)\n self.W = torch.randn(pool_ch, 1).to(device)\n self.W.requires_grad_(True)\n self.val = False\n\n def loss(self, y, labels, weighted_logits, T=10.0, alpha=0.7):\n ls = nn.KLDivLoss()(F.log_softmax(y/T), weighted_logits) * (T*T * 2.0 * alpha) + F.cross_entropy(y, labels) * (1. - alpha)\n if not self.val:\n ls += 0.1 * (torch.sum(self.W * self.W) + torch.sum(torch.sum(self.theta * self.theta, dim=1), dim=0))\n return ls\n \n def gradient(self, lr=0.01):\n self.W.data = self.W.data - lr * self.W.grad.data\n # Manually zero the gradients after updating weights\n self.W.grad.data.zero_()\n \n def eval(self):\n self.val = True\n self.theta.detach()\n self.W.detach()\n \n # input size: [64, 8, 8], [128, 3, 10]\n def forward(self, conv_map, te_logits_list):\n beta = self.max_feat(conv_map)\n beta = torch.squeeze(beta) # [128, 64]\n \n latent_factor = []\n for t in self.theta:\n latent_factor.append(beta * t)\n# latent_factor = torch.stack(latent_factor, dim=0) # [3, 128, 64]\n alpha = []\n for lf in latent_factor: # lf.size:[128, 64]\n alpha.append(lf.mm(self.W))\n alpha = torch.stack(alpha, dim=0) # [3, 128, 1]\n alpha = torch.squeeze(alpha).transpose(0, 1) # [128, 3]\n weight = F.softmax(alpha) # [128, 3]\n\n return weight\n\n# adapter instance\n_,_,_,pool_m,_ = st_model(torch.randn(1,3, 128, 128).to(device)) # get pool_size of student\n# reate adapter instance\nadapter = Adapter(teacher_models, pool_m.size())\n\n\n# data\nnormalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])\ntrain_transform = transforms.Compose([\n transforms.RandomHorizontalFlip(),\n transforms.RandomCrop(32, 4),\n transforms.ToTensor(),\n normalize,\n])\ntest_transform = transforms.Compose([transforms.ToTensor(), normalize])\ntrain_set = getattr(datasets, args.dataset)(root='../data', train=True, download=True, transform=train_transform)\ntest_set = getattr(datasets, args.dataset)(root='../data', train=False, download=False, transform=test_transform)\ntrain_loader = DataLoader(train_set, batch_size=args.batch_size, shuffle=True)\ntest_loader = DataLoader(test_set, batch_size=args.test_batch_size, shuffle=False)\n# optim\noptimizer_W = optim.SGD([adapter.W], lr=args.lr, momentum=0.9)\noptimizer_theta = optim.SGD([adapter.theta], lr=args.lr, momentum=0.9)\noptimizer_sgd = optim.SGD(st_model.parameters(), lr=args.lr, momentum=0.9, weight_decay=5e-4)\nlr_scheduler = optim.lr_scheduler.MultiStepLR(optimizer_sgd, gamma=0.1, milestones=[100, 150])\nlr_scheduler2 = optim.lr_scheduler.MultiStepLR(optimizer_W, milestones=[40, 50])\nlr_scheduler3 = optim.lr_scheduler.MultiStepLR(optimizer_theta, milestones=[40, 50])\n\n# 
loss\ndist_criterion = RkdDistance().to(device)\nangle_criterion = RKdAngle().to(device)\nfitnet_criterion = [FitNet(32, 64), FitNet(64, 64),FitNet(64, 64)]\n[f.to(device) for f in fitnet_criterion]\n\n\ndef train_adapter(n_epochs=70, model=st_model):\n print('Training adapter:')\n start_time = time.time()\n model.train()\n adapter.eval()\n for ep in range(n_epochs):\n lr_scheduler2.step()\n lr_scheduler3.step()\n for i, (input, target) in enumerate(train_loader):\n\n input, target = input.to(device), target.to(device)\n # compute outputs\n b1, b2, b3, pool, output = model(input) # out_feat: 16, 32, 64, 64, - \n st_maps = [b1, b2, b3, pool]\n# print('b1:{}, b2:{}, b3{}, pool:{}'.format(b1.size(), b2.size(), b3.size(), pool.size()))\n# b1:torch.Size([128, 16, 32, 32]), b2:torch.Size([128, 32, 16, 16]), b3torch.Size([128, 64, 8, 8]), pool:torch.Size([128, 64, 1, 1])\n\n te_scores_list = []\n hint_maps = []\n fit_loss = 0\n for j,te in enumerate(teacher_models):\n te.eval()\n with torch.no_grad():\n t_b1, t_b2, t_b3, t_pool, t_output = te(input)\n# print('t_b1:{}, t_b2:{}, t_b3:{}, t_pool:{}'.format(t_b1.size(), t_b2.size(), t_b3.size(), t_pool.size()))\n# t_b1:torch.Size([128, 16, 32, 32]), t_b2:torch.Size([128, 32, 16, 16]), t_b3:torch.Size([128, 64, 8, 8]), t_pool:torch.Size([128, 64, 1, 1])\n hint_maps.append(t_pool)\n t_output = F.softmax(t_output/args.T)\n te_scores_list.append(t_output)\n te_scores_Tensor = torch.stack(te_scores_list, dim=1) # size: [128, 3, 10]\n \n weight = adapter.forward(pool, te_scores_Tensor)\n weight_t = torch.unsqueeze(weight, dim=2)\n weighted_logits = weight_t * te_scores_Tensor # [128, 3, 10]\n weighted_logits = torch.sum(weighted_logits, dim=1)\n weight_f = F.softmax(torch.mean(weight, dim=0))\n \n optimizer_sgd.zero_grad()\n optimizer_W.zero_grad()\n optimizer_theta.zero_grad()\n \n angle_loss = angle_criterion(output, weighted_logits)\n dist_loss = dist_criterion(output, weighted_logits)\n # compute gradient and do SGD step\n ada_loss = adapter.loss(output, target, weighted_logits, T=args.T, alpha=args.kd_ratio)\n \n for j in range(len(teacher_models)-1):\n fit_loss += fitnet_criterion[j](st_maps[j+1], hint_maps[j]) #weight_f[j] * \n# fit_loss = fitnet_criterion[0](b2, hint_maps[0][3]) + fitnet_criterion[1](b3, hint_maps[1][3]) + fitnet_criterion(pool, hint_maps[2][3])\n loss = ada_loss + fit_loss #+ dist_loss + angle_loss\n \n loss.backward(retain_graph=True)\n optimizer_sgd.step()\n optimizer_W.step()\n optimizer_theta.step()\n \n# vis.line(np.array([loss.item()]), np.array([ep]), loss_win, update=\"append\")\n log_out('epoch[{}/{}]adapter Loss: {:.4f}'.format(ep, n_epochs, loss.item()))\n end_time = time.time()\n log_out(\"--- adapter training cost {:.3f} mins ---\".format((end_time - start_time)/60))\n\n\n# train with multi-teacher\ndef train(epoch, model):\n print('Training:')\n # switch to train mode\n model.train()\n adapter.eval()\n batch_time = AverageMeter()\n data_time = AverageMeter()\n losses = AverageMeter()\n top1 = AverageMeter()\n \n end = time.time()\n for i, (input, target) in enumerate(train_loader):\n\n # measure data loading time\n data_time.update(time.time() - end)\n\n input, target = input.to(device), target.to(device)\n \n # compute outputs\n b1, b2, b3, pool, output = model(input)\n st_maps = [b1, b2, b3, pool]\n \n te_scores_list = []\n hint_maps = []\n fit_loss = 0\n for j,te in enumerate(teacher_models):\n te.eval()\n with torch.no_grad():\n t_b1, t_b2, t_b3, t_pool, t_output = te(input)\n \n hint_maps.append(t_pool)\n 
t_output = F.softmax(t_output/args.T)\n te_scores_list.append(t_output)\n te_scores_Tensor = torch.stack(te_scores_list, dim=1) # size: [128, 3, 10]\n \n weight = adapter.forward(pool, te_scores_Tensor)\n weight_t = torch.unsqueeze(weight, dim=2)\n weighted_logits = weight_t * te_scores_Tensor # [128, 3, 10]\n weighted_logits = torch.sum(weighted_logits, dim=1)\n weight_f = F.softmax(torch.mean(weight, dim=0))\n \n optimizer_sgd.zero_grad()\n \n angle_loss = angle_criterion(output, weighted_logits)\n dist_loss = dist_criterion(output, weighted_logits)\n \n # compute gradient and do SGD step\n ada_loss = adapter.loss(output, target, weighted_logits, T=args.T, alpha=args.kd_ratio)\n for j in range(len(teacher_models)-1):\n fit_loss += fitnet_criterion[j](st_maps[j+1], hint_maps[j])\n \n loss = ada_loss + fit_loss #+ dist_loss + angle_loss\n\n loss.backward(retain_graph=True)\n optimizer_sgd.step()\n\n output = output.float()\n loss = loss.float()\n # measure accuracy and record loss\n train_acc = accuracy(output.data, target.data)[0]\n losses.update(loss.item(), input.size(0))\n top1.update(train_acc, input.size(0))\n\n # measure elapsed time\n batch_time.update(time.time() - end)\n end = time.time()\n\n if i % args.print_freq == 0:\n log_out('[{0}/{1}]\\t'\n 'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\\t'\n 'Data {data_time.val:.3f} ({data_time.avg:.3f})\\t'\n 'Loss {loss.val:.4f} ({loss.avg:.4f})\\t'\n 'Prec@1 {top1.val:.3f} ({top1.avg:.3f})'.format(\n i, len(train_loader), batch_time=batch_time,\n data_time=data_time, loss=losses, top1=top1))\n return losses.avg, train_acc.cpu().numpy()\n\n\ndef test(model):\n print('Testing:')\n # switch to evaluate mode\n model.eval()\n batch_time = AverageMeter()\n losses = AverageMeter()\n top1 = AverageMeter()\n\n end = time.time()\n with torch.no_grad():\n for i, (input, target) in enumerate(test_loader):\n input, target = input.to(device), target.to(device)\n\n # compute output\n _,_,_,_,output = model(input)\n loss = F.cross_entropy(output, target)\n\n output = output.float()\n loss = loss.float()\n\n # measure accuracy and record loss\n test_acc = accuracy(output.data, target.data)[0]\n losses.update(loss.item(), input.size(0))\n top1.update(test_acc, input.size(0))\n\n # measure elapsed time\n batch_time.update(time.time() - end)\n end = time.time()\n\n if i % args.print_freq == 0:\n log_out('Test: [{0}/{1}]\\t'\n 'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\\t'\n 'Loss {loss.val:.4f} ({loss.avg:.4f})\\t'\n 'Prec@1 {top1.val:.3f} ({top1.avg:.3f})'.format(\n i, len(test_loader), batch_time=batch_time, loss=losses,\n top1=top1))\n\n log_out(' * Prec@1 {top1.avg:.3f}'.format(top1=top1))\n\n return losses.avg, test_acc.cpu().numpy(), top1.avg.cpu().numpy()\n\n# \"\"\"\nprint('StudentNet:\\n')\nprint(st_model)\nst_model.apply(weights_init_normal)\ntrain_adapter(n_epochs=80)\n# st_model.apply(weights_init_normal)\nbest_acc = 0\nfor epoch in range(1, args.epochs + 1):\n log_out(\"\\n===> epoch: {}/{}\".format(epoch, args.epochs))\n log_out('current lr {:.5e}'.format(optimizer_sgd.param_groups[0]['lr']))\n lr_scheduler.step(epoch)\n train_loss, train_acc = train(epoch, st_model)\n # visaulize loss\n vis.line(np.array([train_loss]), np.array([epoch]), loss_win, update=\"append\")\n _, test_acc, top1 = test(st_model)\n vis.line(np.column_stack((train_acc, top1)), np.column_stack((epoch, epoch)), acc_win, update=\"append\")\n if top1 > best_acc:\n best_acc = top1\n \n# release GPU memory\ntorch.cuda.empty_cache()\nlog_out(\"BEST ACC: 
{:.3f}\".format(best_acc))\nlog_out(\"--- {:.3f} mins ---\".format((time.time() - start_time)/60))\n# \"\"\"", "/home/data/yaliu/jupyterbooks/multi-KD/models/teacher/resnet20.py:36: UserWarning: nn.init.kaiming_normal is now deprecated in favor of nn.init.kaiming_normal_.\n init.kaiming_normal(m.weight)\nWARNING:root:Setting up a new session...\n" ] ] ]
[ "code" ]
[ [ "code" ] ]
ec978f09b75186ca3f47b45bc804c7fa7c60b834
574,328
ipynb
Jupyter Notebook
AIND-DNN-Speech-Recognizer/vui_notebook.ipynb
dtsukiyama/deep-learning-models
56077a642587350242429ca5e31f2c695a2b9c52
[ "MIT" ]
null
null
null
AIND-DNN-Speech-Recognizer/vui_notebook.ipynb
dtsukiyama/deep-learning-models
56077a642587350242429ca5e31f2c695a2b9c52
[ "MIT" ]
31
2020-01-28T21:47:53.000Z
2022-03-11T23:14:22.000Z
AIND-DNN-Speech-Recognizer/vui_notebook.ipynb
dtsukiyama/deep-learning-models
56077a642587350242429ca5e31f2c695a2b9c52
[ "MIT" ]
null
null
null
384.165886
201,376
0.860541
[ [ [ "# Artificial Intelligence Nanodegree\n\n## Voice User Interfaces\n\n## Project: Speech Recognition with Neural Networks\n\n---\n\nIn this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with **'(IMPLEMENTATION)'** in the header indicate that the following blocks of code will require additional functionality which you must provide. Please be sure to read the instructions carefully! \n\n> **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the Jupyter Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to \\n\",\n \"**File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.\n\nIn addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.\n\n>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.\n\nThe rubric contains _optional_ \"Stand Out Suggestions\" for enhancing the project beyond the minimum requirements. If you decide to pursue the \"Stand Out Suggestions\", you should include the code in this Jupyter notebook.\n\n---\n\n## Introduction \n\nIn this notebook, you will build a deep neural network that functions as part of an end-to-end automatic speech recognition (ASR) pipeline! Your completed pipeline will accept raw audio as input and return a predicted transcription of the spoken language. The full pipeline is summarized in the figure below.\n\n<img src=\"images/pipeline.png\">\n\n- **STEP 1** is a pre-processing step that converts raw audio to one of two feature representations that are commonly used for ASR. \n- **STEP 2** is an acoustic model which accepts audio features as input and returns a probability distribution over all potential transcriptions. After learning about the basic types of neural networks that are often used for acoustic modeling, you will engage in your own investigations, to design your own acoustic model!\n- **STEP 3** in the pipeline takes the output from the acoustic model and returns a predicted transcription. 
\n\nFeel free to use the links below to navigate the notebook:\n- [The Data](#thedata)\n- [**STEP 1**](#step1): Acoustic Features for Speech Recognition\n- [**STEP 2**](#step2): Deep Neural Networks for Acoustic Modeling\n - [Model 0](#model0): RNN\n - [Model 1](#model1): RNN + TimeDistributed Dense\n - [Model 2](#model2): CNN + RNN + TimeDistributed Dense\n - [Model 3](#model3): Deeper RNN + TimeDistributed Dense\n - [Model 4](#model4): Bidirectional RNN + TimeDistributed Dense\n - [Models 5+](#model5)\n - [Compare the Models](#compare)\n - [Final Model](#final)\n- [**STEP 3**](#step3): Obtain Predictions\n\n<a id='thedata'></a>\n## The Data\n\nWe begin by investigating the dataset that will be used to train and evaluate your pipeline. [LibriSpeech](http://www.danielpovey.com/files/2015_icassp_librispeech.pdf) is a large corpus of English-read speech, designed for training and evaluating models for ASR. The dataset contains 1000 hours of speech derived from audiobooks. We will work with a small subset in this project, since larger-scale data would take a long while to train. However, after completing this project, if you are interested in exploring further, you are encouraged to work with more of the data that is provided [online](http://www.openslr.org/12/).\n\nIn the code cells below, you will use the `vis_train_features` module to visualize a training example. The supplied argument `index=0` tells the module to extract the first example in the training set. (You are welcome to change `index=0` to point to a different training example, if you like, but please **DO NOT** amend any other code in the cell.) The returned variables are:\n- `vis_text` - transcribed text (label) for the training example.\n- `vis_raw_audio` - raw audio waveform for the training example.\n- `vis_mfcc_feature` - mel-frequency cepstral coefficients (MFCCs) for the training example.\n- `vis_spectrogram_feature` - spectrogram for the training example. \n- `vis_audio_path` - the file path to the training example.", "_____no_output_____" ] ], [ [ "from data_generator import vis_train_features\n\n# extract label and audio features for a single training example\nvis_text, vis_raw_audio, vis_mfcc_feature, vis_spectrogram_feature, vis_audio_path = vis_train_features()", "There are 2136 total training examples.\n" ] ], [ [ "The following code cell visualizes the audio waveform for your chosen example, along with the corresponding transcript. You also have the option to play the audio in the notebook!", "_____no_output_____" ] ], [ [ "from IPython.display import Markdown, display\nfrom data_generator import vis_train_features, plot_raw_audio\nfrom IPython.display import Audio\n%matplotlib inline\n\n# plot audio signal\nplot_raw_audio(vis_raw_audio)\n# print length of audio signal\ndisplay(Markdown('**Shape of Audio Signal** : ' + str(vis_raw_audio.shape)))\n# print transcript corresponding to audio clip\ndisplay(Markdown('**Transcript** : ' + str(vis_text)))\n# play the audio file\nAudio(vis_audio_path)", "_____no_output_____" ] ], [ [ "<a id='step1'></a>\n## STEP 1: Acoustic Features for Speech Recognition\n\nFor this project, you won't use the raw audio waveform as input to your model. Instead, we provide code that first performs a pre-processing step to convert the raw audio to a feature representation that has historically proven successful for ASR models. Your acoustic model will accept the feature representation as input.\n\nIn this project, you will explore two possible feature representations. 
_After completing the project_, if you'd like to read more about deep learning architectures that can accept raw audio input, you are encouraged to explore this [research paper](https://pdfs.semanticscholar.org/a566/cd4a8623d661a4931814d9dffc72ecbf63c4.pdf).\n\n### Spectrograms\n\nThe first option for an audio feature representation is the [spectrogram](https://www.youtube.com/watch?v=_FatxGN3vAM). In order to complete this project, you will **not** need to dig deeply into the details of how a spectrogram is calculated; but, if you are curious, the code for calculating the spectrogram was borrowed from [this repository](https://github.com/baidu-research/ba-dls-deepspeech). The implementation appears in the `utils.py` file in your repository.\n\nThe code that we give you returns the spectrogram as a 2D tensor, where the first (_vertical_) dimension indexes time, and the second (_horizontal_) dimension indexes frequency. To speed the convergence of your algorithm, we have also normalized the spectrogram. (You can see this quickly in the visualization below by noting that the mean value hovers around zero, and most entries in the tensor assume values close to zero.)", "_____no_output_____" ] ], [ [ "from data_generator import plot_spectrogram_feature\n\n# plot normalized spectrogram\nplot_spectrogram_feature(vis_spectrogram_feature)\n# print shape of spectrogram\ndisplay(Markdown('**Shape of Spectrogram** : ' + str(vis_spectrogram_feature.shape)))", "_____no_output_____" ] ], [ [ "### Mel-Frequency Cepstral Coefficients (MFCCs)\n\nThe second option for an audio feature representation is [MFCCs](https://en.wikipedia.org/wiki/Mel-frequency_cepstrum). You do **not** need to dig deeply into the details of how MFCCs are calculated, but if you would like more information, you are welcome to peruse the [documentation](https://github.com/jameslyons/python_speech_features) of the `python_speech_features` Python package. Just as with the spectrogram features, the MFCCs are normalized in the supplied code.\n\nThe main idea behind MFCC features is the same as spectrogram features: at each time window, the MFCC feature yields a feature vector that characterizes the sound within the window. Note that the MFCC feature is much lower-dimensional than the spectrogram feature, which could help an acoustic model to avoid overfitting to the training dataset. ", "_____no_output_____" ] ], [ [ "from data_generator import plot_mfcc_feature\n\n# plot normalized MFCC\nplot_mfcc_feature(vis_mfcc_feature)\n# print shape of MFCC\ndisplay(Markdown('**Shape of MFCC** : ' + str(vis_mfcc_feature.shape)))", "_____no_output_____" ] ], [ [ "When you construct your pipeline, you will be able to choose to use either spectrogram or MFCC features. If you would like to see different implementations that make use of MFCCs and/or spectrograms, please check out the links below:\n- This [repository](https://github.com/baidu-research/ba-dls-deepspeech) uses spectrograms.\n- This [repository](https://github.com/mozilla/DeepSpeech) uses MFCCs.\n- This [repository](https://github.com/buriburisuri/speech-to-text-wavenet) also uses MFCCs.\n- This [repository](https://github.com/pannous/tensorflow-speech-recognition/blob/master/speech_data.py) experiments with raw audio, spectrograms, and MFCCs as features.", "_____no_output_____" ], [ "<a id='step2'></a>\n## STEP 2: Deep Neural Networks for Acoustic Modeling\n\nIn this section, you will experiment with various neural network architectures for acoustic modeling. 
\n\nYou will begin by training five relatively simple architectures. **Model 0** is provided for you. You will write code to implement **Models 1**, **2**, **3**, and **4**. If you would like to experiment further, you are welcome to create and train more models under the **Models 5+** heading. \n\nAll models will be specified in the `sample_models.py` file. After importing the `sample_models` module, you will train your architectures in the notebook.\n\nAfter experimenting with the five simple architectures, you will have the opportunity to compare their performance. Based on your findings, you will construct a deeper architecture that is designed to outperform all of the shallow models.\n\nFor your convenience, we have designed the notebook so that each model can be specified and trained on separate occasions. That is, say you decide to take a break from the notebook after training **Model 1**. Then, you need not re-execute all prior code cells in the notebook before training **Model 2**. You need only re-execute the code cell below, that is marked with **`RUN THIS CODE CELL IF YOU ARE RESUMING THE NOTEBOOK AFTER A BREAK`**, before transitioning to the code cells corresponding to **Model 2**.", "_____no_output_____" ] ], [ [ "#####################################################################\n# RUN THIS CODE CELL IF YOU ARE RESUMING THE NOTEBOOK AFTER A BREAK #\n#####################################################################\n\n# allocate 50% of GPU memory (if you like, feel free to change this)\nfrom keras.backend.tensorflow_backend import set_session\nimport tensorflow as tf \nconfig = tf.ConfigProto()\nconfig.gpu_options.per_process_gpu_memory_fraction = 0.5\nset_session(tf.Session(config=config))\n\n# watch for any changes in the sample_models module, and reload it automatically\n%load_ext autoreload\n%autoreload 2\n# import NN architectures for speech recognition\nfrom sample_models import *\n# import function for training acoustic model\nfrom train_utils import train_model", "Using TensorFlow backend.\n" ] ], [ [ "<a id='model0'></a>\n### Model 0: RNN\n\nGiven their effectiveness in modeling sequential data, the first acoustic model you will use is an RNN. As shown in the figure below, the RNN we supply to you will take the time sequence of audio features as input.\n\n<img src=\"images/simple_rnn.png\" width=\"50%\">\n\nAt each time step, the speaker pronounces one of 28 possible characters, including each of the 26 letters in the English alphabet, along with a space character (\" \"), and an apostrophe (').\n\nThe output of the RNN at each time step is a vector of probabilities with 29 entries, where the $i$-th entry encodes the probability that the $i$-th character is spoken in the time sequence. (The extra 29th character is an empty \"character\" used to pad training examples within batches containing uneven lengths.) If you would like to peek under the hood at how characters are mapped to indices in the probability vector, look at the `char_map.py` file in the repository. The figure below shows an equivalent, rolled depiction of the RNN that shows the output layer in greater detail. \n\n<img src=\"images/simple_rnn_unrolled.png\" width=\"60%\">\n\nThe model has already been specified for you in Keras. To import it, you need only run the code cell below. 
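\n\nFor orientation, the supplied model is roughly equivalent to the sketch below (a simplified rendering; the exact implementation lives in `sample_models.py` and may differ in details):\n\n```\nfrom keras.models import Model\nfrom keras.layers import Input, GRU, Activation\n\ndef simple_rnn_sketch(input_dim, output_dim=29):\n    # acoustic features arrive as a variable-length (time, feature) sequence\n    input_data = Input(name='the_input', shape=(None, input_dim))\n    # a single GRU emits a 29-dimensional vector at every time step\n    simp_rnn = GRU(output_dim, return_sequences=True, name='rnn')(input_data)\n    # softmax converts each vector into a probability distribution over characters\n    y_pred = Activation('softmax', name='softmax')(simp_rnn)\n    model = Model(inputs=input_data, outputs=y_pred)\n    # purely recurrent layers preserve the temporal length of the input\n    model.output_length = lambda x: x\n    return model\n```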
", "_____no_output_____" ] ], [ [ "model_0 = simple_rnn_model(input_dim=161) # change to 13 if you would like to use MFCC features", "_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nthe_input (InputLayer) (None, None, 161) 0 \n_________________________________________________________________\nrnn (GRU) (None, None, 29) 16617 \n_________________________________________________________________\nsoftmax (Activation) (None, None, 29) 0 \n=================================================================\nTotal params: 16,617\nTrainable params: 16,617\nNon-trainable params: 0\n_________________________________________________________________\nNone\n" ] ], [ [ "As explored in the lesson, you will train the acoustic model with the [CTC loss](http://www.cs.toronto.edu/~graves/icml_2006.pdf) criterion. Custom loss functions take a bit of hacking in Keras, and so we have implemented the CTC loss function for you, so that you can focus on trying out as many deep learning architectures as possible :). If you'd like to peek at the implementation details, look at the `add_ctc_loss` function within the `train_utils.py` file in the repository.\n\nTo train your architecture, you will use the `train_model` function within the `train_utils` module; it has already been imported in one of the above code cells. The `train_model` function takes three **required** arguments:\n- `input_to_softmax` - a Keras model instance.\n- `pickle_path` - the name of the pickle file where the loss history will be saved.\n- `save_model_path` - the name of the HDF5 file where the model will be saved.\n\nIf we have already supplied values for `input_to_softmax`, `pickle_path`, and `save_model_path`, please **DO NOT** modify these values. \n\nThere are several **optional** arguments that allow you to have more control over the training process. You are welcome to, but not required to, supply your own values for these arguments.\n- `minibatch_size` - the size of the minibatches that are generated while training the model (default: `20`).\n- `spectrogram` - Boolean value dictating whether spectrogram (`True`) or MFCC (`False`) features are used for training (default: `True`).\n- `mfcc_dim` - the size of the feature dimension to use when generating MFCC features (default: `13`).\n- `optimizer` - the Keras optimizer used to train the model (default: `SGD(lr=0.02, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=5)`). \n- `epochs` - the number of epochs to use to train the model (default: `20`). If you choose to modify this parameter, make sure that it is *at least* 20.\n- `verbose` - controls the verbosity of the training output in the `model.fit_generator` method (default: `1`).\n- `sort_by_duration` - Boolean value dictating whether the training and validation sets are sorted by (increasing) duration before the start of the first epoch (default: `False`).\n\nThe `train_model` function defaults to using spectrogram features; if you choose to use these features, note that the acoustic model in `simple_rnn_model` should have `input_dim=161`. Otherwise, if you choose to use MFCC features, the acoustic model should have `input_dim=13`.\n\nWe have chosen to use `GRU` units in the supplied RNN. If you would like to experiment with `LSTM` or `SimpleRNN` cells, feel free to do so here. 
If you change the `GRU` units to `SimpleRNN` cells in `simple_rnn_model`, you may notice that the loss quickly becomes undefined (`nan`) - you are strongly encouraged to check this for yourself! This is due to the [exploding gradients problem](http://www.wildml.com/2015/10/recurrent-neural-networks-tutorial-part-3-backpropagation-through-time-and-vanishing-gradients/). We have already implemented [gradient clipping](https://arxiv.org/pdf/1211.5063.pdf) in your optimizer to help you avoid this issue.\n\n__IMPORTANT NOTE:__ If you notice that your gradient has exploded in any of the models below, feel free to explore more with gradient clipping (the `clipnorm` argument in your optimizer) or swap out any `SimpleRNN` cells for `LSTM` or `GRU` cells. You can also try restarting the kernel to restart the training process.", "_____no_output_____" ] ], [ [ "train_model(input_to_softmax=model_0, \n pickle_path='model_0.pickle', \n save_model_path='model_0.h5',\n spectrogram=True) # change to False if you would like to use MFCC features", "Epoch 1/20\n106/106 [==============================] - 220s - loss: 836.9123 - val_loss: 731.5814\nEpoch 2/20\n106/106 [==============================] - 214s - loss: 753.4433 - val_loss: 720.3860\nEpoch 3/20\n106/106 [==============================] - 214s - loss: 751.6618 - val_loss: 729.8400\nEpoch 4/20\n106/106 [==============================] - 211s - loss: 751.9837 - val_loss: 720.2264\nEpoch 5/20\n106/106 [==============================] - 209s - loss: 751.6505 - val_loss: 724.7656\nEpoch 6/20\n106/106 [==============================] - 209s - loss: 750.2466 - val_loss: 729.2223\nEpoch 7/20\n106/106 [==============================] - 210s - loss: 752.3905 - val_loss: 722.5401\nEpoch 8/20\n106/106 [==============================] - 210s - loss: 751.8397 - val_loss: 723.9407\nEpoch 9/20\n106/106 [==============================] - 210s - loss: 750.3707 - val_loss: 729.0779\nEpoch 10/20\n106/106 [==============================] - 212s - loss: 750.4710 - val_loss: 724.9818\nEpoch 11/20\n106/106 [==============================] - 210s - loss: 751.7834 - val_loss: 725.8800\nEpoch 12/20\n106/106 [==============================] - 211s - loss: 750.5988 - val_loss: 733.3444\nEpoch 13/20\n106/106 [==============================] - 209s - loss: 750.2424 - val_loss: 722.7360\nEpoch 14/20\n106/106 [==============================] - 208s - loss: 751.3828 - val_loss: 723.6667\nEpoch 15/20\n106/106 [==============================] - 209s - loss: 750.8363 - val_loss: 723.0251\nEpoch 16/20\n106/106 [==============================] - 209s - loss: 751.4744 - val_loss: 738.8464\nEpoch 17/20\n106/106 [==============================] - 210s - loss: 750.7282 - val_loss: 713.5608\nEpoch 18/20\n106/106 [==============================] - 211s - loss: 751.0226 - val_loss: 730.4665\nEpoch 19/20\n106/106 [==============================] - 211s - loss: 751.3550 - val_loss: 723.1862\nEpoch 20/20\n106/106 [==============================] - 211s - loss: 752.2249 - val_loss: 722.9603\n" ] ], [ [ "<a id='model1'></a>\n### (IMPLEMENTATION) Model 1: RNN + TimeDistributed Dense\n\nRead about the [TimeDistributed](https://keras.io/layers/wrappers/) wrapper and the [BatchNormalization](https://keras.io/layers/normalization/) layer in the Keras documentation. For your next architecture, you will add [batch normalization](https://arxiv.org/pdf/1510.01378.pdf) to the recurrent layer to reduce training times. The `TimeDistributed` layer will be used to find more complex patterns in the dataset. 
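\n\nIf the wrapper is new to you, a quick toy check (a sketch with arbitrary sizes) shows what it does: the same `Dense` weights are applied independently at every time step, so the temporal length is preserved while the feature dimension changes.\n\n```\nimport numpy as np\nfrom keras.models import Sequential\nfrom keras.layers import TimeDistributed, Dense\n\n# one shared Dense(29) applied at each time step of a (batch, time, 200) tensor\nm = Sequential()\nm.add(TimeDistributed(Dense(29), input_shape=(None, 200)))\nprint(m.predict(np.random.rand(1, 7, 200)).shape)  # expect (1, 7, 29)\n```\n\n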
The unrolled snapshot of the architecture is depicted below.\n\n<img src=\"images/rnn_model.png\" width=\"60%\">\n\nThe next figure shows an equivalent, rolled depiction of the RNN that shows the (`TimeDistributed`) dense and output layers in greater detail. \n\n<img src=\"images/rnn_model_unrolled.png\" width=\"60%\">\n\nUse your research to complete the `rnn_model` function within the `sample_models.py` file. The function should specify an architecture that satisfies the following requirements:\n- The first layer of the neural network should be an RNN (`SimpleRNN`, `LSTM`, or `GRU`) that takes the time sequence of audio features as input. We have added `GRU` units for you, but feel free to change `GRU` to `SimpleRNN` or `LSTM`, if you like!\n- Whereas the architecture in `simple_rnn_model` treated the RNN output as the final layer of the model, you will use the output of your RNN as a hidden layer. Use `TimeDistributed` to apply a `Dense` layer to each of the time steps in the RNN output. Ensure that each `Dense` layer has `output_dim` units.\n\nUse the code cell below to load your model into the `model_1` variable. Use a value for `input_dim` that matches your chosen audio features, and feel free to change the values for `units` and `activation` to tweak the behavior of your recurrent layer.", "_____no_output_____" ] ], [ [ "model_1 = rnn_model(input_dim=161, # change to 13 if you would like to use MFCC features\n                    units=200,\n                    activation='relu')", "_________________________________________________________________\nLayer (type)                 Output Shape              Param #   \n=================================================================\nthe_input (InputLayer)       (None, None, 161)         0         \n_________________________________________________________________\nrnn (GRU)                    (None, None, 200)         217200    \n_________________________________________________________________\nbn (BatchNormalization)      (None, None, 200)         800       \n_________________________________________________________________\ntime_distributed_1 (TimeDist (None, None, 29)          5829      \n_________________________________________________________________\nsoftmax (Activation)         (None, None, 29)          0         \n=================================================================\nTotal params: 223,829\nTrainable params: 223,429\nNon-trainable params: 400\n_________________________________________________________________\nNone\n" ] ], [ [ "Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model) in the HDF5 file `model_1.h5`. The loss history is [saved](https://wiki.python.org/moin/UsingPickle) in `model_1.pickle`. 
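\n\nIf you are unsure where to start, one possible shape for `rnn_model`, consistent with the summary printed above (your own solution may differ), is:\n\n```\nfrom keras.models import Model\nfrom keras.layers import (Input, GRU, BatchNormalization, TimeDistributed,\n                          Dense, Activation)\n\ndef rnn_model_sketch(input_dim, units, activation, output_dim=29):\n    input_data = Input(name='the_input', shape=(None, input_dim))\n    # recurrent layer, followed by batch normalization to speed up training\n    simp_rnn = GRU(units, activation=activation,\n                   return_sequences=True, name='rnn')(input_data)\n    bn_rnn = BatchNormalization(name='bn')(simp_rnn)\n    # one shared Dense(output_dim) applied at every time step\n    time_dense = TimeDistributed(Dense(output_dim))(bn_rnn)\n    y_pred = Activation('softmax', name='softmax')(time_dense)\n    model = Model(inputs=input_data, outputs=y_pred)\n    model.output_length = lambda x: x\n    return model\n```\n\n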
You are welcome to tweak any of the optional parameters while calling the `train_model` function, but this is not required.", "_____no_output_____" ] ], [ [ "train_model(input_to_softmax=model_1, \n pickle_path='model_1.pickle', \n save_model_path='model_1.h5',\n spectrogram=True) # change to False if you would like to use MFCC features", "Epoch 1/20\n106/106 [==============================] - 216s - loss: 270.9342 - val_loss: 221.0734\nEpoch 2/20\n106/106 [==============================] - 218s - loss: 199.5961 - val_loss: 181.9309\nEpoch 3/20\n106/106 [==============================] - 219s - loss: 174.2170 - val_loss: 167.4585\nEpoch 4/20\n106/106 [==============================] - 218s - loss: 160.2480 - val_loss: 157.4734\nEpoch 5/20\n106/106 [==============================] - 218s - loss: 152.6037 - val_loss: 150.8534\nEpoch 6/20\n106/106 [==============================] - 220s - loss: 147.0087 - val_loss: 146.3535\nEpoch 7/20\n106/106 [==============================] - 220s - loss: 141.6468 - val_loss: 147.3902\nEpoch 8/20\n106/106 [==============================] - 217s - loss: 137.4440 - val_loss: 142.9138\nEpoch 9/20\n106/106 [==============================] - 218s - loss: 136.5461 - val_loss: 149.3207\nEpoch 10/20\n106/106 [==============================] - 219s - loss: 133.4592 - val_loss: 140.7574\nEpoch 11/20\n106/106 [==============================] - 219s - loss: 130.6819 - val_loss: 139.3408\nEpoch 12/20\n106/106 [==============================] - 221s - loss: 131.0569 - val_loss: 141.7016\nEpoch 13/20\n106/106 [==============================] - 219s - loss: 130.7099 - val_loss: 144.3550\nEpoch 14/20\n106/106 [==============================] - 218s - loss: 129.3713 - val_loss: 144.5044\nEpoch 15/20\n106/106 [==============================] - 218s - loss: 127.3651 - val_loss: 138.0142\nEpoch 16/20\n106/106 [==============================] - 221s - loss: 125.4816 - val_loss: 142.9400\nEpoch 17/20\n106/106 [==============================] - 219s - loss: 125.6474 - val_loss: 138.0472\nEpoch 18/20\n106/106 [==============================] - 219s - loss: 127.4490 - val_loss: 141.4085\nEpoch 19/20\n106/106 [==============================] - 219s - loss: 127.0592 - val_loss: 135.9039\nEpoch 20/20\n106/106 [==============================] - 218s - loss: 126.5614 - val_loss: 142.6996\n" ] ], [ [ "<a id='model2'></a>\n### (IMPLEMENTATION) Model 2: CNN + RNN + TimeDistributed Dense\n\nThe architecture in `cnn_rnn_model` adds an additional level of complexity, by introducing a [1D convolution layer](https://keras.io/layers/convolutional/#conv1d). \n\n<img src=\"images/cnn_rnn_model.png\" width=\"100%\">\n\nThis layer incorporates many arguments that can be (optionally) tuned when calling the `cnn_rnn_model` module. We provide sample starting parameters, which you might find useful if you choose to use spectrogram audio features. \n\nIf you instead want to use MFCC features, these arguments will have to be tuned. Note that the current architecture only supports values of `'same'` or `'valid'` for the `conv_border_mode` argument.\n\nWhen tuning the parameters, be careful not to choose settings that make the convolutional layer overly small. If the temporal length of the CNN layer is shorter than the length of the transcribed text label, your code will throw an error.\n\nBefore running the code cell below, you must modify the `cnn_rnn_model` function in `sample_models.py`. 
Please add batch normalization to the recurrent layer, and provide the same `TimeDistributed` layer as before.", "_____no_output_____" ] ], [ [ "model_2 = cnn_rnn_model(input_dim=161, # change to 13 if you would like to use MFCC features\n filters=200,\n kernel_size=11, \n conv_stride=2,\n conv_border_mode='valid',\n units=200)", "_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nthe_input (InputLayer) (None, None, 161) 0 \n_________________________________________________________________\nconv1d (Conv1D) (None, None, 200) 354400 \n_________________________________________________________________\nbn_conv_1d (BatchNormalizati (None, None, 200) 800 \n_________________________________________________________________\nrnn (GRU) (None, None, 200) 240600 \n_________________________________________________________________\nbn_rnn_1d (BatchNormalizatio (None, None, 200) 800 \n_________________________________________________________________\ntime_distributed_1 (TimeDist (None, None, 29) 5829 \n_________________________________________________________________\nsoftmax (Activation) (None, None, 29) 0 \n=================================================================\nTotal params: 602,429\nTrainable params: 601,629\nNon-trainable params: 800\n_________________________________________________________________\nNone\n" ] ], [ [ "Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model) in the HDF5 file `model_2.h5`. The loss history is [saved](https://wiki.python.org/moin/UsingPickle) in `model_2.pickle`. 
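\n\nAs an aside, the temporal length of the convolutional output follows the standard convolution arithmetic. A sketch of what the supplied `cnn_output_length` helper computes (assuming it mirrors the usual Keras formula) is:\n\n```\ndef cnn_output_length_sketch(input_length, filter_size, border_mode, stride, dilation=1):\n    if input_length is None:\n        return None\n    # effective filter width once dilation is accounted for\n    dilated_filter_size = filter_size + (filter_size - 1) * (dilation - 1)\n    if border_mode == 'same':\n        output_length = input_length\n    elif border_mode == 'valid':\n        output_length = input_length - dilated_filter_size + 1\n    # ceiling division by the stride\n    return (output_length + stride - 1) // stride\n```\n\nThis is why a large `kernel_size` combined with `conv_border_mode='valid'` and a big stride can shrink the output below the length of the transcript, triggering the error mentioned above.\n\n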
You are welcome to tweak any of the optional parameters while calling the `train_model` function, but this is not required.", "_____no_output_____" ] ], [ [ "train_model(input_to_softmax=model_2, \n pickle_path='model_2.pickle', \n save_model_path='model_2.h5', \n spectrogram=True) # change to False if you would like to use MFCC features", "Epoch 1/20\n106/106 [==============================] - 119s - loss: 245.5228 - val_loss: 229.6894\nEpoch 2/20\n106/106 [==============================] - 114s - loss: 196.0966 - val_loss: 176.2809\nEpoch 3/20\n106/106 [==============================] - 114s - loss: 156.6090 - val_loss: 149.7041\nEpoch 4/20\n106/106 [==============================] - 114s - loss: 139.6971 - val_loss: 142.2241\nEpoch 5/20\n106/106 [==============================] - 112s - loss: 128.4224 - val_loss: 134.3123\nEpoch 6/20\n106/106 [==============================] - 112s - loss: 120.5417 - val_loss: 133.5197\nEpoch 7/20\n106/106 [==============================] - 112s - loss: 113.7368 - val_loss: 129.4064\nEpoch 8/20\n106/106 [==============================] - 112s - loss: 108.1096 - val_loss: 125.1745\nEpoch 9/20\n106/106 [==============================] - 111s - loss: 102.7078 - val_loss: 125.9596\nEpoch 10/20\n106/106 [==============================] - 111s - loss: 97.9178 - val_loss: 126.2979\nEpoch 11/20\n106/106 [==============================] - 111s - loss: 93.6220 - val_loss: 126.5618\nEpoch 12/20\n106/106 [==============================] - 112s - loss: 89.3993 - val_loss: 127.8947\nEpoch 13/20\n106/106 [==============================] - 111s - loss: 85.0352 - val_loss: 128.1104\nEpoch 14/20\n106/106 [==============================] - 112s - loss: 81.3817 - val_loss: 128.3109\nEpoch 15/20\n106/106 [==============================] - 112s - loss: 77.6638 - val_loss: 128.7336\nEpoch 16/20\n106/106 [==============================] - 112s - loss: 74.1233 - val_loss: 136.4783\nEpoch 17/20\n106/106 [==============================] - 111s - loss: 70.7326 - val_loss: 135.7775\nEpoch 18/20\n106/106 [==============================] - 111s - loss: 67.5020 - val_loss: 139.8081\nEpoch 19/20\n106/106 [==============================] - 111s - loss: 64.5320 - val_loss: 139.7385\nEpoch 20/20\n106/106 [==============================] - 111s - loss: 61.3825 - val_loss: 145.1237\n" ] ], [ [ "<a id='model3'></a>\n### (IMPLEMENTATION) Model 3: Deeper RNN + TimeDistributed Dense\n\nReview the code in `rnn_model`, which makes use of a single recurrent layer. Now, specify an architecture in `deep_rnn_model` that utilizes a variable number `recur_layers` of recurrent layers. The figure below shows the architecture that should be returned if `recur_layers=2`. In the figure, the output sequence of the first recurrent layer is used as input for the next recurrent layer.\n\n<img src=\"images/deep_rnn_model.png\" width=\"80%\">\n\nFeel free to change the supplied values of `units` to whatever you think performs best. You can change the value of `recur_layers`, as long as your final value is greater than 1. 
(As a quick check that you have implemented the additional functionality in `deep_rnn_model` correctly, make sure that the architecture that you specify here is identical to `rnn_model` if `recur_layers=1`.)", "_____no_output_____" ] ], [ [ "model_3 = deep_rnn_model(input_dim=161, # change to 13 if you would like to use MFCC features\n units=200,\n recur_layers=2) ", "_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nthe_input (InputLayer) (None, None, 161) 0 \n_________________________________________________________________\ngru_4 (GRU) (None, None, 200) 217200 \n_________________________________________________________________\ndeep_rnn_1 (BatchNormalizati (None, None, 200) 800 \n_________________________________________________________________\ngru_5 (GRU) (None, None, 200) 240600 \n_________________________________________________________________\nfinal_deep_rnn_layer (BatchN (None, None, 200) 800 \n_________________________________________________________________\ntime_distributed_3 (TimeDist (None, None, 29) 5829 \n_________________________________________________________________\nsoftmax (Activation) (None, None, 29) 0 \n=================================================================\nTotal params: 465,229\nTrainable params: 464,429\nNon-trainable params: 800\n_________________________________________________________________\nNone\n" ] ], [ [ "Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model) in the HDF5 file `model_3.h5`. The loss history is [saved](https://wiki.python.org/moin/UsingPickle) in `model_3.pickle`. 
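\n\nOne possible sketch of `deep_rnn_model`, consistent with the summary above (not the only valid solution), is:\n\n```\nfrom keras.models import Model\nfrom keras.layers import (Input, GRU, BatchNormalization, TimeDistributed,\n                          Dense, Activation)\n\ndef deep_rnn_sketch(input_dim, units, recur_layers, output_dim=29):\n    input_data = Input(name='the_input', shape=(None, input_dim))\n    layer = input_data\n    # stack recur_layers (GRU + batch norm) blocks; each block feeds the next\n    for i in range(recur_layers):\n        layer = GRU(units, return_sequences=True,\n                    name='rnn_{}'.format(i))(layer)\n        layer = BatchNormalization(name='bn_rnn_{}'.format(i))(layer)\n    time_dense = TimeDistributed(Dense(output_dim))(layer)\n    y_pred = Activation('softmax', name='softmax')(time_dense)\n    model = Model(inputs=input_data, outputs=y_pred)\n    model.output_length = lambda x: x\n    return model\n```\n\nNote that with `recur_layers=1` this collapses to essentially the same architecture as `rnn_model`, which is the sanity check suggested above.\n\n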
You are welcome to tweak any of the optional parameters while calling the `train_model` function, but this is not required.", "_____no_output_____" ] ], [ [ "train_model(input_to_softmax=model_3, \n pickle_path='model_3.pickle', \n save_model_path='model_3.h5', \n spectrogram=True) # change to False if you would like to use MFCC features", "Epoch 1/20\n106/106 [==============================] - 342s - loss: 284.7989 - val_loss: 252.4870\nEpoch 2/20\n106/106 [==============================] - 346s - loss: 215.8872 - val_loss: 203.1604\nEpoch 3/20\n106/106 [==============================] - 343s - loss: 191.9161 - val_loss: 174.7304\nEpoch 4/20\n106/106 [==============================] - 343s - loss: 166.3595 - val_loss: 161.0909\nEpoch 5/20\n106/106 [==============================] - 343s - loss: 148.8995 - val_loss: 152.2609\nEpoch 6/20\n106/106 [==============================] - 344s - loss: 139.4056 - val_loss: 145.2772\nEpoch 7/20\n106/106 [==============================] - 344s - loss: 132.2883 - val_loss: 139.9917\nEpoch 8/20\n106/106 [==============================] - 341s - loss: 126.3037 - val_loss: 137.1674\nEpoch 9/20\n106/106 [==============================] - 344s - loss: 121.9429 - val_loss: 137.7060\nEpoch 10/20\n106/106 [==============================] - 345s - loss: 117.7434 - val_loss: 132.0444\nEpoch 11/20\n106/106 [==============================] - 342s - loss: 114.0925 - val_loss: 131.4164\nEpoch 12/20\n106/106 [==============================] - 342s - loss: 111.4977 - val_loss: 131.9956\nEpoch 13/20\n106/106 [==============================] - 343s - loss: 108.4562 - val_loss: 131.6582\nEpoch 14/20\n106/106 [==============================] - 343s - loss: 106.2134 - val_loss: 128.0325\nEpoch 15/20\n106/106 [==============================] - 343s - loss: 103.9430 - val_loss: 124.9030\nEpoch 16/20\n106/106 [==============================] - 343s - loss: 101.0803 - val_loss: 128.2939\nEpoch 17/20\n106/106 [==============================] - 343s - loss: 98.4990 - val_loss: 125.4946\nEpoch 18/20\n106/106 [==============================] - 343s - loss: 96.9151 - val_loss: 125.2837\nEpoch 19/20\n106/106 [==============================] - 340s - loss: 95.2384 - val_loss: 126.3274\nEpoch 20/20\n106/106 [==============================] - 340s - loss: 94.1782 - val_loss: 125.0829\n" ] ], [ [ "<a id='model4'></a>\n### (IMPLEMENTATION) Model 4: Bidirectional RNN + TimeDistributed Dense\n\nRead about the [Bidirectional](https://keras.io/layers/wrappers/) wrapper in the Keras documentation. For your next architecture, you will specify an architecture that uses a single bidirectional RNN layer, before a (`TimeDistributed`) dense layer. The added value of a bidirectional RNN is described well in [this paper](http://www.cs.toronto.edu/~hinton/absps/DRNN_speech.pdf).\n> One shortcoming of conventional RNNs is that they are only able to make use of previous context. In speech recognition, where whole utterances are transcribed at once, there is no reason not to exploit future context as well. Bidirectional RNNs (BRNNs) do this by processing the data in both directions with two separate hidden layers which are then fed forwards to the same output layer.\n\n<img src=\"images/bidirectional_rnn_model.png\" width=\"80%\">\n\nBefore running the code cell below, you must complete the `bidirectional_rnn_model` function in `sample_models.py`. Feel free to use `SimpleRNN`, `LSTM`, or `GRU` units. 
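\n\nA minimal sketch with `GRU` units (note the `merge_mode` requirement stated below) might be:\n\n```\nfrom keras.models import Model\nfrom keras.layers import (Input, GRU, Bidirectional, TimeDistributed,\n                          Dense, Activation)\n\ndef bidir_rnn_sketch(input_dim, units, output_dim=29):\n    input_data = Input(name='the_input', shape=(None, input_dim))\n    # forward and backward passes over the sequence, concatenated per time step\n    bidir_rnn = Bidirectional(GRU(units, return_sequences=True),\n                              merge_mode='concat')(input_data)\n    time_dense = TimeDistributed(Dense(output_dim))(bidir_rnn)\n    y_pred = Activation('softmax', name='softmax')(time_dense)\n    model = Model(inputs=input_data, outputs=y_pred)\n    model.output_length = lambda x: x\n    return model\n```\n\n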
When specifying the `Bidirectional` wrapper, use `merge_mode='concat'`.", "_____no_output_____" ] ], [ [ "model_4 = bidirectional_rnn_model(input_dim=161, # change to 13 if you would like to use MFCC features\n units=200)", "_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nthe_input (InputLayer) (None, None, 161) 0 \n_________________________________________________________________\nbidirectional_1 (Bidirection (None, None, 400) 579200 \n_________________________________________________________________\ntime_distributed_1 (TimeDist (None, None, 29) 11629 \n_________________________________________________________________\nsoftmax (Activation) (None, None, 29) 0 \n=================================================================\nTotal params: 590,829\nTrainable params: 590,829\nNon-trainable params: 0\n_________________________________________________________________\nNone\n" ] ], [ [ "Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model) in the HDF5 file `model_4.h5`. The loss history is [saved](https://wiki.python.org/moin/UsingPickle) in `model_4.pickle`. You are welcome to tweak any of the optional parameters while calling the `train_model` function, but this is not required.", "_____no_output_____" ] ], [ [ "train_model(input_to_softmax=model_4, \n pickle_path='model_4.pickle', \n save_model_path='model_4.h5', \n spectrogram=True) # change to False if you would like to use MFCC features", "Epoch 1/20\n106/106 [==============================] - 403s - loss: 230.1068 - val_loss: 213.2899\nEpoch 2/20\n106/106 [==============================] - 405s - loss: 224.4880 - val_loss: 213.0594\nEpoch 3/20\n106/106 [==============================] - 405s - loss: 224.0703 - val_loss: 213.3615\nEpoch 4/20\n106/106 [==============================] - 406s - loss: 224.2367 - val_loss: 215.6243\nEpoch 5/20\n106/106 [==============================] - 402s - loss: 223.6757 - val_loss: 214.7356\nEpoch 6/20\n106/106 [==============================] - 403s - loss: 224.1656 - val_loss: 214.2503\nEpoch 7/20\n106/106 [==============================] - 403s - loss: 224.1322 - val_loss: 213.7310\nEpoch 8/20\n106/106 [==============================] - 406s - loss: 224.0101 - val_loss: 215.1017\nEpoch 9/20\n106/106 [==============================] - 404s - loss: 223.9817 - val_loss: 211.6162\nEpoch 10/20\n106/106 [==============================] - 404s - loss: 224.3228 - val_loss: 214.1584\nEpoch 11/20\n106/106 [==============================] - 408s - loss: 224.5372 - val_loss: 213.9740\nEpoch 12/20\n106/106 [==============================] - 406s - loss: 224.0035 - val_loss: 213.6808\nEpoch 13/20\n106/106 [==============================] - 406s - loss: 224.3288 - val_loss: 213.5177\nEpoch 14/20\n106/106 [==============================] - 407s - loss: 223.9144 - val_loss: 211.9387\nEpoch 15/20\n106/106 [==============================] - 405s - loss: 224.0432 - val_loss: 213.6909\nEpoch 16/20\n106/106 [==============================] - 404s - loss: 224.5912 - val_loss: 213.6710\nEpoch 17/20\n106/106 [==============================] - 408s - loss: 224.1705 - val_loss: 214.2041\nEpoch 18/20\n106/106 [==============================] - 405s - loss: 224.3065 - val_loss: 213.0740\nEpoch 19/20\n106/106 
[==============================] - 403s - loss: 224.4779 - val_loss: 214.6950\nEpoch 20/20\n106/106 [==============================] - 404s - loss: 224.3256 - val_loss: 212.5383\n" ] ], [ [ "<a id='model5'></a>\n### (OPTIONAL IMPLEMENTATION) Models 5+\n\nIf you would like to try out more architectures than the ones above, please use the code cell below. Please continue to follow the same convention for saving the models; for the $i$-th sample model, please save the loss at **`model_i.pickle`** and save the trained model at **`model_i.h5`**.", "_____no_output_____" ] ], [ [ "## (Optional) TODO: Try out some more models!\n### Feel free to use as many code cells as needed.", "_____no_output_____" ] ], [ [ "<a id='compare'></a>\n### Compare the Models\n\nExecute the code cell below to evaluate the performance of the drafted deep learning models. The training and validation loss are plotted for each model.", "_____no_output_____" ] ], [ [ "from glob import glob\nimport numpy as np\nimport _pickle as pickle\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline\nsns.set_style(style='white')\n\n# obtain the paths for the saved model history\nall_pickles = sorted(glob(\"results/*.pickle\"))\n# extract the name of each model\nmodel_names = [item[8:-7] for item in all_pickles]\n# extract the loss history for each model\nvalid_loss = [pickle.load( open( i, \"rb\" ) )['val_loss'] for i in all_pickles]\ntrain_loss = [pickle.load( open( i, \"rb\" ) )['loss'] for i in all_pickles]\n# save the number of epochs used to train each model\nnum_epochs = [len(valid_loss[i]) for i in range(len(valid_loss))]\n\nfig = plt.figure(figsize=(16,5))\n\n# plot the training loss vs. epoch for each model\nax1 = fig.add_subplot(121)\nfor i in range(len(all_pickles)):\n    ax1.plot(np.linspace(1, num_epochs[i], num_epochs[i]), \n            train_loss[i], label=model_names[i])\n# clean up the plot\nax1.legend()\nax1.set_xlim([1, max(num_epochs)])\nplt.xlabel('Epoch')\nplt.ylabel('Training Loss')\n\n# plot the validation loss vs. epoch for each model\nax2 = fig.add_subplot(122)\nfor i in range(len(all_pickles)):\n    ax2.plot(np.linspace(1, num_epochs[i], num_epochs[i]), \n            valid_loss[i], label=model_names[i])\n# clean up the plot\nax2.legend()\nax2.set_xlim([1, max(num_epochs)])\nplt.xlabel('Epoch')\nplt.ylabel('Validation Loss')\nplt.show()", "_____no_output_____" ] ], [ [ "__Question 1:__ Use the plot above to analyze the performance of each of the attempted architectures. Which performs best? Provide an explanation regarding why you think some models perform better than others. \n\n__Answer:__ I think that model_3 performed best. While model_2 has the smallest training loss, it seems to have overfitted a bit when you look at the validation loss. Model_3 has the lowest validation loss.\n\nmodel_0: model_0 performed the worst; it is a very shallow network and therefore not up to the task when dealing with speech features.\n\nmodel_1: model_1 performs significantly better than model_0, and this may stem from the use of batch normalization. According to the paper \"Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift,\" the training of neural networks suffers from internal covariate shift, which is where the distribution of each layer's inputs changes as the parameters of previous layers change during training. 
Batch normalization acts as a regularizer, allowing us to use higher learning rates and not worry as much about initialization.\n\nmodel_2: model_2 had the best training performance. This may be due to the use of a convolutional layer. According to the paper [Deep Speech 2: End-to-End Speech Recognition in English and Mandarin](https://arxiv.org/pdf/1512.02595v1.pdf) the authors state:\n\n> Convolution in frequency\nattempts to model spectral variance due to speaker variability more concisely than what is possible\nwith large fully connected networks. Since spectral ordering of features is removed by fully-connected\nand recurrent layers, frequency convolutions work better as the first layers of the network.\n\nThis perhaps explains its superior performance.\n\nmodel_3: model_3 is our deep RNN and it performed the best when looking at validation loss. Perhaps this is simply due to it having more layers.\n\nmodel_4: model_4 was a bit puzzling when looking at the loss, as it looks like it remained constant. I was wondering if this was due to the use of GRUs; however, the results were the same for LSTMs. According to [SPEECH RECOGNITION WITH DEEP RECURRENT NEURAL NETWORKS](http://www.cs.toronto.edu/~hinton/absps/DRNN_speech.pdf) a shortcoming of conventional RNNs is that they only make use of previous context. Furthermore they state:\n\n> In speech recognition, where whole utterances are transcribed at once, there is no reason not to exploit future context as well.\n\nHmmmm... all well and good, but empirically, with this data, I am not sure I achieved that benefit. \n\n", "_____no_output_____" ], [ "<a id='final'></a>\n### (IMPLEMENTATION) Final Model\n\nNow that you've tried out many sample models, use what you've learned to draft your own architecture! While your final acoustic model should not be identical to any of the architectures explored above, you are welcome to merely combine the explored layers above into a deeper architecture. It is **NOT** necessary to include new layer types that were not explored in the notebook.\n\nHowever, if you would like some ideas for even more layer types, check out these ideas for some additional, optional extensions to your model:\n\n- If you notice your model is overfitting to the training dataset, consider adding **dropout**! To add dropout to [recurrent layers](https://faroit.github.io/keras-docs/1.0.2/layers/recurrent/), pay special attention to the `dropout_W` and `dropout_U` arguments. This [paper](http://arxiv.org/abs/1512.05287) may also provide some interesting theoretical background.\n- If you choose to include a convolutional layer in your model, you may get better results by working with **dilated convolutions**. If you choose to use dilated convolutions, make sure that you are able to accurately calculate the length of the acoustic model's output in the `model.output_length` lambda function. You can read more about dilated convolutions in Google's [WaveNet paper](https://arxiv.org/abs/1609.03499). For an example of a speech-to-text system that makes use of dilated convolutions, check out this GitHub [repository](https://github.com/buriburisuri/speech-to-text-wavenet). You can work with dilated convolutions [in Keras](https://keras.io/layers/convolutional/) by paying special attention to the `padding` argument when you specify a convolutional layer.\n- If your model makes use of convolutional layers, why not also experiment with adding **max pooling**? 
Check out [this paper](https://arxiv.org/pdf/1701.02720.pdf) for an example architecture that makes use of max pooling in an acoustic model.\n- So far, you have experimented with a single bidirectional RNN layer. Consider stacking the bidirectional layers to produce a [deep bidirectional RNN](https://www.cs.toronto.edu/~graves/asru_2013.pdf)!\n\nAll models that you specify in this repository should have `output_length` defined as an attribute. This attribute is a lambda function that maps the (temporal) length of the input acoustic features to the (temporal) length of the output softmax layer. This function is used in the computation of CTC loss; to see this, look at the `add_ctc_loss` function in `train_utils.py`. To see where the `output_length` attribute is defined for the models in the code, take a look at the `sample_models.py` file. You will notice this line of code within most models:\n```\nmodel.output_length = lambda x: x\n```\nThe acoustic model that incorporates a convolutional layer (`cnn_rnn_model`) has a line that is a bit different:\n```\nmodel.output_length = lambda x: cnn_output_length(\n    x, kernel_size, conv_border_mode, conv_stride)\n```\n\nIn the case of models that use purely recurrent layers, the lambda function is the identity function, as the recurrent layers do not modify the (temporal) length of their input tensors. However, convolutional layers are more complicated and require a specialized function (`cnn_output_length` in `sample_models.py`) to determine the temporal length of their output.\n\nYou will have to add the `output_length` attribute to your final model before running the code cell below. Feel free to use the `cnn_output_length` function, if it suits your model. ", "_____no_output_____" ] ], [ [ "# specify the model\nmodel_end = final_model(input_dim=161, # change to 13 if you would like to use MFCC features\n                        filters=200,\n                        kernel_size=11, \n                        conv_stride=2,\n                        conv_border_mode='valid',\n                        units=200)", "_________________________________________________________________\nLayer (type)                 Output Shape              Param #   \n=================================================================\nthe_input (InputLayer)       (None, None, 161)         0         \n_________________________________________________________________\nconv2d (Conv1D)              (None, None, 200)         354400    \n_________________________________________________________________\nbn_conv_2d (BatchNormalizati (None, None, 200)         800       \n_________________________________________________________________\nrnn1 (GRU)                   (None, None, 200)         240600    \n_________________________________________________________________\nbn_rnn_1d (BatchNormalizatio (None, None, 200)         800       \n_________________________________________________________________\nrnn2 (GRU)                   (None, None, 200)         240600    \n_________________________________________________________________\nbn_rnn_2d (BatchNormalizatio (None, None, 200)         800       \n_________________________________________________________________\nrnn3 (GRU)                   (None, None, 200)         240600    \n_________________________________________________________________\nbn_rnn_3d (BatchNormalizatio (None, None, 200)         800       \n_________________________________________________________________\ntime_distributed_2 (TimeDist (None, None, 29)          5829      \n_________________________________________________________________\nsoftmax (Activation)         (None, None, 29)          0         \n=================================================================\nTotal params: 1,085,229\nTrainable params: 1,083,629\nNon-trainable params: 1,600\n_________________________________________________________________\nNone\n" ] ], [ [ 
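"As one concrete illustration of the dropout suggestion above (a sketch only; the argument names depend on your Keras version, with `dropout_W`/`dropout_U` in Keras 1.x renamed to `dropout`/`recurrent_dropout` in Keras 2.x):\n\n```\nfrom keras.layers import GRU\n\n# Keras 2.x style: drop 20% of the input connections and 20% of the\n# recurrent connections at each training update to combat overfitting\nrecurrent_layer = GRU(200, return_sequences=True,\n                      dropout=0.2, recurrent_dropout=0.2)\n```", "_____no_output_____" ] ], [ [ 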
"Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model) in the HDF5 file `model_end.h5`. The loss history is [saved](https://wiki.python.org/moin/UsingPickle) in `model_end.pickle`. You are welcome to tweak any of the optional parameters while calling the `train_model` function, but this is not required.", "_____no_output_____" ] ], [ [ "train_model(input_to_softmax=model_end, \n pickle_path='model_end.pickle', \n save_model_path='model_end.h5', \n spectrogram=True) # change to False if you would like to use MFCC features", "Epoch 1/20\n106/106 [==============================] - 268s - loss: 234.2083 - val_loss: 248.5026\nEpoch 2/20\n106/106 [==============================] - 268s - loss: 166.7575 - val_loss: 164.5965\nEpoch 3/20\n106/106 [==============================] - 272s - loss: 139.2259 - val_loss: 134.7334\nEpoch 4/20\n106/106 [==============================] - 266s - loss: 124.7383 - val_loss: 131.8222\nEpoch 5/20\n106/106 [==============================] - 264s - loss: 113.9840 - val_loss: 128.0787\nEpoch 6/20\n106/106 [==============================] - 264s - loss: 105.9944 - val_loss: 126.7443\nEpoch 7/20\n106/106 [==============================] - 262s - loss: 98.3808 - val_loss: 122.1108\nEpoch 8/20\n106/106 [==============================] - 263s - loss: 92.2569 - val_loss: 121.8550\nEpoch 9/20\n106/106 [==============================] - 263s - loss: 86.7612 - val_loss: 121.3610\nEpoch 10/20\n106/106 [==============================] - 266s - loss: 80.9605 - val_loss: 121.4630\nEpoch 11/20\n106/106 [==============================] - 265s - loss: 75.7899 - val_loss: 119.7425\nEpoch 12/20\n106/106 [==============================] - 265s - loss: 71.4461 - val_loss: 124.9712\nEpoch 13/20\n106/106 [==============================] - 263s - loss: 66.9007 - val_loss: 120.8862\nEpoch 14/20\n106/106 [==============================] - 265s - loss: 62.7639 - val_loss: 129.7066\nEpoch 15/20\n106/106 [==============================] - 266s - loss: 58.4909 - val_loss: 129.3104\nEpoch 16/20\n106/106 [==============================] - 264s - loss: 54.8355 - val_loss: 133.5372\nEpoch 17/20\n106/106 [==============================] - 265s - loss: 51.0710 - val_loss: 134.1504\nEpoch 18/20\n106/106 [==============================] - 266s - loss: 47.8121 - val_loss: 139.2485\nEpoch 19/20\n106/106 [==============================] - 265s - loss: 45.2845 - val_loss: 142.9055\nEpoch 20/20\n106/106 [==============================] - 266s - loss: 41.9382 - val_loss: 143.5623\n" ] ], [ [ "__Question 2:__ Describe your final model architecture and your reasoning at each step. \n\n__Answer:__ My final model was influenced by the paper [Deep Speech 2: End-to-End Speech Recognition in\nEnglish and Mandarin](https://arxiv.org/pdf/1512.02595v1.pdf)\n\nThe authors used deep unidirectional RNNs with batch normalization and convolutional layers. I use GRUs as the authors state:\n\n> A recent comprehensive study of thousands of variations of LSTM and\nGRU architectures showed that a GRU is comparable to an LSTM with a properly initialized forget\ngate bias, and their best variants are competitive with each other [32]. 
We decided to examine GRUs\nbecause experiments on smaller data sets showed the GRU and LSTM reach similar accuracy for\nthe same number of parameters, but the GRUs were faster to train and less likely to diverge.\n\nHowever, they also noted that as they scaled up their networks, simple RNNs actually perform better given a fixed computational budget. I stuck to GRUs since I don't think my problem has quite the scale. The authors use convolution because it performs well with spectral data. Finally, I disregarded bidirectional RNNs, as the authors were able to find good performance with unidirectional RNNs, although they used something called row convolution, which I did not. However, I was more interested in their point that:\n\n> Bidirectional RNN models are challenging to deploy in an online, low-latency setting, because they\nare built to operate on an entire sample, and so it is not possible to perform the transcription process\nas the utterance streams from the user\n\nI am interested in practical application (in production); therefore, this seems more relevant to me. \n\n", "_____no_output_____" ], [ "<a id='step3'></a>\n## STEP 3: Obtain Predictions\n\nWe have written a function for you to decode the predictions of your acoustic model. To use the function, please execute the code cell below.", "_____no_output_____" ] ], [ [ "import numpy as np\nfrom data_generator import AudioGenerator\nfrom keras import backend as K\nfrom utils import int_sequence_to_text\nfrom IPython.display import Audio\n\ndef get_predictions(index, partition, input_to_softmax, model_path):\n    \"\"\" Print a model's decoded predictions\n    Params:\n        index (int): The example you would like to visualize\n        partition (str): One of 'train' or 'validation'\n        input_to_softmax (Model): The acoustic model\n        model_path (str): Path to saved acoustic model's weights\n    \"\"\"\n    # load the train and test data\n    data_gen = AudioGenerator()\n    data_gen.load_train_data()\n    data_gen.load_validation_data()\n    \n    # obtain the true transcription and the audio features \n    if partition == 'validation':\n        transcr = data_gen.valid_texts[index]\n        audio_path = data_gen.valid_audio_paths[index]\n        data_point = data_gen.normalize(data_gen.featurize(audio_path))\n    elif partition == 'train':\n        transcr = data_gen.train_texts[index]\n        audio_path = data_gen.train_audio_paths[index]\n        data_point = data_gen.normalize(data_gen.featurize(audio_path))\n    else:\n        raise Exception('Invalid partition! 
Must be \"train\" or \"validation\"')\n \n # obtain and decode the acoustic model's predictions\n input_to_softmax.load_weights(model_path)\n prediction = input_to_softmax.predict(np.expand_dims(data_point, axis=0))\n output_length = [input_to_softmax.output_length(data_point.shape[0])] \n pred_ints = (K.eval(K.ctc_decode(\n prediction, output_length)[0][0])+1).flatten().tolist()\n \n # play the audio file, and display the true and predicted transcriptions\n print('-'*80)\n Audio(audio_path)\n print('True transcription:\\n' + '\\n' + transcr)\n print('-'*80)\n print('Predicted transcription:\\n' + '\\n' + ''.join(int_sequence_to_text(pred_ints)))\n print('-'*80)", "_____no_output_____" ] ], [ [ "Use the code cell below to obtain the transcription predicted by your final model for the first example in the training dataset.", "_____no_output_____" ] ], [ [ "get_predictions(index=0, \n partition='train',\n input_to_softmax=final_model(input_dim=161,\n filters=200,\n kernel_size=11, \n conv_stride=2,\n conv_border_mode='valid',\n units=200), \n model_path='results/model_end.h5')", "_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nthe_input (InputLayer) (None, None, 161) 0 \n_________________________________________________________________\nconv2d (Conv1D) (None, None, 200) 354400 \n_________________________________________________________________\nbn_conv_2d (BatchNormalizati (None, None, 200) 800 \n_________________________________________________________________\nrnn1 (GRU) (None, None, 200) 240600 \n_________________________________________________________________\nbn_rnn_1d (BatchNormalizatio (None, None, 200) 800 \n_________________________________________________________________\nrnn2 (GRU) (None, None, 200) 240600 \n_________________________________________________________________\nbn_rnn_2d (BatchNormalizatio (None, None, 200) 800 \n_________________________________________________________________\nrnn3 (GRU) (None, None, 200) 240600 \n_________________________________________________________________\nbn_rnn_3d (BatchNormalizatio (None, None, 200) 800 \n_________________________________________________________________\ntime_distributed_4 (TimeDist (None, None, 29) 5829 \n_________________________________________________________________\nsoftmax (Activation) (None, None, 29) 0 \n=================================================================\nTotal params: 1,085,229\nTrainable params: 1,083,629\nNon-trainable params: 1,600\n_________________________________________________________________\nNone\n--------------------------------------------------------------------------------\nTrue transcription:\n\nthe last two days of the voyage bartley found almost intolerable\n--------------------------------------------------------------------------------\nPredicted transcription:\n\nthelost to das of the voge bartly found amost intolerable\n--------------------------------------------------------------------------------\n" ] ], [ [ "Use the next code cell to visualize the model's prediction for the first example in the validation dataset.", "_____no_output_____" ] ], [ [ "get_predictions(index=0, \n partition='validation',\n input_to_softmax=final_model(input_dim=161,\n filters=200,\n kernel_size=11, \n conv_stride=2,\n conv_border_mode='valid',\n units=200), \n model_path='results/model_end.h5')", "_________________________________________________________________\nLayer (type) 
Output Shape Param # \n=================================================================\nthe_input (InputLayer) (None, None, 161) 0 \n_________________________________________________________________\nconv2d (Conv1D) (None, None, 200) 354400 \n_________________________________________________________________\nbn_conv_2d (BatchNormalizati (None, None, 200) 800 \n_________________________________________________________________\nrnn1 (GRU) (None, None, 200) 240600 \n_________________________________________________________________\nbn_rnn_1d (BatchNormalizatio (None, None, 200) 800 \n_________________________________________________________________\nrnn2 (GRU) (None, None, 200) 240600 \n_________________________________________________________________\nbn_rnn_2d (BatchNormalizatio (None, None, 200) 800 \n_________________________________________________________________\nrnn3 (GRU) (None, None, 200) 240600 \n_________________________________________________________________\nbn_rnn_3d (BatchNormalizatio (None, None, 200) 800 \n_________________________________________________________________\ntime_distributed_5 (TimeDist (None, None, 29) 5829 \n_________________________________________________________________\nsoftmax (Activation) (None, None, 29) 0 \n=================================================================\nTotal params: 1,085,229\nTrainable params: 1,083,629\nNon-trainable params: 1,600\n_________________________________________________________________\nNone\n--------------------------------------------------------------------------------\nTrue transcription:\n\nout in the woods stood a nice little fir tree\n--------------------------------------------------------------------------------\nPredicted transcription:\n\nofean the wot sto den ics lifle for trye\n--------------------------------------------------------------------------------\n" ] ], [ [ "One standard way to improve the results of the decoder is to incorporate a language model. We won't pursue this in the notebook, but you are welcome to do so as an _optional extension_. \n\nIf you are interested in creating models that provide improved transcriptions, you are encouraged to download [more data](http://www.openslr.org/12/) and train bigger, deeper models. But beware - the model will likely take a long while to train. For instance, training this [state-of-the-art](https://arxiv.org/pdf/1512.02595v1.pdf) model would take 3-6 weeks on a single GPU!", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
ec97911164acd886e5c4604a5fcdd33ea4433527
12,827
ipynb
Jupyter Notebook
BlobMap_DataPlots.ipynb
Deech08/mw_bpt
653b82090803c5fe5304d17c4640388d3b290bd1
[ "BSD-3-Clause" ]
null
null
null
BlobMap_DataPlots.ipynb
Deech08/mw_bpt
653b82090803c5fe5304d17c4640388d3b290bd1
[ "BSD-3-Clause" ]
null
null
null
BlobMap_DataPlots.ipynb
Deech08/mw_bpt
653b82090803c5fe5304d17c4640388d3b290bd1
[ "BSD-3-Clause" ]
null
null
null
33.933862
111
0.519919
[ [ [ "import numpy as np\n\nfrom modspectra.cube import EmissionCube\nimport astropy.units as u\nfrom astropy.coordinates import SkyCoord\n\nimport matplotlib.pyplot as plt\n%matplotlib notebook\nimport seaborn as sns\nfrom matplotlib.colors import LogNorm\npal = sns.color_palette('colorblind', 10)\n\nsns.set(color_codes=True)\nsns.axes_style(\"white\")\nsns.set_style('whitegrid')\n\nimport copy\nimport scipy.io.idl\nimport matplotlib.gridspec as gridspec", "_____no_output_____" ], [ "# Create Model\nmodel_ha = EmissionCube.create_DK19(resolution = (64,64,64))", "_____no_output_____" ], [ "sub_model = model_ha.spectral_slab(-110*u.km/u.s, -50*u.km/u.s)", "_____no_output_____" ], [ "# Synthetic WHAM Observation helper functions\ndef wham_obs(cube, wham_pointing, moment = False, spectral_range = None, order = None):\n # select target coordinates\n lon = wham_pointing[\"GLON\"]\n lat = wham_pointing[\"GLAT\"]\n ds9_str = 'galactic; circle({0:.3}, {1:.4}, 0.5)'.format(lon, lat) \n subcube = cube.subcube_from_ds9region(ds9_str)\n if moment:\n if spectral_range == None:\n moment = subcube.moment(order = order)\n else:\n moment = subcube.spectral_slab(spectral_range).moment(order = order)\n spectrum = np.nanmean(moment.value) * moment.unit\n else:\n spectrum = subcube.mean(axis = (1,2))\n \n return spectrum\n\ndef wham_observations(cube, wham_pointings, noise = 0.01, **kwargs):\n output_pointings = copy.deepcopy(wham_pointings)\n if 'moment' in kwargs:\n if 'order' in kwargs:\n if kwargs[\"order\"] == 1:\n moment_values = np.empty(len(output_pointings)) << u.m/u.s\n else:\n moment_values = np.empty(len(output_pointings)) << u.R\n for ell, pointing in enumerate(output_pointings):\n spectrum = wham_obs(cube, pointing)\n moment_values[ell] = wham_obs(cube, pointing, **kwargs)\n output_pointings[ell][\"DATA\"] = spectrum.to(u.R/(u.km/u.s)).value\n output_pointings[ell][\"VAR\"] = (noise * np.abs(np.random.randn(len(pointing[\"VAR\"]))))**2.\n return output_pointings, moment_values", "_____no_output_____" ], [ "wham_baade = scipy.io.idl.readsav('baade_ha.sav', python_dict = True)", "_____no_output_____" ], [ "model_wham_obs, model_wham_mom = wham_observations(model_ha, \n wham_baade[\"baade\"], \n moment = True, \n order = 0)", "_____no_output_____" ], [ "_, model_wham_vel = wham_observations(sub_model, \n wham_baade[\"baade\"], \n moment = True, \n order = 1)", "_____no_output_____" ], [ "lon = model_wham_obs[\"GLON\"]\nlat = model_wham_obs[\"GLAT\"]\nha_int = np.log10(wham_baade[\"baade_mom\"][:,0])\nha_vel = wham_baade[\"baade_mom\"][:,1]\nmodel_ha_int = np.log10(model_wham_mom.value)\nmodel_ha_vel = model_wham_vel.to(u.km/u.s)\n\nintensity_msk = ha_int > -1.\nmean_velocity_msk = (ha_vel < -55.) 
& (ha_vel > -90.)\n\nfitting_msk = intensity_msk & mean_velocity_msk\n#Manual additons to mask\nfitting_msk[20] = False # Manually Masked", "_____no_output_____" ], [ "# Set points for highlighting\nbpt_points = [1, 9, 12, 20]\nnii_points = [2, 3, 4, 7, 8, 15]", "_____no_output_____" ], [ "wham_baade[\"baade\"][\"GLON\"][bpt_points]", "_____no_output_____" ], [ "wham_baade[\"baade\"].dtype.names", "_____no_output_____" ], [ "wham_baade[\"baade\"][\"GLAT\"][bpt_points]", "_____no_output_____" ], [ "fig = plt.figure(figsize = (9,9))\nax = fig.add_subplot(111)\n\ns =3300\nvmin = -1.\nvmax = 1\n\n\nsc = ax.scatter(lon,lat,s = s, c = model_ha_int, vmin= vmin, vmax = vmax, cmap = 'Reds')\nscv = ax.scatter(lon,lat,s = s/10, c = model_ha_vel, vmin= -110, vmax = -40, cmap = 'RdBu_r')\n\nax.scatter(lon[fitting_msk],lat[fitting_msk],facecolors='none', edgecolors='black', s = s, lw = 2)\n \nax.invert_xaxis()\n\ncbar = plt.colorbar(sc)\ncbar2 = plt.colorbar(scv, orientation = 'horizontal')\ncbar2.set_label(r'Mean Velocity (km/s)', fontsize = 15)\ncbar.set_label(r'Log H$\\alpha$ Intensity (Log Rayleighs)', fontsize = 15)\ncbar.ax.tick_params(labelsize = 15)\ncbar2.ax.tick_params(labelsize = 15)\n\nax.set_xlabel('Galactic Longitude (deg)', fontsize = 15)\nax.set_ylabel('Galactic Latitude (deg)', fontsize = 15)\nax.tick_params(labelsize = 15)\nax.set_aspect('equal')\n\nxlim_map = ax.get_xlim()\nylim_map = ax.get_ylim()", "_____no_output_____" ] ], [ [ "# Figure S2", "_____no_output_____" ] ], [ [ "fig = plt.figure(figsize = (10,9))\ngs = gridspec.GridSpec(14, 12, figure = fig, wspace = 0.02, hspace = 0.02)\n\nvel = model_ha.spectral_axis.to(u.km/u.s).value\n\nsupax = fig.add_subplot(gs[:,:])\nsupax.grid(False)\nsupax.set_xlim(xlim_map)\nsupax.set_ylim(ylim_map)\nsupax.set_xlabel('Galactic Longitude (deg)', fontsize = 15)\nsupax.tick_params(labelsize = 15)\nsupax.set_ylabel('Galactic Latitude (deg)', fontsize = 15)\n\n\naxes = 
[]\n\naxes.append(fig.add_subplot(gs[6:8,9:11]))\naxes.append(fig.add_subplot(gs[6:8,7:9]))\naxes.append(fig.add_subplot(gs[6:8,5:7]))\naxes.append(fig.add_subplot(gs[6:8,3:5]))\naxes.append(fig.add_subplot(gs[6:8,1:3]))\n\naxes.append(fig.add_subplot(gs[4:6,0:2]))\naxes.append(fig.add_subplot(gs[4:6,2:4]))\naxes.append(fig.add_subplot(gs[4:6,4:6]))\naxes.append(fig.add_subplot(gs[4:6,6:8]))\naxes.append(fig.add_subplot(gs[4:6,8:10]))\naxes.append(fig.add_subplot(gs[4:6,10:12]))\n\naxes.append(fig.add_subplot(gs[2:4,9:11]))\naxes.append(fig.add_subplot(gs[2:4,7:9]))\naxes.append(fig.add_subplot(gs[2:4,5:7]))\naxes.append(fig.add_subplot(gs[2:4,3:5]))\naxes.append(fig.add_subplot(gs[2:4,1:3]))\n\naxes.append(fig.add_subplot(gs[0:2,0:2]))\naxes.append(fig.add_subplot(gs[0:2,2:4]))\naxes.append(fig.add_subplot(gs[0:2,4:6]))\naxes.append(fig.add_subplot(gs[0:2,6:8]))\naxes.append(fig.add_subplot(gs[0:2,8:10]))\naxes.append(fig.add_subplot(gs[0:2,10:12]))\n\naxes.append(fig.add_subplot(gs[12:14,10:12]))\naxes.append(fig.add_subplot(gs[12:14,8:10]))\naxes.append(fig.add_subplot(gs[12:14,6:8]))\naxes.append(fig.add_subplot(gs[12:14,4:6]))\naxes.append(fig.add_subplot(gs[12:14,2:4]))\naxes.append(fig.add_subplot(gs[12:14,0:2]))\n\naxes.append(fig.add_subplot(gs[10:12,1:3]))\naxes.append(fig.add_subplot(gs[10:12,3:5]))\naxes.append(fig.add_subplot(gs[10:12,5:7]))\naxes.append(fig.add_subplot(gs[10:12,7:9]))\naxes.append(fig.add_subplot(gs[10:12,9:11]))\n\naxes.append(fig.add_subplot(gs[8:10,10:12]))\naxes.append(fig.add_subplot(gs[8:10,8:10]))\naxes.append(fig.add_subplot(gs[8:10,6:8]))\naxes.append(fig.add_subplot(gs[8:10,4:6]))\naxes.append(fig.add_subplot(gs[8:10,2:4]))\naxes.append(fig.add_subplot(gs[8:10,0:2]))\n\nfor ell, ax in enumerate(axes):\n x = wham_baade[\"baade\"][ell][\"VEL\"]\n y = wham_baade[\"baade\"][ell][\"DATA\"]\n vel_to_plot = (x <= -20) & (x >= -150)\n if ell == 0:\n ax.plot(x[vel_to_plot], y[vel_to_plot], \"-\", color = pal[0], label = \"WHAM\")\n else:\n ax.plot(x[vel_to_plot], y[vel_to_plot], \"-\", color = pal[0])\n xlim = ax.get_xlim()\n ylim = ax.set_ylim()\n if ell == 0:\n ax.plot(vel, model_wham_obs[ell][\"DATA\"], \"--\", color = pal[1], lw = 3, label = \"Model\")\n else:\n ax.plot(vel, model_wham_obs[ell][\"DATA\"], \"--\", color = pal[1], lw = 3)\n \n ax.fill_betweenx([-.01, .04], [-40, -40], [0,0], \n color = \"k\", alpha = 0.1, zorder = 0)\n ax.set_xlim(xlim)\n if fitting_msk[ell]:\n ax.spines['bottom'].set_linewidth(6)\n ax.spines['bottom'].set_color(\"k\")\n ax.spines['top'].set_linewidth(6)\n ax.spines['top'].set_color(\"k\")\n ax.spines['right'].set_linewidth(4)\n ax.spines['right'].set_color(\"k\")\n ax.spines['left'].set_linewidth(4)\n ax.spines['left'].set_color(\"k\")\n ax.fill_betweenx([-.01, .04], [-45, -45], [-110, -110], \n color = pal[2], alpha = 0.2, zorder = 0)\n if ell in bpt_points:\n ax.text(-105, 0.024, \"*\", fontsize = 35, color = pal[3], weight='bold',\n horizontalalignment='center', \n verticalalignment='center')\n if ell in nii_points:\n ax.text(-105, 0.028, \"#\", fontsize = 20, color = pal[4], weight='bold',\n horizontalalignment='center', \n verticalalignment='center')\n ax.set_ylim([-0.01,0.04])\n ax.grid(False)\n ax.set_xticks([])\n ax.set_yticks([])\n\nfig.text(0.39, 0.89, \"Observed (WHAM)\", ha=\"center\", va=\"bottom\", size=16,color=pal[0])\nfig.text(0.5, 0.89, \"&\", ha=\"center\", va=\"bottom\", size=16)\nfig.text(0.56,0.89,\"Modeled\", ha=\"center\", va=\"bottom\", size=16,color=pal[1])\nfig.text(0.645,0.89,\"Spectra\", 
ha=\"center\", va=\"bottom\", size=16,color=\"k\")\n\nplt.tight_layout()\n\n# plt.savefig(\"FigureS3.png\", dpi = 300, transparent = True)\n# plt.savefig(\"FigureS3.svg\", transparent = True)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
ec9798f0ae5eea80831c807eec37d5d84ea32745
111,187
ipynb
Jupyter Notebook
run_example/sequential-model-fixed-missing-last-item.ipynb
MIracleyin/RecBole-notebook
ef32b3e57a297ff4889dec1f63c7984f8f901a23
[ "MIT" ]
null
null
null
run_example/sequential-model-fixed-missing-last-item.ipynb
MIracleyin/RecBole-notebook
ef32b3e57a297ff4889dec1f63c7984f8f901a23
[ "MIT" ]
null
null
null
run_example/sequential-model-fixed-missing-last-item.ipynb
MIracleyin/RecBole-notebook
ef32b3e57a297ff4889dec1f63c7984f8f901a23
[ "MIT" ]
null
null
null
38.714136
572
0.537131
[ [ [ "# 0.Overview\n**Tutorial**: sequential-model-fixed-missing-last-item\n\n**Author**: [astrung](https://github.com/astrung)\n\n**Original link**: [notebook](https://www.kaggle.com/code/astrung/sequential-model-fixed-missing-last-item)\n\n**Edit**:\n* In my previous notebooks([here](https://www.kaggle.com/code/astrung/lstm-sequential-modelwith-item-features-tutorial) and [here](https://www.kaggle.com/code/astrung/lstm-sequential-modelwith-item-features-tutorial)), we have used test_data with `full_sort_topk`,but due to the limit of full_sort_topk we have missed last item for submited recommendation. Someone asked me about how can use all items as input features for recommendation in this [comment](https://www.kaggle.com/code/astrung/recbole-lstm-sequential-for-recomendation-tutorial/comments#1723707). \n* So i created a notebook [here](https://www.kaggle.com/code/astrung/recbole-using-all-items-for-prediction) for address there questions in detail, and this notebook is an improved of my [previous notebook](https://www.kaggle.com/code/astrung/lstm-sequential-modelwith-item-features-tutorial), applying our new function (using all item as input features without `full_sort_topk`) for this competition.\n* I also create a improved version for adding item features into model in this [notebook](https://www.kaggle.com/astrung/lstm-model-with-item-infor-fix-missing-last-item). It improved a little score when add item features for recommendation\n\n- - -\n\n\nThis notebook demonstrate how to use LSTM for recomendation system.\nI am using Recbole as an open source, as it has so many built-in models for recommendation(CNN, GRU-LSTM, Context-aware, Graph). In this notebook, we tried to use GRU/LSTM model for testing effect of sequential model for recommendation.\n\nDue to memory limit and faster testing purpose, we will just use data in 2020.\n\nIf you want to use with all of interactions in all time, i have created a new atomic dataset here for you: https://www.kaggle.com/astrung/hm-atomic-interation\n\nWe also have other limit: we only train model and predict with users who buy more than 40 items and items which is bought by more than 40 people.\n\nWe will follow below steps for creating model:\n\n1. In order to use Recbole, we create atomic file from interaction data\n2. Because we only use Recbole model for predicting with users who buy more than 40 items, other users will need to fill by default recomendation items. We create most viewed items in last month as defautl recomendation\n3. We create dataset and train model in recbole.\n4. We create prediction result by trained model\n5. 
We combine recomendation result from most viewed items in last month and Recbole predicted model.\n\nI will explain more detail in following cells.\n\n", "_____no_output_____" ] ], [ [ "!pip install recbole", "Collecting recbole\r\n Downloading recbole-1.0.1-py3-none-any.whl (2.0 MB)\r\n |████████████████████████████████| 2.0 MB 618 kB/s \r\n\u001b[?25hRequirement already satisfied: colorama==0.4.4 in /opt/conda/lib/python3.7/site-packages (from recbole) (0.4.4)\r\nRequirement already satisfied: scikit-learn>=0.23.2 in /opt/conda/lib/python3.7/site-packages (from recbole) (0.23.2)\r\nRequirement already satisfied: pyyaml>=5.1.0 in /opt/conda/lib/python3.7/site-packages (from recbole) (6.0)\r\nCollecting colorlog==4.7.2\r\n Downloading colorlog-4.7.2-py2.py3-none-any.whl (10 kB)\r\nRequirement already satisfied: torch>=1.7.0 in /opt/conda/lib/python3.7/site-packages (from recbole) (1.9.1)\r\nCollecting scipy==1.6.0\r\n Downloading scipy-1.6.0-cp37-cp37m-manylinux1_x86_64.whl (27.4 MB)\r\n |████████████████████████████████| 27.4 MB 125 kB/s \r\n\u001b[?25hRequirement already satisfied: pandas>=1.0.5 in /opt/conda/lib/python3.7/site-packages (from recbole) (1.3.5)\r\nRequirement already satisfied: tqdm>=4.48.2 in /opt/conda/lib/python3.7/site-packages (from recbole) (4.62.3)\r\nRequirement already satisfied: numpy>=1.17.2 in /opt/conda/lib/python3.7/site-packages (from recbole) (1.20.3)\r\nRequirement already satisfied: tensorboard>=2.5.0 in /opt/conda/lib/python3.7/site-packages (from recbole) (2.6.0)\r\nRequirement already satisfied: python-dateutil>=2.7.3 in /opt/conda/lib/python3.7/site-packages (from pandas>=1.0.5->recbole) (2.8.2)\r\nRequirement already satisfied: pytz>=2017.3 in /opt/conda/lib/python3.7/site-packages (from pandas>=1.0.5->recbole) (2021.3)\r\nRequirement already satisfied: joblib>=0.11 in /opt/conda/lib/python3.7/site-packages (from scikit-learn>=0.23.2->recbole) (1.1.0)\r\nRequirement already satisfied: threadpoolctl>=2.0.0 in /opt/conda/lib/python3.7/site-packages (from scikit-learn>=0.23.2->recbole) (3.0.0)\r\nRequirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /opt/conda/lib/python3.7/site-packages (from tensorboard>=2.5.0->recbole) (0.6.1)\r\nRequirement already satisfied: wheel>=0.26 in /opt/conda/lib/python3.7/site-packages (from tensorboard>=2.5.0->recbole) (0.37.0)\r\nRequirement already satisfied: setuptools>=41.0.0 in /opt/conda/lib/python3.7/site-packages (from tensorboard>=2.5.0->recbole) (59.5.0)\r\nRequirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /opt/conda/lib/python3.7/site-packages (from tensorboard>=2.5.0->recbole) (1.8.0)\r\nRequirement already satisfied: absl-py>=0.4 in /opt/conda/lib/python3.7/site-packages (from tensorboard>=2.5.0->recbole) (0.15.0)\r\nRequirement already satisfied: protobuf>=3.6.0 in /opt/conda/lib/python3.7/site-packages (from tensorboard>=2.5.0->recbole) (3.19.1)\r\nRequirement already satisfied: google-auth<2,>=1.6.3 in /opt/conda/lib/python3.7/site-packages (from tensorboard>=2.5.0->recbole) (1.35.0)\r\nRequirement already satisfied: requests<3,>=2.21.0 in /opt/conda/lib/python3.7/site-packages (from tensorboard>=2.5.0->recbole) (2.26.0)\r\nRequirement already satisfied: grpcio>=1.24.3 in /opt/conda/lib/python3.7/site-packages (from tensorboard>=2.5.0->recbole) (1.43.0)\r\nRequirement already satisfied: werkzeug>=0.11.15 in /opt/conda/lib/python3.7/site-packages (from tensorboard>=2.5.0->recbole) (2.0.2)\r\nRequirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in 
/opt/conda/lib/python3.7/site-packages (from tensorboard>=2.5.0->recbole) (0.4.6)\r\nRequirement already satisfied: markdown>=2.6.8 in /opt/conda/lib/python3.7/site-packages (from tensorboard>=2.5.0->recbole) (3.3.6)\r\nRequirement already satisfied: typing-extensions in /opt/conda/lib/python3.7/site-packages (from torch>=1.7.0->recbole) (4.0.1)\r\nRequirement already satisfied: six in /opt/conda/lib/python3.7/site-packages (from absl-py>=0.4->tensorboard>=2.5.0->recbole) (1.16.0)\r\nRequirement already satisfied: cachetools<5.0,>=2.0.0 in /opt/conda/lib/python3.7/site-packages (from google-auth<2,>=1.6.3->tensorboard>=2.5.0->recbole) (4.2.4)\r\nRequirement already satisfied: rsa<5,>=3.1.4 in /opt/conda/lib/python3.7/site-packages (from google-auth<2,>=1.6.3->tensorboard>=2.5.0->recbole) (4.8)\r\nRequirement already satisfied: pyasn1-modules>=0.2.1 in /opt/conda/lib/python3.7/site-packages (from google-auth<2,>=1.6.3->tensorboard>=2.5.0->recbole) (0.2.7)\r\nRequirement already satisfied: requests-oauthlib>=0.7.0 in /opt/conda/lib/python3.7/site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard>=2.5.0->recbole) (1.3.0)\r\nRequirement already satisfied: importlib-metadata>=4.4 in /opt/conda/lib/python3.7/site-packages (from markdown>=2.6.8->tensorboard>=2.5.0->recbole) (4.10.1)\r\nRequirement already satisfied: idna<4,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests<3,>=2.21.0->tensorboard>=2.5.0->recbole) (3.1)\r\nRequirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.7/site-packages (from requests<3,>=2.21.0->tensorboard>=2.5.0->recbole) (2021.10.8)\r\nRequirement already satisfied: charset-normalizer~=2.0.0 in /opt/conda/lib/python3.7/site-packages (from requests<3,>=2.21.0->tensorboard>=2.5.0->recbole) (2.0.9)\r\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in /opt/conda/lib/python3.7/site-packages (from requests<3,>=2.21.0->tensorboard>=2.5.0->recbole) (1.26.7)\r\nRequirement already satisfied: zipp>=0.5 in /opt/conda/lib/python3.7/site-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard>=2.5.0->recbole) (3.6.0)\r\nRequirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /opt/conda/lib/python3.7/site-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard>=2.5.0->recbole) (0.4.8)\r\nRequirement already satisfied: oauthlib>=3.0.0 in /opt/conda/lib/python3.7/site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard>=2.5.0->recbole) (3.1.1)\r\nInstalling collected packages: scipy, colorlog, recbole\r\n Attempting uninstall: scipy\r\n Found existing installation: scipy 1.7.3\r\n Uninstalling scipy-1.7.3:\r\n Successfully uninstalled scipy-1.7.3\r\n Attempting uninstall: colorlog\r\n Found existing installation: colorlog 6.6.0\r\n Uninstalling colorlog-6.6.0:\r\n Successfully uninstalled colorlog-6.6.0\r\n\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. 
This behaviour is the source of the following dependency conflicts.\r\nyellowbrick 1.3.post1 requires numpy<1.20,>=1.16.0, but you have numpy 1.20.3 which is incompatible.\r\npdpbox 0.2.1 requires matplotlib==3.1.1, but you have matplotlib 3.5.1 which is incompatible.\r\nimbalanced-learn 0.9.0 requires scikit-learn>=1.0.1, but you have scikit-learn 0.23.2 which is incompatible.\r\nfeaturetools 1.4.1 requires numpy>=1.21.0, but you have numpy 1.20.3 which is incompatible.\r\narviz 0.11.4 requires typing-extensions<4,>=3.7.4.3, but you have typing-extensions 4.0.1 which is incompatible.\u001b[0m\r\nSuccessfully installed colorlog-4.7.2 recbole-1.0.1 scipy-1.6.0\r\n\u001b[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\u001b[0m\r\n" ] ], [ [ "# 1. Create the atomic file", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport gc\ndf = pd.read_csv(r\"/kaggle/input/h-and-m-personalized-fashion-recommendations/transactions_train.csv\", \n                 dtype={'article_id': 'str'})\ndf.head()", "_____no_output_____" ], [ "df['t_dat'] = pd.to_datetime(df['t_dat'], format=\"%Y-%m-%d\")\ndf", "_____no_output_____" ], [ "import numpy as np\ndf['timestamp'] = df.t_dat.values.astype(np.int64) // 10 ** 9\ndf.head()", "_____no_output_____" ] ], [ [ "**We keep only data from 2020 (timestamp > 1585620000) and create the inter file.**\nFor anyone who needs instructions about the inter file format, please check the links below:\n* https://recbole.io/docs/user_guide/data_intro.html\n* https://recbole.io/docs/user_guide/data/atomic_files.html", "_____no_output_____" ] ], [ [ "temp = df[df['timestamp'] > 1585620000][['customer_id', 'article_id', 'timestamp']].rename(\n    columns={'customer_id': 'user_id:token', 'article_id': 'item_id:token', 'timestamp': 'timestamp:float'})\ntemp", "_____no_output_____" ] ], [ [ "We save the atomic file in dataset format for use with RecBole.", "_____no_output_____" ] ], [ [ "!mkdir /kaggle/working/recbox_data\ntemp.to_csv('/kaggle/working/recbox_data/recbox_data.inter', index=False, sep='\\t')\ndel temp\ngc.collect()", "_____no_output_____" ] ], [ [ "# 2. We create a default recommendation for users who cannot be predicted by the sequential model.\nI use this approach from the notebook https://www.kaggle.com/hervind/h-m-faster-trending-products-weekly. You can check it for more detailed information. 
I will just copy the code here.", "_____no_output_____" ] ], [ [ "import os\nimport numpy as np\nimport pandas as pd", "_____no_output_____" ], [ "sub0 = pd.read_csv('../input/hm-pre-recommendation/submissio_byfone_chris.csv').sort_values('customer_id').reset_index(drop=True)\nsub1 = pd.read_csv('../input/hm-pre-recommendation/submission_trending.csv').sort_values('customer_id').reset_index(drop=True)\nsub2 = pd.read_csv('../input/hm-pre-recommendation/submission_exponential_decay.csv').sort_values('customer_id').reset_index(drop=True)\n\nsub0.shape, sub1.shape, sub2.shape", "_____no_output_____" ], [ "sub0.columns = ['customer_id', 'prediction0']\nsub0['prediction1'] = sub1['prediction']\nsub0['prediction2'] = sub2['prediction']\ndel sub1, sub2\ngc.collect()\nsub0.head()", "_____no_output_____" ], [ "def cust_blend(dt, W = [1,1,1]):\n    # Global ensemble weights\n    #W = [1.15,0.95,0.85]\n    \n    # Create a list of all model predictions\n    REC = []\n    REC.append(dt['prediction0'].split())\n    REC.append(dt['prediction1'].split())\n    REC.append(dt['prediction2'].split())\n    \n    # Create a dictionary of recommended items.\n    # Assign a weight according to the order of appearance and multiply by the global weights\n    res = {}\n    for M in range(len(REC)):\n        for n, v in enumerate(REC[M]):\n            if v in res:\n                res[v] += (W[M]/(n+1))\n            else:\n                res[v] = (W[M]/(n+1))\n    \n    # Sort dictionary by item weights\n    res = list(dict(sorted(res.items(), key=lambda item: -item[1])).keys())\n    \n    # Return the top 12 items only\n    return ' '.join(res[:12])\n\nsub0['prediction'] = sub0.apply(cust_blend, W = [1.05,1.00,0.95], axis=1)\nsub0.head()", "_____no_output_____" ], [ "del sub0['prediction0']\ndel sub0['prediction1']\ndel sub0['prediction2']\ngc.collect()\nsub0.to_csv('submission.csv', index=False)", "_____no_output_____" ], [ "del sub0\ndel df\ngc.collect()", "_____no_output_____" ] ], [ [ "# 3. 
Create dataset and train model with Recbole\n\nFor anyone need instruction document, please check this link: https://recbole.io/docs/user_guide/usage/use_modules.html", "_____no_output_____" ] ], [ [ "import logging\nfrom logging import getLogger\nfrom recbole.config import Config\nfrom recbole.data import create_dataset, data_preparation\nfrom recbole.model.sequential_recommender import GRU4Rec\nfrom recbole.trainer import Trainer\nfrom recbole.utils import init_seed, init_logger", "_____no_output_____" ], [ "parameter_dict = {\n 'data_path': '/kaggle/working',\n 'USER_ID_FIELD': 'user_id',\n 'ITEM_ID_FIELD': 'item_id',\n 'TIME_FIELD': 'timestamp',\n 'user_inter_num_interval': \"[30,inf)\",\n 'item_inter_num_interval': \"[40,inf)\",\n 'load_col': {'inter': ['user_id', 'item_id', 'timestamp']},\n 'neg_sampling': None,\n 'epochs': 50,\n 'eval_args': {\n 'split': {'RS': [10, 0, 0]},\n 'group_by': 'user',\n 'order': 'TO',\n 'mode': 'full'}\n}\n\nconfig = Config(model='GRU4Rec', dataset='recbox_data', config_dict=parameter_dict)\n\n# init random seed\ninit_seed(config['seed'], config['reproducibility'])\n\n# logger initialization\ninit_logger(config)\nlogger = getLogger()\n# Create handlers\nc_handler = logging.StreamHandler()\nc_handler.setLevel(logging.INFO)\nlogger.addHandler(c_handler)\n\n# write config info into log\nlogger.info(config)", "\nGeneral Hyper Parameters:\ngpu_id = 0\nuse_gpu = True\nseed = 2020\nstate = INFO\nreproducibility = True\ndata_path = /kaggle/working/recbox_data\ncheckpoint_dir = saved\nshow_progress = True\nsave_dataset = False\ndataset_save_path = None\nsave_dataloaders = False\ndataloaders_save_path = None\nlog_wandb = False\n\nTraining Hyper Parameters:\nepochs = 50\ntrain_batch_size = 2048\nlearner = adam\nlearning_rate = 0.001\nneg_sampling = None\neval_step = 1\nstopping_step = 10\nclip_grad_norm = None\nweight_decay = 0.0\nloss_decimal_place = 4\n\nEvaluation Hyper Parameters:\neval_args = {'split': {'RS': [10, 0, 0]}, 'group_by': 'user', 'order': 'TO', 'mode': 'full'}\nrepeatable = True\nmetrics = ['Recall', 'MRR', 'NDCG', 'Hit', 'Precision']\ntopk = [10]\nvalid_metric = MRR@10\nvalid_metric_bigger = True\neval_batch_size = 4096\nmetric_decimal_place = 4\n\nDataset Hyper Parameters:\nfield_separator = \t\nseq_separator = \nUSER_ID_FIELD = user_id\nITEM_ID_FIELD = item_id\nRATING_FIELD = rating\nTIME_FIELD = timestamp\nseq_len = None\nLABEL_FIELD = label\nthreshold = None\nNEG_PREFIX = neg_\nload_col = {'inter': ['user_id', 'item_id', 'timestamp']}\nunload_col = None\nunused_col = None\nadditional_feat_suffix = None\nrm_dup_inter = None\nval_interval = None\nfilter_inter_by_user_or_item = True\nuser_inter_num_interval = [30,inf)\nitem_inter_num_interval = [40,inf)\nalias_of_user_id = None\nalias_of_item_id = None\nalias_of_entity_id = None\nalias_of_relation_id = None\npreload_weight = None\nnormalize_field = None\nnormalize_all = None\nITEM_LIST_LENGTH_FIELD = item_length\nLIST_SUFFIX = _list\nMAX_ITEM_LIST_LENGTH = 50\nPOSITION_FIELD = position_id\nHEAD_ENTITY_ID_FIELD = head_id\nTAIL_ENTITY_ID_FIELD = tail_id\nRELATION_ID_FIELD = relation_id\nENTITY_ID_FIELD = entity_id\nbenchmark_filename = None\n\nOther Hyper Parameters: \nwandb_project = recbole\nrequire_pow = False\nembedding_size = 64\nhidden_size = 128\nnum_layers = 1\ndropout_prob = 0.3\nloss_type = CE\nMODEL_TYPE = ModelType.SEQUENTIAL\nMODEL_INPUT_TYPE = InputType.POINTWISE\neval_type = EvaluatorType.RANKING\ndevice = cuda\ntrain_neg_sample_args = {'strategy': 'none'}\neval_neg_sample_args = 
{'strategy': 'full', 'distribution': 'uniform'}\n\n\n" ], [ "dataset = create_dataset(config)\nlogger.info(dataset)", "recbox_data\nThe number of users: 38916\nAverage actions of users: 47.47241423615572\nThe number of items: 10962\nAverage actions of items: 168.54201259009216\nThe number of inters: 1847389\nThe sparsity of the dataset: 99.56694768867584%\nRemain Fields: ['user_id', 'item_id', 'timestamp']\n" ], [ "# dataset splitting\ntrain_data, valid_data, test_data = data_preparation(config, dataset)", "[Training]: train_batch_size = [2048] negative sampling: [None]\n[Evaluation]: eval_batch_size = [4096] eval_args: [{'split': {'RS': [10, 0, 0]}, 'group_by': 'user', 'order': 'TO', 'mode': 'full'}]\n" ], [ "# model loading and initialization\nmodel = GRU4Rec(config, train_data.dataset).to(config['device'])\nlogger.info(model)\n\n# trainer loading and initialization\ntrainer = Trainer(config, model)\n\n# model training\nbest_valid_score, best_valid_result = trainer.fit(train_data)", "GRU4Rec(\n (item_embedding): Embedding(10962, 64, padding_idx=0)\n (emb_dropout): Dropout(p=0.3, inplace=False)\n (gru_layers): GRU(64, 128, bias=False, batch_first=True)\n (dense): Linear(in_features=128, out_features=64, bias=True)\n (loss_fct): CrossEntropyLoss()\n)\nTrainable parameters: 783552\nepoch 0 training [time: 29.21s, train loss: 7608.2684]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 1 training [time: 26.44s, train loss: 7102.8474]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 2 training [time: 25.82s, train loss: 6864.3110]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 3 training [time: 25.81s, train loss: 6658.3106]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 4 training [time: 25.81s, train loss: 6516.7922]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 5 training [time: 25.82s, train loss: 6418.4797]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 6 training [time: 25.55s, train loss: 6338.6159]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 7 training [time: 26.08s, train loss: 6273.0269]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 8 training [time: 25.75s, train loss: 6216.3229]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 9 training [time: 25.85s, train loss: 6168.1504]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 10 training [time: 25.45s, train loss: 6125.8122]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 11 training [time: 25.87s, train loss: 6088.1390]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 12 training [time: 25.71s, train loss: 6056.3469]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 13 training [time: 25.75s, train loss: 6028.3020]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 14 training [time: 25.52s, train loss: 6002.9929]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 15 training [time: 25.74s, train loss: 5981.4722]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 16 training [time: 25.50s, train loss: 5962.1978]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 17 training [time: 25.69s, train loss: 5946.1110]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 18 training [time: 25.66s, train loss: 5931.8931]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 19 training [time: 25.61s, train loss: 5918.5745]\nSaving current: 
saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 20 training [time: 25.33s, train loss: 5907.3241]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 21 training [time: 25.86s, train loss: 5897.1385]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 22 training [time: 25.55s, train loss: 5887.6680]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 23 training [time: 25.80s, train loss: 5879.6025]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 24 training [time: 25.52s, train loss: 5871.1030]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 25 training [time: 25.84s, train loss: 5864.6343]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 26 training [time: 25.54s, train loss: 5858.6181]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 27 training [time: 25.94s, train loss: 5851.7828]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 28 training [time: 25.66s, train loss: 5846.2326]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 29 training [time: 25.61s, train loss: 5840.5342]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 30 training [time: 25.72s, train loss: 5836.3762]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 31 training [time: 25.81s, train loss: 5831.4665]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 32 training [time: 25.65s, train loss: 5826.3515]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 33 training [time: 26.00s, train loss: 5822.0054]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 34 training [time: 25.27s, train loss: 5818.5471]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 35 training [time: 25.82s, train loss: 5814.4312]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 36 training [time: 25.81s, train loss: 5810.0819]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 37 training [time: 25.82s, train loss: 5807.7266]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 38 training [time: 25.45s, train loss: 5803.9869]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 39 training [time: 26.11s, train loss: 5801.2985]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 40 training [time: 25.86s, train loss: 5798.3783]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 41 training [time: 26.02s, train loss: 5795.3381]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 42 training [time: 26.13s, train loss: 5792.3586]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 43 training [time: 25.52s, train loss: 5789.8296]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 44 training [time: 25.86s, train loss: 5787.6979]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 45 training [time: 26.05s, train loss: 5785.8937]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 46 training [time: 25.78s, train loss: 5782.2815]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 47 training [time: 25.79s, train loss: 5781.2301]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 48 training [time: 25.52s, train loss: 5777.6824]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\nepoch 49 training [time: 25.87s, train loss: 5776.9169]\nSaving current: saved/GRU4Rec-Mar-20-2022_02-28-47.pth\n" ] ], [ [ "# 4. 
Create recommendation results from the trained model\n\nI note the documentation here for anyone who wants to customize it: https://recbole.io/docs/user_guide/usage/case_study.html", "_____no_output_____" ] ], [ [ "external_user_ids = dataset.id2token(\n    dataset.uid_field, list(range(dataset.user_num)))[1:]  # first element in array is 'PAD' (RecBole default) -> remove it", "_____no_output_____" ], [ "import torch\nfrom recbole.data.interaction import Interaction\n\ndef add_last_item(old_interaction, last_item_id, max_len=50):\n    new_seq_items = old_interaction['item_id_list'][-1]\n    if old_interaction['item_length'][-1].item() < max_len:\n        new_seq_items[old_interaction['item_length'][-1].item()] = last_item_id\n    else:\n        new_seq_items = torch.roll(new_seq_items, -1)\n        new_seq_items[-1] = last_item_id\n    return new_seq_items.view(1, len(new_seq_items))\n\ndef predict_for_all_item(external_user_id, dataset, model):\n    model.eval()\n    with torch.no_grad():\n        uid_series = dataset.token2id(dataset.uid_field, [external_user_id])\n        index = np.isin(dataset[dataset.uid_field].numpy(), uid_series)\n        input_interaction = dataset[index]\n        test = {\n            'item_id_list': add_last_item(input_interaction, \n                                          input_interaction['item_id'][-1].item(), model.max_seq_length),\n            'item_length': torch.tensor(\n                [input_interaction['item_length'][-1].item() + 1\n                 if input_interaction['item_length'][-1].item() < model.max_seq_length else model.max_seq_length])\n        }\n        new_inter = Interaction(test)\n        new_inter = new_inter.to(config['device'])\n        new_scores = model.full_sort_predict(new_inter)\n        new_scores = new_scores.view(-1, test_data.dataset.item_num)\n        new_scores[:, 0] = -np.inf  # set scores of [pad] to -inf\n        return torch.topk(new_scores, 10)", "_____no_output_____" ], [ "predict_for_all_item('0109ad0b5a76924a1b58be677409bb601cc8bead9a87b8ce5b08a4a1f5bc71ef', \n                     dataset, model)", "_____no_output_____" ], [ "topk_items = []\nfor external_user_id in external_user_ids:\n    _, topk_iid_list = predict_for_all_item(external_user_id, dataset, model)\n    last_topk_iid_list = topk_iid_list[-1]\n    external_item_list = dataset.id2token(dataset.iid_field, last_topk_iid_list.cpu()).tolist()\n    topk_items.append(external_item_list)\nprint(len(topk_items))", "38915\n" ], [ "external_item_str = [' '.join(x) for x in topk_items]\nresult = pd.DataFrame(external_user_ids, columns=['customer_id'])\nresult['prediction'] = external_item_str\nresult.head()", "_____no_output_____" ], [ "del external_item_str\ndel topk_items\ndel external_user_ids\ndel train_data\ndel valid_data\ndel test_data\ndel model\ndel Trainer\ndel logger\ngc.collect()", "_____no_output_____" ], [ "del dataset\ngc.collect()", "_____no_output_____" ] ], [ [ "# 5. Combine results from the most bought items and the GRU model", "_____no_output_____" ] ], [ [ "submit_df = pd.read_csv('submission.csv')\nsubmit_df.shape", "_____no_output_____" ], [ "submit_df.head()", "_____no_output_____" ], [ "submit_df = pd.merge(submit_df, result, on='customer_id', how='outer')\nsubmit_df.head()", "_____no_output_____" ], [ "submit_df = submit_df.fillna(-1)\nsubmit_df['prediction'] = submit_df.apply(\n    lambda x: x['prediction_y'] if x['prediction_y'] != -1 else x['prediction_x'], axis=1)\nsubmit_df.head()", "_____no_output_____" ], [ "submit_df[submit_df['prediction_y'] != -1]", "_____no_output_____" ], [ "submit_df = submit_df.drop(columns=['prediction_y', 'prediction_x'])\nsubmit_df.head()", "_____no_output_____" ], [ "submit_df.to_csv('submission.csv', index=False)", "_____no_output_____" ] ] ]
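The merge-and-fallback step in Section 5 is the easiest part to get wrong, so here is a self-contained toy sketch of the same logic; the customer ids `u1`-`u3` and the prediction strings are made up for illustration.

```python
import pandas as pd

# Popularity-based default predictions for every customer.
default = pd.DataFrame({'customer_id': ['u1', 'u2', 'u3'],
                        'prediction': ['a b c'] * 3})
# The sequential model only covers customers with enough purchase history.
model = pd.DataFrame({'customer_id': ['u2'], 'prediction': ['x y z']})

merged = pd.merge(default, model, on='customer_id', how='outer').fillna(-1)
# Keep the model prediction where it exists, otherwise fall back to the default.
merged['prediction'] = merged.apply(
    lambda x: x['prediction_y'] if x['prediction_y'] != -1 else x['prediction_x'],
    axis=1)
print(merged[['customer_id', 'prediction']])
# u1 -> 'a b c', u2 -> 'x y z', u3 -> 'a b c'
```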
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
ec979ad00003fc6453ff28e9a286a90b42e48206
265,594
ipynb
Jupyter Notebook
notebooks/generative.ipynb
tam633/geometric-js
ad5c2ca53691eedb0b246ade18a80738a15e79a5
[ "MIT" ]
14
2020-06-18T14:14:10.000Z
2021-08-18T01:59:48.000Z
notebooks/generative.ipynb
tam633/geometric-js
ad5c2ca53691eedb0b246ade18a80738a15e79a5
[ "MIT" ]
null
null
null
notebooks/generative.ipynb
tam633/geometric-js
ad5c2ca53691eedb0b246ade18a80738a15e79a5
[ "MIT" ]
2
2021-03-23T23:43:36.000Z
2021-05-04T08:35:36.000Z
700.775726
93,051
0.865652
[ [ [ "import matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport os\nBASE_DIR=os.path.dirname(os.getcwd())\nimport pandas as pd\nimport sys\nsys.path.append(BASE_DIR)\nimport torch\n\nimport scipy.stats\nfrom scipy.stats import norm\nfrom scipy.special import logsumexp\n\nfrom vae.utils.modelIO import save_model, load_model, load_metadata\nfrom notebooks.utils import PlotParams", "_____no_output_____" ], [ "plotter = PlotParams()\nplotter.set_params()\nDATA_DIR = os.path.join(os.pardir, 'data')\nFIG_DIR = os.path.join(os.pardir, 'figs')\nRES_DIR = os.path.join(os.pardir, 'results')", "_____no_output_____" ], [ "X_test = np.load(os.path.join(DATA_DIR, 'dsprites', 'dsprite_train.npz'))['imgs']\nX_test = torch.tensor(X_test).unsqueeze(1).float() / 255.0\ndigit_size = 64\ndataset = 'dsprites'\n\n# dataset = 'fashion'\n# X_test = torch.load(os.path.join(DATA_DIR, 'fashionMnist', 'FashionMNIST', 'processed', 'test.pt'))\n\n# dataset = 'mnist_bern'\n# X_test = torch.load(os.path.join(DATA_DIR, 'mnist', 'MNIST', 'processed', 'test.pt'))\n\n# X_test = X_test[0].unsqueeze(1).float() / 255.0\n\n# dataset = 'bmnist'\n# X_test = np.loadtxt(os.path.join(DATA_DIR, 'bmnist', 'binarized_mnist_test.amat'))\n# X_test = torch.tensor(X_test.reshape((-1, 28, 28)))\n# X_test = X_test.unsqueeze(1).float() / 255.0\n\n# X_test = torch.nn.functional.pad(X_test, pad=(2, 2, 2, 2))\ndigit_size = 64\n\nX_test = X_test[:10000]", "_____no_output_____" ], [ "def bootstrap(x, low, high, n_samples):\n\tmu = x.mean()\n\tn = len(x)\n\tX = np.random.choice(x, size=n_samples*n).reshape(n_samples, n)\n\tmu_star = X.mean(axis=1)\n\td_star = np.sort(mu_star - mu)\n\n\treturn mu, mu + d_star[int(low*n_samples)], mu + d_star[int(high*n_samples)]", "_____no_output_____" ], [ "def compute_samples(model, data, num_samples, debug=False):\n \"\"\" Sample from importance distribution z_samples ~ q(z|X) and\n compute p(z_samples), q(z_samples) for importance sampling\n \"\"\"\n data = data.cuda()\n z_mean, z_log_sigma = model.encoder(data)\n z_mean = z_mean.cpu().detach().numpy()\n z_log_sigma = z_log_sigma.cpu().detach().numpy()\n z_samples = []\n qz = []\n\n for m, s in zip(z_mean, z_log_sigma):\n z_vals = [np.random.normal(m[i], np.exp(s[i]), num_samples) for i in range(len(m))]\n qz_vals = [norm.pdf(z_vals[i], loc=m[i], scale=np.exp(s[i])) for i in range(len(m))]\n z_samples.append(z_vals)\n qz.append(qz_vals)\n\n z_samples = np.array(z_samples)\n pz = norm.pdf(z_samples)\n qz = np.array(qz)\n\n z_samples = np.swapaxes(z_samples, 1, 2)\n pz = np.swapaxes(pz, 1, 2)\n qz = np.swapaxes(qz, 1, 2)\n\n return z_samples, pz, qz", "_____no_output_____" ], [ "def estimate_logpx_batch(model, data, num_samples, debug=False):\n z_samples, pz, qz = compute_samples(model, data, num_samples)\n assert len(z_samples) == len(data)\n assert len(z_samples) == len(pz)\n assert len(z_samples) == len(qz)\n z_samples = torch.tensor(z_samples).float().cuda()\n\n result = []\n for i in range(len(data)):\n x_predict = model.decoder(z_samples[i]).reshape(-1, digit_size ** 2)\n x_predict = x_predict.cpu().detach().numpy()\n x_predict = np.clip(x_predict, np.finfo(float).eps, 1. - np.finfo(float).eps)\n p_vals = pz[i]\n q_vals = qz[i]\n\n datum = data[i].numpy().reshape(digit_size ** 2)\n\n # \\log p(x|z) = Binary cross entropy\n logp_xz = np.sum(datum * np.log(x_predict + 1e-9) + (1. 
- datum) * np.log(1.0 - x_predict + 1e-9), axis=-1)\n logpz = np.sum(np.log(p_vals + 1e-9), axis=-1)\n logqz = np.sum(np.log(q_vals + 1e-9), axis=-1)\n argsum = logp_xz + logpz - logqz\n logpx = -np.log(num_samples + 1e-9) + logsumexp(argsum)\n result.append(logpx)\n\n return np.array(result)", "_____no_output_____" ], [ "def estimate_logpx(model, data, num_samples, verbosity=0):\n batches = []\n iterations = int(np.ceil(1. * len(data) / 100))\n for b in range(iterations):\n batch_data = data[b * 100:(b + 1) * 100]\n batches.append(estimate_logpx_batch(model, batch_data, num_samples))\n if verbosity and b % max(11 - verbosity, 1) == 0:\n print(\"Batch %d [%d, %d): %.2f\" % (b, b * 100, (b+1) * 100, np.mean(np.concatenate(batches))))\n\n log_probs = np.concatenate(batches)\n mu, lb, ub = bootstrap(log_probs, 0.025, 0.975, 1000)\n\n return mu, lb, ub", "_____no_output_____" ], [ "alphas = np.arange(0.1, 1.0, 0.1)\nmodel_paths = [os.path.join(RES_DIR, dataset+f'_gjs-{a:.1f}') for a in alphas]\nmodel_paths += [os.path.join(RES_DIR, dataset+f'_dgjs-{a:.1f}') for a in alphas]\nmodel_paths += [os.path.join(RES_DIR, dataset+'_kl')]\nmodel_paths += [os.path.join(RES_DIR, dataset+'_kl_reverse')]\nmodel_paths += [os.path.join(RES_DIR, dataset+'_mmd')]\n\n# model_paths += [os.path.join(RES_DIR, 'mnist_kl')]\n# model_paths += [os.path.join(RES_DIR, 'mnist_kl_reverse')]\n# model_paths += [os.path.join(RES_DIR, 'mnist_mmd')]\n\nlog_probs_mu = []\nlog_probs_lb = []\nlog_probs_ub = []\nlog_probs_best = -np.inf\n\nfor model_path in model_paths:\n model = load_model(model_path)\n logpx_mu, logpx_lb, logpx_ub = estimate_logpx(model, X_test, num_samples=128, verbosity=0)\n\n log_probs_mu += [logpx_mu]\n log_probs_lb += [logpx_lb]\n log_probs_ub += [logpx_ub]\n\n if logpx_mu > log_probs_best:\n model_best = model_path\n log_probs_best = logpx_mu\n\n print(model_path)\n print(\"log p(x) = %.2f (%.2f, %.2f)\" % (logpx_mu, logpx_lb, logpx_ub))", "..\\results\\dsprites_gjs-0.1\nlog p(x) = -56.57 (-56.59, -56.54)\n..\\results\\dsprites_gjs-0.2\nlog p(x) = -45.78 (-45.81, -45.75)\n..\\results\\dsprites_gjs-0.3\nlog p(x) = -40.18 (-40.22, -40.15)\n..\\results\\dsprites_gjs-0.4\nlog p(x) = -35.43 (-35.48, -35.37)\n..\\results\\dsprites_gjs-0.5\nlog p(x) = -33.66 (-33.72, -33.60)\n..\\results\\dsprites_gjs-0.6\nlog p(x) = -33.37 (-33.44, -33.31)\n..\\results\\dsprites_gjs-0.7\nlog p(x) = -38.78 (-38.85, -38.69)\n..\\results\\dsprites_gjs-0.8\nlog p(x) = -40.50 (-40.60, -40.40)\n..\\results\\dsprites_gjs-0.9\nlog p(x) = -39.15 (-39.29, -39.02)\n..\\results\\dsprites_dgjs-0.1\nlog p(x) = -63.30 (-63.33, -63.28)\n..\\results\\dsprites_dgjs-0.2\nlog p(x) = -82.24 (-82.27, -82.21)\n..\\results\\dsprites_dgjs-0.3\nlog p(x) = -76.53 (-76.56, -76.50)\n..\\results\\dsprites_dgjs-0.4\nlog p(x) = -70.18 (-70.21, -70.15)\n..\\results\\dsprites_dgjs-0.5\nlog p(x) = -65.53 (-65.55, -65.50)\n..\\results\\dsprites_dgjs-0.6\nlog p(x) = -61.40 (-61.42, -61.37)\n..\\results\\dsprites_dgjs-0.7\nlog p(x) = -56.84 (-56.86, -56.81)\n..\\results\\dsprites_dgjs-0.8\nlog p(x) = -53.70 (-53.73, -53.68)\n..\\results\\dsprites_dgjs-0.9\nlog p(x) = -52.48 (-52.50, -52.45)\n..\\results\\dsprites_kl\nlog p(x) = -42.70 (-42.73, -42.68)\n..\\results\\dsprites_kl_reverse\nlog p(x) = -52.20 (-52.33, -52.08)\n..\\results\\dsprites_mmd\nlog p(x) = -28.36 (-28.44, -28.28)\n" ], [ "yerr_ub1 = np.array(log_probs_ub[:9]) - np.array(log_probs_mu[:9])\nyerr_lb1 = np.array(log_probs_mu[:9]) - np.array(log_probs_lb[:9])\nyerr1 = np.stack([yerr_lb1, 
yerr_ub1]).reshape((2, 9))\n\n# error bars for the dual G-JS models must come from the matching slice (9:18)\nyerr_ub2 = np.array(log_probs_ub[9:18]) - np.array(log_probs_mu[9:18])\nyerr_lb2 = np.array(log_probs_mu[9:18]) - np.array(log_probs_lb[9:18])\nyerr2 = np.stack([yerr_lb2, yerr_ub2]).reshape((2, 9))\n\nplt.errorbar(np.arange(0.1, 1.0, 0.1), log_probs_mu[:9], yerr=yerr1, color='b', fmt='-', capsize=4, label=r'G$^{\\alpha}$-JS')\nplt.errorbar(np.arange(0.1, 1.0, 0.1), log_probs_mu[9:18], yerr=yerr2, color='b', fmt=':', capsize=4, label=r'G$^{\\alpha}$-JS*')\nplt.plot([0.0, 1.0], [log_probs_mu[18], log_probs_mu[18]], 'c', label=r'$KL(q\\parallel p)$')\nplt.fill_between([0.0, 1.0], [log_probs_lb[18], log_probs_lb[18]], [log_probs_ub[18], log_probs_ub[18]], color='c', alpha=0.3)\nplt.plot([0.0, 1.0], [log_probs_mu[19], log_probs_mu[19]], 'g', label=r'$KL(p\\parallel q)$')\nplt.fill_between([0.0, 1.0], [log_probs_lb[19], log_probs_lb[19]], [log_probs_ub[19], log_probs_ub[19]], color='g', alpha=0.3)\nplt.plot([0.0, 1.0], [log_probs_mu[20], log_probs_mu[20]], 'r', label='MMD')\nplt.fill_between([0.0, 1.0], [log_probs_lb[20], log_probs_lb[20]], [log_probs_ub[20], log_probs_ub[20]], color='r', alpha=0.3)\nplt.xlim(0, 1)\nplt.xlabel(r'$\\alpha$')\nplt.ylabel(r'log$p_{\\theta}(X)$')\n_ = plt.legend()\nplt.savefig(os.path.join(FIG_DIR, dataset+'_model_evidence.pdf'), bbox_inches='tight')", "_____no_output_____" ], [ "model = load_model(model_best)\n\nn = 10\nfigure = np.zeros((digit_size * n, digit_size * n))\n\nfor j in range(n):\n    z_sample = np.random.normal(size=10 * 100).reshape(100, 10)\n    z_sample = torch.tensor(z_sample).float().cuda()\n    x_decoded = model.decoder(z_sample)\n    x_decoded = x_decoded.cpu().detach().numpy()\n    digit = x_decoded.reshape(100, digit_size, digit_size, 1)\n    for i in range(n):\n        d_x = i * digit_size\n        d_y = j * digit_size\n        figure[d_x:d_x + digit_size, d_y:d_y + digit_size] = digit[i, :, :, 0]\n\nplt.figure(figsize=(10, 10))\nplt.imshow(figure, cmap='Greys_r')\nplt.show()", "_____no_output_____" ] ] ]
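The `estimate_logpx_batch` routine above is the standard importance-sampling estimate $\log p(x) \approx -\log M + \mathrm{logsumexp}_m[\log p(x|z_m) + \log p(z_m) - \log q(z_m)]$. A one-dimensional toy version makes the estimator easy to verify against a closed form; the Gaussian encoder/decoder stand-ins and all numbers below are illustrative assumptions.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
M = 100000
x = 1.5                               # one observed "data point"
mu_q, sigma_q = 1.0, 0.8              # stand-in encoder output q(z|x)
z = rng.normal(mu_q, sigma_q, M)      # z_m ~ q(z|x)

log_q = -0.5 * ((z - mu_q) / sigma_q) ** 2 - np.log(sigma_q) - 0.5 * np.log(2 * np.pi)
log_prior = -0.5 * z ** 2 - 0.5 * np.log(2 * np.pi)        # p(z) = N(0, 1)
log_lik = -0.5 * (x - z) ** 2 - 0.5 * np.log(2 * np.pi)    # p(x|z) = N(z, 1)

log_px = -np.log(M) + logsumexp(log_lik + log_prior - log_q)
# Exact marginal for this toy model: p(x) = N(x; 0, sqrt(2)), log p(1.5) ~ -1.83
print(f"estimated log p(x) = {log_px:.3f}")
```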
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ec97b62e3c9da6c7eb9c3d81bfbd10fc2d888200
519,021
ipynb
Jupyter Notebook
final_project/final_project_2018-2019.ipynb
TommasoRonconi/P1.4_seed
8c91c7e03f29014cd26af747ca8ecf82a581d6c4
[ "CC-BY-4.0" ]
null
null
null
final_project/final_project_2018-2019.ipynb
TommasoRonconi/P1.4_seed
8c91c7e03f29014cd26af747ca8ecf82a581d6c4
[ "CC-BY-4.0" ]
null
null
null
final_project/final_project_2018-2019.ipynb
TommasoRonconi/P1.4_seed
8c91c7e03f29014cd26af747ca8ecf82a581d6c4
[ "CC-BY-4.0" ]
null
null
null
65.673921
45,636
0.790797
[ [ [ "# Final project, Numerical Analysis 2018-2019\n\n\n## Project description\n\nIn this project, we would like to compare the performance of some embarassingly simple algorithms to solve a classification problem based on the MNIST database. \n\nThe abstract aim of the program is to write a function:\n\n```\nresult = classify(image)\n```\n\nthat takes as input a small grey scale image of a hand-written digit (from the MNIST database), and returns the digit corresponding to the content of the image.\n\nAn example of the images we'll be working on is the following:\n\n![mnist examples](https://m-alcu.github.io/assets/mnist.png)\n\nSome background on the MNIST database (from wikipedia):\n\n\n## MNIST database\n\n*From Wikipedia, the free encyclopedia*\n\nThe MNIST database (Modified National Institute of Standards and Technology database) is a large database of handwritten digits that is commonly used for training various image processing systems. The database is also widely used for training and testing in the field of machine learning. It was created by \"re-mixing\" the samples from NIST's original datasets. The creators felt that since NIST's training dataset was taken from American Census Bureau employees, while the testing dataset was taken from American high school students, it was not well-suited for machine learning experiments. Furthermore, the black and white images from NIST were normalized to fit into a 28x28 pixel bounding box and anti-aliased, which introduced grayscale levels.\n\n## MNIST sample images.\n\nThe MNIST database contains 60,000 training images and 10,000 testing images. Half of the training set and half of the test set were taken from NIST's training dataset, while the other half of the training set and the other half of the test set were taken from NIST's testing dataset. There have been a number of scientific papers on attempts to achieve the lowest error rate; one paper, using a hierarchical system of convolutional neural networks, manages to get an error rate on the MNIST database of 0.23%. The original creators of the database keep a list of some of the methods tested on it. In their original paper, they use a support vector machine to get an error rate of 0.8%. An extended dataset similar to MNIST called EMNIST has been published in 2017, which contains 240,000 training images, and 40,000 testing images of handwritten digits and characters.\n\n## Algorithm\n\nWe start by defining the distance between two images. Ideally, a distance function between two images is zero when the images are the same, and greater than zero when the images are different. \n\nThe bigger the distance, the more different the images should be. Ideally, the distance between an image of the number `9` should be closer to an image of the number `8` than to an image of the number `1` (the digits `9` and `8`, as images, differ by the fact that the first has one closed loop, while the second has two closed loops, while the digit `1` is mostly a straight line). 
Two different images representing the same number should be even closer (i.e., the distance function should return a \"small\" number).\n\nGiven a distance and a training set of images for which we know everything, the simplest algorithm we can think of to classify an image `z` is the following: given a set of training images (`x_train`) for which we know the digit they represent (`y_train`), measure the distance between `z` and all images in `x_train`, and classify the image `z` as representing the same digit as the image that is closest to `z` in `x_train`:\n\nParameters of the algorithm:\n\n- `x_train`\n- `y_train`\n- a distance function `dist` \n\nInput of the function\n\n- `z`\n\nOutput of the function\n\n- `digit`\n\nwhere \n\n```\ndef classify(z):\n    all_distances = array([dist(x, z) for x in x_train])\n    digit = y_train[argmin(all_distances)]\n    return digit\n```\n\nWe will experiment with different distances, and we will try to improve the algorithm above in a step-by-step fashion.\n\n## Data description\n\nEach image in the MNIST dataset represents a handwritten digit, in the form of a matrix of `28x28` values between zero and one, representing gray scale values (zero = white, one = black).\n\nWe use an array of `60.000x28x28` floating point values to collect all training images, and an array of `60.000` digits containing the (correct) value of the training digits (between 0 and 9 inclusive).\n\nThe testing images are instead collected into two arrays of size `10.000x28x28` and `10.000` respectively.", "_____no_output_____" ] ], [ [ "%pylab inline", "Populating the interactive namespace from numpy and matplotlib\n" ], [ "rcParams['font.size'] = 12\nrcParams['figure.figsize'] = [9.0,6.0]", "_____no_output_____" ], [ "# input image dimensions\nimg_rows, img_cols = 28, 28\n\n# Uncomment the following lines if you have keras installed. Otherwise you can \n# use the file I uploaded: mnist.npz\nimport keras\nfrom keras.datasets import mnist\nimport keras.backend as K", "Using TensorFlow backend.\n" ], [ "# the data, split between train and test sets\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\n\nif K.image_data_format() == 'channels_first':\n    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)\n    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)\n    input_shape = (img_rows, img_cols)\nelse:\n    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols)\n    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols)\n    input_shape = (img_rows, img_cols)\n\nx_train = x_train.astype('float32')\nx_test = x_test.astype('float32')\nx_train /= 255\nx_test /= 255\n\nnp.savez_compressed('mnist.npz', x_train, y_train, x_test, y_test)\n\narc = load('mnist.npz')\n\nx_train = arc['arr_0']\ny_train = arc['arr_1']\nx_test = arc['arr_2']\ny_test = arc['arr_3']\n\nprint(x_train.shape, y_train.shape)\nprint(x_test.shape, y_test.shape)", "(60000, 28, 28) (60000,)\n(10000, 28, 28) (10000,)\n" ] ], [ [ "# Plotting one image\n\nHow do we plot the images? `pyplot`, which has been imported by the `%pylab inline` magic in the first cell, contains a command called `imshow`, which can be used to plot images. \n\nIn this case we know it is a greyscale image, with zero representing white and one representing black, so we use a colormap that goes from white to black, i.e., `gray_r` where `_r` stands for reversed. 
", "_____no_output_____" ] ], [ [ "# Show image number 15, and write in the title what digit it should correspond to\nN=15\nimshow(x_train[N], cmap='gray_r')\n_ = title('Hand written digit '+str(y_train[N]))", "_____no_output_____" ] ], [ [ "**IF YOU DON'T HAVE ENOUGH COMPUTATIONAL POWER, RUN THE EXERCISES ONLY UP TO WHAT IS SUSTAINABLE FOR YOUR PC**\n\nGeneral guidelines:\n\n- Time all functions you construct, and try to make them run as fast as possible by precomputing anything that can be precomputed\n- Extra points are gained if you reduce the complexity of the given algorithms in any possible way, for example by exploiting linearity, etc.\n- If something takes too long to execute, make sure you time it on a smaller set of input data, and give estimates of how long it would take to run the full thing (without actually running it). Plot only the results you manage to run on your PC.", "_____no_output_____" ], [ "### Definitions and imports to use for timing et al.:", "_____no_output_____" ] ], [ [ "from timeit import timeit\n\ndef wrapper ( func, *args, **kwargs ) :\n def wrapped () :\n return func ( *args, **kwargs )\n return wrapped\n\ndef randindex () :\n return randint( 0, x_train.shape[0] )", "_____no_output_____" ] ], [ [ "# Assignment 1\n\nImplement the following distance functions\n\n- d_infty $$ d_{\\infty}(a,b) := \\max_{i,j} |b_{ij}-a_{ij}|$$\n- d_one $$ d_1(a,b) := \\sum_{i,j} |b_{ij}-a_{ij}|$$\n- d_two $$ d_2(a,b) := \\sqrt{\\sum_{i,j} |b_{ij}-a_{ij}|^2}$$\n\nthat take two `(28,28)` images in input, and return a non-negative number.", "_____no_output_____" ] ], [ [ "print(x_train[1].shape)\n(x_train[1]-x_train[2]).shape\nx_train[1].size\ndd = x_train[1]-x_train[2]\ndd.size\ncc = reshape(dd, dd.size)\ncc.shape\nmax(absolute(cc))\nabsolute( 1.0 - 0.4 )\nx_train.shape[0]\n# imshow(x_train[2], cmap='gray_r')\n# cc = reshape(x_train[1].size,x_train[1]-x_train[2])\n# ff = array([ cc for cc in ( x_train[1] - x_train[2] ) ])\n# ff.shape", "(28, 28)\n" ] ], [ [ "### Function $d_\\infty(a, b)$:", "_____no_output_____" ] ], [ [ "def d_infty ( aa, bb ) :\n return amax( absolute( bb - aa ) )", "_____no_output_____" ] ], [ [ "**Comment:** Other implementation tried\n```python\ndef d_infty ( aa, bb ) :\n return max( reshape( absolute( bb - aa ), aa.size ) )\n```\ntakes approximately 10 times longer to run", "_____no_output_____" ], [ "### Function $d_1(a, b)$:", "_____no_output_____" ] ], [ [ "def d_one ( aa, bb ) :\n return sum( absolute( bb - aa ) )", "_____no_output_____" ] ], [ [ "### Function $d_2(a,b)$:", "_____no_output_____" ] ], [ [ "def d_two ( aa, bb ) :\n return sqrt( sum( ( bb - aa ) * ( bb - aa ) ))", "_____no_output_____" ] ], [ [ "**Comment:** Other implementations tried:\n```python\ndef d_two ( aa, bb ) :\n cc = bb - aa\n return sqrt( sum( cc * cc ))\n\ndef d_two ( aa, bb ) :\n return sqrt( sum( ( bb - aa ) ** 2 ))\n```\nThey all take approximately the same amount of time to run thus I left the one above", "_____no_output_____" ], [ "### Timing:", "_____no_output_____" ] ], [ [ "%timeit d_infty( x_train[ randindex() ], x_train[ randindex() ] )\n%timeit d_one( x_train[ randindex() ], x_train[ randindex() ] )\n%timeit d_two( x_train[ randindex() ], x_train[ randindex() ] )", "10.7 µs ± 376 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)\n10.7 µs ± 57.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)\n12.8 µs ± 34.7 ns per loop (mean ± std. dev. 
of 7 runs, 100000 loops each)\n" ], [ "d_infty_time = wrapper( d_infty, x_train[ randindex() ], x_train[ randindex() ] )\nd_one_time = wrapper( d_one, x_train[ randindex() ], x_train[ randindex() ] )\nd_two_time = wrapper( d_two, x_train[ randindex() ], x_train[ randindex() ] )\n\nprint( \"Func d_infty: \", timeit( d_infty_time, number = 10000 )/10000, \" sec\" )\nprint( \"Func d_one: \", timeit( d_one_time, number = 10000 )/10000, \" sec\" )\nprint( \"Func d_two: \", timeit( d_two_time, number = 10000 )/10000, \" sec\" )", "_____no_output_____" ] ], [ [ "# Assignment 2\n\nWrite a function that, given a number `N` and a distance function `dist`, computes the distance matrix D of shape `(N,N)` between the first `N` entries of `x_train`:\n\n```\nD[i,j] = dist(x_train[i], x_train[j])\n```\n\nperforming the **minimum** number of operations (i.e., avoid computing a distance if it has already been computed before, keeping in mind that dist(a,b) = dist(b,a)).", "_____no_output_____" ] ], [ [ "def dist_mat ( N, dist ):\n    D = zeros( ( N, N ) )\n    for ii in range( 0, N ) :\n        for jj in range( ii+1, N ) :\n            D[ ii, jj ] = dist( x_train[ ii ], x_train[ jj ] )\n            D[ jj, ii ] = D[ ii, jj ]\n    return D", "_____no_output_____" ] ], [ [ "### Timing:", "_____no_output_____" ] ], [ [ "%timeit dist_mat(100, d_infty)\n%timeit dist_mat(100, d_one)\n%timeit dist_mat(100, d_two)", "_____no_output_____" ], [ "dist_mat_100_infty_time = wrapper( dist_mat, 100, d_infty )\nprint(\"Func dist_mat(100, d_infty): \", timeit(dist_mat_100_infty_time, number=100)/100, \" sec\")\n\ndist_mat_100_one_time = wrapper( dist_mat, 100, d_one )\nprint(\"Func dist_mat(100, d_one): \", timeit(dist_mat_100_one_time, number=100)/100, \" sec\")\n\ndist_mat_100_two_time = wrapper( dist_mat, 100, d_two )\nprint(\"Func dist_mat(100, d_two): \", timeit(dist_mat_100_two_time, number=100)/100, \" sec\")", "_____no_output_____" ], [ "D = dist_mat( 10, d_infty )\nD.shape", "_____no_output_____" ] ], [ [ "# Assignment 3\n\nCompute and plot the three distance matrices\n\n- Dinfty\n- D1\n- D2\n\nfor the first 100 images of the training set, using the function `imshow` applied to the three matrices.", "_____no_output_____" ] ], [ [ "Dinfty = dist_mat( 100, d_infty )\n# _ = figure(figsize=(8,8), constrained_layout=True)\nimshow( Dinfty, cmap = 'gray_r' )\n_ = title('First 100 images distance matrix using d_infty')", "_____no_output_____" ], [ "D1 = dist_mat( 100, d_one )\n# _ = figure(figsize=(8,8), constrained_layout=True)\nimshow( D1, cmap = 'gray_r' )\n_ = title('First 100 images distance matrix using d_one')", "_____no_output_____" ], [ "D2 = 
dist_mat( 100, d_two )\n# _ = figure(figsize=(8,8), constrained_layout=True)\nimshow( D2, cmap = 'gray_r' )\n_ = title('First 100 images distance matrix using d_two')", "_____no_output_____" ] ], [ [ "# Assignment 4\n\nUsing only a distance matrix, apply the algorithm described above and compute the efficiency of the algorithm, i.e., write a function that:\n\nGiven a distance matrix with shape `(N,N)`, constructed on the first `N` samples of the `x_train` set, counts the number of failures of the **leave one out** strategy, i.e., \n\n- set `error_counter` to zero\n\n- for every line `i` of the matrix:\n\n    - find the index `j` (different from `i`) for which `D[i,k] >= D[i,j]` for all `k` different from `i` and `j`.\n\n    - if `y_train[j]` is different from `y_train[i]`, increment `error_counter` by one\n\n- return the error: error_counter/N\n\n- apply the function above to the 3 different distance matrices you computed before", "_____no_output_____" ] ], [ [ "def loo_cv ( dmat ) :\n    error_counter = 0\n    for ii in range( 0, dmat.shape[0] ) :\n        msk = full( dmat.shape[1], False )\n        msk[ii] = True\n        jj = argmin( ma.array( dmat[ii,:], mask=msk ) )\n        if ( y_train[ii] != y_train[jj] ) :\n            error_counter += 1\n    return error_counter/dmat.shape[0]", "_____no_output_____" ], [ "%time loo_cv( Dinfty )\n%time loo_cv( D1 )\n%time loo_cv( D2 )", "_____no_output_____" ] ], [ [ "- some tests and notes, change cell to code (`Y`) to execute:", "_____no_output_____" ], [ "ba = True\nbb = False\nia = 1\nib = 2\nic = 1\nia += ( ia != ic )\nia += ( ib != ic )\nia += ( ib != ic )\nia", "_____no_output_____" ], [ "D_test = dist_mat( 10, d_two )\nimshow( D_test, cmap = 'gray_r' )\n_ = xticks(linspace(0, 9, 10), labels=y_train[:10])\n_ = yticks(linspace(0, 9, 10), labels=y_train[:10])", "_____no_output_____" ], [ "%time loo_cv( dist_mat( 1, d_two ) )\n%time loo_cv( dist_mat( 100, d_two ) )\n%time loo_cv( dist_mat( 200, d_two ) )\n%time loo_cv( dist_mat( 400, d_two ) )\n%time loo_cv( dist_mat( 800, d_two ) )", "_____no_output_____" ], [ "### Applying cross-validation to the 3 distance matrices:", "_____no_output_____" ] ], [ [ "Dinfty_cv = loo_cv( Dinfty )\nD1_cv = loo_cv( D1 )\nD2_cv = loo_cv( D2 )\nprint( Dinfty_cv, D1_cv, D2_cv )", "0.58 0.17 0.17\n" ] ], [ [ "# Assignment 5\n\nRun the algorithm implemented above for N=100,200,400,800,1600 on the three different distances, and plot the three error rates as a function of N (i.e., compute the distance matrix, and compute the efficiency associated with the distance matrix).\n\nYou should get an error like:\n```\n[[ 0.58 0.17 0.17 ]\n [ 0.52 0.145 0.135 ]\n [ 0.4425 0.15 0.135 ]\n [ 0.4 0.145 0.12875 ]\n [ 0.369375 0.1025 0.09375 ]]\n```\nwhere each column represents a different norm.", "_____no_output_____" ] ], [ [ "Nsize = array([100 * 2**ii for ii in range(0,5)])\nefficiency = zeros((len(Nsize),3))\nfor ii in range(0,len(Nsize)) :\n    efficiency[ii] = [loo_cv( dist_mat( Nsize[ii], d_infty ) ), \n                      loo_cv( dist_mat( Nsize[ii], d_one ) ), \n                      loo_cv( dist_mat( Nsize[ii], d_two ) )]\nprint(efficiency)\nsave(\"efficiency.npy\", efficiency)", "_____no_output_____" ], [ "efficiency = load(\"efficiency.npy\")", "_____no_output_____" ], [ "_ = figure( figsize=(9,6), constrained_layout=True )\nplot(Nsize, 1-efficiency[:,0], label=\"$L_\\infty$\")\nplot(Nsize, 1-efficiency[:,1], label=\"$L_1$\")\nplot(Nsize, 1-efficiency[:,2], label=\"$L_2$\")\nlegend(loc='best', fontsize=18)\n_ = xlabel(\"$N_{size}$\", fontsize=18)\n_ = ylabel(\"$\\epsilon_d$\", fontsize=18)", "_____no_output_____" ] ], [ 
[ "**In the next assignments, optional points are given if you manage to make the algorithm run faster, by pre-computing everything you can precompute in advance**", "_____no_output_____" ], [ "# Assignment 6\n\nIn principle, it should be possible to decrease the error by using a better norm. From the table above, it is clear that the L2 distance works better than the L1 distance, which works better than the Linfty distance.\n\nHowever, *none of these distances exploit the fact that the image is a two-dimensional object*, and that there is information also in the **neighboring** information of the pixels. \n\nOne way to exploit this, is to interpret the image as a continuous function with values between zero and one, defined on a square domain `\\Omega=[0,27]x[0,27]`.\n\n$$ f: \\Omega \\to R $$\n\n- Implement a function that computes an approximation of the $H^1$ norm distance on the renormalized images. Given two images $f_1$ and $f_2$\n - Compute $$a = \\frac{f_1}{\\int_\\Omega f_1}$$, $$b=\\frac{f_2}{\\int_\\Omega f_2}$$\n - Define the $H^1$ distance as\n $$\n d_{H^1}(f_1,f_2) := \\sqrt{\\int_\\Omega |\\nabla(a-b)|^2+ (a-b)^2}\n $$\n using the algorithm you prefer (or the library you prefer) to compute the gradients and the integrals. Notice that $\\nabla f = (\\partial f/\\partial x, \\partial f/\\partial y)$ is a vector valued function, and $|\\nabla g|^2 := (\\partial g/\\partial x)^2 + (\\partial g/\\partial y)^2$\n\n- Compute the distance matrix and the efficiency for this distance for N=100,200,400,800,1600", "_____no_output_____" ] ], [ [ "def normalize ( image ) :\n integral = numpy.sum( image )\n return image/integral", "_____no_output_____" ] ], [ [ "Testing function `normalize`:", "_____no_output_____" ] ], [ [ "%timeit normalize( x_train[randindex()] )", "8.25 µs ± 197 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)\n" ], [ "def d_H1 ( aa, bb ) :\n cc = ( normalize( aa ) - normalize( bb ) ).reshape( img_rows, img_cols )\n absgrad = absolute( gradient( cc ) )\n# absgrad = absolute( gradient( cc.reshape( img_rows, img_cols ) ) )\n return sqrt( sum( absgrad[0] * absgrad[0] + absgrad[1] * absgrad[1] + cc * cc ) )", "_____no_output_____" ] ], [ [ "**Note** that we added in a second time the `reshape( img_rows, img_cols )` to guarantee this distance will still work with the ball-tree algorithm that uses a 2D array thus requires the image to be reshaped to `(img_rows * img_cols)`.\nWe checked that by adding this, the function `d_H1` takes approximately $1 \\pm 0.5 \\mu s$ more than the version without it, thus the overhead is negligible.", "_____no_output_____" ], [ "### Timing:", "_____no_output_____" ] ], [ [ "%timeit d_H1( x_train[1], x_train[2] )", "66.5 µs ± 578 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)\n" ], [ "%timeit d_H1( x_train[1], x_train[2] )", "65.3 µs ± 666 ns per loop (mean ± std. dev. 
[ "### Efficiency:", "_____no_output_____" ] ], [ [ "eff_DH1 = array([])\nfor ii in range(0, len(Nsize)) :\n    eff_DH1 = append(eff_DH1, loo_cv( dist_mat(Nsize[ii], d_H1) ) )\nsave(\"eff_DH1.npy\", eff_DH1)", "_____no_output_____" ], [ "eff_DH1 = load(\"eff_DH1.npy\")", "_____no_output_____" ], [ "# _ = figure( figsize=(9,6), constrained_layout=True )\nplot(Nsize, 1-efficiency[:,0], label=\"$L_\\infty$\")\nplot(Nsize, 1-efficiency[:,1], label=\"$L_1$\")\nplot(Nsize, 1-efficiency[:,2], label=\"$L_2$\")\nplot(Nsize, 1-eff_DH1, label=\"$H_1$\")\nlegend(loc='best', ncol=2)\n_ = xlabel(\"$N_{size}$\")\n_ = ylabel(\"$\\epsilon_d$\")", "_____no_output_____" ] ], [ [ "## Assignment 7\n\nAn even better improvement on the previous distance function is given by the following algorithm\n\n- Given two images $f1$ and $f2$:\n - Compute $$a = \frac{f_1}{\int_\Omega f_1}$$, $$b=\frac{f_2}{\int_\Omega f_2}$$\n - Solve \n $$\n -\Delta \phi = a - b \qquad \text{ in } \Omega\n $$\n $$\n \phi = 0 \text{ on } \partial\Omega\n $$\n - Define the *Monge Ampere* distance\n $$\n d_{MA}(f_1,f_2) = \int_\Omega (a+b)|\nabla \phi|^2\n $$\n\n- Compute the distance matrix and the efficiency for this distance for N=100,200,400,800,1600", "_____no_output_____", "### Solution:\nUsing the finite difference method we can rewrite the Poisson equation above as:\n$$\n-\nabla^2\phi \approx \frac{1}{h^2} \bigl[ -\phi_{i-1,j} + 2 \phi_{i,j} - \phi_{i+1,j} - \phi_{i,j-1} + 2 \phi_{i,j} - \phi_{i,j+1}\bigr] = (a - b)_{i,j}\n$$\nwhich is a linear system $A \phi = f$ where $f = (a - b)_{i,j}$ has to be re-written as a one-dimensional array with $N^2$ elements.", "_____no_output_____", "### Build matrix A:\nIt will have dimension $N^2 \times N^2$, so if our images are 28 pixel$^2$ then", "_____no_output_____" ] ], [ [ "dimA = img_rows * img_cols\nhh = 1./dimA", "_____no_output_____" ] ], [ [ "Now, we know that the diagonal will contain all 4's (since we have $2\phi_{i,j} + 2\phi_{i,j}$); let's then define the corresponding array, which will have dimension $N^2$:", "_____no_output_____" ] ], [ [ "diag_el = 4*ones((dimA,)) ", "_____no_output_____" ] ], [ [ "Now we need to build the off-diagonal terms. First come those containing the $(i,j\pm1)$ terms of the linear system, which will have all elements $=-1$. Since these *off-diagonals* will be shifted by N positions, their length must be $N^2 - N$", "_____no_output_____" ] ], [ [ "offdiag_el_j = -1*ones((dimA-img_rows,))", "_____no_output_____" ] ], [ [ "Finally we have to construct the *off-diagonals* containing the $(i\pm1,j)$ terms, which will have all the elements $=-1$ except for the $(N-1)^\text{th}$ ones that are set to 0 because of the boundary conditions.
These *off-diagonal* arrays will be shifted by $1$ position, so their length has to be $N^2-1$: ", "_____no_output_____" ] ], [ [ "offdiag_el_i = -1 * ones(dimA-1)\noffdiag_el_i[where([(ii%img_rows) == 0 for ii in range(1,dimA)])] = 0", "_____no_output_____" ] ], [ [ "We can now build matrix **A**", "_____no_output_____" ] ], [ [ "A = ( diag( offdiag_el_j, -img_rows ) + diag( offdiag_el_i, -1 ) + diag( diag_el, 0 ) )\nA += ( diag( offdiag_el_i, +1 ) + diag( offdiag_el_j, +img_rows ) )\nA /= hh*hh\n\n# Change first row of the matrix A\nA[0,:] = 0\nA[:,0] = 0\nA[0,0] = 1\n\n# Change last row of the matrix A\nA[-1,:] = 0\nA[:,-1] = 0\nA[-1,-1] = 1", "_____no_output_____" ] ], [ [ "### Build function f(a,b):", "_____no_output_____" ] ], [ [ "def f ( f1, f2 ) :\n    tmp_f = normalize( f1 ) - normalize( f2 )\n    tmp_f[0] = 0\n    tmp_f[-1] = 0\n    return tmp_f\n#    return normalize( f1 ) - normalize( f2 )", "_____no_output_____" ], [ "%timeit f( x_train[ randindex()], x_train[ randindex()] )", "18.3 µs ± 113 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)\n" ] ], [ [ "### Build solver:", "_____no_output_____", "First we pre-compute the LU decomposition of matrix A:", "_____no_output_____" ] ], [ [ "def LU(A):\n    A = A.copy()\n    N=len(A)\n    for k in range(N-1):\n        if (abs(A[k,k]) < 1e-15):\n            raise RuntimeError(\"Null pivot\")\n        \n        A[k+1:N,k] /= A[k,k]\n        for j in range(k+1,N):\n            A[k+1:N,j] -= A[k+1:N,k]*A[k,j]\n    \n    L=tril(A)\n    for i in range(N):\n        L[i,i]=1.0\n    U = triu(A)\n    return L, U\n\nL, U = LU(A)", "_____no_output_____" ] ], [ [ "Here we pre-compute the inverse of the matrices' L & U diagonals:", "_____no_output_____" ] ], [ [ "_x = zeros((img_rows * img_cols,))\nN = len(L)\n_L = ones(N)\n_U = ones(N)\n\nfor i in range(0,N):\n    _L[i] /= L[i,i]\n    _U[i] /= U[i,i]", "_____no_output_____" ] ], [ [ "Solve lower triangle:", "_____no_output_____" ] ], [ [ "def L_solve(L,rhs):\n    x = _x \n    x[0] = rhs[0]*_L[0]\n    for i in range(1,N):\n        x[i] = (rhs[i] - dot(L[i, 0:i], x[0:i]))*_L[i]\n    \n    return x", "_____no_output_____" ] ], [ [ "Old implementation:\n```python\ndef L_solve(L,rhs):\n    x = zeros_like(rhs)\n    N = len(L)\n    \n    x[0] = rhs[0]/L[0,0]\n    for i in range(1,N):\n        x[i] = (rhs[i] - dot(L[i, 0:i], x[0:i]))/L[i,i]\n    \n    return x\n```", "_____no_output_____", "- **Timing** `L_solve`", "_____no_output_____" ] ], [ [ "rhs = f( x_train[ randindex()], x_train[ randindex()] ).reshape((img_rows*img_cols))\n%timeit L_solve(L, rhs)", "2.76 ms ± 43.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n" ] ], [ [ "Solve the upper triangle:", "_____no_output_____" ] ], [ [ "def U_solve (U,rhs):\n    x = _x \n    x[-1] = rhs[-1]*_U[-1]\n    for i in reversed(range(N-1)):\n        x[i] = (rhs[i] -dot(U[i, i+1:N], x[i+1:N]))*_U[i]\n    \n    return x", "_____no_output_____" ] ], [ [ "Old implementation:\n```python\ndef U_solve (U,rhs):\n    x = zeros_like(rhs)\n    N=len(U)\n    \n    x[-1] = rhs[-1]/U[-1,-1]\n    for i in reversed(range(N-1)):\n        x[i] = (rhs[i] -dot(U[i, i+1:N], x[i+1:N]))/U[i,i]\n    \n    return x\n```", "_____no_output_____", "- **Timing** `U_solve`", "_____no_output_____" ] ], [ [ "rhs = f( x_train[ randindex()], x_train[ randindex()] ).reshape((img_rows*img_cols))\nw = L_solve(L, rhs)\n%timeit U_solve(U, w)", "1.14 ms ± 7.98 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\n" ] ], [
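[ "An alternative worth sketching (an addition, not part of the original solution): the same pentadiagonal system can be assembled in sparse form, where the factorisation and the triangular solves scale far better than the dense loops above. The sketch reuses `diag_el`, `offdiag_el_i`, `offdiag_el_j`, `hh` and `img_rows` as defined above:", "_____no_output_____" ] ], [ [ "# Hedged sparse sketch: assemble A with scipy.sparse, factorise with SuperLU.\nimport scipy.sparse as sp\nimport scipy.sparse.linalg as spla\n\ninv_h2 = 1.0/(hh*hh)\nA_sp = sp.diags( [offdiag_el_j*inv_h2, offdiag_el_i*inv_h2, diag_el*inv_h2,\n                  offdiag_el_i*inv_h2, offdiag_el_j*inv_h2],\n                 [-img_rows, -1, 0, 1, img_rows], format='lil' )\n# same boundary rows/columns as in the dense version\nA_sp[0,:] = 0; A_sp[:,0] = 0; A_sp[0,0] = 1.0\nA_sp[-1,:] = 0; A_sp[:,-1] = 0; A_sp[-1,-1] = 1.0\nlu_sp = spla.splu( sp.csc_matrix(A_sp) )\n# usage: phi_flat = lu_sp.solve( rhs )   # same rhs as in the dense timing above", "_____no_output_____" ] ], [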
[ "Solve for $\phi$ by first solving the lower triangle and then the upper triangle. Note that we reshape the result after each solve so that the function also works with the `BallTree` algorithm (reshaping adds little to no overhead: of the order of a $\mu s$)", "_____no_output_____" ] ], [ [ "def phi ( f1, f2 ) :\n    w = L_solve(L, f(f1, f2).reshape((img_rows*img_cols)))\n    return U_solve(U, w).reshape((img_rows,img_cols))", "_____no_output_____" ], [ "%timeit phi( x_train[randindex()], x_train[randindex()])", "4.13 ms ± 50 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n" ] ], [ [ "### Define Monge-Ampere distance:\n\n$$\nd_{MA}(f_1,f_2) = \int_\Omega (a+b)|\nabla \phi|^2\n$$", "_____no_output_____" ] ], [ [ "def d_MA ( aa, bb ) :\n    cc = ( normalize( aa ) + normalize( bb ) ).reshape( img_rows, img_cols )\n    grphi = array( gradient( phi( aa, bb ) ) )\n    return sum( cc * grphi * grphi )", "_____no_output_____" ] ], [ [ "- Timing `d_MA`", "_____no_output_____" ] ], [ [ "%timeit d_MA( x_train[randindex()], x_train[randindex()] )", "4.24 ms ± 83.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n" ] ], [ [ "- distance matrix Monge-Ampere (first 100 entries)", "_____no_output_____" ] ], [ [ "%timeit DMA = dist_mat(100, d_MA)\nimshow( DMA, cmap = 'gray_r' )\n_ = xticks(linspace(0, 9, 10), labels=y_train[:10])\n_ = yticks(linspace(0, 9, 10), labels=y_train[:10])", "_____no_output_____" ] ], [ [ "- efficiency (all together!)", "_____no_output_____", "**Note** that computing the distance matrix (function `dist_mat`) takes approximately $t_{dist} \cdot N^2 /2$ (where $N$ is the size of the matrix).\nSince the Monge-Ampere distance takes around $4\ ms$ to run for a single pair of images, computing the distance matrix needed to estimate the efficiency with $N = 1600$ will take approximately $2\div3$ hours (indeed $4\ ms \cdot 1600^2/2 \approx 5 \cdot 10^3\ s$ of pure distance evaluations, plus overhead); we therefore recommend loading the precomputed numpy array.", "_____no_output_____" ] ], [ [ "eff_dMA = array([])\nfor ii in range(0, len(Nsize)) :\n    eff_dMA = append(eff_dMA, loo_cv( dist_mat(Nsize[ii], d_MA) ) )\nsave(\"eff_dMA.npy\", eff_dMA)", "_____no_output_____" ], [ "eff_dMA = load(\"eff_dMA.npy\")", "_____no_output_____" ], [ "# _ = figure( figsize=(9,6), constrained_layout=True )\nxscale('log')\nyscale('log')\nplot(Nsize, 1-efficiency[:,0], label=\"$L_\\infty$\")\nplot(Nsize, 1-efficiency[:,1], label=\"$L_1$\")\nplot(Nsize, 1-efficiency[:,2], label=\"$L_2$\")\nplot(Nsize, 1-eff_DH1, label=\"$H_1$\")\nplot(Nsize, 1-eff_dMA, label=\"$MA$\")\nlegend(loc='best', ncol=2)\n_ = xlabel(\"$N_{size}$\")\n_ = ylabel(\"$\\epsilon_d$\")", "_____no_output_____" ] ], [ [ "## A progress bar ...\n...
because I'm anxious and want to know how long to the end of the process.\n\n(Credits: [this guy](https://github.com/kuk/log-progress))", "_____no_output_____" ] ], [ [ "def log_progress(sequence, every=None, size=None, name='Items'):\n from ipywidgets import IntProgress, HTML, VBox\n from IPython.display import display\n\n is_iterator = False\n if size is None:\n try:\n size = len(sequence)\n except TypeError:\n is_iterator = True\n if size is not None:\n if every is None:\n if size <= 200:\n every = 1\n else:\n every = int(size / 200) # every 0.5%\n else:\n assert every is not None, 'sequence is iterator, set every'\n\n if is_iterator:\n progress = IntProgress(min=0, max=1, value=1)\n progress.bar_style = 'info'\n else:\n progress = IntProgress(min=0, max=size, value=0)\n label = HTML()\n box = VBox(children=[label, progress])\n display(box)\n\n index = 0\n try:\n for index, record in enumerate(sequence, 1):\n if index == 1 or index % every == 0:\n if is_iterator:\n label.value = '{name}: {index} / ?'.format(\n name=name,\n index=index\n )\n else:\n progress.value = index\n label.value = u'{name}: {index} / {size}'.format(\n name=name,\n index=index,\n size=size\n )\n yield record\n except:\n progress.bar_style = 'danger'\n raise\n else:\n progress.bar_style = 'success'\n progress.value = index\n label.value = \"{name}: {index}\".format(\n name=name,\n index=str(index or '?')\n )", "_____no_output_____" ] ], [ [ "## Assigment 8 (optional for DSSC, PhD and LM, Mandatory for MHPC)\n\nUse the `BallTree` algorithm (https://en.wikipedia.org/wiki/Ball_tree), from the `sklearn` package, and construct a tree data structure **that uses one of the custom distances defined above**.\n\nFor each N in 3200,6400,12800,25600,51200, and for each distance defined above\n\n- Build a tree using the first N entries of the training set `x_train`\n- Construct a function that tests the efficiency on all the entries of the test set `x_test`:\n - for any image in `x_test`, call it `x_test[i]`, query the tree for the nearest neighbor (call it `k`), and assign as predicted digit the digit of the `x_train[k]` image, i.e., `y_train[k]`\n - check if `y_train[k]` is equal to the corresponding entry in `y_test[i]`. If not, increment a counter of the error\n - return the efficiency, i.e., `error_counter/len(x_test)`\n- Plot, in a single graph, the error of each distance as a function of `N` (including labels, titles, etc.)\n\n- Once you have the tree, experiment with different nearest neighbor algorithms, i.e., instead of taking only one nearest neighbor, take a larger number (a small number of your choice), and instead of returning the single closest digit, return the one with the largest number of occurrences. Plot the same graph you gave before, and see if you gain an improvement. 
Motivate all choices you have to make to get to the final answer.\n\n\n**IF YOU DON'T HAVE ENOUGH COMPUTATIONAL POWER, RUN THE EXERCISES ONLY UP TO WHAT IS SUSTAINABLE FOR YOUR PC**", "_____no_output_____" ], [ "### Some preliminaries:", "_____no_output_____" ], [ "We now use the `BallTree` data structure, training it to distinguish between hand-written digits.\nTo this end we will use the first entries of the train set to build the data-structure.\nFirst of all we define the vector containing the sizes of the different training subsets:", "_____no_output_____" ] ], [ [ "Ndiv = array([3200, 6400, 12800, 25600, 51200], dtype=int)", "_____no_output_____" ] ], [ [ "Here we import the data structure:", "_____no_output_____" ] ], [ [ "from sklearn.neighbors import BallTree as btree\n# btree??", "_____no_output_____" ] ], [ [ "Here we just wrap the constructor of the ball-tree to keep the notation lighter:", "_____no_output_____" ] ], [ [ "def build_bt ( N, mymetric ) :\n    return btree( x_train[:N].reshape( N, img_rows*img_cols ), metric = mymetric )", "_____no_output_____" ] ], [ [ "Import matplotlib for fancier plots", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "### Building the trees:", "_____no_output_____", "Here we build a set of arrays containing the ball-trees for each distance and training size (importing pickle to store them and avoid re-computing them at every access):", "_____no_output_____" ] ], [ [ "import pickle as pkl", "_____no_output_____" ] ], [ [ "To speed up re-use, we already computed the ball-trees once and dumped them to files with pickle. To run the algorithm and build the tree from scratch, switch the markdown-code cells below each distance.", "_____no_output_____", "- $d_\infty$", "_____no_output_____" ] ], [ [ "bt_infty = array([ build_bt( N, d_infty ) for N in log_progress( Ndiv, every=1, name='d_infty metric' ) ])", "_____no_output_____" ], [ "for nd in range(0,len(Ndiv)) :\n    pkl.dump(bt_infty[nd], open(\"BallTrees_infty\"+str(Ndiv[nd])+\".npy\", \"wb\"))", "_____no_output_____" ], [ "# bt_infty = array([]) \n# for nd in range(0,len(Ndiv)) :\n#     bt_infty = append( bt_infty, load(\"BallTrees_infty\"+str(Ndiv[nd])+\".npy\") )", "_____no_output_____" ] ], [ [ "- $d_1$", "_____no_output_____" ] ], [ [ "bt_one = array([ build_bt( N, d_one ) for N in log_progress( Ndiv, every=1, name='d_one metric' ) ])", "_____no_output_____" ], [ "for nd in range(0,len(Ndiv)) :\n    pkl.dump(bt_one[nd], open(\"BallTrees_one\"+str(Ndiv[nd])+\".npy\", \"wb\"))", "_____no_output_____" ], [ "# bt_one = array([]) \n# for nd in range(0,len(Ndiv)) :\n#     bt_one = append( bt_one, load(\"BallTrees_one\"+str(Ndiv[nd])+\".npy\") )", "_____no_output_____" ] ], [ [ "- $d_2$", "_____no_output_____" ] ], [ [ "bt_two = array([ build_bt( N, d_two ) for N in log_progress( Ndiv, every=1, name='d_two metric' ) ])", "_____no_output_____" ], [ "for nd in range(0,len(Ndiv)) :\n    pkl.dump(bt_two[nd], open(\"BallTrees_two\"+str(Ndiv[nd])+\".npy\", \"wb\"))", "_____no_output_____" ], [ "# bt_two = array([]) \n# for nd in range(0,len(Ndiv)) :\n#     bt_two = append( bt_two, load(\"BallTrees_two\"+str(Ndiv[nd])+\".npy\") )", "_____no_output_____" ] ], [ [ "- $d_{H1}$", "_____no_output_____" ] ], [ [ "bt_H1 = array([ build_bt( N, d_H1 ) for N in log_progress( Ndiv, every=1, name='d_H1 metric' ) ])", "_____no_output_____" ], [ "for nd in range(0,len(Ndiv)) :\n    pkl.dump(bt_H1[nd],
open(\"BallTrees_H1\"+str(Ndiv[nd])+\".npy\", \"wb\"))", "_____no_output_____" ], [ "# bt_H1 = array([]) \n# for nd in range(0,len(Ndiv)) :\n# bt_H1 = append( bt_H1, load(\"BallTrees_H1\"+str(Ndiv[nd])+\".npy\") )", "_____no_output_____" ] ], [ [ "- $d_{MA}$", "_____no_output_____" ] ], [ [ "bt_MA = array([ build_bt( N, d_MA ) for N in log_progress( Ndiv, every=1, name='d_MA metric' ) ])", "_____no_output_____" ], [ "for nd in range(0,len(Ndiv)) :\n pkl.dump(bt_MA[nd], open(\"BallTrees_MA\"+str(Ndiv[nd])+\".npy\", \"wb\"))", "_____no_output_____" ], [ "# bt_MA = array([]) \n# for nd in range(0,len(Ndiv)) :\n# bt_MA = append( bt_MA, load(\"BallTrees_MA\"+str(Ndiv[nd])+\".npy\") )", "_____no_output_____" ] ], [ [ "### Classification functions:", "_____no_output_____" ], [ "We define three functions to classify the image in the test-set from the ball-tree:", "_____no_output_____" ] ], [ [ "def classify_closer ( bt, xx ) :\n [[k]] = bt.query(xx.reshape((1, img_rows*img_cols)), k = 1, return_distance = False)\n return y_train[k]", "_____no_output_____" ] ], [ [ "- timing", "_____no_output_____" ] ], [ [ "ind = randint( 0, x_test.shape[0] )\n%time value = classify_closer(bt_H1[4], x_test[ind])\nprint(y_test[ind], \"classified as \", value)", "CPU times: user 4.32 s, sys: 3 µs, total: 4.32 s\nWall time: 4.36 s\n8 classified as 8\n" ], [ "def classify_mostcomm ( bt, xx, N_neigh=10 ) :\n [k] = bt.query(xx.reshape((1, img_rows*img_cols)), k = N_neigh, return_distance = False)\n k.shape\n return argmax(bincount(y_train[k], minlength=10))", "_____no_output_____" ] ], [ [ "- timing", "_____no_output_____" ] ], [ [ "ind = randint( 0, x_test.shape[0] )\n%time value = classify_mostcomm(bt_H1[4], x_test[ind])\nprint(y_test[ind], \"classified as \", value)", "CPU times: user 4.13 s, sys: 12 ms, total: 4.14 s\nWall time: 4.18 s\n4 classified as 4\n" ], [ "def classify_weight ( bt, xx, N_neigh=10 ) :\n [D], [k] = bt.query(xx.reshape((1, img_rows*img_cols)), k = N_neigh)\n return argmax(bincount(y_train[k], minlength=10, weights=1./D))", "_____no_output_____" ] ], [ [ "- timing", "_____no_output_____" ] ], [ [ "ind = randint( 0, x_test.shape[0] )\n%time value = classify_weight(bt_H1[4], x_test[ind])\nprint(y_test[ind], \"classified as \", value)", "CPU times: user 4.29 s, sys: 1 µs, total: 4.29 s\nWall time: 4.28 s\n1 classified as 1\n" ] ], [ [ "All the 3 functions take approximately the same amount of time to compute, this is because the most computationally expensive operation is quering the tree.\n\nWe compared the execution time in the case of the larger trees built ($N = 51200$).\nWhile the time needed for a classification in the case of the first 3 simple distances (`d_infty`, `d_one`, `d_two`) is of the order of $10^2\\ ms$, for the `d_H1` distance it is around $4\\ s$ and for the `d_MA` distance it reaches up to $2 min$.\n\nThese very long time makes unaffordable to run the efficiency testing on the whole test set (which is composed of $10^4$ entries).", "_____no_output_____" ], [ "### Functions for testing efficiency:", "_____no_output_____" ], [ "We define three functions to measure the efficiency of the data-structure.", "_____no_output_____" ], [ "- `bt_eff` simply searches for the **closest image** in the ball-tree and returns the mean error-count:", "_____no_output_____" ] ], [ [ "def bt_eff ( bt, X, Y ) :\n error_counter = 0\n for ii in log_progress(range(0, len(X)), every=10, name='Queries closer') :\n [[k]] = bt.query(X[ii].reshape((1, img_rows*img_cols)), k = 1, return_distance = False)\n if 
( y_train[k] != Y[ii] ) :\n            error_counter += 1.\n    return error_counter/len(X) ", "_____no_output_____" ] ], [ [ "- `bt_eff_most_common` searches for the closest `N_neigh` images in the data structure and picks the **most common** digit among them. Also in this case the mean error-count is returned:", "_____no_output_____" ] ], [ [ "def bt_eff_most_common ( bt, X, Y, N_neigh=10 ) :\n    error_counter = 0\n    for ii in log_progress(range(0, len(X)), every=10, name='Queries most-common') :\n        [k] = bt.query(X[ii].reshape((1, img_rows*img_cols)), k = N_neigh, return_distance = False)\n#         print(Y[ii], y_train[k], \": \", argmax(bincount(y_train[k], minlength=10)))\n        if ( argmax(bincount(y_train[k], minlength=10)) != Y[ii] ) :\n            error_counter += 1.\n    return error_counter/len(X) ", "_____no_output_____" ] ], [ [ "- `bt_eff_weight` also searches for the closest `N_neigh` images, but it weights the entries by the inverse of their distance from the testing image, returning (*hopefully*) a better estimate of the target label.", "_____no_output_____" ] ], [ [ "def bt_eff_weight ( bt, X, Y, N_neigh=10 ) :\n    error_counter = 0\n    for ii in log_progress(range(0, len(X)), every=10, name='Queries weight') :\n        [D], [k] = bt.query(X[ii].reshape((1, img_rows*img_cols)), k = N_neigh)\n#         print(Y[ii], y_train[k], \": \", argmax(bincount(y_train[k], minlength=10, weights=1./D)))\n        if ( argmax(bincount(y_train[k], minlength=10, weights=1./D)) != Y[ii] ) :\n            error_counter += 1.\n    return error_counter/len(X) ", "_____no_output_____" ] ], [ [ "We will run the efficiency measurements on a very reduced test subset to keep the computation within a reasonable time. ", "_____no_output_____" ], [ "## Efficiency measures: closer", "_____no_output_____" ], [ "### Ball-Tree with $d_\infty$", "_____no_output_____" ] ], [ [ "bt_infty_eff = array([bt_eff(bt, x_test[:100], y_test[:100]) for bt in bt_infty])\nsave(\"bt_infty_eff_100.npy\", bt_infty_eff)", "_____no_output_____" ] ], [ [ "### Ball-Tree with $d_1$", "_____no_output_____" ] ], [ [ "bt_one_eff = array([bt_eff(bt, x_test[:100], y_test[:100]) for bt in bt_one])\nsave(\"bt_one_eff_100.npy\", bt_one_eff)", "_____no_output_____" ] ], [ [ "### Ball-Tree with $d_2$", "_____no_output_____" ] ], [ [ "bt_two_eff = array([bt_eff(bt, x_test[:100], y_test[:100]) for bt in bt_two])\nsave(\"bt_two_eff_100.npy\", bt_two_eff)", "_____no_output_____" ] ], [ [ "### Ball-Tree with $d_{H1}$", "_____no_output_____" ] ], [ [ "bt_H1_eff = array([bt_eff(bt, x_test[:100], y_test[:100]) for bt in bt_H1])\nsave(\"bt_H1_eff_100.npy\", bt_H1_eff)", "_____no_output_____" ] ], [ [ "### Ball-Tree with $d_{MA}$", "_____no_output_____" ] ], [ [ "bt_MA_eff = array([bt_eff(bt, x_test[:100], y_test[:100]) for bt in bt_MA])\nsave(\"bt_MA_eff_100.npy\", bt_MA_eff)", "_____no_output_____" ] ], [ [ "### Plots:", "_____no_output_____" ] ], [ [ "bt_infty_eff = load(\"bt_infty_eff_100.npy\")\nbt_one_eff = load(\"bt_one_eff_100.npy\")\nbt_two_eff = load(\"bt_two_eff_100.npy\")\nbt_H1_eff = load(\"bt_H1_eff_100.npy\")\n# bt_MA_eff = load(\"bt_MA_eff_100.npy\")\nfig = figure(figsize = (9,6), constrained_layout=True)\nax = fig.add_subplot(111)\nax.set_title(\"BallTree with 'closer' efficiency\", fontsize=14)\nax.set_xscale('log')\nax.set_xticks(Ndiv)\nax.set_xticklabels([str(nd) for nd in Ndiv])\nax.set_xlabel(\"Train-set size\")\nax.set_ylabel(\"Efficiency 'closer'\")\nax.plot([3200,51200],[1.,1.], ':k', alpha=0.4)\nax.plot(Ndiv, 1 - bt_infty_eff, label = \"$L_\\\\infty$ norm\")\nax.plot(Ndiv, 1 - bt_one_eff, label = \"$L_1$
norm\")\nax.plot(Ndiv, 1 - bt_two_eff, label = \"$L_2$ norm\")\nax.plot(Ndiv, 1 - bt_H1_eff, label = \"$H_1$ norm\")\n# ax.plot(Ndiv, 1 - bt_MA_eff, label = \"Monge-Ampere\")\nax.legend(loc='best', ncol=2)", "_____no_output_____" ] ], [ [ "## Efficiency measures: most-common", "_____no_output_____" ], [ "We run the test only for the faster distance measures for lack of time.", "_____no_output_____" ], [ "### Ball-Tree with $d_\\infty$", "_____no_output_____" ] ], [ [ "bt_infty_eff_mostcomm = array([bt_eff_most_common(bt, x_test[:100], y_test[:100]) for bt in bt_infty])\nsave(\"bt_infty_eff_most_common_100.npy\", bt_infty_eff_mostcomm)", "_____no_output_____" ] ], [ [ "### Ball-Tree with $d_1$", "_____no_output_____" ] ], [ [ "bt_one_eff_mostcomm = array([bt_eff_most_common(bt, x_test[:100], y_test[:100]) for bt in bt_one])\nsave(\"bt_one_eff_most_common_100.npy\", bt_one_eff_mostcomm)", "_____no_output_____" ] ], [ [ "### Ball-Tree with $d_2$", "_____no_output_____" ] ], [ [ "bt_two_eff_mostcomm = array([bt_eff_most_common(bt, x_test[:100], y_test[:100]) for bt in bt_two])\nsave(\"bt_two_eff_100_most_common.npy\", bt_two_eff_mostcomm)", "_____no_output_____" ] ], [ [ "### Ball-Tree with $d_{H1}$", "_____no_output_____" ] ], [ [ "# bt_H1_eff_mostcomm = array([bt_eff_most_common(bt, x_test[:100], y_test[:100]) for bt in bt_H1])\n# save(\"bt_H1_eff_100_most_common.npy\", bt_H1_eff_mostcomm)", "_____no_output_____" ] ], [ [ "### Ball-Tree with $d_{MA}$", "_____no_output_____" ] ], [ [ "# bt_MA_eff_mostcomm = array([bt_eff_most_common(bt, x_test[:100], y_test[:100]) for bt in bt_MA])\n# save(\"bt_MA_eff_100_most_common.npy\", bt_MA_eff_mostcomm)", "_____no_output_____" ] ], [ [ "### Plots:", "_____no_output_____" ] ], [ [ "bt_infty_eff = load(\"bt_infty_eff_most_common_100.npy\")\nbt_one_eff = load(\"bt_one_eff_most_common_100.npy\")\nbt_two_eff = load(\"bt_two_eff_100_most_common.npy\")\n# bt_H1_eff = load(\"bt_H1_eff_most_common_100.npy\")\n# bt_MA_eff = load(\"bt_MA_eff_most_common_100.npy\")\nfig = figure(figsize = (9,6), constrained_layout=True)\nax = fig.add_subplot(111)\nax.set_title(\"BallTree with 'most-common' efficiency\", fontsize=14)\nax.set_xscale('log')\nax.set_xticks(Ndiv)\nax.set_xticklabels([str(nd) for nd in Ndiv])\nax.set_xlabel(\"Train-set size\")\nax.set_ylabel(\"Efficiency 'most-common'\")\nax.plot([3200,51200],[1.,1.], ':k', alpha=0.4)\nax.plot(Ndiv, 1 - bt_infty_eff, label = \"$L_\\\\infty$ norm\")\nax.plot(Ndiv, 1 - bt_one_eff, label = \"$L_1$ norm\")\nax.plot(Ndiv, 1 - bt_two_eff, label = \"$L_2$ norm\")\n# ax.plot(Ndiv, 1 - bt_H1_eff, label = \"$H_1$ norm\")\n# ax.plot(Ndiv, 1 - bt_MA_eff, label = \"Monge-Ampere\")\nax.legend(loc='best', ncol=2)", "_____no_output_____" ] ], [ [ "## Efficiency measures: weight", "_____no_output_____" ], [ "We run the test only for the faster distance measures for lack of time. 
Nonetheless it seems to perform better than the *most-common* case.", "_____no_output_____", "### Ball-Tree with $d_\infty$", "_____no_output_____" ] ], [ [ "bt_infty_eff_weight = array([bt_eff_weight(bt, x_test[:100], y_test[:100]) for bt in bt_infty])\nsave(\"bt_infty_eff_weight_100.npy\", bt_infty_eff_weight)", "_____no_output_____" ] ], [ [ "### Ball-Tree with $d_1$", "_____no_output_____" ] ], [ [ "bt_one_eff_weight = array([bt_eff_weight(bt, x_test[:100], y_test[:100]) for bt in bt_one])\nsave(\"bt_one_eff_weight_100.npy\", bt_one_eff_weight)", "_____no_output_____" ] ], [ [ "### Ball-Tree with $d_2$", "_____no_output_____" ] ], [ [ "bt_two_eff_weight = array([bt_eff_weight(bt, x_test[:100], y_test[:100]) for bt in bt_two])\nsave(\"bt_two_eff_weight_100.npy\", bt_two_eff_weight)", "_____no_output_____" ] ], [ [ "### Ball-Tree with $d_{H1}$", "_____no_output_____" ] ], [ [ "# bt_H1_eff_weight = array([bt_eff_weight(bt, x_test[:100], y_test[:100]) for bt in bt_H1])\n# save(\"bt_H1_eff_100_weight.npy\", bt_H1_eff_weight)", "_____no_output_____" ] ], [ [ "### Ball-Tree with $d_{MA}$", "_____no_output_____" ] ], [ [ "# bt_MA_eff_weight = array([bt_eff_weight(bt, x_test[:100], y_test[:100]) for bt in bt_MA])\n# save(\"bt_MA_eff_100_weight.npy\", bt_MA_eff_weight)", "_____no_output_____" ] ], [ [ "### Plots:", "_____no_output_____" ] ], [ [ "bt_infty_eff = load(\"bt_infty_eff_weight_100.npy\")\nbt_one_eff = load(\"bt_one_eff_weight_100.npy\")\nbt_two_eff = load(\"bt_two_eff_weight_100.npy\")\n# bt_H1_eff = load(\"bt_H1_eff_weight_100.npy\")\n# bt_MA_eff = load(\"bt_MA_eff_weight_100.npy\")\nfig = figure(figsize = (9,6), constrained_layout=True)\nax = fig.add_subplot(111)\nax.set_title(\"BallTree with 'weight' efficiency\", fontsize=14)\nax.set_xscale('log')\nax.set_xticks(Ndiv)\nax.set_xticklabels([str(nd) for nd in Ndiv])\nax.set_xlabel(\"Train-set size\")\nax.set_ylabel(\"Efficiency 'weight'\")\nax.plot([3200,51200],[1.,1.], ':k', alpha=0.4)\nax.plot(Ndiv, 1 - bt_infty_eff, label = \"$L_\\\\infty$ norm\")\nax.plot(Ndiv, 1 - bt_one_eff, label = \"$L_1$ norm\")\nax.plot(Ndiv, 1 - bt_two_eff, label = \"$L_2$ norm\")\n# ax.plot(Ndiv, 1 - bt_H1_eff, label = \"$H_1$ norm\")\n# ax.plot(Ndiv, 1 - bt_MA_eff, label = \"Monge-Ampere\")\nax.legend(loc='best', ncol=2)", "_____no_output_____" ] ], [ [ "### Some tests:", "_____no_output_____", "From some tests run on random inputs for the different efficiency measures provided, we find that for small trees the 'closer' efficiency is the most reliable.\nThis is probably due to the presence in the test set of really *poorly written* digits: some similar images may still be found in the train set, but within a larger set of neighbors the wrong label most often prevails. This happens also with the **weighted** efficiency.
We suspect that with larger ball-trees the problem should be overcome.", "_____no_output_____" ] ], [ [ "ball = btree(x_train[:Ndiv[0]].reshape((3200,img_rows*img_cols)), metric=d_two)", "_____no_output_____" ], [ "bt_eff( ball, x_test, y_test )", "_____no_output_____" ], [ "bt_eff_most_common( ball, x_test, y_test, N_neigh=100 )", "_____no_output_____" ], [ "bt_eff_most_common( ball, x_test[:10], y_test[:10], N_neigh=5 )", "_____no_output_____" ], [ "bt_eff_weight( ball, x_test, y_test, N_neigh=100 )", "_____no_output_____" ], [ "bt_eff_weight( ball, x_test[:10], y_test[:10], N_neigh=5 )", "_____no_output_____" ], [ "ind=5003\n[D], [k] = ball.query(x_train[ind].reshape((1, 28*28)), k = 100)", "_____no_output_____" ], [ "D", "_____no_output_____" ], [ "y_train[k]", "_____no_output_____" ], [ "int(average(y_train[k]+1, weights=1./D))", "_____no_output_____" ], [ "y_train[k]", "_____no_output_____" ], [ "_y = y_train[k]*D\n_y.shape", "_____no_output_____" ], [ "D.shape", "_____no_output_____" ], [ "D", "_____no_output_____" ], [ "imshow(x_test[8], cmap='gray_r')\n_ = title('Hand written digit '+str(y_test[8]))", "_____no_output_____" ], [ "imshow(x_train[ind], cmap='gray_r')\n_ = title('Hand written digit '+str(y_train[ind]))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ec97bb4eb141f547b7a84e33799df1758ea50f71
663,427
ipynb
Jupyter Notebook
Data with Python/1.GDP and Billionaires/GDP_Billionaires_05172022.ipynb
suhsunghee/suhsunghee.github.io
a186d6ef861b37842ff6eea0050bd44b19a52cf0
[ "CC-BY-3.0" ]
null
null
null
Data with Python/1.GDP and Billionaires/GDP_Billionaires_05172022.ipynb
suhsunghee/suhsunghee.github.io
a186d6ef861b37842ff6eea0050bd44b19a52cf0
[ "CC-BY-3.0" ]
null
null
null
Data with Python/1.GDP and Billionaires/GDP_Billionaires_05172022.ipynb
suhsunghee/suhsunghee.github.io
a186d6ef861b37842ff6eea0050bd44b19a52cf0
[ "CC-BY-3.0" ]
null
null
null
204.257081
182,172
0.879308
[ [ [ "## Importing libraries and data ", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns", "_____no_output_____" ], [ "%matplotlib inline", "_____no_output_____" ], [ "GDP2020 = pd.read_excel('GDPCAP.xlsx')\nGDPCAP = pd.read_excel ('GDP2020.xlsx')\nBnaires = pd.read_excel('2022_forbes_billionaires.xlsx')", "_____no_output_____" ] ], [ [ "## Cleaning the data\n\n\nCleaning GDP2020 Per Capita data \n\n1. Overview\n2. Drop columns .drop(\"column_name\", axis = 1, inplace = True)\n3. Renaming columns by creating a new list and assigning to the columns\n4. Dropping rows where there's no information on GDP dropna( subset = [\"column_name\"], inplace = True )\n", "_____no_output_____" ] ], [ [ "GDPCAP.head()", "_____no_output_____" ], [ "GDPCAP.drop(\"Country Code\", axis = 1 , inplace = True)\ncolumns_rename = ['CountryName','2020GDP']\nGDPCAP.columns = columns_rename \nGDPCAP.dropna(subset = ['2020GDP'], inplace = True )", "_____no_output_____" ] ], [ [ "GDP 2020 table clean\n1. removing null rows for GDP \n2. reformatting GDP", "_____no_output_____" ] ], [ [ "GDP2020.rename(columns = {\"GDPPERCAP\" : \"GDP\" }, inplace = True) \nGDP2020.dropna(subset = [\"GDP\"], inplace = True )", "_____no_output_____" ], [ "pd.options.display.float_format = '{:.2f}'.format\nGDP2020.nlargest(20,'GDP')", "_____no_output_____" ] ], [ [ "Cleaning Billionare table \n1. dropping unneccessary table \n2. concatenating networth to convert to int", "_____no_output_____" ] ], [ [ "Bnaires.drop(\"Column1\", axis = 1 , inplace = True)", "_____no_output_____" ], [ "Bnaires[\"networth\"] = Bnaires[\"networth\"].str.replace(\"B\",\"\")\nBnaires[\"networth\"] = Bnaires[\"networth\"].str.replace(\"$\",\"\")\nBnaires[\"networth\"] = Bnaires[\"networth\"].astype(float)\n\n\n", "C:\\Users\\SUNGHE~1.GRO\\AppData\\Local\\Temp/ipykernel_22208/400528084.py:2: FutureWarning: The default value of regex will change from True to False in a future version. In addition, single character regular expressions will *not* be treated as literal strings when regex=True.\n Bnaires[\"networth\"] = Bnaires[\"networth\"].str.replace(\"$\",\"\")\n" ], [ "Bnaires.head(50)", "_____no_output_____" ] ], [ [ "In Billionaires record, some of the countries do not match the names listed in GDP data\n\nFinding unmatched country names from Billionaire data to map it on GDP data\n1. saving unique country names from each data frame as lists\n2. using for loop, append country names (from Billionaire list) that do not exist in GDP data\n3. check the data quality & confirm what needs to be fixed for a good quality mapping\n4. after mapping recheck the data for both GDP & GDP Per capita country names \n", "_____no_output_____" ] ], [ [ "Bnairecountry = Bnaires[\"country\"].unique()\nGDPcountry = GDP2020[\"Country\"].unique()", "_____no_output_____" ], [ "Missingcountry = []\nfor i in Bnairecountry:\n if i not in GDPcountry: \n Missingcountry.append(i)\n ", "_____no_output_____" ], [ "Missingcountry", "_____no_output_____" ], [ "GDP2020.loc[GDP2020[\"Country\"].str.contains(\"|\".join(Missingcountry))]", "C:\\Users\\SUNGHE~1.GRO\\AppData\\Local\\Temp/ipykernel_22208/2057144159.py:1: UserWarning: This pattern has match groups. 
To actually get the groups, use str.extract.\n GDP2020.loc[GDP2020[\"Country\"].str.contains(\"|\".join(Missingcountry))]\n" ], [ "Bnaires.loc[Bnaires['country'].str.contains('Hong Kong'),'country'] = 'Hong Kong SAR, China'\nBnaires.loc[Bnaires['country'].str.contains('Russia'),'country'] = 'Russian Federation'\nBnaires.loc[Bnaires['country'].str.contains('Czechia'),'country'] = 'Czech Republic'\nBnaires.loc[Bnaires['country'].str.contains('South Korea'),'country'] = 'Korea, Rep.'\nBnaires.loc[Bnaires['country'].str.contains('Slovakia'),'country'] = 'Slovak Republic'\nBnaires.loc[Bnaires['country'].str.contains('Venezuela'),'country'] = 'Venezuela, RB'\nBnaires.loc[Bnaires['country'].str.contains('Guernsey'),'country'] = 'United Kingdom'\nBnaires.loc[Bnaires['country'].str.contains('Macau'),'country'] = 'Macao SAR, China'\nBnaires.loc[Bnaires['country'].str.contains('Taiwan'),'country'] = 'China'\nBnaires.loc[Bnaires['country'].str.contains('Egypt'),'country'] = 'Egypt, Arab Rep.'\n", "_____no_output_____" ], [ "Bnairecountry = Bnaires[\"country\"].unique()", "_____no_output_____" ], [ "\nmaptest = []\nfor i in Bnairecountry:\n if i not in GDPcountry: \n maptest.append(i)\n ", "_____no_output_____" ], [ "maptest", "_____no_output_____" ], [ "Bnaires.loc[Bnaires['country'].str.contains('Eswatini'), \"country\"] = 'Eswatini'", "_____no_output_____" ], [ "maptest2 = []\n\nfor i in Bnaires['country'].unique():\n if i not in GDPCAP['CountryName'].unique():\n maptest2.append(i)", "_____no_output_____" ], [ "maptest2", "_____no_output_____" ], [ "GDPCAP.loc[GDPCAP['CountryName'].str.contains(\"Liechten\")].sort_values(by = 'CountryName')", "_____no_output_____" ], [ "\nGDPCAP.loc[GDPCAP['CountryName'].str.contains(\"V\")].sort_values(by = 'CountryName')", "_____no_output_____" ] ], [ [ "# Exploring and Visualizing data", "_____no_output_____" ] ], [ [ "Bnaires.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2600 entries, 0 to 2599\nData columns (total 7 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 rank 2600 non-null int64 \n 1 name 2600 non-null object \n 2 networth 2600 non-null float64\n 3 age 2600 non-null int64 \n 4 country 2600 non-null object \n 5 source 2600 non-null object \n 6 industry 2600 non-null object \ndtypes: float64(1), int64(2), object(4)\nmemory usage: 142.3+ KB\n" ] ], [ [ "1. Top 10 country with most billionaires", "_____no_output_____" ] ], [ [ "bnairecountry = Bnaires.groupby(\"country\")", "_____no_output_____" ], [ "bnairecountry.size().nlargest(10)", "_____no_output_____" ] ], [ [ "2. Top 10 country with most money from billionaires", "_____no_output_____" ] ], [ [ "bnairecountry.sum().sort_values(\"networth\", ascending = False)", "_____no_output_____" ] ], [ [ "3. 
Average age of billionaires per country, selecting only the top 10 ", "_____no_output_____" ] ], [ [ "agedistribution = bnairecountry[\"age\"].describe()", "_____no_output_____" ], [ "agedistribution[\"count\"] = agedistribution[\"count\"].astype(int)", "_____no_output_____" ], [ "countfilter = agedistribution.loc[agedistribution[\"count\"]>15]\n\ncountfilter[[\"mean\",\"count\"]].nsmallest(10,\"mean\")", "_____no_output_____" ] ], [ [ "Using a lambda function to find out how many billionaires exist in the U.S.", "_____no_output_____" ] ], [ [ "bnairecountries = Bnaires[\"country\"]\nlen(list(filter(lambda x : \"United States\" in x, bnairecountries)))", "_____no_output_____" ] ], [ [ "Joining GDP per capita and GDP for further analysis and visualization", "_____no_output_____" ] ], [ [ "total_capita = GDP2020.merge(GDPCAP,how = 'inner', left_on= \"Country\", right_on=\"CountryName\")", "_____no_output_____" ], [ "total_capita.rename(columns = {\"2020GDP\":\"CAP\"}, inplace = True)\ntotal_capita", "_____no_output_____" ], [ "total_capita[\"GDP_Rank\"] = total_capita[\"GDP\"].rank(ascending = False)\ntotal_capita[\"CAP_Rank\"] = total_capita[\"CAP\"].rank(ascending = False)\n\ntotal_capita[\"TotalGDP_standardliving_ratio\"] = total_capita[\"GDP_Rank\"]/total_capita[\"CAP_Rank\"]\n", "_____no_output_____" ], [ "total_capita.sort_values(by = \"TotalGDP_standardliving_ratio\", ascending = True)", "_____no_output_____" ], [ "plt.figure(figsize =(10,6))\nsns.regplot(data =total_capita, y = \"GDP_Rank\", x = \"CAP_Rank\", color = 'c').set(title = \"Capita_Rank vs Total GDP rank\")", "_____no_output_____" ], [ "top20GDP =total_capita.nlargest(20,[\"GDP\"])", "_____no_output_____" ], [ "top80GDP =total_capita.nlargest(80,[\"GDP\"])\ntop200GDP = total_capita.nlargest(200,[\"GDP\"])", "_____no_output_____" ], [ "plt.figure(figsize =(10,6))\nsns.regplot(data =top20GDP, y = \"GDP_Rank\", x = \"CAP_Rank\", color = 'c').set(title = \"Top20 Capita_Rank vs Total GDP rank\")", "_____no_output_____" ], [ "sns.jointplot(data =top20GDP, y = \"GDP_Rank\", x = \"CAP_Rank\",kind ='reg')", "_____no_output_____" ], [ "plt.figure(figsize =(10,6))\nsns.regplot(data =top200GDP, y = \"GDP_Rank\", x = \"CAP_Rank\", color = 'c').set(title = \"Top200 Capita_Rank vs Total GDP rank\")", "_____no_output_____" ], [ "\nsns.lmplot(data =top20GDP, y= \"GDP_Rank\", x = \"CAP_Rank\", hue= 'CountryName',palette= 'Spectral').set(title = \"Top20 Capita_Rank vs Total GDP rank\")\n\ndef label_point(x, y, val, ax):\n    a = pd.concat({'x': x, 'y': y, 'val': val}, axis=1)\n    for i, point in a.iterrows():\n        ax.text(point['x']+.02, point['y'], str(point['val']))\n\nlabel_point(top20GDP[\"CAP_Rank\"], top20GDP[\"GDP_Rank\"], top20GDP[\"CountryName\"], plt.gca()) \nplt.gcf().set_size_inches(15, 8)", "_____no_output_____" ], [ "\nsns.lmplot(data =top80GDP, y= \"GDP_Rank\", x = \"CAP_Rank\", hue= 'CountryName',palette= 'viridis').set(title = \"Top80 Capita_Rank vs Total GDP rank\")\n\ndef label_point(x, y, val, ax):\n    a = pd.concat({'x': x, 'y': y, 'val': val}, axis=1)\n    for i, point in a.iterrows():\n        ax.text(point['x']+.02, point['y'], str(point['val']))\n\nlabel_point(top80GDP[\"CAP_Rank\"], top80GDP[\"GDP_Rank\"], top80GDP[\"CountryName\"], plt.gca()) \nplt.gcf().set_size_inches(15, 8)", "_____no_output_____" ], [ "sns.displot(total_capita['CAP'], bins = 10)", "_____no_output_____" ] ], [ [ "Sampling data for plots ", "_____no_output_____" ] ], [ [ "sampling = total_capita.sample(int(0.2*len(total_capita)))\n \nplt.figure(figsize =(10,6))\nsns.regplot(data
=sampling, y = \"GDP\", x = \"CAP\", color = 'c').set(title = \"GDP per Capita vs Total GDP\")", "_____no_output_____" ] ], [ [ "Plotting Billionaires networth and age ", "_____no_output_____" ] ], [ [ "plt.figure(figsize = (10,6))\nsns.regplot(data =Bnaires, y = \"networth\", x = \"age\", color ='g')", "_____no_output_____" ] ], [ [ "Merging total capita and billionaires data", "_____no_output_____" ] ], [ [ "Bnaires_GDP =Bnaires.merge(total_capita,how= \"inner\", left_on = \"country\", right_on=\"Country\")", "_____no_output_____" ], [ "countrygroup = Bnaires_GDP.groupby('country').sum()", "_____no_output_____" ], [ "countrygroup.head()", "_____no_output_____" ], [ "sns.jointplot(x= 'CAP', y = 'networth', data= countrygroup, kind='reg')", "_____no_output_____" ], [ "sns.jointplot(x= 'networth', y = 'GDP', data= countrygroup, kind='reg')", "_____no_output_____" ] ], [ [ "Below plot against networth and total gdp/gdp_cap ratio does not provide good insights. ", "_____no_output_____" ] ], [ [ "plt.figure(figsize =(10,6))\nsns.regplot(data = countrygroup, y = \"networth\", x = \"TotalGDP_standardliving_ratio\", color = 'c').set(title = \"GDP per Capita vs Total GDP\")", "_____no_output_____" ] ], [ [ "ranking the data by gdp/gdp_cap to see if this provides a better information on billionaire countries and Gdp/gdp_cap gap", "_____no_output_____" ] ], [ [ "countrygroup[\"gdp_gdpcap_gap_rank\"] = countrygroup[\"TotalGDP_standardliving_ratio\"].rank(ascending = True)\ncountrygroup[\"networth_rank\"] = countrygroup[\"networth\"].rank(ascending = False)", "_____no_output_____" ], [ "countrygroup.sort_values(by = \"gdp_gdpcap_gap_rank\", ascending= False)", "_____no_output_____" ], [ "countrygroup.reset_index(inplace = True)", "_____no_output_____" ] ], [ [ "Below line graph on gdp_capita_gap_rank vs networth_rank has negative relationship, which indicates the countries that have most money by bilionaires will have higher proportion of gdp/gdpcapita ", "_____no_output_____" ] ], [ [ "sns.lmplot(data =countrygroup, y = \"networth_rank\", x ='gdp_gdpcap_gap_rank' )", "_____no_output_____" ], [ "top30networth= countrygroup.nlargest(30,[\"networth\"])", "_____no_output_____" ], [ "sns.lmplot(data =top30networth, x = \"networth_rank\", y ='gdp_gdpcap_gap_rank', hue = 'country')\n\n\ndef label_point(x, y, val, ax):\n a = pd.concat({'x': x, 'y': y, 'val': val}, axis=1)\n for i, point in a.iterrows():\n ax.text(point['x']+.02, point['y'], str(point['val']))\n\nlabel_point(top30networth[\"networth_rank\"], top30networth[\"gdp_gdpcap_gap_rank\"], top30networth[\"country\"], plt.gca()) \nplt.gcf().set_size_inches(20, 15)\n\n", "_____no_output_____" ] ] ]
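\n\nAs a quick hedged check (an addition to the original analysis), the strength of this monotone relation can be quantified with a Spearman rank correlation, where a value close to $-1$ would support the negative relationship:\n\n```python\n# Hedged sketch: Spearman correlation between the two rank columns.\nfrom scipy.stats import spearmanr\nrho, pval = spearmanr(countrygroup['networth_rank'],\n                      countrygroup['gdp_gdpcap_gap_rank'])\nprint(rho, pval)\n```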
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
ec97bb93760d1f46d58444c016cf09da31280aab
11,972
ipynb
Jupyter Notebook
Untitled.ipynb
smithrockmaker/PH213
1f5796337c1f17f649532f6ccdfb59f02e397963
[ "MIT" ]
null
null
null
Untitled.ipynb
smithrockmaker/PH213
1f5796337c1f17f649532f6ccdfb59f02e397963
[ "MIT" ]
null
null
null
Untitled.ipynb
smithrockmaker/PH213
1f5796337c1f17f649532f6ccdfb59f02e397963
[ "MIT" ]
null
null
null
140.847059
1,791
0.703224
[ [ [ "%matplotlib widget", "_____no_output_____" ], [ "url =df = pd.read_csv(\n“https://raw.githubusercontent.com/plotly/datasets/master/tips.csv”\n)# Matplotlib Scatter Plot\nplt.scatter(‘total_bill’, ‘tip’,data=df)\nplt.xlabel(‘Total Bill’)\nplt.ylabel(‘Tip’)\nplt.show()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code" ] ]
ec97cecec23fbf87a41927cba792daa1005121f7
600,140
ipynb
Jupyter Notebook
tutorials/Variance/Variance.ipynb
kif/pyFAI-tutorials
2dc71f67bd949101e6d9475c0fb81f088ca92418
[ "MIT" ]
null
null
null
tutorials/Variance/Variance.ipynb
kif/pyFAI-tutorials
2dc71f67bd949101e6d9475c0fb81f088ca92418
[ "MIT" ]
null
null
null
tutorials/Variance/Variance.ipynb
kif/pyFAI-tutorials
2dc71f67bd949101e6d9475c0fb81f088ca92418
[ "MIT" ]
null
null
null
77.537468
50,033
0.695253
[ [ [ "%matplotlib nbagg", "_____no_output_____" ] ], [ [ "# Variance of SAXS data\n\nThere has been a long discussion about the validity (or not) of pixel splitting regarding the propagation of errors.\n\n## System under study\n\nLet's do a numerical experiment, simulating the following experiment:\n\n* Detector: 1024x1024 square pixels of 100µm each, assumed to be poissonian. \n* Geometry: The detector is placed at 1m from the sample, the beam center is in the corner of the detector\n* Intensity: the maximum signal on the detector is 10 000 photons per pixel, each pixel having a minimum count of a hundreed.\n* Wavelength: 1 Angstrom\n* During the whole studdy, the solid-angle correction will be discarded, same for solid angle corrections.\n\nNow we define some constants for the studdy:", "_____no_output_____" ] ], [ [ "pix = 100e-6\nshape = (1024, 1024)\nnpt = 1000\nnimg = 1000\nwl = 1e-10\nI0 = 1e4\nRg = 1.\nkwarg = {\"npt\":npt, \n \"method\": \"nosplit_csr_ocl_gpu\", \n \"correctSolidAngle\":False, \n \"polarization_factor\":None,\n \"safe\":False}", "_____no_output_____" ], [ "import numpy\nfrom matplotlib.pyplot import subplots\nfrom matplotlib.colors import LogNorm\nimport pyFAI\nprint(pyFAI.version)\nfrom pyFAI.detectors import Detector\nfrom pyFAI.azimuthalIntegrator import AzimuthalIntegrator\nfrom pyFAI.gui import jupyter\ndetector = Detector(pix, pix)\ndetector.shape = detector.max_shape = shape\nprint(detector)", "0.16.0-beta0\nDetector Detector\t Spline= None\t PixelSize= 1.000e-04, 1.000e-04 m\n" ], [ "ai =AzimuthalIntegrator(dist=1, poni1=0., poni2=0., detector=detector, wavelength=wl)\nprint(ai)", "Detector Detector\t Spline= None\t PixelSize= 1.000e-04, 1.000e-04 m\nWavelength= 1.000000e-10m\nSampleDetDist= 1.000000e+00m\tPONI= 0.000000e+00, 0.000000e+00m\trot1=0.000000 rot2= 0.000000 rot3= 0.000000 rad\nDirectBeamDist= 1000.000mm\tCenter: x=0.000, y=0.000 pix\tTilt=0.000 deg tiltPlanRotation= 0.000 deg\n" ], [ "res_flat = ai.integrate1d(numpy.ones(detector.shape), **kwarg)\ncrv = jupyter.plot1d(res_flat)\ncrv.axes.set_ylim(0.9,1.1)", "WARNING:pyFAI.ext.splitBBoxCSR:Pixel splitting desactivated !\n" ], [ "q = numpy.linspace(0, res_flat.radial.max(), npt)\n# Simple decay in q**-2 to take into account the solid angle\nI = I0/(1+q**2)\nfig, ax = subplots()\nax.semilogy(q, I, label=\"Simulated signal\")\nax.set_xlabel(\"q ($nm^{-1}$)\")\nax.set_ylabel(\"I (count)\")\nfig.legend()", "_____no_output_____" ], [ "#Reconstruction of diffusion image:\n\nimg_theo = ai.calcfrom1d(q, I, dim1_unit=\"q_nm^-1\", correctSolidAngle=False)\nfig, ax = subplots()\nax.imshow(img_theo, norm=LogNorm())", "_____no_output_____" ], [ "%%time\n\n# Now construct the large dataset from poissonian statistics\n#this is slow and takes a lot of memory !\nif \"dataset\" not in dir():\n dataset = numpy.random.poisson(img_theo, (nimg,)+img_theo.shape)\n# else avoid wasting time\nprint(dataset.size/(1<<20), \"MBytes\", dataset.shape)", "1000.0 MBytes (1000, 1024, 1024)\nCPU times: user 1min 21s, sys: 1.44 s, total: 1min 23s\nWall time: 1min 23s\n" ], [ "%%time\nimg_avg = dataset.mean(axis=0)\nimg_std = dataset.std(axis=0)\nerror = img_theo - img_avg\nprint(\"Error max:\", abs(error.max()), \"Error mean\", abs(error.mean()))\nprint(\"Deviation on variance\", abs(img_std**2-img_theo).max())\n", "Error max: 11.388897908989748 Error mean 0.00016117392478402854\nDeviation on variance 1534.8914985677166\nCPU times: user 5.71 s, sys: 1.38 s, total: 7.09 s\nWall time: 7.11 s\n" ], [ "fig, ax = 
[ "fig, ax = subplots(1,3)\nax[0].imshow( img_std**2)\nax[1].imshow(img_theo)\nax[2].imshow(img_std**2-img_theo)\n(img_std**2-img_theo).mean()", "_____no_output_____" ], [ "def chi2_curves(res1, res2):\n    \"\"\"Calculate the Chi2 value for a pair of integrated data\"\"\"\n    I = res1.intensity\n    J = res2.intensity\n    l = len(I)\n    assert len(J) == l\n    sigma_I = res1.sigma\n    sigma_J = res2.sigma\n    return ((I-J)**2/(sigma_I**2+sigma_J**2)).sum()/(l-1)\n", "_____no_output_____" ], [ "%%time\nresults = []\nfor i in range(nimg):\n    data = dataset[i, :, :]\n    results.append(ai.integrate1d(data, variance=data, **kwarg))\nprint(data.shape)", "(1024, 1024)\nCPU times: user 4.43 s, sys: 176 ms, total: 4.61 s\nWall time: 4.61 s\n" ], [ "chi2_curves(results[0], results[1])", "_____no_output_____" ], [ "c2 = []\nfor i in range(nimg):\n    res1 = results[i]\n    for j in range(i):\n        c2.append(chi2_curves(res1, results[j]))\nc2 = numpy.array(c2)", "_____no_output_____" ], [ "fig, ax = subplots()\nh,b,_ = ax.hist(c2, 100)", "_____no_output_____" ], [ "# Comparison between the obtained distribution and a chi-squared continuous random variable.\nfrom scipy.stats import chi2 as chi2_dist\nx_data = 0.5*(b[1:] +b[:-1])\ny_data = h/len(c2) # normalized\nfig, ax = subplots()\nax.plot(x_data, y_data, label=\"data\")\ny_sim = chi2_dist.pdf(x_data*(nimg-1), nimg)\ny_sim /= sum(y_sim) #normalized as well\nax.plot(x_data, y_sim, label=\"Chi2 distribution\")\nfig.legend()", "_____no_output_____" ], [ "low_lim, up_lim = chi2_dist.ppf([0.005, 0.995], nimg) / (nimg - 1)\nprint(low_lim, up_lim)\n(c2<low_lim).sum(),(c2>up_lim).sum()", "0.889452976157626 1.1200681344576493\n" ] ], [ [ "# And with solid-angle correction?", "_____no_output_____" ] ], [ [ "kwarg = {\"npt\":npt, \n        \"method\": \"nosplit_csr_ocl_gpu\", \n        \"correctSolidAngle\":True, \n        \"polarization_factor\":None,\n        \"safe\":False}\nai.reset()\nresults = []\nfor i in range(nimg):\n    data = dataset[i, :, :]\n    results.append(ai.integrate1d(data, variance=data, **kwarg))\nc2sa = []\nfor i in range(nimg):\n    res1 = results[i]\n    for j in range(i):\n        c2sa.append(chi2_curves(res1, results[j]))\nc2sa = numpy.array(c2sa)\nh,b = numpy.histogram(c2sa, 100)\nx_data = 0.5*(b[1:] +b[:-1])\ny_data = h/len(c2) # normalized\nfig, ax = subplots()\nax.plot(x_data, y_data, label=\"data\")\ny_sim = chi2_dist.pdf(x_data*(nimg-1), nimg)\ny_sim /= sum(y_sim) #normalized as well\nax.plot(x_data, y_sim, label=\"Chi2 distribution\")\nfig.legend()", "WARNING:pyFAI.ext.splitBBoxCSR:Pixel splitting desactivated !\n" ] ], [ [ "# With next generation azimuthal integrator ...\n\nThis does not yet exist in pyFAI, but it has been on the roadmap since the beginning of 2017.
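\n\nIn sketch form (notation mine, a hedged summary of the idea): for the pixels $i$ falling into a radial bin $q$, one accumulates the signal, the normalization and the variance separately,\n$$\nI(q) = \frac{\sum_{i \in q} c_i}{\sum_{i \in q} \Omega_i}, \qquad \sigma_I(q) = \frac{\sqrt{\sum_{i \in q} \sigma_i^2}}{\sum_{i \in q} \Omega_i},\n$$\nwhich is exactly what the demonstrator below does with three separate `integrate1d` passes. See: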
\nhttps://github.com/silx-kit/pyFAI/issues/520\n\n", "_____no_output_____" ] ], [ [ "from pyFAI.containers import Integrate1dResult\ndef integrate1d_ng(ai, data, variance, **kwargs):\n \"\"\"Demonstrator for the new azimuthal integrator taking care of the normalization, \n here implemented only on the solid-angle correction\"\"\"\n \n kwargs[\"correctSolidAngle\"]=False\n \n sa = ai.solidAngleArray(ai.detector.shape)\n denom = ai.integrate1d(sa, **kwargs)\n signal = ai.integrate1d(data, **kwargs)\n sigma2 = ai.integrate1d(variance, **kwargs)\n result = Integrate1dResult(denom.radial, \n signal.sum/denom.sum, \n numpy.sqrt(sigma2.sum)/denom.sum)\n result._set_method_called(\"integrate1d_ng\")\n result._set_compute_engine(kwargs.get(\"method\", \"unknown\"))\n result._set_unit(signal.unit)\n result._set_sum(signal.sum)\n result._set_count(signal.count)\n return result\n \n \n ", "_____no_output_____" ], [ "ai.reset()\nfig, ax = subplots()\nres_ng = integrate1d_ng(ai, data, variance=data, **kwarg)\njupyter.plot1d(res_ng, ax=ax)\nax.set_yscale(\"log\")\njupyter.plot1d(ai.integrate1d(data, variance=data, **kwarg), ax=ax)\nfig.legend()\n\n", "_____no_output_____" ], [ "ai.reset()\nresults = []\nfor i in range(nimg):\n data = dataset[i, :, :]\n results.append(integrate1d_ng(ai, data, variance=data, **kwarg))\nc2sa = []\nfor i in range(nimg):\n res1 = results[i]\n for j in range(i):\n c2sa.append(chi2_curves(res1, results[j]))\nc2sa = numpy.array(c2sa)\nh,b = numpy.histogram(c2sa, 100)\nx_data = 0.5*(b[1:] +b[:-1])\ny_data = h/len(c2) # normalized\nfig, ax = subplots()\nax.plot(x_data, y_data, label=\"data\")\ny_sim = chi2_dist.pdf(x_data*(nimg-1), nimg)\ny_sim /= sum(y_sim) #normalized as well\nax.plot(x_data, y_sim, label=\"Chi2 distribution\")\nfig.legend()", "WARNING:pyFAI.ext.splitBBoxCSR:Pixel splitting desactivated !\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
ec97d11a23588a326003a061d1ce1654af0568c9
22,662
ipynb
Jupyter Notebook
Colab_Notebooks/MixMatch_(without_mixup).ipynb
Shubhammawa/MixMatch-Semi-Supervised-Learning
0ca9139d07acd9af38ed05efe2f914fe930ab573
[ "MIT" ]
5
2020-09-12T08:12:29.000Z
2022-02-15T10:48:47.000Z
Colab_Notebooks/MixMatch_(without_mixup).ipynb
Shubhammawa/MixMatch-Semi-Supervised-Learning
0ca9139d07acd9af38ed05efe2f914fe930ab573
[ "MIT" ]
null
null
null
Colab_Notebooks/MixMatch_(without_mixup).ipynb
Shubhammawa/MixMatch-Semi-Supervised-Learning
0ca9139d07acd9af38ed05efe2f914fe930ab573
[ "MIT" ]
1
2021-04-04T03:48:20.000Z
2021-04-04T03:48:20.000Z
31.344398
241
0.494308
[ [ [ "<a href=\"https://colab.research.google.com/github/awl-shubham-mawa/AWL-Internship/blob/master/MixMatch_v3.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "### 1. Imports", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nimport os\nimport time\nimport torch\nimport torch.nn as nn\nimport sys\nimport math\nimport random\n\nimport torchvision\nfrom torchvision import models\nfrom torch.utils.data import Dataset\nfrom torch.utils.data import DataLoader\nfrom torch.utils.data import ConcatDataset\nfrom torch.utils.data import random_split\nfrom torchsummary import summary\n\nfrom torchvision import transforms\nimport torchvision.transforms.functional as TF\nfrom PIL import Image\n\nimport matplotlib.pyplot as plt\nfrom sklearn.metrics import accuracy_score, f1_score", "_____no_output_____" ] ], [ [ "### 2. Filepaths", "_____no_output_____" ] ], [ [ "AAF_TRAIN_PATH = '\n!unzip -q 'extracted_original-20'\nAAF_IMA", "_____no_output_____" ] ], [ [ "### 3. Arguments/Hyperparameters", "_____no_output_____" ] ], [ [ "CUDA = 0\nRANDOM_SEED = 1\nLEARNING_RATE = 0.0001\nNUM_EPOCHS = 3\nBATCH_SIZE = 32\nlambda_u = 1\nNUM_LABELLED = 1000\t#No of labelled examples to be used in MixMatch\nDEVICE = torch.device(\"cuda:%d\" % CUDA)\nprint(torch.cuda.is_available())\nprint(torch.cuda.get_device_name(0))", "_____no_output_____" ] ], [ [ "### 4. AAF Dataset Class", "_____no_output_____" ] ], [ [ "class AAF_Dataset(Dataset):\n\t''' Custom Dataset for loading AAF Dataset images'''\n\n\tdef __init__(self, csv_path, img_dir, transform):\n\t\t\n\t\tdf = pd.read_excel(csv_path)\n\t\tself.img_dir = img_dir\n\t\tself.transform = transform\n\t\tself.csv_path = csv_path\n\t\tself.gender = df['Gender'].values\n\t\tself.filename = df['Image'].values\n\t\n\t#def preprocess(self):\n\t''' Any further preprocessing required in the data\n\t\tcan be performed here'''\n\n\n\tdef __getitem__(self, index):\n\n\t\timg = Image.open(os.path.join(self.img_dir,\n\t\t\t\t\t\t\t\t\tself.filename[index]))\n\t\timg = self.transform(img)\n\t\ty_true = self.gender[index]\n\t\ty_true = torch.tensor(y_true, dtype=torch.float32)\n\t\t\n\t\treturn img, y_true\n\n\tdef __len__(self):\n\t\treturn self.gender.shape[0]", "_____no_output_____" ] ], [ [ "### 5. Transformation", "_____no_output_____" ] ], [ [ "custom_transform = transforms.Compose([transforms.Resize((96,96)),\n\t\t\t\t\t\t\ttransforms.ToTensor()])\t", "_____no_output_____" ] ], [ [ "### 6. 
Sample batch for visualization", "_____no_output_____" ] ], [ [ "sample_batch_size = 4\nsample_dataset = AAF_Dataset(csv_path=AAF_TRAIN_PATH, img_dir=AAF_IMAGE_PATH, transform=custom_transform)\nsample_loader = DataLoader(dataset=sample_dataset, batch_size=sample_batch_size, shuffle=True)\n\ndataiter = iter(sample_loader)\nimages, labels = dataiter.next()\n\nprint(\"Batch shape (images): \",images.shape)\nprint(\"Batch shape (labels): \", labels.shape)\n#print(y_true.shape)\n\n#print(images[0])\n# print(images[0].shape)\n# print(labels[0].item())\n#print(y_true[0])\n\ndef imshow(img, title):\n '''Function imshow: Helper function to display an image'''\n plt.figure(figsize=(sample_batch_size * 4, 4))\n plt.axis('off')\n plt.imshow(np.transpose(img, (1, 2, 0)))\n plt.title(title)\n plt.show()\n\ndef show_batch_images(dataloader):\n '''Function show_batch_images: Helper function to display images with their true ages'''\n images, labels = next(iter(dataloader))\n \n img = torchvision.utils.make_grid(images)\n imshow(img, title = 'Images')\n print(\"Labels: \",labels)\n \n return images, labels\nimages, labels = show_batch_images(sample_loader)", "_____no_output_____" ] ], [ [ "### 7. Datasets and Dataloaders", "_____no_output_____" ] ], [ [ "AAF_train = AAF_Dataset(csv_path=AAF_TRAIN_PATH, img_dir=AAF_IMAGE_PATH, transform=custom_transform)\n\nAAF_train_labelled, AAF_train_unlabelled = random_split(AAF_train, [NUM_LABELLED, len(AAF_train) - NUM_LABELLED])\n\ntrainloader_labelled = DataLoader(AAF_train_labelled, batch_size=BATCH_SIZE, shuffle=True)\ntrainloader_unlabelled = DataLoader(AAF_train_unlabelled, batch_size=BATCH_SIZE, shuffle=True)\n\nAAF_test = AAF_Dataset(csv_path=AAF_TEST_PATH, img_dir=AAF_IMAGE_PATH, transform=custom_transform)\n\ntestloader = DataLoader(AAF_test, batch_size= BATCH_SIZE, shuffle=False)\n\nprint(\"Labelled examples: \" + str(len(AAF_train_labelled)) + \"\\nUnlabelled examples: \" \n + str(len(AAF_train_unlabelled)) + \"\\nTest examples: \" + str(len(AAF_test)))\n\n# dataiter = iter(trainloader_labelled)\n# images, labels = dataiter.next()\n# print(labels[:])\n# print(len(AAF_train_labelled), len(AAF_train_unlabelled))", "_____no_output_____" ] ], [ [ "### 8. 
MixMatch Utilities", "_____no_output_____" ] ], [ [ "def augment_image(batch_img, K = 2):\n\t'''Function augment_image:\n\t\tInput: PIL Image/Torch Tensor\n\t\tOutput: K number of augmented images'''\n\t\n\tbatch_augment_images = []\n\tfor i in range(batch_img.shape[0]):\n\t\timg = batch_img[i]\n\t\timg = TF.to_pil_image(img.cpu())\n\t\timg_1 = TF.to_tensor(TF.adjust_brightness(img, np.random.uniform(0.5, 1.5)))\n\t\timg_2 = TF.to_tensor(TF.adjust_contrast(img, np.random.uniform(0.5, 1.5)))\n\t\timg_3 = TF.to_tensor(TF.adjust_saturation(img, np.random.uniform(0.5, 1.5)))\n\t\t\n\t\timg_4 = TF.to_tensor(TF.hflip(img))\n\t\timg_5 = TF.to_tensor(TF.rotate(img, angle=np.random.uniform(-10,10)))\n\n\t\timg_6 = TF.to_tensor(TF.to_grayscale(img, num_output_channels=3))\n\t\timg_7 = TF.to_tensor(TF.adjust_gamma(img, np.random.uniform(0.5, 1.5)))\n\n\t\trandom_numbers = random.sample(range(1, 8), K)\n\t\timg_dict = {'1': img_1, '2': img_2, '3': img_3, '4': img_4, '5': img_5, '6': img_6, '7': img_7}\n\n\t\taugment_images = []\n\t\tfor i in random_numbers:\n\t\t\taugment_images.append(img_dict[str(i)])\n\t\tbatch_augment_images.append(augment_images)\n\treturn batch_augment_images\n\ndef label_guessing(model, augment_images, device):\n\t''' Function label_guessing\n\t\tInput: Classifier model, K augmentations of the unlabelled data\n\t\tOuput: Calls augment_image function, makes predictions for the K augmentations and averages them to get the guessed\n\t\t\t\tlabels for unlabelled data.\n\t\t'''\n\tpredictions = []\n\n\tfor i in range(0,len(augment_images)):\n\n\t\timg = torch.stack(augment_images[i], dim=0)\n\t\timg = img.to(device)\n\t\tlogits = model(img)\n\t\tprobas = nn.functional.softmax(logits, dim=1)\n\t\tpredictions.append(probas)\n\tpredictions = torch.stack(predictions,dim=0)\n\tq_hat = torch.mean(predictions, dim=1)\n\n\treturn q_hat\n\ndef sharpen(p, T=0.5):\n\tp_sharp = torch.pow(p, 1/T)/(torch.sum(torch.pow(p, 1/T), dim=0))\n\treturn p_sharp\n\ndef mixup(x1,y1,x2,y2,alpha=0.75):\n l = np.random.beta(alpha,alpha)\n l = max(l,1-l)\n x = l * x1 + (1-l) * x2\n y = l* y1 + (1-l) * y2\n return x,y", "_____no_output_____" ] ], [ [ "### 9. MixMatch Dataset Class", "_____no_output_____" ] ], [ [ "class MixMatch_Dataset(Dataset):\n\t'''Supply a batch of labelled and unlabelled data, X and U.'''\n\t\n\tdef __init__(self, Labelled_data, Unlabelled_data):\n\t\tself.Labelled_data = Labelled_data\n\t\tself.Unlabelled_data = Unlabelled_data\n \n\tdef __getitem__(self, index):\n\t\t\n\t\tsize_labelled = len(self.Labelled_data)\n\t\tsize_unlabelled = len(self.Unlabelled_data)\n\t\tif(index < size_labelled):\n\t\t\tl_index = index\n\t\telse:\n\t\t\tl_index = np.random.randint(0, size_labelled)\n\t\tif(index < size_unlabelled):\n\t\t\tu_index = index\n\t\telse:\n\t\t\tu_index = index - size_unlabelled\n\t\t#print(l_index)\n\t\tx = self.Labelled_data[l_index][0]\n\t\ty = self.Labelled_data[l_index][1]\n\t\tu = self.Unlabelled_data[u_index][0]\n\n\t\treturn x, y, u\n\t\n\tdef __len__(self):\n\t\treturn max(len(self.Labelled_data), len(self.Unlabelled_data))\n\nMixMatch_dataset = MixMatch_Dataset(Labelled_data=AAF_train_labelled, Unlabelled_data=AAF_train_unlabelled)\nMixMatch_loader = DataLoader(MixMatch_dataset, batch_size=BATCH_SIZE, shuffle=True)", "_____no_output_____" ] ], [ [ "### 10. Loss Functions", "_____no_output_____" ] ], [ [ "cross_entropy = nn.CrossEntropyLoss(reduction='sum')\nl2_loss = nn.MSELoss(reduction='sum')", "_____no_output_____" ] ], [ [ "### 11. 
Wide-Resnet Model", "_____no_output_____" ] ], [ [ "model_1 = models.wide_resnet50_2(pretrained=True)\nmodel_1.to(DEVICE)\nprint(summary(model_1, (3, 96, 96)))", "_____no_output_____" ] ], [ [ "### 12. Gender Classification Model", "_____no_output_____" ] ], [ [ "class Gender_Classifier(nn.Module):\n def __init__(self):\n super(Gender_Classifier, self).__init__()\n\n self.fc1 = nn.Linear(1000, 100)\n self.fc2 = nn.Linear(100, 10)\n self.fc3 = nn.Linear(10,2)\n\n def forward(self, x):\n x = self.fc1(x)\n x = self.fc2(x)\n logits = self.fc3(x)\n return logits", "_____no_output_____" ] ], [ [ "### 13. Stack classifier onto the Wide-Resnet architecture", "_____no_output_____" ] ], [ [ "model_2 = Gender_Classifier()\nmodel = nn.Sequential(model_1, model_2)\nmodel.to(DEVICE)\nprint(summary(model, (3,96,96)))", "_____no_output_____" ] ], [ [ "### 14. Optimizer", "_____no_output_____" ] ], [ [ "torch.manual_seed(RANDOM_SEED)\ntorch.cuda.manual_seed(RANDOM_SEED)\noptimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE, weight_decay=0.01)", "_____no_output_____" ] ], [ [ "### 15. Model training", "_____no_output_____" ] ], [ [ "start_time = time.time()\nnum_batches = 0\n#costs = []\nfor epoch in range(NUM_EPOCHS):\n\n\tmodel.train()\n\tfor batch_idx, (x, y, u) in enumerate(MixMatch_loader):\n\t\tx = x.to(DEVICE)\n\t\ty = y.to(DEVICE)\n\t\tu = u.to(DEVICE)\n\t\tnum_batches += 1\n\n\t\taugment_images = augment_image(u)\n\t\tq_hat = label_guessing(model, augment_images, device=DEVICE)\n\t\tq = torch.argmax(sharpen(q_hat), dim = 1)\n\t\t\n\t\ty_pred = model(x)\n \n\t\tq_pred_logits = model(u)\n\t\tq_pred_probas = nn.functional.softmax(q_pred_logits, dim=1)\n\t\tq_pred = torch.argmax(q_pred_probas, dim=1)\n\t\t\n\t\tcost_labelled = cross_entropy(y_pred, y.long())\n\t\tcost_unlabelled = l2_loss(q_pred, q.float())\n\n\t\tif(num_batches < 1000):\n\t\t\tramp = num_batches/1000\n\t\telse:\n\t\t\tramp = 1\n\t\tloss = cost_labelled + ramp*lambda_u*cost_unlabelled\n\n\t\toptimizer.zero_grad()\n\n\t\tloss.backward()\n\t\t#costs.append(loss)\n\t\toptimizer.step()\n\t\tif(batch_idx % 50 == 0):\n\t\t\ts = ('Epoch: %03d/%03d | Batch %04d/%04d | Cost: %.4f | Labelled loss: %.4f | Unlabelled loss: %.4f'\n\t\t\t\t% (epoch+1, NUM_EPOCHS, batch_idx,\n\t\t\t\t\tlen(MixMatch_dataset)//BATCH_SIZE, loss, cost_labelled, cost_unlabelled))\n\t\t\tprint(s)\n\t\n\ts = 'Time elapsed: %.2f min' % ((time.time() - start_time)/60)\n\tprint(s)\n\tmodel.eval()\n\ty_true = []\n\ty_pred = []\n\twith torch.set_grad_enabled(False):\n\t\tfor batch_idx, (img, label) in enumerate(testloader):\n\t\t\timg = img.to(DEVICE)\n\t\t\tlabel = label.to(DEVICE)\n\n\t\t\tlogits = model(img)\n\t\t\tprobas = nn.functional.softmax(logits, dim=1)\n\t\t\tpred = torch.argmax(probas, dim=1)\n\t\t\ty_true.extend(label.cpu().numpy())\n\t\t\ty_pred.extend(pred.cpu().numpy())\n\tprint(accuracy_score(y_true, y_pred))\n\tprint(f1_score(y_true, y_pred))", "_____no_output_____" ], [ "model.eval()\ny_true = []\ny_pred = []\nwith torch.set_grad_enabled(False):\n correct_results = 0\n for batch_idx, (img, label) in enumerate(testloader):\n img = img.to(DEVICE)\n label = label.to(DEVICE)\n\n logits = model(img)\n probas = nn.functional.softmax(logits, dim=1)\n pred = torch.argmax(probas, dim=1)\n y_true.extend(label.cpu().numpy())\n y_pred.extend(pred.cpu().numpy())\nprint(accuracy_score(y_true, y_pred))\nprint(f1_score(y_true, y_pred))", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
ec97d23d89ecc44f2d67b4fa0384ea53fb79e133
4,370
ipynb
Jupyter Notebook
pipeline_SfM.ipynb
janchk/Hierarchical-Localization
eeb5aa7f6a60fe76d5f23ef87c8efbebce7b18af
[ "Apache-2.0" ]
1,535
2020-07-20T02:52:56.000Z
2022-03-30T18:11:38.000Z
pipeline_SfM.ipynb
janchk/Hierarchical-Localization
eeb5aa7f6a60fe76d5f23ef87c8efbebce7b18af
[ "Apache-2.0" ]
145
2020-07-24T15:21:23.000Z
2022-03-14T10:34:34.000Z
pipeline_SfM.ipynb
janchk/Hierarchical-Localization
eeb5aa7f6a60fe76d5f23ef87c8efbebce7b18af
[ "Apache-2.0" ]
284
2020-07-20T03:08:57.000Z
2022-03-30T21:54:52.000Z
25.705882
347
0.597483
[ [ [ "%load_ext autoreload\n%autoreload 2\n\nfrom pathlib import Path\n\nfrom hloc import extract_features, match_features, reconstruction, visualization", "_____no_output_____" ] ], [ [ "## Setup\nIn this notebook, we will run SfM reconstruction from scratch on a set of images. We choose the [South-Building dataset](https://openaccess.thecvf.com/content_cvpr_2013/html/Hane_Joint_3D_Scene_2013_CVPR_paper.html) - we will download it later. First, we define some paths.", "_____no_output_____" ] ], [ [ "dataset = Path('datasets/sfm_South-Building/')\nimages = dataset / 'South-Building/images/'\n\noutputs = Path('outputs/sfm/')\nsfm_pairs = outputs / 'pairs-exhaustive.txt' # exhaustive matching\nsfm_dir = outputs / 'sfm_superpoint+superglue'\n\nfeature_conf = extract_features.confs['superpoint_aachen']\nmatcher_conf = match_features.confs['superglue']", "_____no_output_____" ] ], [ [ "## Download the dataset\nThe dataset is simply a set of images. The intrinsic parameters will be extracted from the EXIF data, and refined with SfM.", "_____no_output_____" ] ], [ [ "%%bash -s \"$dataset\"\nwget http://cvg.ethz.ch/research/local-feature-evaluation/South-Building.zip -P $1\nunzip $1/South-Building.zip -d $1", "_____no_output_____" ] ], [ [ "## Extract local features", "_____no_output_____" ] ], [ [ "feature_path = extract_features.main(feature_conf, images, outputs)", "_____no_output_____" ] ], [ [ "## Exhaustive matching\nSince the dataset is small, we can match all $\\frac{n(n-1)}{2}$ images pairs. To do so, we pass the argument `exhaustive=True` and make sure that the pair file does not exist yet. If your dataset is larger, exhaustive matching might take a long time - consider selecting fewer pairs using image retrieval and `hloc/pairs_from_retrieval.py`.", "_____no_output_____" ] ], [ [ "match_path = match_features.main(\n matcher_conf, sfm_pairs, feature_conf['output'], outputs, exhaustive=True)", "_____no_output_____" ] ], [ [ "## SfM reconstruction\nRun COLMAP on the features and matches.", "_____no_output_____" ] ], [ [ "reconstruction.main(sfm_dir, images, sfm_pairs, feature_path, match_path)", "_____no_output_____" ] ], [ [ "## Visualization\nWe visualize some of the registered images, and color their keypoint by visibility, track length, or triangulated depth.", "_____no_output_____" ] ], [ [ "visualization.visualize_sfm_2d(sfm_dir, images, color_by='visibility', n=5)", "_____no_output_____" ], [ "visualization.visualize_sfm_2d(sfm_dir, images, color_by='track_length', n=5)", "_____no_output_____" ], [ "visualization.visualize_sfm_2d(sfm_dir, images, color_by='depth', n=5)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
ec97e75bd8a4b0f557bf32f5abf7f1f37e947ae5
100,379
ipynb
Jupyter Notebook
IA_Eq5/PIA_IA_1raRN.ipynb
iracheta827/InteligenciaArtificial
1fa879705eb643e8d1412c4e24697c7b91a73467
[ "MIT" ]
null
null
null
IA_Eq5/PIA_IA_1raRN.ipynb
iracheta827/InteligenciaArtificial
1fa879705eb643e8d1412c4e24697c7b91a73467
[ "MIT" ]
null
null
null
IA_Eq5/PIA_IA_1raRN.ipynb
iracheta827/InteligenciaArtificial
1fa879705eb643e8d1412c4e24697c7b91a73467
[ "MIT" ]
null
null
null
175.181501
18,626
0.832575
[ [ [ "<a href=\"https://colab.research.google.com/github/iracheta827/InteligenciaArtificial/blob/main/PIA_IA_1raRN.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "###Kevin Woge Rivera 1834672\n###Benito Briones Bautista 1838682\n###Jose Emanuel Martinez Rodriguez 1851368\n###Gregorio Missael Iracheta Arias 1851846", "_____no_output_____" ] ], [ [ "from tensorflow.keras.datasets import cifar100\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2D\nfrom tensorflow.keras.losses import sparse_categorical_crossentropy\nfrom tensorflow.keras.optimizers import Adam\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "# Model configuration\nbatch_size = 50\nimg_width, img_height, img_num_channels = 32, 32, 3\nloss_function = sparse_categorical_crossentropy\nno_classes = 100\nno_epochs = 50\noptimizer = Adam(learning_rate=0.0001)\nvalidation_split = 0.2\nverbosity = 1", "_____no_output_____" ], [ "# Load CIFAR-10 data\n(input_train, target_train), (input_test, target_test) = cifar100.load_data(\"coarse\")", "Downloading data from https://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz\n169009152/169001437 [==============================] - 2s 0us/step\n169017344/169001437 [==============================] - 2s 0us/step\n" ], [ "plt.imshow(input_train[0])\nprint(target_train[0])\nprint(input_train[0])", "[11]\n[[[255 255 255]\n [255 255 255]\n [255 255 255]\n ...\n [195 205 193]\n [212 224 204]\n [182 194 167]]\n\n [[255 255 255]\n [254 254 254]\n [254 254 254]\n ...\n [170 176 150]\n [161 168 130]\n [146 154 113]]\n\n [[255 255 255]\n [254 254 254]\n [255 255 255]\n ...\n [189 199 169]\n [166 178 130]\n [121 133 87]]\n\n ...\n\n [[148 185 79]\n [142 182 57]\n [140 179 60]\n ...\n [ 30 17 1]\n [ 65 62 15]\n [ 76 77 20]]\n\n [[122 157 66]\n [120 155 58]\n [126 160 71]\n ...\n [ 22 16 3]\n [ 97 112 56]\n [141 161 87]]\n\n [[ 87 122 41]\n [ 88 122 39]\n [101 134 56]\n ...\n [ 34 36 10]\n [105 133 59]\n [138 173 79]]]\n" ], [ "# Determine shape of the data\ninput_shape = (img_width, img_height, img_num_channels)", "_____no_output_____" ], [ "# Parse numbers as floats\ninput_train = input_train.astype('float32')\ninput_test = input_test.astype('float32')\n\n# Normalize data\ninput_train = input_train / 255\ninput_test = input_test / 255", "_____no_output_____" ], [ "# Create the model\nmodel = Sequential()\nmodel.add(Conv2D(128, kernel_size=(3, 3), activation='relu', input_shape=input_shape))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\nmodel.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\nmodel.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\nmodel.add(Flatten())\nmodel.add(Dense(512, activation='relu'))\nmodel.add(Dense(256, activation='relu'))\nmodel.add(Dense(no_classes, activation='softmax'))\n\nmodel.summary()", "Model: \"sequential\"\n_________________________________________________________________\n Layer (type) Output Shape Param # \n=================================================================\n conv2d (Conv2D) (None, 30, 30, 128) 3584 \n \n max_pooling2d (MaxPooling2D (None, 15, 15, 128) 0 \n ) \n \n conv2d_1 (Conv2D) (None, 13, 13, 128) 147584 \n \n max_pooling2d_1 (MaxPooling (None, 6, 6, 128) 0 \n 2D) \n \n conv2d_2 (Conv2D) (None, 4, 4, 128) 147584 \n \n max_pooling2d_2 (MaxPooling (None, 2, 2, 128) 0 
\n 2D) \n \n flatten (Flatten) (None, 512) 0 \n \n dense (Dense) (None, 512) 262656 \n \n dense_1 (Dense) (None, 256) 131328 \n \n dense_2 (Dense) (None, 100) 25700 \n \n=================================================================\nTotal params: 718,436\nTrainable params: 718,436\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "# Compile the model\nmodel.compile(loss=loss_function,\n optimizer=optimizer,\n metrics=['accuracy'])\n\n# Fit data to model\nhistory = model.fit(input_train, target_train,\n batch_size=batch_size,\n epochs=no_epochs,\n verbose=verbosity,\n validation_split=validation_split)", "Epoch 1/50\n800/800 [==============================] - 42s 16ms/step - loss: 2.8408 - accuracy: 0.1438 - val_loss: 2.5830 - val_accuracy: 0.2085\nEpoch 2/50\n800/800 [==============================] - 12s 16ms/step - loss: 2.4518 - accuracy: 0.2476 - val_loss: 2.3391 - val_accuracy: 0.2798\nEpoch 3/50\n800/800 [==============================] - 12s 16ms/step - loss: 2.2723 - accuracy: 0.3003 - val_loss: 2.2604 - val_accuracy: 0.2945\nEpoch 4/50\n800/800 [==============================] - 12s 16ms/step - loss: 2.1595 - accuracy: 0.3341 - val_loss: 2.1324 - val_accuracy: 0.3386\nEpoch 5/50\n800/800 [==============================] - 12s 15ms/step - loss: 2.0757 - accuracy: 0.3584 - val_loss: 2.0997 - val_accuracy: 0.3541\nEpoch 6/50\n800/800 [==============================] - 13s 16ms/step - loss: 2.0027 - accuracy: 0.3833 - val_loss: 2.0010 - val_accuracy: 0.3791\nEpoch 7/50\n800/800 [==============================] - 12s 15ms/step - loss: 1.9383 - accuracy: 0.4011 - val_loss: 1.9597 - val_accuracy: 0.3935\nEpoch 8/50\n800/800 [==============================] - 12s 15ms/step - loss: 1.8852 - accuracy: 0.4166 - val_loss: 1.9169 - val_accuracy: 0.4082\nEpoch 9/50\n800/800 [==============================] - 12s 16ms/step - loss: 1.8309 - accuracy: 0.4311 - val_loss: 1.8731 - val_accuracy: 0.4159\nEpoch 10/50\n800/800 [==============================] - 12s 15ms/step - loss: 1.7862 - accuracy: 0.4450 - val_loss: 1.8857 - val_accuracy: 0.4137\nEpoch 11/50\n800/800 [==============================] - 12s 16ms/step - loss: 1.7480 - accuracy: 0.4563 - val_loss: 1.8333 - val_accuracy: 0.4300\nEpoch 12/50\n800/800 [==============================] - 13s 16ms/step - loss: 1.7021 - accuracy: 0.4701 - val_loss: 1.8081 - val_accuracy: 0.4411\nEpoch 13/50\n800/800 [==============================] - 12s 15ms/step - loss: 1.6659 - accuracy: 0.4807 - val_loss: 1.7868 - val_accuracy: 0.4456\nEpoch 14/50\n800/800 [==============================] - 12s 15ms/step - loss: 1.6263 - accuracy: 0.4914 - val_loss: 1.7927 - val_accuracy: 0.4460\nEpoch 15/50\n800/800 [==============================] - 12s 16ms/step - loss: 1.5883 - accuracy: 0.5044 - val_loss: 1.7651 - val_accuracy: 0.4574\nEpoch 16/50\n800/800 [==============================] - 12s 15ms/step - loss: 1.5438 - accuracy: 0.5168 - val_loss: 1.7414 - val_accuracy: 0.4669\nEpoch 17/50\n800/800 [==============================] - 12s 15ms/step - loss: 1.5129 - accuracy: 0.5267 - val_loss: 1.7779 - val_accuracy: 0.4596\nEpoch 18/50\n800/800 [==============================] - 12s 15ms/step - loss: 1.4774 - accuracy: 0.5356 - val_loss: 1.7219 - val_accuracy: 0.4703\nEpoch 19/50\n800/800 [==============================] - 12s 16ms/step - loss: 1.4411 - accuracy: 0.5479 - val_loss: 1.7263 - val_accuracy: 0.4723\nEpoch 20/50\n800/800 [==============================] - 12s 15ms/step - loss: 1.4060 - accuracy: 0.5570 
- val_loss: 1.7197 - val_accuracy: 0.4787\nEpoch 21/50\n800/800 [==============================] - 12s 15ms/step - loss: 1.3722 - accuracy: 0.5696 - val_loss: 1.7344 - val_accuracy: 0.4724\nEpoch 22/50\n800/800 [==============================] - 13s 16ms/step - loss: 1.3346 - accuracy: 0.5820 - val_loss: 1.7135 - val_accuracy: 0.4851\nEpoch 23/50\n800/800 [==============================] - 12s 15ms/step - loss: 1.3075 - accuracy: 0.5893 - val_loss: 1.7063 - val_accuracy: 0.4875\nEpoch 24/50\n800/800 [==============================] - 13s 16ms/step - loss: 1.2676 - accuracy: 0.6013 - val_loss: 1.7009 - val_accuracy: 0.4861\nEpoch 25/50\n800/800 [==============================] - 13s 16ms/step - loss: 1.2361 - accuracy: 0.6116 - val_loss: 1.7477 - val_accuracy: 0.4800\nEpoch 26/50\n800/800 [==============================] - 13s 16ms/step - loss: 1.1991 - accuracy: 0.6215 - val_loss: 1.7035 - val_accuracy: 0.4894\nEpoch 27/50\n800/800 [==============================] - 12s 15ms/step - loss: 1.1638 - accuracy: 0.6326 - val_loss: 1.7557 - val_accuracy: 0.4850\nEpoch 28/50\n800/800 [==============================] - 13s 16ms/step - loss: 1.1339 - accuracy: 0.6417 - val_loss: 1.7605 - val_accuracy: 0.4870\nEpoch 29/50\n800/800 [==============================] - 12s 15ms/step - loss: 1.0953 - accuracy: 0.6528 - val_loss: 1.7561 - val_accuracy: 0.4897\nEpoch 30/50\n800/800 [==============================] - 13s 16ms/step - loss: 1.0608 - accuracy: 0.6651 - val_loss: 1.7532 - val_accuracy: 0.4939\nEpoch 31/50\n800/800 [==============================] - 12s 16ms/step - loss: 1.0278 - accuracy: 0.6733 - val_loss: 1.7661 - val_accuracy: 0.4909\nEpoch 32/50\n800/800 [==============================] - 12s 15ms/step - loss: 0.9939 - accuracy: 0.6857 - val_loss: 1.8339 - val_accuracy: 0.4850\nEpoch 33/50\n800/800 [==============================] - 12s 15ms/step - loss: 0.9610 - accuracy: 0.6943 - val_loss: 1.8131 - val_accuracy: 0.4883\nEpoch 34/50\n800/800 [==============================] - 12s 15ms/step - loss: 0.9264 - accuracy: 0.7076 - val_loss: 1.8202 - val_accuracy: 0.4935\nEpoch 35/50\n800/800 [==============================] - 13s 16ms/step - loss: 0.8957 - accuracy: 0.7143 - val_loss: 1.8420 - val_accuracy: 0.4913\nEpoch 36/50\n800/800 [==============================] - 13s 16ms/step - loss: 0.8598 - accuracy: 0.7278 - val_loss: 1.8617 - val_accuracy: 0.4872\nEpoch 37/50\n800/800 [==============================] - 12s 15ms/step - loss: 0.8293 - accuracy: 0.7370 - val_loss: 1.8926 - val_accuracy: 0.4902\nEpoch 38/50\n800/800 [==============================] - 12s 15ms/step - loss: 0.7975 - accuracy: 0.7484 - val_loss: 1.9028 - val_accuracy: 0.4922\nEpoch 39/50\n800/800 [==============================] - 13s 16ms/step - loss: 0.7658 - accuracy: 0.7579 - val_loss: 2.0005 - val_accuracy: 0.4754\nEpoch 40/50\n800/800 [==============================] - 13s 16ms/step - loss: 0.7319 - accuracy: 0.7687 - val_loss: 2.0112 - val_accuracy: 0.4826\nEpoch 41/50\n800/800 [==============================] - 13s 16ms/step - loss: 0.7010 - accuracy: 0.7799 - val_loss: 2.0138 - val_accuracy: 0.4913\nEpoch 42/50\n800/800 [==============================] - 12s 15ms/step - loss: 0.6671 - accuracy: 0.7908 - val_loss: 2.0688 - val_accuracy: 0.4834\nEpoch 43/50\n800/800 [==============================] - 12s 15ms/step - loss: 0.6356 - accuracy: 0.7998 - val_loss: 2.0789 - val_accuracy: 0.4901\nEpoch 44/50\n800/800 [==============================] - 13s 16ms/step - loss: 0.6063 - accuracy: 0.8086 - val_loss: 2.1340 - 
val_accuracy: 0.4878\nEpoch 45/50\n800/800 [==============================] - 13s 16ms/step - loss: 0.5822 - accuracy: 0.8184 - val_loss: 2.1973 - val_accuracy: 0.4764\nEpoch 46/50\n800/800 [==============================] - 13s 16ms/step - loss: 0.5514 - accuracy: 0.8266 - val_loss: 2.2183 - val_accuracy: 0.4844\nEpoch 47/50\n800/800 [==============================] - 13s 16ms/step - loss: 0.5122 - accuracy: 0.8417 - val_loss: 2.3115 - val_accuracy: 0.4794\nEpoch 48/50\n800/800 [==============================] - 12s 15ms/step - loss: 0.4912 - accuracy: 0.8468 - val_loss: 2.3543 - val_accuracy: 0.4827\nEpoch 49/50\n800/800 [==============================] - 13s 16ms/step - loss: 0.4631 - accuracy: 0.8562 - val_loss: 2.3801 - val_accuracy: 0.4836\nEpoch 50/50\n800/800 [==============================] - 13s 16ms/step - loss: 0.4299 - accuracy: 0.8666 - val_loss: 2.4457 - val_accuracy: 0.4791\n" ], [ "# Generate generalization metricsZ\nscore = model.evaluate(input_test, target_test, verbose=0)\nprint(f'Test loss: {score[0]} / Test accuracy: {score[1]}')\n\n# Visualize history\n# Plot history: Loss\nplt.plot(history.history['val_loss'])\nplt.title('Validation loss history')\nplt.ylabel('Loss value')\nplt.xlabel('No. epoch')\nplt.show()\n\n# Plot history: Accuracy\nplt.plot(history.history['val_accuracy'])\nplt.title('Validation accuracy history')\nplt.ylabel('Accuracy value (%)')\nplt.xlabel('No. epoch')\nplt.show()", "Test loss: 2.4387195110321045 / Test accuracy: 0.48030000925064087\n" ], [ "%matplotlib inline\nimport matplotlib.image as mpimg\nimport matplotlib.pyplot as plt\n#-----------------------------------------------------------\n# Retrieve a list of list results on training and test data\n# sets for each training epoch\n#-----------------------------------------------------------\nacc=history.history['accuracy']\nval_acc=history.history['val_accuracy']\nloss=history.history['loss']\nval_loss=history.history['val_loss']\n \nepochs=range(len(acc)) # Get number of epochs\n \n#------------------------------------------------\n# Plot training and validation accuracy per epoch\n#------------------------------------------------\nplt.plot(epochs, acc, 'r', \"Training Accuracy\")\nplt.plot(epochs, val_acc, 'b', \"Validation Accuracy\")\nplt.title('Training and validation accuracy')\nplt.figure()\n \n#------------------------------------------------\n# Plot training and validation loss per epoch\n#------------------------------------------------\nplt.plot(epochs, loss, 'r', \"Training Loss\")\nplt.plot(epochs, val_loss, 'b', \"Validation Loss\")\nplt.figure()", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ec97e80f1dac5713a4d90dfafa1cf4914f0f556c
6,834
ipynb
Jupyter Notebook
notebook/1-2-2-AccelerazioneAutomobile.ipynb
POSS-UniMe/simple-physics-with-Python-ITA
3c59fff7005d6dc89e255c6fc1c40a326dd1c40a
[ "BSD-3-Clause" ]
2
2020-10-26T11:26:32.000Z
2022-02-28T14:06:26.000Z
notebook/1-2-2-AccelerazioneAutomobile.ipynb
POSS-UniMe/simple-physics-with-Python-ITA
3c59fff7005d6dc89e255c6fc1c40a326dd1c40a
[ "BSD-3-Clause" ]
null
null
null
notebook/1-2-2-AccelerazioneAutomobile.ipynb
POSS-UniMe/simple-physics-with-Python-ITA
3c59fff7005d6dc89e255c6fc1c40a326dd1c40a
[ "BSD-3-Clause" ]
null
null
null
35.968421
368
0.582529
[ [ [ "# Accelerazione di un'automobile\n\nUn certo autoveicolo con motore diesel può passare da $0$ a $100~km/h$ in $10$ secondi. Supponendo per semplicità che il **moto** sia rettilineo **uniformemente accelerato** (cioè con accelerazione costante) calcolare il valore dell'accelerazione e lo spazio percorso nei primi $10$ secondi per raggiungere la velocità di $100~km/h$.\n\n\n### Discussione del problema\n\n#### Calcolo dell'accelerazione\nSupponiamo che il moto si svolga lungo un asse $x$ orizzontale, con origine $O$ nella posizione iniziale dell'automobile, che immaginiamo per semplicità come un oggetto puntiforme.\n\nSe l'automobile si muove con accelerazione costante, per la definizione di accelerazione si ha\n\n$$ a = \\dfrac{v - v_0}{t - t_0} $$\n\nIn questo caso, per calcolare l'accelerazione in $m/s^2$ occorre prima convertire il valore della velocità finale da $km/h$ a $m/s$.\n\n#### Calcolo dello spazio percorso\nNel moto unidimensionale con accelerazione costante si ha\n\n$$ x = x_0 + v_0(t-t_0) + \\dfrac{1}{2}a (t-t_0)^2 $$\n\nPer $x_0 = 0$ e $v_0 = 0$, l'equazione che esprime $x$ in funzione del tempo $t$ (**legge oraria**) diventa\n\n$$ x = \\dfrac{1}{2}a (t-t_0)^2 $$\n\nSostituendo il valore di $a$ ottenuto nella prima parte del problema, si può ricavare il valore di $x$, che corrisponde allo spazio percorso durante i primi $(t-t_0)$ secondi.\n\nCon il calcolo simbolico si ottiene\n$$ x = \\dfrac{1}{2} \\dfrac{v - v_0}{t - t_0} (t-t_0)^2 = \\dfrac{1}{2} (v - v_0) (t-t_0) $$\n\nIn questo problema $x$ rappresenta la distanza che bisogna percorrere con accelerazione costante $a$ per arrivare alla velocità di 100 km/h.\n\n\n### Calcoli con Python\n\nPer calcolare l'accelerazione e lo spazio percorso utilizziamo le equazioni che abbiamo ricavato. 
Poichè il tempo necessario per andare da 0 a 100 km/h dipende dal tipo di veicolo che consideriamo, utilizziamo uno **slider** in modo da poter assegnare valori diversi alla variabile che rappresenta il tempo e stimare di conseguenza i risultati corrispondenti.\n\n> Per lavorare in modalità interattiva ed eseguire il codice Python in ambiente **binder** online,\n\n[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/POSS-UniMe/simple-physics-with-Python-ITA/master?filepath=notebook%2F1-2-2-AccelerazioneAutomobile.ipynb)", "_____no_output_____" ] ], [ [ "import ipywidgets as widgets\n\nv = 100/3.6 # velocità finale (m/s)\nv0 = 0 # velocità iniziale (m/s)\n\nprint()\n\nsliderDt = (\n widgets.FloatSlider(min = 0, max = 20, step = 0.1, \n value = 10, description = 'Tempo (s)')) # tempo (s)\n\ndef calculate(Dt):\n a = (v-v0)/Dt # accelerazione (m/s^2)\n print('\\nAccelerazione = {0:0.3f} m/s^2 \\n'.format(a))\n Dx = 0.5*(v-v0)*Dt # spazio percorso (m)\n print('Spazio percorso nei primi {0:0.2f} secondi = {1:0.3f} m'.format(Dt,Dx))\n print() \n \nwidgets.interact(calculate, Dt = sliderDt)", "\n" ] ], [ [ "Car & model | Time 0 to 100 km/h | Source\n-------------|---------------|--------------\nTesla model 3 | 3.4 s | [Tesla](https://www.tesla.com/it_it/model3)\nRenault Kadjar dCi 115 cv Sport Edition | 11.7 s | [automoto.it](https://www.automoto.it/catalogo/renault/kadjar/dci-8v-115cv-sport-edition/132970/amp)\nToyota 1.5 Hybrid VVT-i | 9.7 s | [Toyota](https://www.toyota.it/)\nFiat Bravo 1.6 Multijet 16V 105 CV | 11.2 s | [Al volante](https://www.alvolante.it/)\nPorsche Taycan Turbo S | 2.8 s | [Porsche](https://www.porsche.com/italy/models/taycan/taycan-models/taycan-turbo-s/)", "_____no_output_____" ], [ "### Get a feel of\nConfrontare il valore dell'accelerazione $a$ con il valore dell'accelerazione che caratterizza la caduta libera, cioè l'accelerazione di gravità.\n\n&nbsp;\n", "_____no_output_____" ], [ "## What we have learned\n*Fisica*\n* Utilizzare le equazioni del moto con accelerazione costante\n* Stimare il valore dell'accelerazione in un tipo di fenomeno che riguarda la vita quotidiana\n\n*Python*\n* Widgets\n* Tabella contenente link", "_____no_output_____" ], [ "## References\n#### Equazioni del moto uniformemente accelerato\n\n&nbsp;\n\n### Copyright and License\n--------------------------\n(c) 2020 Andrea Mandanici, Marco Guarnera, Giuseppe Mandaglio, Giovanni Pirrotta. All content is under Creative Common Attribution <a rel=\"license\" href=\"https://creativecommons.org/licenses/by/4.0\" > CC BY 4.0 <a/> \n and all code is under [BSD 3-Clause License](https://opensource.org/licenses/BSD-3-Clause)\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ] ]
ec97fdfa242096f236f7ea4274520349b8332523
309,141
ipynb
Jupyter Notebook
notebooks/off_the_ball_contribution_PPC.ipynb
opunsoars/friends_of_tracking
dad7242e63fe1c9dfc5b6cf24f3de0ff1a33f081
[ "MIT" ]
2
2020-11-20T03:59:30.000Z
2021-01-13T15:34:58.000Z
notebooks/off_the_ball_contribution_PPC.ipynb
opunsoars/friends_of_tracking
dad7242e63fe1c9dfc5b6cf24f3de0ff1a33f081
[ "MIT" ]
null
null
null
notebooks/off_the_ball_contribution_PPC.ipynb
opunsoars/friends_of_tracking
dad7242e63fe1c9dfc5b6cf24f3de0ff1a33f081
[ "MIT" ]
1
2020-05-11T19:18:49.000Z
2020-05-11T19:18:49.000Z
125.923014
73,276
0.792606
[ [ [ "import warnings\nwarnings.filterwarnings('ignore')\nimport sys\nsys.path.insert(0,'/media/csivsw/crossOS/playground/friends_of_tracking/src/friends_of_tracking/LaurieOnTracking')\n%matplotlib inline\n\nimport Metrica_IO as mio\nimport Metrica_Viz as mviz\nimport Metrica_Velocities as mvel\nimport Metrica_PitchControl as mpc\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport modin.pandas as pd\npd.options.display.max_columns = None\n\nfrom tqdm.auto import tqdm\n\ntqdm.pandas(desc=\"my bar!\")", "_____no_output_____" ], [ "# set up initial path to data\nDATADIR = '/media/csivsw/crossOS/playground/friends_of_tracking/datahub/metrica_sports/sample-data/data'\ngame_id = 2 # let's look at sample match 2\n\n# first get model parameters\nparams = mpc.default_model_params(3)", "_____no_output_____" ], [ "# read in the event data\nevents = mio.read_event_data(DATADIR,game_id)\n\n# read in tracking data\ntracking_home = mio.tracking_data(DATADIR,game_id,'Home')\ntracking_away = mio.tracking_data(DATADIR,game_id,'Away')\n\n# Convert positions from metrica units to meters (note change in Metrica's coordinate system since the last lesson)\ntracking_home = mio.to_metric_coordinates(tracking_home)\ntracking_away = mio.to_metric_coordinates(tracking_away)\nevents = mio.to_metric_coordinates(events)\n\n# reverse direction of play in the second half so that home team is always attacking from right->left\ntracking_home,tracking_away,events = mio.to_single_playing_direction(tracking_home,tracking_away,events)\n\n# Calculate player velocities\ntracking_home = mvel.calc_player_velocities(tracking_home,smoothing=True)\ntracking_away = mvel.calc_player_velocities(tracking_away,smoothing=True)\n", "Reading team: home\nReading team: away\n" ] ], [ [ "## Plot the goal of interest", "_____no_output_____" ] ], [ [ "def get_goal_lead_up(goal_id,events):\n team = events.loc[goal_id,'Team']\n for i,row in events.iloc[goal_id-1::-1,:2].iterrows():\n # if row['Type']!='PASS':\n if row['Team']!=team:\n break\n return events.iloc[i+1:goal_id+1,:]", "_____no_output_____" ], [ "# get all shots and goals in the match\nshots = events[events['Type']=='SHOT']\ngoals = shots[shots['Subtype'].str.contains('-GOAL')].copy()\n\ngoals", "_____no_output_____" ], [ "g1 = get_goal_lead_up(198,events) # Home\ng2 = get_goal_lead_up(823,events) # Away\ng3 = get_goal_lead_up(1118,events) # Home\ng4 = get_goal_lead_up(1671,events) # Away\ng5 = get_goal_lead_up(1723,events) # Home\n\nprint (g1.shape, g2.shape, g3.shape, g4.shape, g5.shape)", "(11, 14) (8, 14) (12, 14) (2, 14) (2, 14)\n" ] ], [ [ "We can look at the first three goals may be. 
As they have at least a chain of 5+ events (may be even all passes)", "_____no_output_____" ], [ "#### Goal 1 : HOME -> EVENT 198", "_____no_output_____" ] ], [ [ "# plot the 3 events leading up to the first goal\nmviz.plot_events( g1, color='k', indicators = ['Marker','Arrow'], annotate=True )", "_____no_output_____" ], [ "# generate a video of this g1 chain\n# PLOTDIR = '/media/csivsw/crossOS/playground/friends_of_tracking/output/'\n# mviz.save_match_clip(tracking_home.iloc[11641:12212],\n# tracking_away.iloc[11641:12212],\n# PLOTDIR,fname='home_goal_1',include_player_velocities=False, \n# params=params,events=events, pitch_control=True,attacking='Home')", "_____no_output_____" ], [ "from IPython.display import HTML\n\nHTML(\"\"\"\n <video alt=\"test\" controls>\n <source src=\"/media/csivsw/crossOS/playground/friends_of_tracking/output/home_goal_1.mp4\" type=\"video/mp4\">\n </video>\n\"\"\")", "_____no_output_____" ], [ "g1", "_____no_output_____" ], [ "# off the ball run...\n# we can assume that an OTB run is begun 3-5 seconds before a pass is completed.\n# so in g1, if we look at event 196, pass from #8 to #1 ended at (-37.10,23.80) at f12146\n# while the pass began at f1206, we should track the movement of #1 from a 100 frames before.\n# f12146 - 100 = f12046\n# we can plot the PPCF of each player for these 100 frames", "_____no_output_____" ] ], [ [ "event_id = 196\nmpc.generate_pitch_control_for_event(event_id, events, \n tracking_home, tracking_away, \n params, field_dimen = (106.,68.,), n_grid_cells_x = 50)", "_____no_output_____" ], [ "event_id = 196\ntarget_position = np.array([g1.loc[event_id,['End X']],g1.loc[event_id,['End Y']]])\nteam = g1.loc[event_id].Team\nprint (team)\nframe = g1.loc[event_id]['Start Frame']\nprint (frame)\nPPCFa,xgrid,ygrid = mpc.generate_pitch_control_for_frame(tracking_home.loc[frame],\n tracking_away.loc[frame], \n frame,params, attacking = team, \n field_dimen = (106.,68.,),\n n_grid_cells_x = 50)\n", "_____no_output_____" ] ], [ [ "event_id = 196\ntarget_position = np.array([g1.loc[event_id,['End X']],g1.loc[event_id,['End Y']]])\nteam = g1.loc[event_id].Team\n\npass_start_frame = g1.loc[event_id]['Start Frame']\npass_end_frame = g1.loc[event_id]['End Frame']\nn=500\n# lookup n frames prior to pass end frame\nhometeam = tracking_home.loc[pass_end_frame-n:pass_end_frame]\nawayteam = tracking_away.loc[pass_end_frame-n:pass_end_frame]\n\npitch_control_xy_df = pd.DataFrame()\nfor frame in tqdm(range(pass_end_frame-n,pass_end_frame)):\n # get the details of the frame: team in possession, ball_start_position)\n ball_start_pos = np.array(hometeam.loc[frame,['ball_x', 'ball_y']])\n # ball_start_pos = np.array([g1.loc[event_id]['Start X'],g1.loc[event_id]['Start Y']])\n# print (np.linalg.norm( target_position - ball_start_pos )/params['average_ball_speed'])\n # initialise player positions and velocities for pitch control calc (so that we're not repeating this at each grid cell position)\n if team=='Home':\n attacking_players = mpc.initialise_players(hometeam.loc[frame],'Home',params)\n defending_players = mpc.initialise_players(awayteam.loc[frame],'Away',params)\n opp='Away'\n elif team=='Away':\n defending_players = mpc.initialise_players(hometeam.loc[frame],'Home',params)\n attacking_players = mpc.initialise_players(awayteam.loc[frame],'Away',params)\n opp='Home'\n else:\n assert False, \"Team in possession must be either home or away\"\n \n\n PPCFatt, PPCFdef, Patt, Pdef= mpc.calculate_pitch_control_at_target(target_position, \n attacking_players, 
defending_players, \n ball_start_pos, params, \n return_individual=True)\n \n pitch_control_xy_df.loc[frame,'PPCFatt'] = PPCFatt\n pitch_control_xy_df.loc[frame,'PPCFdef'] = PPCFdef\n pitch_control_xy_df.loc[frame,'attacking'] = team\n for pid, pc in Patt.items():\n pitch_control_xy_df.loc[frame,\"%s_%s_%s\" % (team,pid, 'pc')] = pc\n for pid, pc in Pdef.items():\n pitch_control_xy_df.loc[frame,\"%s_%s_%s\" % (opp,pid, 'pc')] = pc\n", "100%|██████████| 500/500 [00:45<00:00, 11.01it/s]\n" ], [ "sum(Patt.values()),PPCFatt", "_____no_output_____" ], [ "for pid, pc in Patt.items():\n print (pid, pc,\"%s_%s_%s\" % (team,pid, 'pc') )", "_____no_output_____" ], [ "pitch_control_xy_df.head(3)", "_____no_output_____" ], [ "hometeam.loc[pass_end_frame-n:pass_end_frame]", "_____no_output_____" ], [ "x = pitch_control_xy_df.index.values\ny = np.array([pitch_control_xy_df.PPCFatt.values,\n pitch_control_xy_df.PPCFdef.values\n ])\n\nfig,ax = plt.subplots(figsize=(20, 7))\nplt.style.use('fivethirtyeight')\nplt.stackplot(x,y, labels=['Att: Home','Def: Away'], alpha=0.7)\nplt.legend(loc='lower right')\n\nax.axvline(pass_start_frame, ls='-',c = 'black')\nax.text(pass_start_frame+1,1., \"pass_start_frame\")\n\nax.axvline(pass_end_frame, ls='-',c = 'black')\nax.text(pass_end_frame+1,1., \"pass_end_frame\")\nplt.show()\n\n", "_____no_output_____" ], [ "# [pitch_control_xy_df.loc[col].values for col in att_pc_cols][0]\n# len([pitch_control_xy_df[col].values for col in att_pc_cols])\natt_pc_cols", "_____no_output_____" ], [ "x = pitch_control_xy_df.index.values\ny = np.array([\n pitch_control_xy_df['Home_8_pc'].fillna(0).values,\n pitch_control_xy_df['Home_1_pc'].fillna(0).values\n ])\n# y = pitch_control_xy_df[att_pc_cols[1]].values\n\nfig,ax = plt.subplots(figsize=(20, 7))\nplt.style.use('fivethirtyeight')\nplt.stackplot(x,y, labels=['Home_8_pc','Home_1_pc'], alpha=0.7)\nplt.legend(loc='top left')", "_____no_output_____" ], [ "att_pc_cols = [c for c in pitch_control_xy_df.columns if c.startswith(team) & c.endswith('pc')]\n\nx = pitch_control_xy_df.index.values\ny = np.array([pitch_control_xy_df[col].fillna(0).values for col in att_pc_cols])\n# y = pitch_control_xy_df[att_pc_cols[1]].values\n\nfig,ax = plt.subplots(figsize=(20, 7))\nplt.style.use('fivethirtyeight')\nplt.stackplot(x,y, labels=att_pc_cols, alpha=0.7)\nplt.legend(loc='top left')\n\nax.axvline(pass_start_frame, ls='-',c = 'black')\nax.text(pass_start_frame+1,1., \"pass_start_frame\")\n\nax.axvline(pass_end_frame, ls='-',c = 'black')\nax.text(pass_end_frame+1,1., \"pass_end_frame\")\nplt.show()", "_____no_output_____" ] ], [ [ "## Let's try to plot the PPCF across 100 frames", "_____no_output_____" ] ], [ [ "pitch_control_xy_df[\"total\"] = pitch_control_xy_df.PPCFatt + pitch_control_xy_df.PPCFdef\n\n#Set general plot properties\nsns.set_style(\"white\")\nsns.set_context({\"figure.figsize\": (24, 10)})\n\n#Plot 1 - background - \"total\" (top) series\nsns.barplot(x = pitch_control_xy_df.index, \n y = pitch_control_xy_df.total.values, \n color = \"red\")\n\n#Plot 2 - overlay - \"bottom\" series\nbottom_plot = sns.barplot(x = pitch_control_xy_df.index, \n y = pitch_control_xy_df.PPCFatt.values, \n color = \"#0000A3\")\n\n\ntopbar = plt.Rectangle((0,0),1,1,fc=\"red\", edgecolor = 'none')\nbottombar = plt.Rectangle((0,0),1,1,fc='#0000A3', edgecolor = 'none')\nl = plt.legend([bottombar, topbar], ['Bottom Bar', 'Top Bar'], loc=1, ncol = 2, prop={'size':16})\nl.draw_frame(False)\n\n#Optional code - Make plot look 
nicer\nsns.despine(left=True)\nbottom_plot.set_ylabel(\"Y-axis label\")\nbottom_plot.set_xlabel(\"X-axis label\")\n\n#Set fonts to consistent 16pt size\nfor item in ([bottom_plot.xaxis.label, bottom_plot.yaxis.label] +\n bottom_plot.get_xticklabels() + bottom_plot.get_yticklabels()):\n item.set_fontsize(16)", "_____no_output_____" ] ], [ [ "PPCFatt, PPCFdef= mpc.calculate_pitch_control_at_target(target_position, \n attacking_players, defending_players, \n ball_start_pos, params, \n return_individual=False)\n\ntarget_position = np.array( [xgrid[j], ygrid[i]] )\nPPCFa[i,j],PPCFd[i,j] = calculate_pitch_control_at_target(target_position, \n attacking_players, defending_players,\n ball_start_pos, params)", "_____no_output_____" ], [ "PPCF,xgrid,ygrid = mpc.generate_pitch_control_for_event(822, events, tracking_home, tracking_away, \n params, field_dimen = (106.,68.,), \n n_grid_cells_x = 50)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "raw", "code", "markdown", "code", "raw" ]
[ [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "raw", "raw" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "raw", "raw" ] ]
ec9800eeaf8e5ac3954d2ba720443c5083d872ac
118,946
ipynb
Jupyter Notebook
Regression/Linear Models/Lars_PolynomialFeatures.ipynb
shreepad-nade/ds-seed
93ddd3b73541f436b6832b94ca09f50872dfaf10
[ "Apache-2.0" ]
53
2021-08-28T07:41:49.000Z
2022-03-09T02:20:17.000Z
Regression/Linear Models/Lars_PolynomialFeatures.ipynb
shreepad-nade/ds-seed
93ddd3b73541f436b6832b94ca09f50872dfaf10
[ "Apache-2.0" ]
142
2021-07-27T07:23:10.000Z
2021-08-25T14:57:24.000Z
Regression/Linear Models/Lars_PolynomialFeatures.ipynb
shreepad-nade/ds-seed
93ddd3b73541f436b6832b94ca09f50872dfaf10
[ "Apache-2.0" ]
38
2021-07-27T04:54:08.000Z
2021-08-23T02:27:20.000Z
118,946
118,946
0.901922
[ [ [ "# Least Angle Regression with Polynomial Features", "_____no_output_____" ], [ "This Code template is for the regression analysis using a simple Lars and Polynomial Features Feature Transformation. Lars, also known as the Least Angle Regression model. Lars is a regression algorithm for high-dimensional data.", "_____no_output_____" ], [ "### **Required Packages**", "_____no_output_____" ] ], [ [ "import warnings\r\nimport numpy as np \r\nimport pandas as pd \r\nimport seaborn as se \r\nimport matplotlib.pyplot as plt \r\nfrom sklearn.model_selection import train_test_split \r\nfrom sklearn.preprocessing import PolynomialFeatures,MinMaxScaler\r\nfrom sklearn.pipeline import make_pipeline\r\nfrom sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error \r\nfrom sklearn.linear_model import Lars\r\nwarnings.filterwarnings('ignore')", "_____no_output_____" ] ], [ [ "### **Initialization**\nFilepath of CSV file", "_____no_output_____" ] ], [ [ "file_path= \"\"", "_____no_output_____" ] ], [ [ "List of features which are required for model training .", "_____no_output_____" ] ], [ [ "features = []", "_____no_output_____" ] ], [ [ "Target feature for prediction.", "_____no_output_____" ] ], [ [ "target = ''", "_____no_output_____" ] ], [ [ "### **Dataset Overview**\nPandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.\n\nWe will use panda's library to read the CSV file using its storage path.And we use the head function to display the initial row or entry.", "_____no_output_____" ] ], [ [ "df=pd.read_csv(file_path)\ndf.head()", "_____no_output_____" ] ], [ [ "### **Dataset Information**\nPrint a concise summary of a DataFrame.\n\nWe will use info() method to print the information about the DataFrame including the index dtype and columns, non-null values and memory usage.", "_____no_output_____" ] ], [ [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 1338 entries, 0 to 1337\nData columns (total 7 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 age 1338 non-null int64 \n 1 sex 1338 non-null object \n 2 bmi 1338 non-null float64\n 3 children 1338 non-null int64 \n 4 smoker 1338 non-null object \n 5 region 1338 non-null object \n 6 charges 1338 non-null float64\ndtypes: float64(2), int64(2), object(3)\nmemory usage: 73.3+ KB\n" ] ], [ [ "### **Dataset Describe**\nGenerate descriptive statistics.\n\nDescriptive statistics include those that summarize the central tendency, dispersion and shape of a dataset’s distribution, excluding NaN values.\n\nWe will analyzes both numeric and object series, as well as DataFrame column sets of mixed data types. ", "_____no_output_____" ] ], [ [ "df.describe()", "_____no_output_____" ] ], [ [ "### **Feature Selection**\nIt is the process of reducing the number of input variables when developing a predictive model. 
Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model.\n\nWe will assign all the required input features to X and the target/outcome to Y.", "_____no_output_____" ] ], [ [ "X=df[features]\nY=df[target]", "_____no_output_____" ] ], [ [ "### **Data Preprocessing**\nSince we do not know the number of null values in each column, we print the columns arranged in decreasing order of null counts.", "_____no_output_____" ] ], [ [ "print(df.isnull().sum().sort_values(ascending=False))", "age         0\nsex         0\nbmi         0\nchildren    0\nsmoker      0\nregion      0\ncharges     0\ndtype: int64\n" ] ], [ [ "Since the majority of the machine learning models in the scikit-learn library do not handle string category data and null values, we have to explicitly remove or replace them. The snippet below defines functions that remove null values if any exist, and convert string class data in the dataset by encoding it to integer classes.", "_____no_output_____" ] ], [ [ "def NullClearner(df):\n    if(isinstance(df, pd.Series) and (df.dtype in [\"float64\",\"int64\"])):\n        df.fillna(df.mean(),inplace=True)\n        return df\n    elif(isinstance(df, pd.Series)):\n        df.fillna(df.mode()[0],inplace=True)\n        return df\n    else:return df\ndef EncodeX(df):\n    return pd.get_dummies(df)", "_____no_output_____" ] ], [ [ "Calling the preprocessing functions on the feature and target sets.", "_____no_output_____" ] ], [ [ "x=X.columns.to_list()\nfor i in x:\n    X[i]=NullClearner(X[i])\nX=EncodeX(X)\nY=NullClearner(Y)\nX.head()", "_____no_output_____" ] ], [ [ "#### **Correlation Map**\nIn order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.", "_____no_output_____" ] ], [ [ "f,ax = plt.subplots(figsize=(18, 18))\nmatrix = np.triu(X.corr())\nse.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)\nplt.show()", "_____no_output_____" ] ], [ [ "## **Data Splitting**\nThe train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is used to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.", "_____no_output_____" ] ], [ [ "x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)", "_____no_output_____" ] ], [ [ "### **Polynomial Features**\n**sklearn.preprocessing.PolynomialFeatures()**\n\nGenerates polynomial and interaction features: a new feature matrix consisting of all polynomial combinations of the features with degree less than or equal to the specified degree.", "_____no_output_____" ], [ "### **Model**\nLeast-angle regression (LARS) is a regression algorithm for high-dimensional data, developed by Bradley Efron, Trevor Hastie, Iain Johnstone and Robert Tibshirani. LARS is similar to forward stepwise regression. At each step, it finds the feature most correlated with the target. When there are multiple features having equal correlation, instead of continuing along the same feature, it proceeds in a direction equiangular between the features.", "_____no_output_____" ], [ "**Model Tuning Parameters**\n> jitter -> Upper bound on a uniform noise parameter to be added to the y values, to satisfy the model's assumption of one-at-a-time computations. Might help with stability.\n\n> eps -> The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.\n\n> n_nonzero_coefs -> Target number of non-zero coefficients. Use np.inf for no limit.\n\n> precompute -> Whether to use a precomputed Gram matrix to speed up calculations.", "_____no_output_____" ] ], [ [ "model = make_pipeline(PolynomialFeatures(),Lars(random_state=123))\nmodel.fit(x_train,y_train)", "_____no_output_____" ] ], [ [ "#### **Model Accuracy**\nWe will use the trained model to make predictions on the test set, then use the predicted values to measure the accuracy of our model.\n\nscore: The score function returns the coefficient of determination R2 of the prediction.", "_____no_output_____" ] ], [ [ "print(\"Accuracy score {:.2f} %\\n\".format(model.score(x_test,y_test)*100))", "Accuracy score 4.38 %\n\n" ] ], [ [ "> r2_score: The r2_score function computes the proportion of variability in the target that is explained by our model (the coefficient of determination).\n\n> mae: The mean absolute error function calculates the total error as the average absolute distance between the real data and the predicted data.\n\n> mse: The mean squared error function squares the errors, penalizing the model for large errors.", "_____no_output_____" ] ], [ [ "y_pred=model.predict(x_test)\nprint(\"R2 Score: {:.2f} %\".format(r2_score(y_test,y_pred)*100))\nprint(\"Mean Absolute Error {:.2f}\".format(mean_absolute_error(y_test,y_pred)))\nprint(\"Mean Squared Error {:.2f}\".format(mean_squared_error(y_test,y_pred)))", "R2 Score: 4.38 %\nMean Absolute Error 9374.14\nMean Squared Error 146199736.81\n" ] ], [ [ "#### **Prediction Plot**\n> First, we plot the actual observations, with x_train on the x-axis and y_train on the y-axis. For the regression line, we use x_train on the x-axis and the predictions for the x_train observations on the y-axis.", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(14,10))\nplt.plot(range(20),y_test[0:20], color = \"green\")\nplt.plot(range(20),model.predict(x_test[0:20]), color = \"red\")\nplt.legend([\"Actual\",\"prediction\"]) \nplt.title(\"Predicted vs True Value\")\nplt.xlabel(\"Record number\")\nplt.ylabel(target)\nplt.show()", "_____no_output_____" ] ], [ [ "##### Creator: Prateek Kumar, GitHub [Profile](https://github.com/pdpandey26)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
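The record above fits its model via `make_pipeline(PolynomialFeatures(), Lars(...))`. A minimal, self-contained sketch of that same pipeline pattern, run on synthetic data — the feature matrix, target, and printed score here are illustrative assumptions, not values from the record:

```python
# Sketch of the PolynomialFeatures + Lars pipeline pattern, on synthetic data.
import numpy as np
from sklearn.linear_model import Lars
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(123)
X = rng.normal(size=(200, 3))                          # three synthetic features
y = 2 * X[:, 0] - X[:, 1] ** 2 + rng.normal(size=200)  # nonlinear synthetic target

x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=123)

# Degree-2 polynomial expansion feeds least-angle regression, so the linear
# LARS step can pick up the squared term created by PolynomialFeatures.
model = make_pipeline(PolynomialFeatures(degree=2), Lars())
model.fit(x_train, y_train)
print("R2 on held-out data:", model.score(x_test, y_test))
```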
ec981de6bfe356dd3f7f802045235d9527534f89
139,088
ipynb
Jupyter Notebook
Cumulative_emissions.ipynb
chrisbrierley/cartograms
4b2bb5c3c016f6357bfd207dc2a954aaed3ca39f
[ "BSD-3-Clause" ]
null
null
null
Cumulative_emissions.ipynb
chrisbrierley/cartograms
4b2bb5c3c016f6357bfd207dc2a954aaed3ca39f
[ "BSD-3-Clause" ]
null
null
null
Cumulative_emissions.ipynb
chrisbrierley/cartograms
4b2bb5c3c016f6357bfd207dc2a954aaed3ca39f
[ "BSD-3-Clause" ]
null
null
null
72.141079
36,370
0.662372
[ [ [ "import numpy as np\r\nimport geopandas as gpd\r\nimport pandas as pd\r\nimport matplotlib.pyplot as plt\r\nfrom shapely.geometry import Polygon", "_____no_output_____" ], [ "raw_co2_data=pd.read_csv('data/owid-co2-data.csv')\r\nraw_co2_data", "_____no_output_____" ], [ "# There are some rows without country codes at all\r\nsubset=raw_co2_data[raw_co2_data['iso_code'].isnull()]\r\nrest=raw_co2_data.drop(raw_co2_data[raw_co2_data['iso_code'].isnull()].index)\r\nrest", "_____no_output_____" ] ], [ [ "Then we need to sort out the countries and select only a single value for each. We'll choose the latest. Conveniently, this code block also loses those without an iso_code at all.", "_____no_output_____" ] ], [ [ "nations=pd.unique(raw_co2_data['iso_code'])\r\nfor nat_i in nations:\r\n    subset=raw_co2_data[raw_co2_data['iso_code']==nat_i]\r\n    last_year=subset[subset['year']==subset['year'].max()]\r\n    if nat_i == 'AFG':\r\n        co2_data = last_year\r\n    else:\r\n        co2_data = co2_data.append(last_year,ignore_index=False) \r\n\r\n\r\nco2_data['iso_code']", "_____no_output_____" ], [ "# Remove two lines that have Our World In Data codes, rather than proper ISO codes \r\nco2_data=co2_data.drop(co2_data[co2_data['iso_code']==\"OWID_WRL\"].index)\r\nco2_data=co2_data.drop(co2_data[co2_data['iso_code']==\"OWID_KOS\"].index)\r\n", "_____no_output_____" ] ], [ [ "Next we create a dictionary of cumulative CO2. I can't guarantee that the populations in the Our World in Data and geopandas World datasets are the same. So let's normalise by population. We'll bring the national values along with us though.", "_____no_output_____" ] ], [ [ "cumCO2_per_capita_dict=pd.Series(co2_data.cumulative_co2.values/co2_data.population.values,index=co2_data.iso_code).to_dict()\r\ncumCO2_Nat_dict=pd.Series(co2_data.cumulative_co2.values,index=co2_data.iso_code).to_dict()\r\ncumCO2_Nat_dict", "_____no_output_____" ], [ "world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))\r\nworld.head()", "_____no_output_____" ] ], [ [ "Having loaded in the \"world\" geodataframe, we need to add the cumulative CO2 data to it.", "_____no_output_____" ] ], [ [ "world['cumCO2_per_capita']=world['iso_a3'] #add a new column to geodataframe\r\nworld['cumCO2_per_capita']=world['cumCO2_per_capita'].map(cumCO2_per_capita_dict) #replace contents of column through dictionary\r\nworld['cumCO2_Nat']=world['iso_a3'] #add a new column to geodataframe\r\nworld['cumCO2_Nat']=world['cumCO2_Nat'].map(cumCO2_Nat_dict) #replace contents of column through dictionary\r\nworld", "_____no_output_____" ] ], [ [ "OK. So now let's try to create a rough figure, without the distortion...", "_____no_output_____" ] ], [ [ "world.plot('cumCO2_per_capita',legend=True,cmap=\"YlOrBr\")\r\nplt.title('Cumulative CO2 Emissions (Mt per capita, 2020)')\r\nplt.show()", "_____no_output_____" ] ], [ [ "There is a function written by JoeryJoery that will update the polygons of a GeoSeries in contiguous area form. It's available at <https://github.com/joeryjoery/PyCartogram>. I'm copying it below", "_____no_output_____" ] ], [ [ "def cartogram(arg_polygons, arg_values, itermax=5, max_size_error=1.0001, epsilon=0.01, verbose=False):\r\n    \"\"\"\r\n    Generate an area equalizing contiguous cartogram based on the algorithm by (J. Dougenik et al., 1985).\r\n    \r\n    Note: The current function does not include interior boundaries when distorting the polygons!\r\n    This is due to shapely's current way of extracting boundary coordinates which make it \r\n    cumbersome to separate interior points from exterior points.\r\n    \r\n    :param arg_polygons: geopandas.geoseries.GeoSeries Series of shapely.geometry.Polygon objects.\r\n    :param arg_values: (geo)pandas.Series Series of floating point values.\r\n    :param itermax: int (Optional, default=5) Maximum amount of iterations to perform adjusting coordinates.\r\n    :param max_size_error: float (Optional, default=1.0001) A maximum accuracy until terminating the procedure.\r\n    :param epsilon: float (Optional, default=0.01) Scalar to prevent zero division errors.\r\n    :param verbose: bool (Optional, default=False) Whether to print out intermediary progress. \r\n    \r\n    :returns: geopandas.geoseries.GeoSeries Copy of :arg_polygons: with the adjusted coordinates.\r\n    \r\n    :references: Dougenik, J.A., Chrisman, N.R. and Niemeyer, D.R. (1985), \r\n        AN ALGORITHM TO CONSTRUCT CONTINUOUS AREA CARTOGRAMS*. \r\n        The Professional Geographer, 37: 75-81. doi:10.1111/j.0033-0124.1985.00075.x \r\n    \r\n    :see: Implementation of the same algorithm in R (available on CRAN): https://github.com/sjewo/cartogram\r\n    \"\"\" \r\n    polygons = arg_polygons.copy().values\r\n    values = arg_values.copy().values\r\n    \r\n    total_value = values.sum()\r\n    mean_size_error = 100\r\n    \r\n    for iteration in range(itermax):\r\n        if mean_size_error < max_size_error:\r\n            break\r\n        \r\n        # This statement unpacks the centroid Point object to np.array and\r\n        # creates a n x 2 matrix of centroid [x, y] coordinates.\r\n        centroids = np.array(list(map(np.array, polygons.centroid)))\r\n        area = polygons.area\r\n        total_area = area.sum()\r\n        \r\n        desired = total_area * values / total_value\r\n        desired[desired == 0] = epsilon  # Prevent zero division.\r\n        radius = np.sqrt(area / np.pi)\r\n        mass = np.sqrt(desired / np.pi) - np.sqrt(area / np.pi)\r\n        \r\n        size_error = np.max([desired, area], axis=0) - np.min([desired, area], axis=0)\r\n        mean_size_error = np.mean(size_error)\r\n        force_reduction_factor = 1 / (1 + mean_size_error)\r\n        \r\n        if verbose:\r\n            print(\"Mean size error at iteration {}: {}\".format(iteration+1, mean_size_error))\r\n        for row, polygon in enumerate(polygons):\r\n            \r\n            # TODO: Possibly include shapely.geometry.Polygon interior coordinates.\r\n            \r\n            # Some coordinates may appear twice, however, they mustn't be removed.\r\n            # These coordinates are also adjusted, but only computed once:\r\n            coordinates = np.matrix(polygon.exterior.coords)  # [[x1, y1], [x2, y2], ...]\r\n            idx = np.unique(coordinates, axis=0)  # Get unique rows\r\n            \r\n            for k in range(len(idx)):\r\n                # Get positions from coordinates for each unique idx.\r\n                coord_idx = np.where((coordinates[:, 0] == idx[k,0]) & (coordinates[:,1] == idx[k, 1]))[0]\r\n                # Only extract one using coord_idx[0] as coord_idx maps duplicate coordinates.\r\n                new_coordinates = coordinates[coord_idx[0],:] \r\n                \r\n                # Compute coordinate's euclidean distances to all centroids.\r\n                distances = np.sqrt(np.square(centroids - new_coordinates).sum(axis=1))\r\n                distances = np.array(distances).ravel()  # Converts matrix into flat array.\r\n                \r\n                # Compute force vectors\r\n                Fijs = mass * radius / distances\r\n                Fbij = mass * np.square(distances / radius) * (4 - 3 * distances / radius)\r\n                Fijs[distances <= radius] = Fbij[distances <= radius]\r\n                Fijs *= force_reduction_factor / distances\r\n                \r\n                # Find how much \"force\" must be applied to the coordinates by computing\r\n                # the dot product of the force vector and the centroid deltas.\r\n                new_coordinates += Fijs.dot(new_coordinates - centroids)\r\n            \r\n            # Set the polygon \r\n            polygons[row] = Polygon(coordinates, holes = polygon.interiors)\r\n        \r\n    return gpd.geoseries.GeoSeries(polygons)", "_____no_output_____" ] ], [ [ "Before we can use this code though, we need to make sure that the world dataset is considered as separate polygons (otherwise the calculations of the centroids will be off). This can be achieved using the `explode` function. \r\n\r\nWe also need to make sure that they each have relevant data for each individual polygon - `explode` will directly copy the data, but that implies that 300m people live on Hawaii as well as over the contiguous States. This can be done by calculating the average population density for each country as well as the area of each polygon, and then multiplying back up. This assumes that population within a country is evenly spread throughout, but as we only need to stop the small islands exploding, I suspect that it's sufficiently accurate.\r\n\r\nThe cartogram code also has problems when the weighting is `NaN`. So we'll just drop inconvenient places.", "_____no_output_____" ] ], [ [ "#Check Geometry\r\ndef compute_area(geom):\r\n    area = geom.area\r\n    return area\r\n\r\nworld['area'] = world['geometry'].apply(lambda x: compute_area(x))\r\nworld['pop_dens']=world['pop_est']/world['area']\r\nexplode_me=world.drop(['pop_est','area'],axis=1) #drop those ones as we'll want to make new ones later\r\nexplode_me=explode_me.drop(explode_me[explode_me['cumCO2_per_capita'].isnull()].index)\r\nexploded=explode_me.explode()\r\nexploded['area'] = exploded['geometry'].apply(lambda x: compute_area(x))\r\nexploded['pop']=exploded['pop_dens']*exploded['area']\r\nexploded['cumulativeCO2']=exploded['cumCO2_per_capita']*exploded['pop']\r\n\r\nexploded[['iso_a3','cumulativeCO2']]", "_____no_output_____" ] ], [ [ "Now we can create a version of the dataset with altered geometries, scaled by the cumulative CO2 emissions.\r\n\r\nThe line below may come up with some errors about projections - I'm not worried. It will then give a long list of iterations and 'errors'. I think the mean size error is the average amount of distortion. You need to fiddle with the itermax value to get the desired balance between the country outlines being noticeably distorted, but also identifiable. I reckon that somewhere around 25 is a good target.", "_____no_output_____" ] ], [ [ "exploded.to_crs('+proj=cea')\r\npop_cart=cartogram(exploded['geometry'], exploded['cumulativeCO2'],itermax=50,verbose=True)\r\n", "/home/ucfaccb/miniconda3/lib/python3.9/site-packages/IPython/core/interactiveshell.py:3441: UserWarning: Geometry is in a geographic CRS. Results from 'centroid' are likely incorrect. Use 'GeoSeries.to_crs()' to re-project geometries to a projected CRS before this operation.\n\n  exec(code_obj, self.user_global_ns, self.user_ns)\n/home/ucfaccb/miniconda3/lib/python3.9/site-packages/IPython/core/interactiveshell.py:3441: UserWarning: Geometry is in a geographic CRS. Results from 'area' are likely incorrect. 
Use 'GeoSeries.to_crs()' to re-project geometries to a projected CRS before this operation.\n\n exec(code_obj, self.user_global_ns, self.user_ns)\n" ], [ "#exploded['geometry']=pop_cart\r\n#exploded['cart']=pop_cart\r\n#exploded.set_geometry('cart')\r\n#print(exploded['pop'].values)\r\n#print(pop_cart)\r\n\r\ndf = pd.DataFrame({'cumCO2_Nat': exploded['cumCO2_Nat'].values})\r\ngdf = gpd.GeoDataFrame(df, geometry=pop_cart)\r\ngdf.plot('cumCO2_Nat', legend=True, cmap=\"magma\")\r\nplt.title('Cumulative Emissions (Mt CO2, 2020)')\r\nplt.show()", "_____no_output_____" ], [ "print(sum(exploded['cumulativeCO2'].values))", "1526779.4560377246\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
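The heart of the `cartogram` function in the record above is the Dougenik et al. (1985) force rule. A one-vertex numeric sketch of that rule — all values are made up, and the `force_reduction_factor` damping used by the full function is omitted here for brevity:

```python
# One-vertex sketch of the Dougenik et al. (1985) displacement rule used above.
import numpy as np

centroid = np.array([0.0, 0.0])  # polygon centroid (made-up value)
vertex = np.array([3.0, 4.0])    # boundary vertex to displace (made-up value)
area, desired = 10.0, 20.0       # current vs desired polygon area (made up)

radius = np.sqrt(area / np.pi)            # radius of the equal-area circle
mass = np.sqrt(desired / np.pi) - radius  # how much the polygon must grow

distance = np.linalg.norm(vertex - centroid)
if distance <= radius:
    # Inside the influence radius: smooth polynomial falloff (Fbij above).
    force = mass * (distance / radius) ** 2 * (4 - 3 * distance / radius)
else:
    # Outside the radius: inverse-distance decay (Fijs above).
    force = mass * radius / distance

# Push the vertex along the centroid -> vertex direction.
new_vertex = vertex + (force / distance) * (vertex - centroid)
print(new_vertex)
```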
ec98249cdd96786eeacde5dd8e5a0e4bcb84c00e
12,185
ipynb
Jupyter Notebook
AssetManagement/export_csv.ipynb
pberezina/earthengine-py-notebooks
4cbe3c52bcc9ed3f1337bf097aa5799442991a5e
[ "MIT" ]
1
2020-03-20T19:39:34.000Z
2020-03-20T19:39:34.000Z
AssetManagement/export_csv.ipynb
pberezina/earthengine-py-notebooks
4cbe3c52bcc9ed3f1337bf097aa5799442991a5e
[ "MIT" ]
null
null
null
AssetManagement/export_csv.ipynb
pberezina/earthengine-py-notebooks
4cbe3c52bcc9ed3f1337bf097aa5799442991a5e
[ "MIT" ]
null
null
null
73.848485
7,208
0.829462
[ [ [ "<table class=\"ee-notebook-buttons\" align=\"left\">\n    <td><a target=\"_blank\"  href=\"https://github.com/giswqs/earthengine-py-notebooks/tree/master/AssetManagement/export_csv.ipynb\"><img width=32px src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /> View source on GitHub</a></td>\n    <td><a target=\"_blank\"  href=\"https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/AssetManagement/export_csv.ipynb\"><img width=26px src=\"https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png\" />Notebook Viewer</a></td>\n    <td><a target=\"_blank\"  href=\"https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=AssetManagement/export_csv.ipynb\"><img width=58px src=\"https://mybinder.org/static/images/logo_social.png\" />Run in binder</a></td>\n    <td><a target=\"_blank\"  href=\"https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/AssetManagement/export_csv.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /> Run in Google Colab</a></td>\n</table>", "_____no_output_____", "## Install Earth Engine API\nInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.\nThe following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically installs its dependencies, including earthengine-api and folium.", "_____no_output_____" ] ], [ [ "import subprocess\n\ntry:\n    import geehydro\nexcept ImportError:\n    print('geehydro package not installed. Installing ...')\n    subprocess.check_call([\"python\", '-m', 'pip', 'install', 'geehydro'])", "_____no_output_____" ] ], [ [ "Import libraries", "_____no_output_____" ] ], [ [ "import ee\nimport folium\nimport geehydro", "_____no_output_____" ] ], [ [ "Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. ", "_____no_output_____" ] ], [ [ "try:\n    ee.Initialize()\nexcept Exception as e:\n    ee.Authenticate()\n    ee.Initialize()", "_____no_output_____" ] ], [ [ "## Create an interactive map \nThis step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. \nThe optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.", "_____no_output_____" ] ], [ [ "Map = folium.Map(location=[40, -100], zoom_start=4)\nMap.setOptions('HYBRID')", "_____no_output_____" ] ], [ [ "## Add Earth Engine Python script ", "_____no_output_____", "## Display Earth Engine data layers ", "_____no_output_____" ] ], [ [ "Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)\nMap", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ] ]
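The notebook above bootstraps its own dependency with a try-import / pip-install pattern. A generic sketch of that pattern — `some_package` is a placeholder, and using `sys.executable` (rather than a bare `"python"` as in the record) is a common way to make sure pip targets the interpreter actually running the notebook:

```python
# Generic "import it, or install it first" bootstrap sketch.
import subprocess
import sys

try:
    import some_package  # placeholder name, not a real dependency
except ImportError:
    print('some_package not installed. Installing ...')
    # sys.executable pins pip to the current interpreter's environment.
    subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'some_package'])
```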
ec982f1f60e1aa99cb369c664e266c874671e436
80,958
ipynb
Jupyter Notebook
analysis/model_selection/stage1/05_reg_with_arima_errors1-tscv.ipynb
TomMonks/swast-benchmarking
96964fb705a8b3cebbce8adcf03e42d4fc3dd05a
[ "MIT" ]
null
null
null
analysis/model_selection/stage1/05_reg_with_arima_errors1-tscv.ipynb
TomMonks/swast-benchmarking
96964fb705a8b3cebbce8adcf03e42d4fc3dd05a
[ "MIT" ]
null
null
null
analysis/model_selection/stage1/05_reg_with_arima_errors1-tscv.ipynb
TomMonks/swast-benchmarking
96964fb705a8b3cebbce8adcf03e42d4fc3dd05a
[ "MIT" ]
1
2021-11-16T14:38:22.000Z
2021-11-16T14:38:22.000Z
31.318375
253
0.404345
[ [ [ "# Regression with ARIMA Errors.\n\nThis model is a simple Regression model with ARIMA errors. The regression model consists of a single independent variable: 'new years day'. This is a categorical variable to model the special event on 1st Jan every year.\n\nARIMA order is chosen by `pmdarima.auto_arima`.", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom forecast_tools.metrics import (mean_absolute_scaled_error, \n                                    root_mean_squared_error,\n                                    symmetric_mean_absolute_percentage_error)\n\n#auto_arima\nfrom pmdarima import auto_arima\n\nfrom statsmodels.tsa.arima.model import ARIMA\n\nimport warnings\nwarnings.filterwarnings('ignore')", "_____no_output_____" ], [ "import statsmodels as sm\nsm.__version__", "_____no_output_____" ], [ "import pmdarima as pm\npm.__version__", "_____no_output_____" ], [ "from amb_forecast.feature_engineering import (featurize_time_series,\n                                              regular_busy_calender_days)", "_____no_output_____" ] ], [ [ "# Data Input", "_____no_output_____" ] ], [ [ "TOP_LEVEL = '../../../results/model_selection'\nSTAGE = 'stage1'\nREGION = 'Trust'\nMETHOD = 'reg-arima'\n\nFILE_NAME = 'Daily_Responses_5_Years_2019_full.csv'\n\n#split training and test data.\nTEST_SPLIT_DATE = '2019-01-01'\n\n#second subdivide: train and val\nVAL_SPLIT_DATE = '2017-07-01'\n\n#discard data after 2020 due to coronavirus\n#this is the subject of a separate study.\nDISCARD_DATE = '2020-01-01'", "_____no_output_____" ], [ "#read in path\npath = f'../../../data/{FILE_NAME}'", "_____no_output_____" ], [ "def pre_process_daily_data(path, index_col, by_col, \n                           values, dayfirst=False):\n    '''\n    Daily data is stored in long format.  Read in \n    and pivot to wide format so that there is a single \n    column for each region's time series.\n    '''\n    df = pd.read_csv(path, index_col=index_col, parse_dates=True, \n                     dayfirst=dayfirst)\n    df.columns = map(str.lower, df.columns)\n    df.index.rename(str(df.index.name).lower(), inplace=True)\n    \n    clean_table = pd.pivot_table(df, values=values.lower(), \n                                 index=[index_col.lower()],\n                                 columns=[by_col.lower()], aggfunc=np.sum)\n    \n    clean_table.index.freq = 'D'\n    \n    return clean_table", "_____no_output_____" ], [ "clean = pre_process_daily_data(path, 'Actual_dt', 'ORA', 'Actual_Value', \n                               dayfirst=False)\nclean.head()", "_____no_output_____" ] ], [ [ "## Train Test Split", "_____no_output_____" ] ], [ [ "def ts_train_test_split(data, split_date):\n    '''\n    Split time series into training and test data\n    \n    Parameters:\n    -------\n    data - pd.DataFrame - time series data.  Index expected as a DatetimeIndex\n    split_date - the date on which to split the time series\n    \n    Returns:\n    --------\n    tuple (len=2) \n    0. pandas.DataFrame - training dataset\n    1. pandas.DataFrame - test dataset\n    '''\n    train = data.loc[data.index < split_date]\n    test = data.loc[data.index >= split_date]\n    return train, test", "_____no_output_____" ], [ "train, test = ts_train_test_split(clean, split_date=TEST_SPLIT_DATE)\n\n#exclude data after 2020 due to coronavirus.\ntest, discard = ts_train_test_split(test, split_date=DISCARD_DATE)\n\n#split into train and val AFTER creating new years day.\n", "_____no_output_____" ], [ "train.shape", "_____no_output_____" ], [ "test.shape", "_____no_output_____" ] ], [ [ "# New years day\n\nGenerate a new binary categorical feature representing new years day.", "_____no_output_____" ] ], [ [ "#exclude interaction as point forecasts are less accurate.\n_, _, exog = featurize_time_series(train[REGION], max_lags=1, \n                                   include_interactions=False)", "_____no_output_____" ], [ "exog.head()", "_____no_output_____" ], [ "#combine exog into train array for split\ntrain['new_year'] = exog", "_____no_output_____" ], [ "train.head()", "_____no_output_____" ] ], [ [ "# Train validation split", "_____no_output_____" ] ], [ [ "#train split into train and validation\ntrain, val = ts_train_test_split(train, split_date=VAL_SPLIT_DATE)", "_____no_output_____" ], [ "train.shape", "_____no_output_____" ], [ "val.shape", "_____no_output_____" ] ], [ [ "# Auto ARIMA model selection", "_____no_output_____" ], [ "Uses the auto_arima function to select the model order by AIC.\n", "_____no_output_____" ] ], [ [ "##only looking at Trust level.\n#auto_results = auto_arima(train[REGION], \n#                          exogenous=train['new_year'].to_numpy().reshape(-1, 1), \n#                          seasonal=True, \n#                          m=7, \n#                          n_job=-1, \n#                          suppress_warnings=False) ", "_____no_output_____" ], [ "#auto_results.summary()", "_____no_output_____" ] ], [ [ "## Test Out of Sample Prediction", "_____no_output_____" ] ], [ [ "print(\"pmdarima version: %s\" % pm.__version__)\n#print(auto_results.order)", "pmdarima version: 1.5.2\n" ], [ "#override if desired to test different model.\n#order = auto_results.order\n#seasonal_order = auto_results.seasonal_order\norder = (1, 1, 2)\nseasonal_order = (1, 0, 1, 7)\n", "_____no_output_____" ], [ "model = pm.ARIMA(order=order, seasonal_order=seasonal_order)", "_____no_output_____" ], [ "model.fit(y=train[REGION], \n          exogenous=train['new_year'].to_numpy().reshape(-1, 1))", "_____no_output_____" ], [ "#test prediction\nmodel.predict(n_periods=5, \n              exogenous=val['new_year'].iloc[:5].to_numpy().reshape(-1, 1))", "_____no_output_____" ] ], [ [ "# Wrapper classes for statsmodels ARIMA\n\nAdapter/wrapper class to enable usage within the standard cross validation used across all methods", "_____no_output_____" ] ], [ [ "class ARIMAWrapper:\n    '''\n    Facade for statsmodels ARIMA models\n    '''\n    def __init__(self, order, seasonal_order, training_index, holidays=None):\n        self._order = order\n        self._seasonal_order = seasonal_order\n        self._training_index = training_index\n        self._holidays = holidays\n\n    def _get_resids(self):\n        return self._fitted.resid\n\n    def _get_preds(self):\n        return self._fitted.fittedvalues\n    \n    def _encode_holidays(self, holidays, idx):\n        dummy = idx.isin(holidays).astype(int)\n        dummy = pd.DataFrame(dummy)\n        dummy.columns = ['holiday']\n        dummy.index = idx\n        return dummy\n\n    def fit(self, y_train):\n        \n        #extend training index\n        if len(y_train) > len(self._training_index):\n\n            self._training_index = pd.date_range(start=self._training_index[0], \n                                                 periods=len(y_train),\n                                                 freq=self._training_index.freq)\n        \n        holiday_train = None\n        if not self._holidays is None:\n            holiday_train = self._encode_holidays(self._holidays, \n                                                  self._training_index)\n        \n        \n        self._model = ARIMA(endog=y_train,\n                            exog=holiday_train,\n                            order=self._order, \n                            seasonal_order=self._seasonal_order)#,\n                            #enforce_stationarity=False)\n        \n        self._fitted = self._model.fit()\n        self._t = len(train)\n       \n        \n    def predict(self, horizon, return_conf_int=False, alpha=0.2):\n        '''\n        forecast horizon steps ahead.\n        \n        Params:\n        ------\n        horizon: int\n            h-step forecast horizon\n        \n        return_conf_int: bool, optional (default=False)\n            return 1 - alpha PI\n        \n        alpha: float, optional (default=0.2)\n            return 1 - alpha PI\n        \n        Returns:\n        -------\n        np.array\n            If return_conf_int = False returns preds only\n        \n        np.array, np.array\n            If return_conf_int = True returns tuple of preds, pred_ints\n        '''\n        \n        #+1 to date range then trim off the first value\n\n        f_idx = pd.date_range(start=self._training_index[-1], \n                              periods=horizon+1,\n                              freq=self._training_index.freq)[1:]\n        \n        #encode holidays if included.\n        exog_holiday = None\n        if not self._holidays is None:\n            exog_holiday = self._encode_holidays(self._holidays, f_idx)\n        \n        \n        forecast = self._fitted.get_forecast(horizon, exog=exog_holiday)\n        mean_forecast = forecast.summary_frame()['mean'].to_numpy()\n        \n        if return_conf_int:\n            df = forecast.summary_frame(alpha=alpha)\n            pi = df[['mean_ci_lower', 'mean_ci_upper']].to_numpy()\n            return mean_forecast, pi\n            \n        \n        else:\n            return mean_forecast\n\n    fittedvalues = property(_get_preds)\n    resid = property(_get_resids)    ", "_____no_output_____" ] ], [ [ "## Cross Validation\n\n`time_series_cv` implements rolling forecast origin cross validation for time series. \nIt does not calculate forecast error, but instead returns the predictions, pred intervals and actuals in an array that can be passed to any forecast error function.  (this is for efficiency and allows additional metrics to be calculated if needed).", "_____no_output_____" ] ], [ [ "def time_series_cv(model, train, val, horizons, alpha=0.2, step=1):\n    '''\n    Time series cross validation across multiple horizons for a single model.\n\n    Incrementally adds additional training data to the model and tests\n    across a provided list of forecast horizons. Note that the function tests a\n    model only against complete validation sets.  E.g. if horizon = 15 and \n    len(val) = 12 then no testing is done. In the case of multiple horizons\n    e.g. [7, 14, 28] then the function will use the maximum forecast horizon\n    to calculate the number of iterations i.e. if len(val) = 365 and step = 1\n    then no. iterations = len(val) - max(horizon) = 365 - 28 = 337.\n    \n    Parameters:\n    --------\n    model - forecasting model\n\n    train - np.array - vector of training data\n\n    val - np.array - vector of validation data\n\n    horizons - list of ints, forecast horizons e.g. [7, 14, 28] days\n    \n    alpha - float, optional (default=0.2)\n        1 - alpha prediction interval specification\n\n    step -- int, optional (default=1)\n        step taken in cross validation \n        e.g. 1 in the next cross validation training data includes the next point \n        from the validation set.\n        e.g. 7 in the next cross validation training data includes the next 7 points\n        (default=1)\n        \n    Returns:\n    -------\n    np.array, np.array, np.array\n    - cv_preds, cv_test, cv_intervals\n    '''\n    \n    #point forecasts\n    cv_preds = [] \n    #ground truth observations\n    cv_actuals = [] \n    #prediction intervals\n    cv_pis = []\n    \n    split = 0\n\n    print('split => ', end=\"\")\n    for i in range(0, len(val) - max(horizons) + 1, step):\n        split += 1\n        print(f'{split}, ', end=\"\")\n        \n        train_cv = np.concatenate([train, val[:i]], axis=0)\n        model.fit(train_cv)\n        \n        #predict the maximum horizon \n        preds, pis = model.predict(horizon=len(val[i:i+max(horizons)]), \n                                   return_conf_int=True,\n                                   alpha=alpha)   \n        cv_h_preds = []\n        cv_test = []\n        cv_h_pis = []\n        \n        #sub horizon calculations\n        for h in horizons:\n            #store the h-step prediction\n            cv_h_preds.append(preds[:h])\n            #store the h-step actual value\n            cv_test.append(val.iloc[i:i+h])  \n            cv_h_pis.append(pis[:h])\n            \n        cv_preds.append(cv_h_preds)\n        cv_actuals.append(cv_test)\n        cv_pis.append(cv_h_pis)\n        \n    print('done.\\n')     \n    return cv_preds, cv_actuals, cv_pis", "_____no_output_____" ] ], [ [ "## Custom functions for calculating CV scores for point predictions and coverage.\n\nThese functions have been written to work with the output of `time_series_cv`", "_____no_output_____" ] ], [ [ "def split_cv_error(cv_preds, cv_test, error_func):\n    '''\n    Forecast error in the current split\n    \n    Params:\n    -----\n    cv_preds: np.array\n        Split predictions\n       \n    \n    cv_test: np.array\n        actual ground truth observations\n        \n    error_func: object\n        function with signature (y_true, y_preds)\n    \n    Returns:\n    -------\n    np.ndarray\n        cross validation errors for split\n    '''\n    n_splits = len(cv_preds)\n    cv_errors = []\n    \n    for split in range(n_splits):\n        pred_error = error_func(cv_test[split], cv_preds[split])\n        cv_errors.append(pred_error)\n        \n    return np.array(cv_errors)\n\ndef forecast_errors_cv(cv_preds, cv_test, error_func):\n    '''\n    Forecast errors by forecast horizon\n    \n    Params:\n    ------\n    cv_preds: np.ndarray\n        Array of arrays.  Each array is of size h representing\n        the forecast horizon specified.\n        \n    cv_test: np.ndarray\n        Array of arrays.  Each array is of size h representing\n        the forecast horizon specified.\n        \n    error_func: object\n        function with signature (y_true, y_preds)\n        \n    Returns:\n    -------\n    np.ndarray\n        \n    '''\n    cv_test = np.array(cv_test)\n    cv_preds = np.array(cv_preds)\n    n_horizons = len(cv_test)    \n    \n    horizon_errors = []\n    for h in range(n_horizons):\n        split_errors = split_cv_error(cv_preds[h], cv_test[h], error_func)\n        horizon_errors.append(split_errors)\n\n    return np.array(horizon_errors)\n\ndef split_coverage(cv_test, cv_intervals):\n    n_splits = len(cv_test)\n    cv_errors = []\n    \n    for split in range(n_splits):\n        val = np.asarray(cv_test[split])\n        lower = cv_intervals[split].T[0]\n        upper = cv_intervals[split].T[1]\n        \n        coverage = len(np.where((val > lower) & (val < upper))[0])\n        coverage = coverage / len(val)\n        \n        cv_errors.append(coverage)\n        \n    return np.array(cv_errors)\n    \n    \ndef prediction_int_coverage_cv(cv_test, cv_intervals):\n    cv_test = np.array(cv_test)\n    cv_intervals = np.array(cv_intervals)\n    n_horizons = len(cv_test)    \n    \n    horizon_coverage = []\n    for h in range(n_horizons):\n        split_coverages = split_coverage(cv_test[h], cv_intervals[h])\n        horizon_coverage.append(split_coverages)\n\n    return np.array(horizon_coverage)    ", "_____no_output_____" ], [ "def split_cv_error_scaled(cv_preds, cv_test, y_train):\n    n_splits = len(cv_preds)\n    cv_errors = []\n    \n    for split in range(n_splits):\n        pred_error = mean_absolute_scaled_error(cv_test[split], cv_preds[split], \n                                                y_train, period=7)\n        \n        cv_errors.append(pred_error)\n        \n    return np.array(cv_errors)\n\ndef forecast_errors_cv_scaled(cv_preds, cv_test, y_train):\n    cv_test = np.array(cv_test)\n    cv_preds = np.array(cv_preds)\n    n_horizons = len(cv_test)    \n    \n    horizon_errors = []\n    for h in range(n_horizons):\n        split_errors = split_cv_error_scaled(cv_preds[h], cv_test[h], y_train)\n        horizon_errors.append(split_errors)\n    \n    return np.array(horizon_errors)", "_____no_output_____" ] ], [ [ "# Run cross validation\n\nThis is run twice, once each for the 80% and 95% prediction intervals.", "_____no_output_____" ] ], [ [ "#reminder of ARIMA order\nprint(order)\nprint(seasonal_order)", "(1, 1, 2)\n(1, 0, 1, 7)\n" ] ], [ [ "## Holidays included", "_____no_output_____" ], [ "### New years day", "_____no_output_____" ] ], [ [ "exog = regular_busy_calender_days(train[REGION], quantile=0.99)", "_____no_output_____" ], [ "new_year = pd.DataFrame({\n    'holiday': 'new_year',\n    'ds': pd.date_range(start=exog[0], \n                        periods=20, \n                        freq='YS')\n    })", "_____no_output_____" ] ], [ [ "### Create model and run", "_____no_output_____" ] ], [ [ "model = ARIMAWrapper(order=order, seasonal_order=seasonal_order, \n                     training_index=train.index, \n                     holidays=new_year['ds'].tolist())\n\nhorizons = [7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, 84, 365]\nresults = time_series_cv(model, train[REGION], val[REGION], \n                         horizons, alpha=0.2, step=7)", "split => 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, done.\n\n" ] ], [ [ "## symmetric MAPE results", "_____no_output_____" ] ], [ [ "cv_preds, cv_test, cv_intervals = results", "_____no_output_____" ], [ "#CV point predictions smape\ncv_errors = forecast_errors_cv(cv_preds, cv_test, \n                               symmetric_mean_absolute_percentage_error)\ndf = pd.DataFrame(cv_errors)\ndf.columns = horizons\ndf.describe()", "_____no_output_____" ], [ "#output sMAPE results to file\nmetric = 'smape'\nprint(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv')\ndf.to_csv(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv')", 
"../../../results/model_selection/stage1/Trust-reg-arima_smape.csv\n" ] ], [ [ "## RMSE results", "_____no_output_____" ] ], [ [ "#CV point predictions rmse - no interactions\ncv_errors = forecast_errors_cv(cv_preds, cv_test, root_mean_squared_error)\ndf = pd.DataFrame(cv_errors)\ndf.columns = horizons\ndf.describe()", "_____no_output_____" ], [ "#output rmse\nmetric = 'rmse'\nprint(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv')\ndf.to_csv(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv')", "../../../results/model_selection/stage1/Trust-reg-arima_rmse.csv\n" ] ], [ [ "## MASE results", "_____no_output_____" ] ], [ [ "#mase\ncv_errors = forecast_errors_cv_scaled(cv_preds, cv_test, train['Trust'])\ndf = pd.DataFrame(cv_errors)\ndf.columns = horizons\ndf.describe()", "_____no_output_____" ], [ "metric = 'mase'\nprint(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv')\ndf.to_csv(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv')", "../../../results/model_selection/stage1/Trust-reg-arima_mase.csv\n" ] ], [ [ "## 80% Coverage", "_____no_output_____" ] ], [ [ "cv_coverage = prediction_int_coverage_cv(cv_test, cv_intervals)\ndf = pd.DataFrame(cv_coverage)\ndf.columns = horizons\ndf.describe()", "_____no_output_____" ], [ "#write 80% coverage to file\nmetric = 'coverage_80'\nprint(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv')\ndf.to_csv(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv')", "../../../results/model_selection/stage1/Trust-reg-arima_coverage_80.csv\n" ] ], [ [ "# Repeat for 95% PIs.", "_____no_output_____" ] ], [ [ "#95% PIs\nmodel = ARIMAWrapper(order=order, seasonal_order=seasonal_order, \n training_index=train.index, \n holidays=new_year['ds'].tolist())\n\nhorizons = [7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, 84, 365]\nresults = time_series_cv(model, train[REGION], val[REGION], \n horizons, alpha=0.05, step=7)", "split => 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, done.\n\n" ] ], [ [ "## 95% coverage results", "_____no_output_____" ] ], [ [ "cv_preds, cv_test, cv_intervals = results", "_____no_output_____" ], [ "cv_coverage = prediction_int_coverage_cv(cv_test, cv_intervals)\ndf = pd.DataFrame(cv_coverage)\ndf.columns = horizons\ndf.describe()", "_____no_output_____" ], [ "#write 95% coverage to file\nmetric = 'coverage_95'\nprint(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv')\ndf.to_csv(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv')", "../../../results/model_selection/stage1/Trust-reg-arima_coverage_95.csv\n" ] ], [ [ "# End", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
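The `time_series_cv` routine in the record above is rolling forecast origin (expanding-window) cross-validation. A stripped-down sketch of the same windowing with a toy seasonal-naive forecaster — the series, step, and forecaster here are illustrative stand-ins, not the notebook's model or data:

```python
# Expanding-window CV sketch mirroring the loop structure of time_series_cv.
import numpy as np

def seasonal_naive(history, horizon, period=7):
    # Toy forecaster: repeat the last observed seasonal cycle.
    reps = int(np.ceil(horizon / period))
    return np.tile(history[-period:], reps)[:horizon]

train = np.arange(100, dtype=float)  # toy training series
val = np.arange(100.0, 150.0)        # toy validation series
horizon, step = 7, 7

rmses = []
for i in range(0, len(val) - horizon + 1, step):
    history = np.concatenate([train, val[:i]])  # training window grows each split
    preds = seasonal_naive(history, horizon)
    actual = val[i:i + horizon]
    rmses.append(np.sqrt(np.mean((actual - preds) ** 2)))

print(f"mean RMSE over {len(rmses)} splits: {np.mean(rmses):.2f}")
```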
ec98304c9a481d4c4735470910c589bb687e0919
8,097
ipynb
Jupyter Notebook
demos-for-talks/VGGNet.ipynb
nirupam1sharma/Self-Study
212c5040ed2c66d25c93056279f8cf3039afb6d1
[ "MIT" ]
284
2016-08-12T16:11:37.000Z
2021-12-11T03:29:39.000Z
demos-for-talks/VGGNet.ipynb
nirupam1sharma/Self-Study
212c5040ed2c66d25c93056279f8cf3039afb6d1
[ "MIT" ]
3
2016-09-28T19:36:26.000Z
2017-10-20T14:44:56.000Z
demos-for-talks/VGGNet.ipynb
nirupam1sharma/Self-Study
212c5040ed2c66d25c93056279f8cf3039afb6d1
[ "MIT" ]
103
2016-08-14T17:25:36.000Z
2022-02-08T17:01:51.000Z
30.787072
143
0.609485
[ [ [ "# VGGNet in TFLearn", "_____no_output_____" ], [ "#### for Oxford's 17 Category Flower Dataset Classification", "_____no_output_____" ], [ "#### Based on https://github.com/tflearn/tflearn/blob/master/examples/images/vgg_network.py", "_____no_output_____" ] ], [ [ "from __future__ import division, print_function, absolute_import", "_____no_output_____" ], [ "import tflearn", "_____no_output_____" ], [ "from tflearn.layers.core import input_data, dropout, fully_connected\nfrom tflearn.layers.conv import conv_2d, max_pool_2d\nfrom tflearn.layers.estimator import regression", "_____no_output_____" ] ], [ [ "#### Import Data and Preprocess", "_____no_output_____" ] ], [ [ "import tflearn.datasets.oxflower17 as oxflower17", "_____no_output_____" ], [ "X, Y = oxflower17.load_data(one_hot=True)", "Downloading Oxford 17 category Flower Dataset, Please wait...\n" ] ], [ [ "#### Build 'VGGNet'", "_____no_output_____" ] ], [ [ "network = input_data(shape=[None, 224, 224, 3])\n\nnetwork = conv_2d(network, 64, 3, activation='relu')\nnetwork = conv_2d(network, 64, 3, activation='relu')\nnetwork = max_pool_2d(network, 2, strides=2)\n\nnetwork = conv_2d(network, 128, 3, activation='relu')\nnetwork = conv_2d(network, 128, 3, activation='relu')\nnetwork = max_pool_2d(network, 2, strides=2)\n\nnetwork = conv_2d(network, 256, 3, activation='relu')\nnetwork = conv_2d(network, 256, 3, activation='relu')\nnetwork = conv_2d(network, 256, 3, activation='relu')\nnetwork = max_pool_2d(network, 2, strides=2)\n\nnetwork = conv_2d(network, 512, 3, activation='relu')\nnetwork = conv_2d(network, 512, 3, activation='relu')\nnetwork = conv_2d(network, 512, 3, activation='relu')\nnetwork = max_pool_2d(network, 2, strides=2)\n\nnetwork = conv_2d(network, 512, 3, activation='relu')\nnetwork = conv_2d(network, 512, 3, activation='relu')\nnetwork = conv_2d(network, 512, 3, activation='relu')\nnetwork = max_pool_2d(network, 2, strides=2)\n\nnetwork = fully_connected(network, 4096, activation='relu')\nnetwork = dropout(network, 0.5)\nnetwork = fully_connected(network, 4096, activation='relu')\nnetwork = dropout(network, 0.5)\n\nnetwork = fully_connected(network, 17, activation='softmax')\n\nnetwork = regression(network, optimizer='rmsprop',\n loss='categorical_crossentropy',\n learning_rate=0.001)", "WARNING:tensorflow:tf.variable_op_scope(values, name, default_name) is deprecated, use tf.variable_scope(name, default_name, values)\nWARNING:tensorflow:tf.variable_op_scope(values, name, default_name) is deprecated, use tf.variable_scope(name, default_name, values)\nWARNING:tensorflow:tf.variable_op_scope(values, name, default_name) is deprecated, use tf.variable_scope(name, default_name, values)\nWARNING:tensorflow:tf.variable_op_scope(values, name, default_name) is deprecated, use tf.variable_scope(name, default_name, values)\nWARNING:tensorflow:tf.variable_op_scope(values, name, default_name) is deprecated, use tf.variable_scope(name, default_name, values)\nWARNING:tensorflow:tf.variable_op_scope(values, name, default_name) is deprecated, use tf.variable_scope(name, default_name, values)\nWARNING:tensorflow:tf.variable_op_scope(values, name, default_name) is deprecated, use tf.variable_scope(name, default_name, values)\nWARNING:tensorflow:tf.variable_op_scope(values, name, default_name) is deprecated, use tf.variable_scope(name, default_name, values)\nWARNING:tensorflow:tf.variable_op_scope(values, name, default_name) is deprecated, use tf.variable_scope(name, default_name, 
values)\nWARNING:tensorflow:tf.variable_op_scope(values, name, default_name) is deprecated, use tf.variable_scope(name, default_name, values)\nWARNING:tensorflow:tf.variable_op_scope(values, name, default_name) is deprecated, use tf.variable_scope(name, default_name, values)\nWARNING:tensorflow:tf.variable_op_scope(values, name, default_name) is deprecated, use tf.variable_scope(name, default_name, values)\nWARNING:tensorflow:tf.variable_op_scope(values, name, default_name) is deprecated, use tf.variable_scope(name, default_name, values)\nWARNING:tensorflow:tf.variable_op_scope(values, name, default_name) is deprecated, use tf.variable_scope(name, default_name, values)\nWARNING:tensorflow:tf.variable_op_scope(values, name, default_name) is deprecated, use tf.variable_scope(name, default_name, values)\nWARNING:tensorflow:tf.variable_op_scope(values, name, default_name) is deprecated, use tf.variable_scope(name, default_name, values)\n" ] ], [ [ "#### Training", "_____no_output_____" ] ], [ [ "model = tflearn.DNN(network, checkpoint_path='model_vgg',\n max_checkpoints=1, tensorboard_verbose=0)", "_____no_output_____" ], [ "# n_epoch=500 is recommended:\nmodel.fit(X, Y, n_epoch=10, shuffle=True,\n show_metric=True, batch_size=32, snapshot_step=500,\n snapshot_epoch=False, run_id='vgg_oxflowers17')", "Training Step: 430 | total loss: \u001b[1m\u001b[32m21.37614\u001b[0m\u001b[0m\n\u001b[2K\r| RMSProp | epoch: 010 | loss: 21.37614 - acc: 0.0716 -- iter: 1360/1360\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
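The final log in the record above shows accuracy around 0.07 after 10 epochs — close to chance for 17 classes, consistent with the notebook's own comment that n_epoch=500 is recommended. As a side note on why VGG-style stacks are heavy, a small sketch of standard 3x3-convolution parameter arithmetic applied to the first two blocks defined above (the formula is textbook conv arithmetic, not taken from the record):

```python
# Parameters of a 3x3 conv layer: (k*k*in_channels + 1) * out_channels,
# where the +1 accounts for the bias term of each output filter.
def conv_params(in_ch, out_ch, k=3):
    return (k * k * in_ch + 1) * out_ch

blocks = [(3, 64), (64, 64), (64, 128), (128, 128)]  # first two VGG blocks above
total = sum(conv_params(i, o) for i, o in blocks)
print(f"first two blocks: {total:,} parameters")      # 260,160
```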
ec98304d3ed924b04c7780ef966c21525ec2c4c0
1,573
ipynb
Jupyter Notebook
notebooks/labs34_notebooks/unconfirmed.ipynb
RowenWitt/human-rights-first-police-ds-a
7a52797d7c385aef649b7dc4e28f0d1bc90cf6c6
[ "MIT" ]
3
2021-05-14T16:01:58.000Z
2021-08-09T20:20:54.000Z
notebooks/labs34_notebooks/unconfirmed.ipynb
RowenWitt/human-rights-first-police-ds-a
7a52797d7c385aef649b7dc4e28f0d1bc90cf6c6
[ "MIT" ]
52
2021-04-15T17:33:16.000Z
2021-10-04T21:12:52.000Z
notebooks/labs34_notebooks/unconfirmed.ipynb
RowenWitt/human-rights-first-police-ds-a
7a52797d7c385aef649b7dc4e28f0d1bc90cf6c6
[ "MIT" ]
30
2021-03-10T20:30:26.000Z
2021-10-15T15:08:29.000Z
22.797101
337
0.544183
[ [ [ "import pandas as pd", "_____no_output_____" ], [ "unconfirmed = pd.read_csv(\"unconfirmed.csv\")", "_____no_output_____" ], [ "unconfirmed = unconfirmed[['DSC','confirmed']]", "_____no_output_____" ], [ "unconfirmed.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 9482 entries, 0 to 9481\nData columns (total 2 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 DSC 7491 non-null object \n 1 confirmed 7491 non-null float64\ndtypes: float64(1), object(1)\nmemory usage: 148.3+ KB\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
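The record above stops at `.info()`, which shows 7491 non-null rows out of 9482 in both columns. A hypothetical continuation — dropping the null rows and casting the label to int is my own illustrative next step, not something the notebook does:

```python
# Hypothetical continuation of the notebook above: drop null rows and
# cast the confirmed flag from float64 to int. Illustrative only.
import pandas as pd

unconfirmed = pd.read_csv("unconfirmed.csv")[['DSC', 'confirmed']]
cleaned = unconfirmed.dropna(subset=['DSC', 'confirmed'])
cleaned['confirmed'] = cleaned['confirmed'].astype(int)
print(cleaned.shape)  # expect (7491, 2) given the .info() counts above
```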
ec98458aadbdd6411efd980c72cf5c640b261104
10,859
ipynb
Jupyter Notebook
train_pl.ipynb
Dehde/mrnet
3b816b9e244756a2a264fdf377df923d2d5c781e
[ "MIT" ]
2
2021-04-28T19:23:00.000Z
2021-05-12T18:47:58.000Z
train_pl.ipynb
Dehde/mrnet
3b816b9e244756a2a264fdf377df923d2d5c781e
[ "MIT" ]
null
null
null
train_pl.ipynb
Dehde/mrnet
3b816b9e244756a2a264fdf377df923d2d5c781e
[ "MIT" ]
null
null
null
28.727513
338
0.539092
[ [ [ "# !pip install tqdm\n# !pip install torch\n# !pip install torchvision torchaudio\n# !pip install tensorboardX\n# !pip install scikit-learn\n# !pip install pytorch-lightning\n# !pip install git+https://github.com/ncullen93/torchsample\n# !pip install nibabel\n# !pip install wget\n# !pip install ipywidgets\n# !pip install widgetsnbextension\n# !pip install tensorflow\n\n# jupyter labextension install @jupyter-widgets/jupyterlab-manager > /dev/null\n# jupyter nbextension enable --py widgetsnbextension", "_____no_output_____" ], [ "import shutil\nimport os\nimport time\nfrom datetime import datetime\nimport argparse\nimport numpy as np\nfrom tqdm import tqdm\nimport multiprocessing\n\nimport torch\nimport torch.nn as nn\nimport torchmetrics\nimport torch.optim as optim\nfrom torch.autograd import Variable\nfrom torchvision import transforms\nimport torch.nn.functional as F\nfrom tensorboardX import SummaryWriter\n\nimport model\nfrom dataset import MRDatasetMerged\nfrom torch.utils.data import DataLoader \n\nimport pytorch_lightning as pl\nfrom sklearn import metrics\nfrom ipywidgets import IntProgress", "_____no_output_____" ], [ "# !jupyter nbextension enable --py widgetsnbextension\n#%load_ext tensorboard\n#%tensorboard --logdir lightning_logs/", "_____no_output_____" ], [ "class Args:\n def __init__(self):\n self.task = \"abnormal\" #['abnormal', 'acl', 'meniscus']\n self.plane = \"sagittal\" #['sagittal', 'coronal', 'axial']\n self.prefix_name = \"Test\"\n self.augment = 1 #[0, 1]\n self.lr_scheduler = \"plateau\" #['plateau', 'step']\n self.gamma = 0.5\n self.epochs = 1\n self.lr = 1e-5\n self.flush_history = 0 #[0, 1]\n self.save_model = 1 #[0, 1]\n self.patience = 5\n self.log_every = 100\n \nargs = Args()", "_____no_output_____" ], [ "def to_tensor(x):\n return torch.Tensor(x)\n\nnum_workers = multiprocessing.cpu_count() - 1\n\nlog_root_folder = \"./logs/{0}/{1}/\".format(args.task, args.plane)\nif args.flush_history == 1:\n objects = os.listdir(log_root_folder)\n for f in objects:\n if os.path.isdir(log_root_folder + f):\n shutil.rmtree(log_root_folder + f)\n\nnow = datetime.now()\nlogdir = log_root_folder + now.strftime(\"%Y%m%d-%H%M%S\") + \"/\"\nos.makedirs(logdir)\n\nwriter = SummaryWriter(logdir)\n\n# augmentor = Compose([\n# transforms.Lambda(to_tensor),\n# RandomRotate(25),\n# RandomTranslate([0.11, 0.11]),\n# RandomFlip(),\n# # transforms.Lambda(lambda x: x.repeat(3, 1, 1, 1).permute(1, 0, 2, 3)),\n# ])\ndata_dir = \"/home/jovyan/mrnet_dataset/\"\n\ntrain_dataset = MRDatasetMerged(data_dir, transform=None, train=True)\nvalidation_dataset = MRDatasetMerged(data_dir, train=False)\n\ntrain_loader = DataLoader(train_dataset, batch_size=1, shuffle=True, num_workers=num_workers, drop_last=False)\nvalidation_loader = DataLoader(validation_dataset, batch_size=1, shuffle=False, num_workers=num_workers, drop_last=False)\n\nmrnet = model.MRNet()", "Downloading: \"https://download.pytorch.org/models/alexnet-owt-7be5be79.pth\" to /home/jovyan/.cache/torch/hub/checkpoints/alexnet-owt-7be5be79.pth\n" ], [ "monitor = \"val_f1\"\n\ncallback = pl.callbacks.ModelCheckpoint(\n monitor=f'{monitor}',\n dirpath=f'/notebooks/checkpoints_{monitor}/',\n filename='checkpoint-{epoch:02d}-{' + f'{monitor}' + ':.2f}',\n save_top_k=3,\n mode='min',\n )", "_____no_output_____" ], [ "trainer = pl.Trainer(max_epochs=1, gpus=0, callbacks=[callback]) #1", "GPU available: False, used: False\nTPU available: False, using: 0 TPU cores\nIPU available: False, using: 0 IPUs\n" ], [ "trainer.fit(mrnet, 
train_loader, validation_loader)", "2021-09-28 14:43:22.882976: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\n2021-09-28 14:43:22.883027: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\n\n  | Name             | Type              | Params\n-------------------------------------------------------\n0 | pretrained_model | AlexNet           | 61.1 M\n1 | pooling_layer    | AdaptiveAvgPool2d | 0     \n2 | classifer        | Linear            | 771   \n3 | train_f1         | F1                | 0     \n4 | valid_f1         | F1                | 0     \n5 | train_auc        | AUROC             | 0     \n-------------------------------------------------------\n61.1 M    Trainable params\n0         Non-trainable params\n61.1 M    Total params\n244.406   Total estimated model params size (MB)\n" ], [ "# the MRNet class lives in the imported `model` module, so it must be\n# referenced through the module name here\nm = model.MRNet.load_from_checkpoint(callback.best_model_path)", "_____no_output_____" ], [ "m(validation_dataset[0])", "_____no_output_____" ], [ "#export model\nfilepath = 'model_v2.onnx'\nmodel = mrnet\ninput_sample = torch.randn((64, 3, 227, 227))\nmodel.to_onnx(filepath, input_sample, export_params=True)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
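One detail worth noticing in the record above: the `ModelCheckpoint` monitors `val_f1` with `mode='min'`, which keeps the checkpoints with the *lowest* F1. For a higher-is-better score the usual setting is `mode='max'`; a sketch of that variant, using only the arguments already shown in the notebook (the change of mode is a suggestion, not part of the source):

```python
import pytorch_lightning as pl

# Same callback as in the notebook, but tracking the highest validation F1.
callback = pl.callbacks.ModelCheckpoint(
    monitor='val_f1',
    dirpath='/notebooks/checkpoints_val_f1/',
    filename='checkpoint-{epoch:02d}-{val_f1:.2f}',
    save_top_k=3,
    mode='max',  # F1 is a "higher is better" metric
)
```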
ec985a972c04d6c8e551ee9b46f842c30daa50ff
60,532
ipynb
Jupyter Notebook
week-1-python/python_warmup.ipynb
vvthakral/data-science-bootcamp
5c6f43536001c43ed17bf1be947c386643a0b4ec
[ "MIT" ]
23
2021-02-24T18:09:35.000Z
2021-09-20T17:07:57.000Z
week-1-python/python_warmup.ipynb
vvthakral/data-science-bootcamp
5c6f43536001c43ed17bf1be947c386643a0b4ec
[ "MIT" ]
null
null
null
week-1-python/python_warmup.ipynb
vvthakral/data-science-bootcamp
5c6f43536001c43ed17bf1be947c386643a0b4ec
[ "MIT" ]
26
2021-02-24T18:09:48.000Z
2021-09-20T17:07:50.000Z
25.649153
713
0.410328
[ [ [ "<a href=\"https://colab.research.google.com/github/vvthakral/data-science-bootcamp/blob/main/week-1-python/python_warmup.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____", "<h1> Basic Intro to Python </h1>", "_____no_output_____" ] ], [ [ "print(\"Hello World\")", "Hello World\n" ] ], [ [ "\n<h3> Zen of Python </h3>\n\nThe best Python practices suggested by Tim Peters.\n\nIt's always good to abide by the zen even when you have an opportunity to show off your pythonic skills :)", "_____no_output_____" ] ], [ [ "import this", "The Zen of Python, by Tim Peters\n\nBeautiful is better than ugly.\nExplicit is better than implicit.\nSimple is better than complex.\nComplex is better than complicated.\nFlat is better than nested.\nSparse is better than dense.\nReadability counts.\nSpecial cases aren't special enough to break the rules.\nAlthough practicality beats purity.\nErrors should never pass silently.\nUnless explicitly silenced.\nIn the face of ambiguity, refuse the temptation to guess.\nThere should be one-- and preferably only one --obvious way to do it.\nAlthough that way may not be obvious at first unless you're Dutch.\nNow is better than never.\nAlthough never is often better than *right* now.\nIf the implementation is hard to explain, it's a bad idea.\nIf the implementation is easy to explain, it may be a good idea.\nNamespaces are one honking great idea -- let's do more of those!\n" ] ], [ [ "<h3>Help</h3>\nOne of the most useful functions.", "_____no_output_____" ] ], [ [ "#To know more about any object's functionality, the use of help is indeed a help\nhelp(print)", "Help on built-in function print in module builtins:\n\nprint(...)\n    print(value, ..., sep=' ', end='\\n', file=sys.stdout, flush=False)\n    \n    Prints the values to a stream, or to sys.stdout by default.\n    Optional keyword arguments:\n    file:  a file-like object (stream); defaults to the current sys.stdout.\n    sep:   string inserted between values, default a space.\n    end:   string appended after the last value, default a newline.\n    flush: whether to forcibly flush the stream.\n\n" ] ], [ [ "<h1>Accessing Attributes/Methods of any object</h1>\n", "_____no_output_____" ] ], [ [ "#To know the methods inside a module, kindly make use of 'dir'\nimport math \ndir(math)", "_____no_output_____" ] ], [ [ "<h3>Data types in Python</h3>\n", "_____no_output_____" ], [ "<p>The basic datatypes: <br>\nstr<br> \nint<br>\nfloat<br>\nlist<br>\nset<br>\ndict<br>\n<strong>Note: It is advised not to use them for naming a variable.</strong>\n</p>", "_____no_output_____" ] ], [ [ "#string\nname = 'Vishnu' + ' Thakral' #We can directly add strings in this manner\nprint(name,type(name)) #type prints the type of the object it is called on.", "Vishnu Thakral <class 'str'>\n" ], [ "# int\nsem = 3\nprint(sem,type(sem))", "3 <class 'int'>\n" ], [ "#float\nrandom_number = 5.1212\nprint(random_number,type(random_number))", "5.1212 <class 'float'>\n" ], [ "#list\ns_list = [1,3,5,5]\nprint(s_list,type(s_list))", "[1, 3, 5, 5] <class 'list'>\n" ], [ "#list of tuples\nlist_tuples = [(1,3),(3,4,5),(5,4)]\nprint(list_tuples,type(list_tuples))", "[(1, 3), (3, 4, 5), (5, 4)] <class 'list'>\n" ], [ "#set\nsets = set((1,2,3,3))\nprint(sets,type(sets))", "{1, 2, 3} <class 'set'>\n" ], [ "#dictionary\nppl = {'Mr.X':'Inventor','Profession':'Research','Location':'Earth'}\nprint(ppl,type(ppl))", "{'Mr.X': 'Inventor', 'Profession': 'Research', 'Location': 'Earth'} <class 'dict'>\n" ], [ "#boolean\nboolean = True\nprint('Boolean',boolean)\nprint('bool from other data type',bool(random_number))\n\n#None type\nnone_type = None", "Boolean True\nbool from other data type True\n" ] ], [ [ "<h3>Commonly Used Operators and Assignment in Python</h3>", "_____no_output_____" ] ], [ [ "a = 10\nb = 10\n\nprint('Add', a+b) #Add\nprint('Subtract', a-b) # Subtract\nprint('Divide', a/b) #Divide, returns the answer as a floating point number\nprint('Multiply', a*b) #Mul\nprint('Floor Division',a//b) #Divide with integer return type\nprint('Mod', a%b) #Mod\nprint('Power', a**b) #Pow (^ is not used for this)", "Add 20\nSubtract 0\nDivide 1.0\nMultiply 100\nFloor Division 1\nMod 0\nPower 10000000000\n" ] ], [ [ "<h5> Question.\n\nWhat is the value of a after the below operations?\n</h5>", "_____no_output_____" ] ], [ [ "a = 10\nb = 10\na += a\na -= b\na *= b\na /= b", "_____no_output_____" ] ], [ [ "<h3> Assignment Tricks </h3>", "_____no_output_____" ] ], [ [ "a=b=10\nprint('a and b before swap are: ', a, b)\nprint('a is: ',type(a))\na,b = b,a\n\nprint('a and b after swap are: ', a, b)\nprint('a is: ',type(a))\nprint('Pretty Cool Right!! \\n')", "a and b before swap are:  10 10\na is:  <class 'int'>\na and b after swap are:  10 10\na is:  <class 'int'>\nPretty Cool Right!! \n\n" ], [ "#a,b = 0,0\na = b = 0\nprint('After assignment: ', a, b)", "After assignment:  0 0\n" ], [ "a , *b= 50 , 60 , 70\nprint(a,type(a))\nprint(b,type(b))", "50 <class 'int'>\n[60, 70] <class 'list'>\n" ] ], [ [ "<h5>Question\n\nWhat will the assignment be?\n</h5>\n\n", "_____no_output_____" ] ], [ [ "a, *b, c = 50 , 60 , 70, 80\nprint(a,b,c)", "_____no_output_____" ] ], [ [ "<h3> Typecasting </h3>", "_____no_output_____" ] ], [ [ "# string (str) to int\n\nstring = '10'\nprint('int typecasted is: ', int(string))", "int typecasted is:  10\n" ], [ "#string to float\nprint('Float typecasted is: ',float(string))", "Float typecasted is:  10.0\n" ], [ "#string to list\nlists = list(string)\nprint('Typecasted to list: ', lists)", "Typecasted to list:  ['1', '0']\n" ], [ "#int to string\nstring = str(1) + '0'\nprint('Int to string', string)", "Int to string 10\n" ], [ "#set to list\ns = set((1,2,3,3,4,5))\nlist_s = list(s)\nprint(list_s)", "[1, 2, 3, 4, 5]\n" ], [ "#tuple to list\ntuples = ((1,2),(1,'vishnu'),(4,'Brooklyn'))\nlist_tup = list(tuples)\nprint(list_tup)", "[(1, 2), (1, 'vishnu'), (4, 'Brooklyn')]\n" ], [ "#string to list of words\nsome_str = ' hi hello bye !'\nprint(some_str.split())", "['hi', 'hello', 'bye', '!']\n" ], [ "#list to string\n\ngreet = ['Hello','World','!']\nprint('List to string: ', ' '.join(greet))", "List to string:  Hello World !\n" ], [ "#character from an ASCII value\nprint(chr(97))\nprint()\n\n#ASCII value of a character\nprint(ord('a'))", "a\n\n97\n" ] ], [ [ "<h3>Use of inbuilt methods, random, math and datetime</h3>", "_____no_output_____" ] ], [ [ "import random\n\n#start and end parameters\nprint('Generating random range from start and end range inclusive ',end='')\nprint(random.randint(5,10))\n\nprint()\nprint('\\nExploring built in methods of Math library')\nimport math\nprint('Inbuilt constant pi',math.pi)\nprint('Sqrt of 25 ',math.sqrt(25))\nprint('Floor value',math.floor(5.3))\n\nprint('\\nWorking with Datetime')\nimport datetime\ndate = datetime.datetime.now()\nprint(date)\nprint(type(date))", "Generating random range from start and end range inclusive 10\n\n\nExploring built in methods of Math library\nInbuilt constant pi 3.141592653589793\nSqrt of 25  5.0\nFloor value 5\n\nWorking with Datetime\n2021-02-24 17:37:22.836587\n<class 'datetime.datetime'>\n" ] ], [ [ "<h3>Lists (in depth)</h3>", "_____no_output_____" ] ], [ [ "#list Operator\n#Ways to initialize\nlist_1 = list()\nprint()\n\n#check the type of the variable\nprint('list_1 ',type(list_1))\nprint()\n\nlist_num = [1,2,3,5]\nprint('list_num ',list_num)\nprint()\n\nlist_object = ['1','2',777,'%','delhi','^']\nprint('list_object ',list_object)\nprint()\n\n#operations automatically computed when given inside list\nlist_operator = [1*2,'art'+'ist',10//2]\nprint('list_operator ',list_operator)", "\nlist_1  <class 'list'>\n\nlist_num  [1, 2, 3, 5]\n\nlist_object  ['1', '2', 777, '%', 'delhi', '^']\n\nlist_operator  [2, 'artist', 5]\n" ] ], [ [ "<h3>Access values from list</h3>\nPython allows negative indexing.", "_____no_output_____" ] ], [ [ "print(list_object)\n\n#Individual access\nprint('First element ',list_object[0])", "['1', '2', 777, '%', 'delhi', '^']\nFirst element  1\n" ], [ "#From start to end\nprint(list_object[:])\nprint()", "['1', '2', 777, '%', 'delhi', '^']\n\n" ], [ "#Skipping in between\n#[start:stop:step]\nprint(list_object[0:6:3])", "['1', '%']\n" ], [ "#Negative ranges\nprint(list_object[-4:-1])", "[777, '%', 'delhi']\n" ] ], [ [ "<h5>Question. \n\nPrint last 2 elements with negative indexing\n</h5>\n", "_____no_output_____" ] ], [ [ "#Negative indices answer:", "_____no_output_____" ] ], [ [ "#Trick Again!\n<h5> Q. What does the below code give as output?</h5>", "_____no_output_____" ] ], [ [ "my_string = \"Hi All!\"\nprint(my_string[::-1])", "_____no_output_____" ] ], [ [ "<h5>Q. What will this code do?</h5>\nHint: It is a string multiplied by a number (be careful with data types)", "_____no_output_____" ] ], [ [ "my_string = \"Python \"\nprint(my_string*5)", "_____no_output_____" ] ], [ [ "<h4>Modify List</h4>", "_____no_output_____" ] ], [ [ "#Make changes in the list\nlist_object.extend([0,1.2])\nprint(list_object)\nprint()\n\n#Change a range of values\nsubset_items = [1,52,798,8]\nlist_object[0:4] = subset_items\nprint(list_object)", "['1', '2', 777, '%', 'delhi', '^', 0, 1.2]\n\n[1, 52, 798, 8, 'delhi', '^', 0, 1.2]\n" ], [ "#Skipping changes\nsubset = ['2','45555','12']\nlist_object[0:5:2] = subset\nprint(list_object)", "['2', 52, '45555', 8, '12', '^', 0, 1.2]\n" ] ], [ [ "<h4>Delete Values</h4>", "_____no_output_____" ] ], [ [ "#Delete element\nlist_object = [1,2,12,12,12,3,4,5,7]\n\n#Can be used as stack and queue\nlist_object.pop(0) #queue\nlist_object.pop() #stack\nlist_object.pop(3) #index\nprint(list_object)", "[2, 12, 12, 3, 4, 5]\n" ], [ "#remove raises an error when the value is not present in the list or the list is empty\n#try except\ntry:\n    list_object.remove(10)\nexcept:\n    print('Element not in list')\nprint(list_object)", "Element not in list\n[2, 12, 12, 3, 4, 5]\n" ] ], [ [ "<h3>Functionalities and Power of Lists</h3>", "_____no_output_____" ] ], [ [ "#Trick again?\nlist_basics = [0]*5\nprint(list_basics)", "[0, 0, 0, 0, 0]\n" ], [ "#Concatenating two lists:\neven_num = [0,2,4,6]\nodd_num = [1,3,5,7]\n\nsum_num = even_num + odd_num\nprint('sum_num', sum_num)\nprint()\n\n#Alternate way is using extend\nsum_num = []\nsum_num.extend(even_num)\nsum_num.append(odd_num)\n\nprint('sum_num after extend and append', sum_num)", "sum_num [0, 2, 4, 6, 1, 3, 5, 7]\n\nsum_num after extend and append [0, 2, 4, 6, [1, 3, 5, 7]]\n" ], [ "#Using inbuilt list methods\nprint(len(sum_num))\nprint(min(even_num))\nprint(max(even_num))\nprint(sum(even_num))", "5\n0\n6\n12\n" ], [ "#Use functions in the list and unpack the values\n# * - unpacking operator \nnew_list = [*range(10,20)]\nprint(new_list)", "[10, 11, 12, 13, 14, 15, 16, 17, 18, 19]\n" ] ], [ [ "<h3>2D lists</h3>", "_____no_output_____" ] ], [ [ "list_2d = [[13,4,5,6],['a','b','c','d'],[1,2,3,4,5,6,7]]\n\n#list at particular index\nprint(list_2d[1])\n\n#Access the list after given index\nprint(list_2d[1:])\n\n#Alternative\nprint(list_2d[:][1:3])\n\n#Individual elements:\nprint(list_2d[0][1])", "['a', 'b', 'c', 'd']\n[['a', 'b', 'c', 'd'], [1, 2, 3, 4, 5, 6, 7]]\n[['a', 'b', 'c', 'd'], [1, 2, 3, 4, 5, 6, 7]]\n4\n" ] ], [ [ "<h3>For loop</h3>", "_____no_output_____" ] ], [ [ "#Print in single line\nfor i in range(0,5):\n    print(i, end = ' ')", "0 1 2 3 4 " ], [ "#Step parameter added in range\nprint('Step parameter addition')\nfor i in range(10,2,-2):\n    print(i,end= ' ')", "Step parameter addition\n10 8 6 4 " ], [ "#Unrolling any iterator for that matter\nprint('\\nPrinting list values')\nsums = [1,1,3,5]\nfor number in sums:\n    print(number)", "\nPrinting list values\n1\n1\n3\n5\n" ], [ "#One-liner for loop\nprint('\\nOne-liner for loop')\nsum_upto_10 = [i for i in range(11)]\nprint(sum_upto_10)", "\nOne-liner for loop\n[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\n" ], [ "#One-liner nested for loop\nprint('\\nOne-liner nested for loop: multiplication table\\n')\nmatrix_mul = [j*i for i in range(5) for j in range(5,10)]\nprint('Table looks like:',matrix_mul)\n#Does this obey zen?", "\nOne-liner nested for loop: multiplication table\n\nTable looks like: [0, 0, 0, 0, 0, 5, 6, 7, 8, 9, 10, 12, 14, 16, 18, 15, 18, 21, 24, 27, 20, 24, 28, 32, 36]\n" ], [ "mat = []\nfor i in range(5):\n    for j in range(5,10):\n        mat.append(j*i)", "_____no_output_____" ], [ "a = ['Hello', 'world', '!']\n#Iterable Method 1\nfor i in range(len(a)):\n    print(i,a[i])\nprint('')\n\n#Method 2\nfor i in a:\n    print(i)\nprint()\n\n#Method 3\nprint(list(enumerate(a)))", "0 Hello\n1 world\n2 !\n\nHello\nworld\n!\n\n[(0, 'Hello'), (1, 'world'), (2, '!')]\n" ] ], [ [ "<h3>Functions</h3>", "_____no_output_____" ] ], [ [ "#Fizz Buzz coding\n# To define a function use the def keyword followed by the function name and its parameters (if any).\n\ndef fizzBuzz():\n    '''\n    Given a range, print Fizz if divisible by 3, Buzz if divisible by 5, and FizzBuzz if divisible by both 3 and 5\n    '''\n    n = int(input('Enter a number '))\n    #Python way\n    count = 1\n    while (count <= n):\n        string = ''\n        if (count %3 == 0):\n            string = 'Fizz'\n        if (count % 5 == 0):\n            string += 'Buzz'\n        print(count,string)\n        count +=1\n        \nfizzBuzz()", "Enter a number 15\n1 \n2 \n3 Fizz\n4 \n5 Buzz\n6 Fizz\n7 \n8 \n9 Fizz\n10 Buzz\n11 \n12 Fizz\n13 \n14 \n15 FizzBuzz\n" ], [ "def fizzBuzz():\n    '''\n    Same function as above with more if else conditions.\n    '''\n    n = int(input('Enter a number '))\n    count = 1\n    while(count <= n):\n        if (count % 5 == 0 and count % 3 == 0):\n            print(count,'Fizz Buzz')\n        elif (count % 3 == 0):\n            print(count,'Fizz')\n        elif (count % 5 == 0):\n            print(count,'Buzz')\n        else:\n            print(count)\n        count +=1\nfizzBuzz()", "Enter a number 15\n1\n2\n3 Fizz\n4\n5 Buzz\n6 Fizz\n7\n8\n9 Fizz\n10 Buzz\n11\n12 Fizz\n13\n14\n15 Fizz Buzz\n" ] ], [ [ "<h3>Dictionary</h3>\n\nWe use a key, not an index, to reference elements in a dictionary", "_____no_output_____" ] ], [ [ "#Creating an empty dictionary\ndict_ = {} # We defined dict_ and not dict \n#dict_ = {key:value,key:value} #Keys have to be unique, i.e. they behave like a set\nstring = 'iamdictionaryiteratemeandseethefun'\n\nfor ch in string:\n    if (ch in dict_):\n        dict_[ch] +=1\n\n    else:\n        dict_[ch]
= 1\n\nfor k,v in dict_.items():\n #pass\n print(f'Character {k} and its count {v}')\n \n#if we try updating for a key that is present the original value will be overwritten", "Character i and its count 4\nCharacter a and its count 4\nCharacter m and its count 2\nCharacter d and its count 2\nCharacter c and its count 1\nCharacter t and its count 4\nCharacter o and its count 1\nCharacter n and its count 3\nCharacter r and its count 2\nCharacter y and its count 1\nCharacter e and its count 6\nCharacter s and its count 1\nCharacter h and its count 1\nCharacter f and its count 1\nCharacter u and its count 1\n" ], [ "#Print value for some key\n#Use .get or check for existence before access to avoid error like below\nprint(dict_['z'])", "_____no_output_____" ], [ "print(dict_.get('z'))", "None\n" ] ], [ [ "<h4>Dictionary Trick\n", "_____no_output_____" ] ], [ [ "#merge 2 dictionaries\nx = {'a':1,'b':2}\ny = {'c':3}\nz = {**x,**y}\nprint(z)", "{'a': 1, 'b': 2, 'c': 3}\n" ] ], [ [ "<h5>Question. \n\nCan we have a list as a dictionary key.", "_____no_output_____" ], [ "<h3>Sorting", "_____no_output_____" ] ], [ [ "#Sorting a list\n\n#For sorting the dataype should be of same type\nlist_ = ['a','3232','21332','bsaa','87','3231']\nprint('Ascending order ',list_)\n\n#Insights of this sorting (only done for 1st character)\nresult = []\nfor i in list_:\n sum_ = 0\n for j in i:\n sum_ = ord(j)\n break\n result.append(sum_)\n\n#It checks the first character of a string if there is a match with other string then it goes \nprint('Before sorting')\nprint(list_)\nprint(result)\n\nprint('---------------')\nprint('After sorting')\nlist_.sort()\nresult.sort()\nprint(list_)\nprint(result)", "Ascending order ['a', '3232', '21332', 'bsaa', '87', '3231']\nBefore sorting\n['a', '3232', '21332', 'bsaa', '87', '3231']\n[97, 51, 50, 98, 56, 51]\n---------------\nAfter sorting\n['21332', '3231', '3232', '87', 'a', 'bsaa']\n[50, 51, 51, 56, 97, 98]\n" ] ], [ [ "<h3>Sorted vs Sort</h3>\nThe primary difference between the list sort() function and the sorted() function.\n\nsort() function will modify the list it is called on.The function modifies/sorts the list in-place and has no return value.\n\nThe sorted() function will create a new list containing a sorted version of the list it is given.", "_____no_output_____" ] ], [ [ "a = [1,3,2]\nprint('a before any sort: ', a)\nb = sorted(a)\nprint('a after sorted: ',a)\nprint('b: ', b)\na.sort()\nprint('a after sort(): ', a)", "a before any sort: [1, 3, 2]\na after sorted: [1, 3, 2]\nb: [1, 2, 3]\na after sort(): [1, 2, 3]\n" ] ], [ [ "<h3>Lambda functions</h3> \n\nDictionary insights with sorting functions\n", "_____no_output_____" ] ], [ [ "#Norrmal functions in python\ndef sumFunc(a,b,c):\n ...\n ...\n ...\n return a+b+c\n \nsumFunc(5,6,2)", "_____no_output_____" ] ], [ [ "#Lambda Trick", "_____no_output_____" ] ], [ [ "add = lambda a,b,c: a+b+c\nprint(add(5, 6, 2))", "13\n" ], [ "dictonary = {67:\"Rahul\",23:\"Darsh\",12:\"Dhaval\",45:\"Manish\",11:\"Dipam\",101:\"Biman\",99:\"Biman\"}\n\n#To sort values based on either key or values in the dictionary and supply order of sort in reverse parameter\ntuples = sorted(dictonary.items(), key = lambda x: x[1], reverse = False)\n\n#Note just in list tuple cannot be unpacked, it needs to be accessed index wise\n#Print the sorted elements\nprint('Sorted on keys')\nfor elements in tuples:\n print(elements[0],\"::\",elements[1])\n\n\n#Sort First on values and then keys, Interesting\ntuples = sorted(dictonary.items(),key = lambda 
x:(x[1],x[0]), reverse=False)\nprint('Sorted on first values and then keys\\n')\nfor elements in tuples:\n    print(elements[0],\"::\",elements[1])", "Sorted on keys\n101 :: Biman\n99 :: Biman\n23 :: Darsh\n12 :: Dhaval\n11 :: Dipam\n45 :: Manish\n67 :: Rahul\nSorted on first values and then keys\n\n99 :: Biman\n101 :: Biman\n23 :: Darsh\n12 :: Dhaval\n11 :: Dipam\n45 :: Manish\n67 :: Rahul\n" ] ], [ [ "<h5>Q.\n\nOne way of initializing a Dictionary is with Ages as keys and names of Persons as values.\n\nWhat are the drawbacks of such keys?\n</h5>", "_____no_output_____", "<h3>Counter", "_____no_output_____" ], [ "from collections import Counter\n\ncolor = ['blue','blue','red','yellow','green','red']\nz = Counter(color)\nprint('Type ',type(z))\nprint(z)\n\n#Can be initialized like this also\ncoun = Counter(a=1, b=2, c=3, d=120, e=1, f=219)\nprint('\\nMost frequent occurring letters using Counters')\nfor letter,count in coun.most_common(4):\n    print('%s %d' %(letter,count))\n\nprint('Convert counters to List')\ncoun = Counter(a=1, b=2, c=3) \nprint(list(coun.elements()))", "Type  <class 'collections.Counter'>\nCounter({'blue': 2, 'red': 2, 'yellow': 1, 'green': 1})\n\nMost frequent occurring letters using Counters\nf 219\nd 120\nc 3\nb 2\nConvert counters to List\n['a', 'b', 'b', 'c', 'c', 'c']\n" ] ], [ [ "<h5>Q. Check if the below strings are anagrams (one possible solution is sketched below)</h5>", "_____no_output_____" ], [ "my_string_1 = \"RACECAR\"\nmy_string_2 = \"CARRRCE\"", "_____no_output_____" ] ] ]
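One possible answer to the anagram question above, sketched with the Counter class introduced earlier; the helper name `is_anagram` is ours, not the notebook's:

```python
from collections import Counter

def is_anagram(s1, s2):
    # Two strings are anagrams exactly when every character
    # occurs the same number of times in both.
    return Counter(s1) == Counter(s2)

print(is_anagram("RACECAR", "CARRRCE"))  # False: 'RACECAR' has two R's, 'CARRRCE' has three
```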
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
ec985f48cee0d6fc11073401a5f533d7812508f8
6,132
ipynb
Jupyter Notebook
Model_1/Prototyping/train.ipynb
elliottwaissbluth/tensor-hero
be99ca4380a5ec59c0826e5fc8a87ec0f8956201
[ "MIT" ]
1
2021-09-14T20:05:09.000Z
2021-09-14T20:05:09.000Z
Model_1/Prototyping/train.ipynb
elliottwaissbluth/tensor-hero
be99ca4380a5ec59c0826e5fc8a87ec0f8956201
[ "MIT" ]
null
null
null
Model_1/Prototyping/train.ipynb
elliottwaissbluth/tensor-hero
be99ca4380a5ec59c0826e5fc8a87ec0f8956201
[ "MIT" ]
null
null
null
33.878453
126
0.520222
[ [ [ "from model import *\nfrom torch.utils.tensorboard import SummaryWriter\n\ntrain_path = Path.cwd() / 'toy training data' / 'preprocessed'\n# Define dataset\ntrain_data = Dataset(train_path)\ntrain_loader = torch.utils.data.DataLoader(\n train_data,\n batch_size=1,\n shuffle=False,\n num_workers=0,\n drop_last=True,\n)\n\nfor batch_idx, batch in enumerate(train_loader):\n # input = spec, output = notes\n spec, notes = batch[0], batch[1]\n print(spec.shape)\n print(notes.shape)\n print(notes)", "torch.Size([1, 512, 400])\ntorch.Size([1, 52])\n" ], [ "from model import *\nfrom torch.utils.tensorboard import SummaryWriter\n\ntrain_path = Path.cwd() / 'toy training data' / 'preprocessed'\n# Define dataset\ntrain_data = Dataset(train_path)\ntrain_loader = torch.utils.data.DataLoader(\n train_data,\n batch_size=1,\n shuffle=False,\n num_workers=0,\n drop_last=True,\n)\n\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n# device = 'cpu'\n\n# Training hyperparameters\nnum_epochs = 100\nlearning_rate = 1e-4\nbatch_size = 1\n\n# Model hyperparameters\ntrg_vocab_size = 434 # <output length>\nembedding_size = 512\nnum_heads = 8\nnum_encoder_layers = 3\nnum_decoder_layers = 3\ndropout = 0.1\nmax_len = 400\nforward_expansion = 2048\n\n# Tensorboard for nice plots\nwriter = SummaryWriter('runs/loss_plot')\nstep = 0 # how many times the model has gone through some input\n\n# Define model\nmodel = Transformer(\n embedding_size,\n trg_vocab_size,\n num_heads,\n num_encoder_layers,\n num_decoder_layers,\n forward_expansion,\n dropout,\n max_len,\n device,\n).to(device)\n\noptimizer = optim.Adam(model.parameters(), lr=learning_rate)\n\ncriterion = nn.CrossEntropyLoss() # Multi-class loss, when you have a many class prediction problem\n\nfor epoch in range(num_epochs):\n model.train() # Put model in training mode, so that it knows it's parameters should be updated\n for batch_idx, batch in enumerate(train_loader):\n # Batches come through as a tuple defined in the return statement __getitem__ in the Dataset\n spec, notes = batch[0].to(device), batch[1].to(device)\n\n # forward prop\n output = model(spec, notes[..., :-1]) # Don't pass the last element into the decoder, want it to be predicted\n\n output = output.reshape(-1, output.shape[2]) # Reshape the output for use by criterion\n notes = notes[..., 1:].reshape(-1) # Same for the notes\n optimizer.zero_grad() # Zero out the gradient so it doesn't accumulate\n\n loss = criterion(output, notes) # Calculate loss, this is output vs ground truth\n loss.backward() # Compute loss for every node in the computation graph\n\n # This line to avoid the exploding gradient problem\n torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1)\n\n optimizer.step() # Update model parameters\n writer.add_scalar(\"Training Loss\", loss, global_step=step)\n step += 1\n\n # Let's print the output vs the ground truth every 5 epochs for the first 15 epochs\n if epoch in [0, 5, 10, 15, 20]:\n print('\\nEpoch {}/{}'.format(epoch, num_epochs))\n print('Loss : {}'.format(loss.item()))\n print('Ground Truth : {}'.format(notes))\n print('Model Output : {}'.format(torch.argmax(output, dim=1)))\n", "_____no_output_____" ], [ "print(torch.argmax(output, dim=1))\nprint(notes)", "tensor([ 0, 47, 2, 60, 2, 99, 1, 110, 0, 122, 3, 133, 1, 145,\n 2, 157, 4, 171, 3, 183, 1, 195, 3, 221, 1, 233, 0, 245,\n 2, 259, 2, 297, 1, 309, 0, 322, 3, 334, 1, 346, 2, 359,\n 4, 371, 3, 395, 2, 419, 3, 431, 433], device='cuda:0')\ntensor([ 0, 47, 2, 60, 2, 99, 1, 110, 0, 122, 3, 
133, 1, 145,\n 2, 157, 4, 171, 3, 183, 1, 195, 3, 221, 1, 233, 0, 245,\n 2, 259, 2, 297, 1, 309, 0, 322, 3, 334, 1, 346, 2, 359,\n 4, 371, 3, 395, 2, 419, 3, 431, 433], device='cuda:0')\n" ] ] ]
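The loop above only ever trains with teacher forcing; at inference time the note sequence has to be generated autoregressively. The following is a minimal greedy-decoding sketch, not part of the original notebook: it assumes the model returns logits shaped (batch, seq, vocab) as implied by the `reshape` above, and that token 0 starts and token 433 ends a sequence, which is an inference from the printed ground truth rather than a documented convention.

```python
import torch

@torch.no_grad()
def greedy_decode(model, spec, max_len=52, sos_token=0, eos_token=433, device='cpu'):
    # Grow the output one token at a time, always taking the argmax.
    model.eval()
    notes = torch.tensor([[sos_token]], device=device)
    for _ in range(max_len - 1):
        logits = model(spec.to(device), notes)             # (1, t, trg_vocab_size)
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        notes = torch.cat([notes, next_token], dim=1)      # append the prediction
        if next_token.item() == eos_token:
            break
    return notes.squeeze(0)
```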
[ "code" ]
[ [ "code", "code", "code" ] ]
ec98657b405841b165b9055f418684420fc3afba
20,834
ipynb
Jupyter Notebook
how-to-use-azureml/automated-machine-learning/classification-text-dnn/auto-ml-classification-text-dnn.ipynb
biotovarx/MachineLearningNotebooks
9094da40854cfc415fe3ba14426d6a6bd9cf507f
[ "MIT" ]
1
2022-02-16T23:59:05.000Z
2022-02-16T23:59:05.000Z
how-to-use-azureml/automated-machine-learning/classification-text-dnn/auto-ml-classification-text-dnn.ipynb
biotovarx/MachineLearningNotebooks
9094da40854cfc415fe3ba14426d6a6bd9cf507f
[ "MIT" ]
null
null
null
how-to-use-azureml/automated-machine-learning/classification-text-dnn/auto-ml-classification-text-dnn.ipynb
biotovarx/MachineLearningNotebooks
9094da40854cfc415fe3ba14426d6a6bd9cf507f
[ "MIT" ]
null
null
null
35.311864
426
0.542911
[ [ [ "Copyright (c) Microsoft Corporation. All rights reserved.\n\nLicensed under the MIT License.", "_____no_output_____" ], [ "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-text-dnn/auto-ml-classification-text-dnn.png)", "_____no_output_____" ], [ "# Automated Machine Learning\n_**Text Classification Using Deep Learning**_\n\n## Contents\n1. [Introduction](#Introduction)\n1. [Setup](#Setup)\n1. [Data](#Data)\n1. [Train](#Train)\n1. [Evaluate](#Evaluate)", "_____no_output_____" ], [ "## Introduction\nThis notebook demonstrates classification with text data using deep learning in AutoML.\n\nAutoML highlights here include using deep neural networks (DNNs) to create embedded features from text data. Depending on the compute cluster the user provides, AutoML tried out Bidirectional Encoder Representations from Transformers (BERT) when a GPU compute is used, and Bidirectional Long-Short Term neural network (BiLSTM) when a CPU compute is used, thereby optimizing the choice of DNN for the uesr's setup.\n\nMake sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n\nNotebook synopsis:\n\n1. Creating an Experiment in an existing Workspace\n2. Configuration and remote run of AutoML for a text dataset (20 Newsgroups dataset from scikit-learn) for classification\n3. Registering the best model for future use\n4. Evaluating the final model on a test set", "_____no_output_____" ], [ "## Setup", "_____no_output_____" ] ], [ [ "import json\nimport logging\nimport os\nimport shutil\n\nimport pandas as pd\n\nimport azureml.core\nfrom azureml.core.experiment import Experiment\nfrom azureml.core.workspace import Workspace\nfrom azureml.core.dataset import Dataset\nfrom azureml.core.compute import AmlCompute\nfrom azureml.core.compute import ComputeTarget\nfrom azureml.core.run import Run\nfrom azureml.widgets import RunDetails\nfrom azureml.core.model import Model \nfrom helper import run_inference, get_result_df\nfrom azureml.train.automl import AutoMLConfig\nfrom sklearn.datasets import fetch_20newsgroups", "_____no_output_____" ] ], [ [ "This sample notebook may use features that are not available in previous versions of the Azure ML SDK.", "_____no_output_____" ] ], [ [ "print(\"This notebook was created using version 1.38.0 of the Azure ML SDK\")\nprint(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")", "_____no_output_____" ] ], [ [ "As part of the setup you have already created a <b>Workspace</b>. To run AutoML, you also need to create an <b>Experiment</b>. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.", "_____no_output_____" ] ], [ [ "ws = Workspace.from_config()\n\n# Choose an experiment name.\nexperiment_name = 'automl-classification-text-dnn'\n\nexperiment = Experiment(ws, experiment_name)\n\noutput = {}\noutput['Subscription ID'] = ws.subscription_id\noutput['Workspace Name'] = ws.name\noutput['Resource Group'] = ws.resource_group\noutput['Location'] = ws.location\noutput['Experiment Name'] = experiment.name\npd.set_option('display.max_colwidth', -1)\noutputDf = pd.DataFrame(data = output, index = [''])\noutputDf.T", "_____no_output_____" ] ], [ [ "## Set up a compute cluster\nThis section uses a user-provided compute cluster (named \"dnntext-cluster\" in this example). 
If a cluster with this name does not exist in the user's workspace, the below code will create a new cluster. You can choose the parameters of the cluster as mentioned in the comments.\n\n> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n\nWhether you provide/select a CPU or GPU cluster, AutoML will choose the appropriate DNN for that setup - BiLSTM or BERT text featurizer will be included in the candidate featurizers on CPU and GPU respectively. If your goal is to obtain the most accurate model, we recommend you use GPU clusters since BERT featurizers usually outperform BiLSTM featurizers.", "_____no_output_____" ] ], [ [ "from azureml.core.compute import ComputeTarget, AmlCompute\nfrom azureml.core.compute_target import ComputeTargetException\n\nnum_nodes = 2\n\n# Choose a name for your cluster.\namlcompute_cluster_name = \"dnntext-cluster\"\n\n# Verify that cluster does not exist already\ntry:\n compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n print('Found existing cluster, use it.')\nexcept ComputeTargetException:\n compute_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_NC6\", # CPU for BiLSTM, such as \"STANDARD_DS12_V2\" \n # To use BERT (this is recommended for best performance), select a GPU such as \"STANDARD_NC6\" \n # or similar GPU option\n # available in your workspace\n max_nodes = num_nodes)\n compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n\ncompute_target.wait_for_completion(show_output=True)", "_____no_output_____" ] ], [ [ "### Get data\nFor this notebook we will use 20 Newsgroups data from scikit-learn. We filter the data to contain four classes and take a sample as training data. Please note that for accuracy improvement, more data is needed. 
For this notebook we provide a small-data example so that you can use this template to use with your larger sized data.", "_____no_output_____" ] ], [ [ "data_dir = \"text-dnn-data\" # Local directory to store data\nblobstore_datadir = data_dir # Blob store directory to store data in\ntarget_column_name = 'y'\nfeature_column_name = 'X'\n\ndef get_20newsgroups_data():\n '''Fetches 20 Newsgroups data from scikit-learn\n Returns them in form of pandas dataframes\n '''\n remove = ('headers', 'footers', 'quotes')\n categories = [\n 'rec.sport.baseball',\n 'rec.sport.hockey',\n 'comp.graphics',\n 'sci.space',\n ]\n\n data = fetch_20newsgroups(subset = 'train', categories = categories,\n shuffle = True, random_state = 42,\n remove = remove)\n data = pd.DataFrame({feature_column_name: data.data, target_column_name: data.target})\n\n data_train = data[:200]\n data_test = data[200:300] \n\n data_train = remove_blanks_20news(data_train, feature_column_name, target_column_name)\n data_test = remove_blanks_20news(data_test, feature_column_name, target_column_name)\n \n return data_train, data_test\n \ndef remove_blanks_20news(data, feature_column_name, target_column_name):\n \n data[feature_column_name] = data[feature_column_name].replace(r'\\n', ' ', regex=True).apply(lambda x: x.strip())\n data = data[data[feature_column_name] != '']\n \n return data", "_____no_output_____" ] ], [ [ "#### Fetch data and upload to datastore for use in training", "_____no_output_____" ] ], [ [ "data_train, data_test = get_20newsgroups_data()\n\nif not os.path.isdir(data_dir):\n os.mkdir(data_dir)\n \ntrain_data_fname = data_dir + '/train_data.csv'\ntest_data_fname = data_dir + '/test_data.csv'\n\ndata_train.to_csv(train_data_fname, index=False)\ndata_test.to_csv(test_data_fname, index=False)\n\ndatastore = ws.get_default_datastore()\ndatastore.upload(src_dir=data_dir, target_path=blobstore_datadir,\n overwrite=True)", "_____no_output_____" ], [ "train_dataset = Dataset.Tabular.from_delimited_files(path = [(datastore, blobstore_datadir + '/train_data.csv')])", "_____no_output_____" ] ], [ [ "### Prepare AutoML run", "_____no_output_____" ], [ "This notebook uses the blocked_models parameter to exclude some models that can take a longer time to train on some text datasets. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results.", "_____no_output_____" ] ], [ [ "automl_settings = {\n \"experiment_timeout_minutes\": 30,\n \"primary_metric\": 'AUC_weighted',\n \"max_concurrent_iterations\": num_nodes, \n \"max_cores_per_iteration\": -1,\n \"enable_dnn\": True,\n \"enable_early_stopping\": True,\n \"validation_size\": 0.3,\n \"verbosity\": logging.INFO,\n \"enable_voting_ensemble\": False,\n \"enable_stack_ensemble\": False,\n}\n\nautoml_config = AutoMLConfig(task = 'classification',\n debug_log = 'automl_errors.log',\n compute_target=compute_target,\n training_data=train_dataset,\n label_column_name=target_column_name,\n blocked_models = ['LightGBM', 'XGBoostClassifier'],\n **automl_settings\n )", "_____no_output_____" ] ], [ [ "#### Submit AutoML Run", "_____no_output_____" ] ], [ [ "automl_run = experiment.submit(automl_config, show_output=True)", "_____no_output_____" ] ], [ [ "Displaying the run objects gives you links to the visual tools in the Azure Portal. 
Go try them!", "_____no_output_____" ], [ "### Retrieve the Best Model\nBelow we select the best model pipeline from our iterations and use it to test on test data on the same compute cluster.", "_____no_output_____" ], [ "For local inferencing, you can load the model locally via the method `remote_run.get_output()`. For more information on the arguments expected by this method, you can run `remote_run.get_output??`.\nNote that when the model contains BERT, this step will require pytorch and pytorch-transformers installed in your local environment. The exact versions of these packages can be found in the **automl_env.yml** file located in the local copy of your MachineLearningNotebooks folder here:\nMachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/automl_env.yml\n", "_____no_output_____" ] ], [ [ "# Retrieve the best Run object\nbest_run = automl_run.get_best_child()", "_____no_output_____" ] ], [ [ "You can now see what text transformations are used to convert text data to features for this dataset, including deep learning transformations based on BiLSTM or Transformer (BERT is one implementation of a Transformer) models.", "_____no_output_____" ] ], [ [ "# Download the featurization summary JSON file locally\nbest_run.download_file(\"outputs/featurization_summary.json\", \"featurization_summary.json\")\n\n# Render the JSON as a pandas DataFrame\nwith open(\"featurization_summary.json\", \"r\") as f:\n    records = json.load(f)\n\nfeaturization_summary = pd.DataFrame.from_records(records)\nfeaturization_summary['Transformations'].tolist()", "_____no_output_____" ] ], [ [ "### Registering the best model\nWe now register the best fitted model from the AutoML Run for use in future deployments. ", "_____no_output_____" ], [ "Get results stats, extract the best model from the AutoML run, then download and register the resultant best model", "_____no_output_____" ] ], [ [ "summary_df = get_result_df(automl_run)\nbest_dnn_run_id = summary_df['run_id'].iloc[0]\nbest_dnn_run = Run(experiment, best_dnn_run_id)", "_____no_output_____" ], [ "model_dir = 'Model' # Local folder where the model will be stored temporarily\nif not os.path.isdir(model_dir):\n    os.mkdir(model_dir)\n    \nbest_dnn_run.download_file('outputs/model.pkl', model_dir + '/model.pkl')", "_____no_output_____" ] ], [ [ "Register the model in your Azure Machine Learning Workspace. If you previously registered a model, please make sure to delete it so as to replace it with this new model.", "_____no_output_____" ] ], [ [ "# Register the model\nmodel_name = 'textDNN-20News'\nmodel = Model.register(model_path = model_dir + '/model.pkl',\n                       model_name = model_name,\n                       tags=None,\n                       workspace=ws)", "_____no_output_____" ] ], [ [ "## Evaluate on Test Data", "_____no_output_____" ], [ "We now use the best fitted model from the AutoML Run to make predictions on the test set. 
\n\nTest set schema should match that of the training set.", "_____no_output_____" ] ], [ [ "test_dataset = Dataset.Tabular.from_delimited_files(path = [(datastore, blobstore_datadir + '/test_data.csv')])\n\n# preview the first 3 rows of the dataset\ntest_dataset.take(3).to_pandas_dataframe()", "_____no_output_____" ], [ "test_experiment = Experiment(ws, experiment_name + \"_test\")", "_____no_output_____" ], [ "script_folder = os.path.join(os.getcwd(), 'inference')\nos.makedirs(script_folder, exist_ok=True)\nshutil.copy('infer.py', script_folder)", "_____no_output_____" ], [ "test_run = run_inference(test_experiment, compute_target, script_folder, best_dnn_run,\n test_dataset, target_column_name, model_name)", "_____no_output_____" ] ], [ [ "Display computed metrics", "_____no_output_____" ] ], [ [ "test_run", "_____no_output_____" ], [ "RunDetails(test_run).show()", "_____no_output_____" ], [ "test_run.wait_for_completion()", "_____no_output_____" ], [ "pd.Series(test_run.get_metrics())", "_____no_output_____" ] ] ]
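As a possible complement to the remote scoring above — not part of the original notebook — the pickle downloaded earlier to `Model/model.pkl` could be scored locally. This sketch assumes the AutoML pipeline unpickles with joblib and exposes a scikit-learn-style `predict()` taking a DataFrame of feature columns; if the model embeds BERT, pytorch and pytorch-transformers must be installed locally as noted earlier.

```python
import joblib
from sklearn.metrics import accuracy_score

# Load the fitted pipeline that was downloaded to Model/model.pkl above
fitted_model = joblib.load('Model/model.pkl')

# Score the locally held test split (data_test was created earlier)
y_pred = fitted_model.predict(data_test[feature_column_name].to_frame())
print('local accuracy:', accuracy_score(data_test[target_column_name], y_pred))
```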
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
ec986ddb8c829b0586e2c6e010ab6d3afd34a6a8
530,244
ipynb
Jupyter Notebook
Generate_Fashion_SGAN.ipynb
kenNicholaus/Generate_Fashion_Using_Various_GANS
f0876628b73067239397e44eb6a08f9cc5050157
[ "MIT" ]
1
2019-10-06T23:35:38.000Z
2019-10-06T23:35:38.000Z
Generate_Fashion_SGAN.ipynb
kenNicholaus/Generate_Fashion_Using_Various_GANS
f0876628b73067239397e44eb6a08f9cc5050157
[ "MIT" ]
null
null
null
Generate_Fashion_SGAN.ipynb
kenNicholaus/Generate_Fashion_Using_Various_GANS
f0876628b73067239397e44eb6a08f9cc5050157
[ "MIT" ]
1
2021-12-14T13:13:52.000Z
2021-12-14T13:13:52.000Z
694.946265
76,968
0.946193
[ [ [ "from __future__ import print_function, division\nfrom keras.datasets import fashion_mnist\nfrom keras.datasets import mnist\nfrom keras.layers import Input, Dense, Reshape, Flatten, Dropout, multiply, GaussianNoise\nfrom keras.layers import BatchNormalization, Activation, Embedding, ZeroPadding2D\nfrom keras.layers.advanced_activations import LeakyReLU\nfrom keras.layers.convolutional import UpSampling2D, Conv2D\nfrom keras.models import Sequential, Model\nfrom keras.optimizers import Adam\nfrom keras import losses\nfrom keras.utils import to_categorical\nimport keras.backend as K\nimport pandas as pd\nimport os\nimport matplotlib.pyplot as plt\n\nimport numpy as np", "C:\\Users\\kenneth\\Anaconda3\\lib\\site-packages\\h5py\\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\nUsing TensorFlow backend.\n" ], [ "name = 'fashion_SGAN'\nif not os.path.exists(\"saved_model/\"+name):\n os.mkdir(\"saved_model/\"+name)\nif not os.path.exists(\"images/\"+name):\n os.mkdir(\"images/\"+name)", "_____no_output_____" ], [ "# Download the dataset\n(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()\nprint('X_train', X_train.shape,'y_train', y_train.shape)\nprint('X_test', X_test.shape,'y_test', y_test.shape)", "X_train (60000, 28, 28) y_train (60000,)\nX_test (10000, 28, 28) y_test (10000,)\n" ], [ "input_classes = pd.Series(y_train).nunique()\ninput_classes", "_____no_output_____" ], [ "# Training Labels are evenly distributed\nTrain_label_count = pd.Series(y_train).value_counts()\nTrain_label_count", "_____no_output_____" ], [ "# Test Labels are evenly distributed\nTest_label_count = pd.Series(y_test).value_counts()\nTest_label_count", "_____no_output_____" ], [ "#label dictionary from documentation\nlabel_dict = {0: 'tshirt',\n 1: 'trouser',\n 2: 'pullover',\n 3: 'dress',\n 4: 'coat',\n 5: 'sandal',\n 6: 'shirt',\n 7: 'sneaker',\n 8: 'bag',\n 9: 'boot'}", "_____no_output_____" ], [ "X_train[1].shape", "_____no_output_____" ], [ "#input dimensions\ninput_rows = X_train[1][0]\ninput_cols = X_train[1][1]\ninput_channels = 1", "_____no_output_____" ], [ "# plot images from the train dataset\nfor i in range(10):\n # define subplot\n a=plt.subplot(2, 5, 1 + i)\n # turn off axis\n plt.axis('off')\n # plot raw pixel data\n plt.imshow(X_train[i], cmap='gray_r')\n a.set_title(label_dict[y_train[i]])", "_____no_output_____" ], [ "# plot images from the test dataset\nfor i in range(10):\n # define subplot\n a=plt.subplot(2, 5, 1 + i)\n # turn off axis\n plt.axis('off')\n # plot raw pixel data\n plt.imshow(X_test[i], cmap='gray_r')\n a.set_title(label_dict[y_test[i]])", "_____no_output_____" ], [ "class SGAN:\n def __init__(self):\n self.img_rows = 28\n self.img_cols = 28\n self.channels = 1\n self.img_shape = (self.img_rows, self.img_cols, self.channels)\n self.num_classes = 10\n self.latent_dim = 100\n\n optimizer = Adam(0.0002, 0.5)\n\n # Build and compile the discriminator\n self.discriminator = self.build_discriminator()\n self.discriminator.compile(\n loss=['binary_crossentropy', 'categorical_crossentropy'],\n loss_weights=[0.5, 0.5],\n optimizer=optimizer,\n metrics=['accuracy']\n )\n\n # Build the generator\n self.generator = self.build_generator()\n\n # The generator takes noise as input and generates imgs\n noise = Input(shape=(100,))\n img = self.generator(noise)\n\n # For the 
combined model we will only train the generator\n self.discriminator.trainable = False\n\n # The valid takes generated images as input and determines validity\n valid, _ = self.discriminator(img)\n\n # The combined model (stacked generator and discriminator)\n # Trains generator to fool discriminator\n self.combined = Model(noise, valid)\n self.combined.compile(loss=['binary_crossentropy'], optimizer=optimizer)\n\n def build_generator(self):\n\n model = Sequential()\n\n model.add(Dense(128 * 7 * 7, activation=\"relu\", input_dim=self.latent_dim))\n model.add(Reshape((7, 7, 128)))\n model.add(BatchNormalization(momentum=0.8))\n model.add(UpSampling2D())\n model.add(Conv2D(128, kernel_size=3, padding=\"same\"))\n model.add(Activation(\"relu\"))\n model.add(BatchNormalization(momentum=0.8))\n model.add(UpSampling2D())\n model.add(Conv2D(64, kernel_size=3, padding=\"same\"))\n model.add(Activation(\"relu\"))\n model.add(BatchNormalization(momentum=0.8))\n model.add(Conv2D(1, kernel_size=3, padding=\"same\"))\n model.add(Activation(\"tanh\"))\n\n model.summary()\n\n noise = Input(shape=(self.latent_dim,))\n img = model(noise)\n\n return Model(noise, img)\n\n def build_discriminator(self):\n\n model = Sequential()\n\n model.add(Conv2D(32, kernel_size=3, strides=2, input_shape=self.img_shape, padding=\"same\"))\n model.add(LeakyReLU(alpha=0.2))\n model.add(Dropout(0.25))\n model.add(Conv2D(64, kernel_size=3, strides=2, padding=\"same\"))\n model.add(ZeroPadding2D(padding=((0,1),(0,1))))\n model.add(LeakyReLU(alpha=0.2))\n model.add(Dropout(0.25))\n model.add(BatchNormalization(momentum=0.8))\n model.add(Conv2D(128, kernel_size=3, strides=2, padding=\"same\"))\n model.add(LeakyReLU(alpha=0.2))\n model.add(Dropout(0.25))\n model.add(BatchNormalization(momentum=0.8))\n model.add(Conv2D(256, kernel_size=3, strides=1, padding=\"same\"))\n model.add(LeakyReLU(alpha=0.2))\n model.add(Dropout(0.25))\n model.add(Flatten())\n\n model.summary()\n\n img = Input(shape=self.img_shape)\n\n features = model(img)\n valid = Dense(1, activation=\"sigmoid\")(features)\n label = Dense(self.num_classes+1, activation=\"softmax\")(features)\n\n return Model(img, [valid, label])\n\n def train(self, epochs, batch_size=128, sample_interval=50):\n\n # Load the dataset\n (X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()\n\n # Rescale -1 to 1\n X_train = (X_train.astype(np.float32) - 127.5) / 127.5\n X_train = np.expand_dims(X_train, axis=3)\n y_train = y_train.reshape(-1, 1)\n\n # Class weights:\n # To balance the difference in occurences of digit class labels.\n # 50% of labels that the discriminator trains on are 'fake'.\n # Weight = 1 / frequency\n half_batch = batch_size // 2\n cw1 = {0: 1, 1: 1}\n cw2 = {i: self.num_classes / half_batch for i in range(self.num_classes)}\n cw2[self.num_classes] = 1 / half_batch\n\n # Adversarial ground truths\n valid = np.ones((batch_size, 1))\n fake = np.zeros((batch_size, 1))\n\n for epoch in range(epochs):\n\n # ---------------------\n # Train Discriminator\n # ---------------------\n\n # Select a random batch of images\n idx = np.random.randint(0, X_train.shape[0], batch_size)\n imgs = X_train[idx]\n\n # Sample noise and generate a batch of new images\n noise = np.random.normal(0, 1, (batch_size, self.latent_dim))\n gen_imgs = self.generator.predict(noise)\n\n # One-hot encoding of labels\n labels = to_categorical(y_train[idx], num_classes=self.num_classes+1)\n fake_labels = to_categorical(np.full((batch_size, 1), self.num_classes), num_classes=self.num_classes+1)\n\n 
# Train the discriminator\n d_loss_real = self.discriminator.train_on_batch(imgs, [valid, labels], class_weight=[cw1, cw2])\n d_loss_fake = self.discriminator.train_on_batch(gen_imgs, [fake, fake_labels], class_weight=[cw1, cw2])\n d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)\n\n\n # ---------------------\n # Train Generator\n # ---------------------\n\n g_loss = self.combined.train_on_batch(noise, valid, class_weight=[cw1, cw2])\n\n # Plot the progress\n #print (\"%d [D loss: %f, acc: %.2f%%, op_acc: %.2f%%] [G loss: %f]\" % (epoch, d_loss[0], 100*d_loss[3], 100*d_loss[4], g_loss))\n\n # If at save interval => save generated image samples\n if epoch % sample_interval == 0:\n self.sample_images(epoch)\n self.save_model()\n print (\"%d [D loss: %f, acc: %.2f%%, op_acc: %.2f%%] [G loss: %f]\" % (epoch, d_loss[0], 100*d_loss[3], 100*d_loss[4], g_loss))\n\n\n def sample_images(self, epoch):\n r, c = 5, 5\n noise = np.random.normal(0, 1, (r * c, self.latent_dim))\n gen_imgs = self.generator.predict(noise)\n\n # Rescale images 0 - 1\n gen_imgs = 0.5 * gen_imgs + 0.5\n\n fig, axs = plt.subplots(r, c)\n cnt = 0\n for i in range(r):\n for j in range(c):\n axs[i,j].imshow(gen_imgs[cnt, :,:,0], cmap='gray')\n axs[i,j].axis('off')\n cnt += 1\n fig.savefig(\"images/\"+name+\"/_%d.png\" % epoch)\n plt.imread(\"images/\"+name+\"/_%d.png\" % epoch)\n plt.show()\n plt.close()\n\n def save_model(self):\n\n def save(model, model_name):\n model_path = \"saved_model/\"+name+\"/%s.json\" % model_name\n weights_path = \"saved_model/\"+name+\"/%s_weights.hdf5\" % model_name\n options = {\"file_arch\": model_path,\n \"file_weight\": weights_path}\n json_string = model.to_json()\n open(options['file_arch'], 'w').write(json_string)\n model.save_weights(options['file_weight'])\n save(self.generator, \"sgan_generator\")\n save(self.discriminator, \"sgan_discriminator\")\n save(self.combined, \"sgan_adversarial\")", "_____no_output_____" ], [ "sgan = SGAN()\nsgan.train(epochs=10000, batch_size=32, sample_interval=1000)", "_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d_1 (Conv2D) (None, 14, 14, 32) 320 \n_________________________________________________________________\nleaky_re_lu_1 (LeakyReLU) (None, 14, 14, 32) 0 \n_________________________________________________________________\ndropout_1 (Dropout) (None, 14, 14, 32) 0 \n_________________________________________________________________\nconv2d_2 (Conv2D) (None, 7, 7, 64) 18496 \n_________________________________________________________________\nzero_padding2d_1 (ZeroPaddin (None, 8, 8, 64) 0 \n_________________________________________________________________\nleaky_re_lu_2 (LeakyReLU) (None, 8, 8, 64) 0 \n_________________________________________________________________\ndropout_2 (Dropout) (None, 8, 8, 64) 0 \n_________________________________________________________________\nbatch_normalization_1 (Batch (None, 8, 8, 64) 256 \n_________________________________________________________________\nconv2d_3 (Conv2D) (None, 4, 4, 128) 73856 \n_________________________________________________________________\nleaky_re_lu_3 (LeakyReLU) (None, 4, 4, 128) 0 \n_________________________________________________________________\ndropout_3 (Dropout) (None, 4, 4, 128) 0 \n_________________________________________________________________\nbatch_normalization_2 (Batch (None, 4, 4, 128) 512 
\n_________________________________________________________________\nconv2d_4 (Conv2D) (None, 4, 4, 256) 295168 \n_________________________________________________________________\nleaky_re_lu_4 (LeakyReLU) (None, 4, 4, 256) 0 \n_________________________________________________________________\ndropout_4 (Dropout) (None, 4, 4, 256) 0 \n_________________________________________________________________\nflatten_1 (Flatten) (None, 4096) 0 \n=================================================================\nTotal params: 388,608\nTrainable params: 388,224\nNon-trainable params: 384\n_________________________________________________________________\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_3 (Dense) (None, 6272) 633472 \n_________________________________________________________________\nreshape_1 (Reshape) (None, 7, 7, 128) 0 \n_________________________________________________________________\nbatch_normalization_3 (Batch (None, 7, 7, 128) 512 \n_________________________________________________________________\nup_sampling2d_1 (UpSampling2 (None, 14, 14, 128) 0 \n_________________________________________________________________\nconv2d_5 (Conv2D) (None, 14, 14, 128) 147584 \n_________________________________________________________________\nactivation_1 (Activation) (None, 14, 14, 128) 0 \n_________________________________________________________________\nbatch_normalization_4 (Batch (None, 14, 14, 128) 512 \n_________________________________________________________________\nup_sampling2d_2 (UpSampling2 (None, 28, 28, 128) 0 \n_________________________________________________________________\nconv2d_6 (Conv2D) (None, 28, 28, 64) 73792 \n_________________________________________________________________\nactivation_2 (Activation) (None, 28, 28, 64) 0 \n_________________________________________________________________\nbatch_normalization_5 (Batch (None, 28, 28, 64) 256 \n_________________________________________________________________\nconv2d_7 (Conv2D) (None, 28, 28, 1) 577 \n_________________________________________________________________\nactivation_3 (Activation) (None, 28, 28, 1) 0 \n=================================================================\nTotal params: 856,705\nTrainable params: 856,065\nNon-trainable params: 640\n_________________________________________________________________\n" ] ] ]
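Once training finishes, the JSON/weights pairs written by `save_model()` can be reloaded without re-training. A minimal sketch (not in the original notebook), using the paths that the class above actually writes under `saved_model/fashion_SGAN`:

```python
import numpy as np
from keras.models import model_from_json

# Rebuild the generator architecture from JSON, then restore its weights
with open('saved_model/fashion_SGAN/sgan_generator.json') as f:
    generator = model_from_json(f.read())
generator.load_weights('saved_model/fashion_SGAN/sgan_generator_weights.hdf5')

# Sample a 5x5 grid of fake images, rescaling tanh output from [-1, 1] to [0, 1]
noise = np.random.normal(0, 1, (25, 100))          # latent_dim = 100
gen_imgs = 0.5 * generator.predict(noise) + 0.5
```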
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ec986e03ba47307b502d644c7fa21e6033e45109
21,525
ipynb
Jupyter Notebook
2016/tutorial_final/128/Word2Vec Tutorial.ipynb
zeromtmu/practicaldatascience.github.io
62950a3a3e7833552b0f2269cc3ee5c34a1d6d7b
[ "MIT" ]
1
2021-07-06T17:36:24.000Z
2021-07-06T17:36:24.000Z
2016/tutorial_final/128/Word2Vec Tutorial.ipynb
zeromtmu/practicaldatascience.github.io
62950a3a3e7833552b0f2269cc3ee5c34a1d6d7b
[ "MIT" ]
null
null
null
2016/tutorial_final/128/Word2Vec Tutorial.ipynb
zeromtmu/practicaldatascience.github.io
62950a3a3e7833552b0f2269cc3ee5c34a1d6d7b
[ "MIT" ]
1
2021-07-06T17:36:34.000Z
2021-07-06T17:36:34.000Z
34.384984
604
0.584065
[ [ [ "## Introduction\n\nWord2Vec is a widely used pipeline that can map words into a high dimensional semantic space. Given a training text corpus, which is in the form of a list of sentences, it creates a vocabulary, mapping a word to a vector in this semantic space. It does this by using a set of models, specifically two-layer neural networks.\n\nBecause of its relative ease of use, it is a powerful tool that can be used to make interesting insights of complicated text data sets. This projection onto the sematic space can be used to achieve very interesting results, such as solving logical analogies and classifying types of blog articles.\n\nThis tutorial will teach you how to use Word2Vec, and will show some practical applications of it, including some interesting insights we can derive from a few datasets. It will use it in an innovative way that is not as commonly used: to determine the similarity between sentences. It will also compare this approach to simpler approaches, namely TF-IDF.", "_____no_output_____" ], [ "### Tutorial content\n\nThis tutorial will show how to use Word2Vec in Python, specifically using [genism](https://radimrehurek.com/gensim/models/word2vec.html).\n\n\nWe will cover the following topics in this tutorial:\n- [Installing the libraries](#Installing-the-libraries)\n- [Getting the datasets](#Getting-the-datasets)\n- [Parsing data](#Parsing-data)\n- [Model training](#Model-training)\n- [Compute sentence similarities](#Compute-sentence-similarities)\n- [Comparison with TF-IDF Method](#Comparison-with-TF-IDF-Method)\n- [Further Resources](#Further-Resources)\n", "_____no_output_____" ], [ "## Installing the libraries", "_____no_output_____" ], [ "Before getting started, you'll need to install and import the various libraries that we will use. Assuming you have anaconda fully installed, you can install genism and nltk using `conda`:\n\n $ conda install -c anaconda genism\n\n $ conda install -c anaconda nltk\n ", "_____no_output_____" ] ], [ [ "from gensim.models import Word2Vec, phrases\nfrom gensim.models.word2vec import LineSentence\nfrom gensim.corpora import WikiCorpus\nfrom nltk.tokenize import RegexpTokenizer\n\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.feature_extraction.text import TfidfTransformer, TfidfVectorizer\nfrom nltk.corpus import stopwords\n\nfrom scipy.spatial.distance import cosine\nimport multiprocessing\n\nimport gzip\nimport numpy as np\nimport random\nimport sqlite3\nimport pandas as pd\nimport re", "_____no_output_____" ] ], [ [ "## Getting the datasets", "_____no_output_____" ], [ "So, where do we even start?\n\nThe first step to using Word2Vec is having a dataset to train on. This choice of text corpus will depend on your intended use of the tranied model. In this tutorial, we will explore how the different data sets differ, and compare the vocabularies generated by each. Though both data sets are from Amazon Reviews, one contains only sarcastic reviews while the other contains only reviews about Video Games.\n\nThe data sets used in this tutorial come from the following places:\n- [Amazon Review Data (Video Game Reviews)](http://jmcauley.ucsd.edu/data/amazon/)\n- [Amazon Review Data (Reviews marked as sarcastic)](http://storm.cis.fordham.edu/~filatova/SarcasmCorpus.html)", "_____no_output_____" ], [ "## Parsing data", "_____no_output_____" ], [ "After downloading the dataset (which may take a while), we need to convert the text into a standard format that the Word2Vec model can train on. 
This may involve decompressing files, removing punctuation, etc., and is dependent on the input data format. We will look at how to do so for the Amazon Review data below.", "_____no_output_____" ] ], [ [ "# Amazon Review Files:\nraw_sarcasm_file_name = 'sarcasm/sarcasm_lines.txt'\nsarcasm_file_name = 'corpuses/sarcasm.txt'\n\nraw_amazon_video_file_name = 'amazon_review/reviews_Video_Games_5.json.gz'\namazon_video_file_name = 'corpuses/amazon_vg.txt'", "_____no_output_____" ] ], [ [ "The sarcasm data came in a different format than the rest, so we need to write code to parse that file individually.", "_____no_output_____" ] ], [ [ "# create sarcasm corpus\nwith open(sarcasm_file_name, 'w') as sarcasm:\n    with open(raw_sarcasm_file_name, 'r') as raw_sarcasm:\n        for line in raw_sarcasm.readlines():\n            columns = line.split('\\t')\n            if len(columns) == 2:\n                utt = columns[1]\n                utt = re.sub(\"[^a-zA-Z']\",\" \", utt)\n                utt = utt.lower()\n                sarcasm.write(unicode(utt + '\\n'))", "_____no_output_____" ] ], [ [ "The rest of the Amazon Review data came in a .json.gz format, and we will obtain the review texts in the following way.", "_____no_output_____" ] ], [ [ "# parse gzip of Amazon Review file\ndef parse_gzip(path):\n    g = gzip.open(path, 'rb')\n    \n    #to limit the number of input lines\n    linecount = 0\n    for l in g:\n        linecount += 1\n        if linecount>10000:\n            break\n        yield eval(l)\n\n# parse reviews from gzip file and write to corpus\ndef parse_amazon(in_gz, out_txt):\n    with open(out_txt, 'w') as amazon:\n        for d in parse_gzip(in_gz):\n            utt = d['reviewText']\n            utt = re.sub(\"[^a-zA-Z']\",\" \", utt)\n            utt = utt.lower()\n            amazon.write(unicode(utt + '\\n'))\n", "_____no_output_____" ], [ "parse_amazon(raw_amazon_video_file_name, amazon_video_file_name)\nprint \"Completed Amazon Video Game Review parse\"", "Completed Amazon Video Game Review parse\n" ] ], [ [ "Now, the raw Amazon Review data has been parsed into a text corpus that matches the format that the Word2Vec model requires, which is simply a list of sentence strings separated by spaces.", "_____no_output_____" ], [ "## Model training", "_____no_output_____" ], [ "So now that we have the data in the format we want it in, how do we actually train our model so we can start predicting the similarities?\n\nThe first step is to separate the text into sentences, represented by a list of words. We do that here using regular expressions as it is faster than using the python split function.", "_____no_output_____" ] ], [ [ "# get a list of sentences from an input text file\ndef tokenize_data(df):\n    df_iter = df.iterrows()\n    str2lst = []\n    for i, row in df_iter:\n        s = str(row[0])\n        tokenizer = RegexpTokenizer('\\w+') \n        lst = tokenizer.tokenize(s)\n        for e in lst:\n            e.decode(encoding='utf-8', errors='ignore')\n        str2lst.append(lst)\n    return str2lst", "_____no_output_____" ] ], [ [ "Next, we can finally pass it into the Word2Vec model for training! However, before doing that we first refine the data even more. Often in the sentences, conjunction words, like \"don't\" or \"can't\", appear and they should be considered as one word. Also, two and three word phrases, such as \"in the\", \"of course\", and \"I am sorry\", occur, and since they are thought of as single phrases they should be treated as such. 
So, we calculate and combine these two and three word phrases into bigrams and trigrams respectively, using the Phrases class.", "_____no_output_____" ] ], [ [ "# Return a model from a given text file\ndef build_model(data_file):\n    freq = 10\n    size_NN = 80\n    n_threads = 4\n\n    train_df = pd.read_csv(data_file,header=None)\n    sentences = tokenize_data(train_df)\n\n    l = []\n    for lst in sentences:\n        lst_u = [s.decode('utf-8','ignore') for s in lst]\n        l.append(lst_u)\n\n    bigram = phrases.Phrases(l)\n    trigram = phrases.Phrases(bigram[l])\n\n    model = Word2Vec(min_count=freq,size=size_NN, workers=n_threads, alpha=0.025, min_alpha=0.025,\n                     max_vocab_size=50000000)\n    model.build_vocab(trigram[bigram[l]])\n    print \"created initial model\"\n\n    for epoch in range(5):\n        random.shuffle(l)\n        model.train(trigram[bigram[l]])\n        print \"epoch #\" + str(epoch) + \" completed\"\n\n    return model", "_____no_output_____" ] ], [ [ "Then, we create the models and indices with the commands below. Careful, this may take a long time, depending on how big your text corpus is, and how long you selected your semantic vectors to be (in our case, size_NN=80, which is small enough). Unless running with a lot of memory, it is advised not to use a large number of sentences for each corpus. In this case, I limited the number of sentences extracted to 10000.", "_____no_output_____" ] ], [ [ "sarcasm = build_model(sarcasm_file_name)\nsarcasm_index = set(sarcasm.index2word)\nprint \"completed training the sarcasm model\"\namazon_vg = build_model(amazon_video_file_name)\namazon_vg_index = set(amazon_vg.index2word)\nprint \"completed training the video game reviews model\"", "created initial model\nepoch #0 completed\nepoch #1 completed\nepoch #2 completed\nepoch #3 completed\nepoch #4 completed\ncompleted training the sarcasm model\ncreated initial model\nepoch #0 completed\nepoch #1 completed\nepoch #2 completed\nepoch #3 completed\nepoch #4 completed\ncompleted training the video game reviews model\n" ] ], [ [ "## Compute sentence similarities", "_____no_output_____" ], [ "Now, we will determine a way to find the similarities between sentences. We do this by mapping each word in each sentence to the semantic space, and then averaging all of these vectors to create a final sentence vector. 
Then, we find the cosine difference between these sentence vectors, and that becomes the similarity.", "_____no_output_____" ] ], [ [ "# Averages all word vectors in a given paragraph\ndef avg_vectors(words, model, num_features, index2word_set):\n    featureVec = np.zeros(num_features, dtype=\"float32\")\n    nwords = 0\n    \n    for word in words:\n        if word in index2word_set:\n            nwords = nwords+1\n            featureVec = np.add(featureVec, model[word])\n    if nwords > 0:\n        featureVec = np.divide(featureVec, nwords)\n\n    return featureVec", "_____no_output_____" ], [ "# Computes sentence similarities\ndef compute_similarities(model, sentences, index, best=2, worst=0):\n    vectors = []\n    for sentence in sentences:\n        vectors.append(avg_vectors(sentence.lower().split(\" \"), model, 80, index))\n    \n    similar = []\n    for i in range(len(vectors)):\n        for j in range(len(vectors)):\n            if i>j:\n                sentence = [sentences[i], sentences[j]]\n                similarity = 1 - cosine(vectors[i], vectors[j])\n                if not np.isnan(similarity):\n                    similar.append((similarity, sentence))\n    \n    similar.sort(reverse=True)\n    if len(similar)<max(best, worst):\n        print \"not enough data\"\n    else:\n        if best:\n            print \"best \" + str(best)\n            for x in similar[:best]:\n                print x\n        if worst:\n            print \"worst \" + str(worst)\n            for x in similar[-1*worst:]:\n                print x\n\n    print \"\"", "_____no_output_____" ], [ "sentence1 = \"do not buy\"\nsentence2 = \"terrible product\"\nsentence3 = \"best product ever\"\nsentence4 = \"i love this product\"\nsentence5 = \"i use this product every day\"\nsentences = [sentence1, sentence2, sentence3, sentence4, sentence5]", "_____no_output_____" ], [ "print \"Sarcasm similarities\"\ncompute_similarities(sarcasm, sentences, sarcasm_index)\n\nprint \"Video Game data similarities\"\ncompute_similarities(amazon_vg, sentences, amazon_vg_index)", "Sarcasm similarities\nbest 2\n(0.92048052816809933, ['i use this product every day', 'i love this product'])\n(0.80848912011328022, ['best product ever', 'terrible product'])\n\nVideo Game data similarities\nbest 2\n(0.67825017781110841, ['i use this product every day', 'i love this product'])\n(0.50801964822782431, ['i love this product', 'best product ever'])\n\n" ] ], [ [ "The results above show some interesting insights about the Amazon Review data in comparison to the Sarcasm data. Both of them correctly found similarities in 'i use this product every day' and 'i love this product'. Also, the sarcasm corpus links 'best product ever' and 'terrible product', which seem to be opposites, to be the next most popular, while the video game corpus links 'i love this product' and 'best product ever' to be the next most popular. This is because the data with sarcastic sentences may often have conflicting sentences, while the video game reviews are more normal in tone.", "_____no_output_____" ], [ "## Comparison with TF-IDF Method", "_____no_output_____" ], [ "We will now compare how well word2vec performs against the TF-IDF algorithm. The TF-IDF algorithm essentially counts the number of each word occurrence in each sentence, mapping each pair of words into a vector representing this count, and then computes the cosine similarity between the two sentences. 
This is a much more naive way of doing things, and performs much worse, as shown below.", "_____no_output_____" ] ], [ [ "def compute_tfidf_similarities(sentences, best=2, worst=0):\n similar = []\n for i in range(len(sentences)):\n for j in range(len(sentences)):\n if i>j:\n sentence = [sentences[i], sentences[j]]\n vect = TfidfVectorizer(min_df=1)\n tfidf = vect.fit_transform(sentence)\n similarity = (tfidf * tfidf.T).A[0,1]\n \n if not np.isnan(similarity):\n similar.append((similarity, sentence))\n similar.sort(reverse=True)\n if len(similar)<max(best, worst):\n print \"not enough data\"\n else:\n if best:\n print \"best \" + str(best)\n for x in similar[:best]:\n print x\n if worst:\n print \"worst \" + str(worst)\n for x in similar[-1*worst:]:\n print x\n\n print \"\"", "_____no_output_____" ], [ "print \"TF-IDF similarities:\"\ncompute_tfidf_similarities(sentences,3)\n\nprint \"Video Game data similarities\"\ncompute_similarities(amazon_vg, sentences, amazon_vg_index)", "TF-IDF similarities:\nbest 3\n(0.3563004293331381, ['i use this product every day', 'i love this product'])\n(0.26055567105626237, ['i love this product', 'terrible product'])\n(0.26055567105626237, ['best product ever', 'terrible product'])\n\nVideo Game data similarities\nbest 2\n(0.67825017781110841, ['i use this product every day', 'i love this product'])\n(0.50801964822782431, ['i love this product', 'best product ever'])\n\n" ] ], [ [ "The model trained on the Video Game corpus accurately predicted the similar sentences, while the TF-IDF algorithm did not because it did not take the semantics of each word into account.", "_____no_output_____" ], [ "## Further Resources", "_____no_output_____" ], [ "If you would like to learn more about word2vec, and the technologies discussed in this tutorial, you may view the following online resources:\n\n- [The original word2vec paper](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf)\n- [The TF-IDF word relavence paper](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.121.1424&rep=rep1&type=pdf)\n- [Other interesting uses of word2vec, including using it for solving analogies](https://quomodocumque.wordpress.com/2016/01/15/messing-around-with-word2vec/)\n- [Fun and interactive website to solve analogies created using word2vec](http://deeplearner.fz-qqq.net/)", "_____no_output_____" ] ] ]
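As an aside, not part of the tutorial's own code, the trained models also support word-level queries directly through gensim, e.g. nearest neighbours and analogy-style arithmetic. The query words below are only illustrations and must have survived the min_count=10 vocabulary cut to work:

```python
# Nearest neighbours of 'game' in the video game model's semantic space
print amazon_vg.most_similar(positive=['game'], topn=5)

# Analogy-style query: vectors for 'game' and 'book' pull together, 'video' pushes away
print amazon_vg.most_similar(positive=['game', 'book'], negative=['video'], topn=3)
```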
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ] ]
ec98761fd5f7278f78f37a520e59f2f8649f10fe
454,845
ipynb
Jupyter Notebook
preprocessing/segmentation/original_connected.ipynb
Lab41/d-script
4c5079754ca48b2ab5080f10bb677cbd7375a735
[ "Apache-2.0" ]
14
2016-01-09T12:35:56.000Z
2021-03-03T21:27:34.000Z
preprocessing/segmentation/original_connected.ipynb
Lab41/d-script
4c5079754ca48b2ab5080f10bb677cbd7375a735
[ "Apache-2.0" ]
9
2016-01-08T00:59:54.000Z
2017-10-20T20:26:33.000Z
preprocessing/segmentation/original_connected.ipynb
Lab41/d-script
4c5079754ca48b2ab5080f10bb677cbd7375a735
[ "Apache-2.0" ]
12
2016-01-08T00:33:11.000Z
2018-02-17T21:02:14.000Z
883.194175
60,034
0.948486
[ [ [ "import h5py\nfrom scipy import ndimage\nimport matplotlib.pylab as plt\nimport numpy as np\nimport sys\n%matplotlib inline\n\n# Calculate the connected components\ndef connectedcomponents( im ):\n th = int( 0.75*(im.value.max() - im.value.min())+im.value.min() )\n print th\n im = im.value < th\n return ndimage.label(im > 0.5)\n\n# Threshold connected components based on number of pixels\ndef thresholdcc( ccis, minthresh=250 ):\n ccs = []\n for i in xrange(1,ccis[1]):\n if np.array(ccis[0]==i).sum() > minthresh:\n ccs+=[i]\n return ccs\n \n# Run through an example\n# hdf5file='/fileserver/nmec-handwriting/flat_nmec_cropped_bin_uint8.hdf5'\nhdf5file='/fileserver/nmec-handwriting/nmec_scaled_flat.hdf5'\nflatnmec=h5py.File(hdf5file,'r')\n\n# 14 doesn't work.\n# 12,13,19 has 7.5k+ connected components to threshold\n# imname = 'FR-016-007.bin.crop.png.cropcrop.png'\n# Between 16-006 and 16-007, he has two completely different scales\nflk = flatnmec.keys()\nimname = flk[3] # 'FR-016-006.bin.crop.png.cropcrop.png'\nimname = 'FR-014-006.tif'\n\n# processim = (flatnmec[imname].value < 200).astype(np.uint8)\n\nprint \"Calculating connected components\"\nccis = connectedcomponents( flatnmec[imname] )\nprint \"Thresholding \"+str(ccis[1])+\" connected components\"\nccs = thresholdcc( ccis, minthresh=500 )\nprint \"Finished processing on image \"+str(imname)+\" and found \"+str(len(ccs))+\" components\"", "Calculating connected components\n216\nThresholding 11680 connected components\nFinished processing on image FR-014-006.tif and found 32 components\n" ], [ "# window = ccis[0]\n# a = np.where( window == 11 )\n# imname = 'FR-016-006.bin.crop.png.cropcrop.png'\n# plt.imshow( flatnmec[imname][()][0:1000,0:1000], cmap='gray' )\n# imname = 'FR-016-007.bin.crop.png.cropcrop.png'\n# plt.figure()\n# plt.imshow( flatnmec[imname][()][0:1000,0:1000], cmap='gray' )\nimname = 'FR-014-006.tif'\nplt.figure(figsize=(15,15))\nplt.imshow( 1 - (flatnmec[imname].value < 216).astype(np.int) , cmap='gray' )\nflatnmec[imname].value < 128", "_____no_output_____" ], [ "# plt.figure(figsize=(30,30))\n# plt.imshow(cc1[0])\nfor i in ccs:\n plt.figure(figsize=(15,15))\n plt.imshow( 255-np.array(ccis[0]==i), cmap='gray' )\n plt.show()", "_____no_output_____" ], [ "print flatnmec[imname].value.max()\nprint flatnmec[imname].value.min()\n(255 - 101) * 0.75 +101", "255\n101\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
ec9880019f4dcbb2761d8d34ca5c063ebda5c2bf
35,512
ipynb
Jupyter Notebook
lecture1/lecture.ipynb
ggorman/Introduction-Python-programming-2018
739b864c1499ccdbf9010d8fe774087a07bb09ee
[ "CC-BY-3.0" ]
1
2019-01-12T12:43:24.000Z
2019-01-12T12:43:24.000Z
lecture1/lecture.ipynb
ggorman/Introduction-Python-programming-2018
739b864c1499ccdbf9010d8fe774087a07bb09ee
[ "CC-BY-3.0" ]
null
null
null
lecture1/lecture.ipynb
ggorman/Introduction-Python-programming-2018
739b864c1499ccdbf9010d8fe774087a07bb09ee
[ "CC-BY-3.0" ]
3
2019-05-16T21:08:48.000Z
2022-02-21T06:54:57.000Z
31.935252
567
0.566456
[ [ [ "from lecture import *", "_____no_output_____" ] ], [ [ "# <center> Introduction to programming in Python </center>\n### <center>[Nicolas Barral](http://www.imperial.ac.uk/people/n.barral), [Gerard Gorman](http://www.imperial.ac.uk/people/g.gorman) </center>\n\n# <center> Lecture 1: Computing with formulas </center>", "_____no_output_____" ], [ "## Learning objectives:\n<hr style=\"border: solid 2px red; margin-top: 1.5% \">\n\n* You will understand that Python will help you defy gravity.\n* You will know how to execute Python statements from within Jupyter.\n* Learn what a program variable is and how to express a mathematical expression in code.\n* Print program outputs.\n* Access mathematical functions from a Python module.\n* Be able to write your own *function*.", "_____no_output_____" ], [ "```python\nimport antigravity\n```", "_____no_output_____" ], [ "![import antigravity](https://imgs.xkcd.com/comics/python.png)", "_____no_output_____" ], [ "## Programming a mathematical formula\n<hr style=\"border: solid 2px red; margin-top: 1.5% \">\n\nHere is a formula for the position of a ball in vertical motion, starting at ground level (i.e. $y=0$) at time $t=0$:\n $$ y(t) = v_0t- \\frac{1}{2}gt^2 $$\n\nwhere:\n\n* $y$ is the height (position) as a function of time $t$\n* $v_0$ is the initial velocity (at $t=0$)\n* $g$ is the acceleration due to gravity\n\nThe computational task is: given $v_0$, $g$ and $t$, compute the value $y$. ", "_____no_output_____" ], [ "**How do we program this task?** A program is a sequence of instructions given to the computer. However, while a programming language is much **simpler** than a natural language, it is more **pedantic**. Programs must have correct syntax, i.e., correct use of the computer language grammar rules, and no misprints.\n\nSo let's execute a Python statement based on this example. Evaluate $y(t) = v_0t- \\frac{1}{2}gt^2$ for $v_0=5$, $g=9.81$ and $t=0.6$. If you were doing this on paper you would probably write something like this: $$ y = 5\\cdot 0.6 - {1\\over2}\\cdot 9.81 \\cdot 0.6^2.$$ Happily, writing this in Python is very similar:", "_____no_output_____" ] ], [ [ "# Comment: This is a 'code' cell within Jupyter notebook.\n# Press shift-enter to execute the code within this kind of\n# cell, or click on the 'Run' widget on the Jupyter tool bar above.\n\nprint(5*0.6 - 0.5*9.81*0.6**2)", "_____no_output_____" ] ], [ [ "## <span style=\"color:blue\">Exercice 1.1:</span> Open a code cell and write some code.\n* Navigate the [Jupyter](http://jupyter.org/) tool bar to \"Insert\"->\"Insert Cell Below\". Note from the tool bar that you can select a cell to be one of 'Code' (this is the default), 'Markdown' (this cell is written in [markdown](https://en.wikipedia.org/wiki/Markdown) - double click this cell to investigate further), 'Raw NBConvert' or 'Heading' (decrepit).\n* Cut&paste the code from the previous cell into your newly created code cell below. Make sure it runs!\n* To see how important it is to use the correct [syntax](https://en.wikipedia.org/wiki/Syntax), replace `**` with `^` in your code and try running the cell again. 
You should see something like the following:\n\n```python\n>>> print(5*0.6 - 0.5*9.81*0.6^2)\n\n---------------------------------------------------------------------------\nTypeError                                 Traceback (most recent call last)\n<ipython-input-3-40e93484ac5e> in <module>()\n----> 1 print(5*0.6 - 0.5*9.81*0.6^2)\n\nTypeError: unsupported operand type(s) for ^: 'float' and 'int'\n```\n* Undo that change so your code is working again; now change 'print' to 'write' and see what happens when you run the cell. You should see something like:\n\n```python\n>>> write(5*0.6 - 0.5*9.81*0.6**2)\n\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\n<ipython-input-5-492c3eff3ad9> in <module>()\n----> 1 write(5*0.6 - 0.5*9.81*0.6**2)\n\nNameError: name 'write' is not defined\n```\n\nWhile a human might still understand these statements, they do not mean anything to the Python interpreter. Rather than throwing your hands up in the air whenever you get an error message like the above (you are going to see many during the course of these lectures!!!) train yourself to read error messages carefully to get an idea of what it is complaining about, and re-read your code from the perspective of the Python interpreter.\n\nError messages can look bewildering and even frustrating at first, but it gets much **easier with practice**.", "_____no_output_____" ], [ "## Storing numbers in variables\n<hr style=\"border: solid 2px red; margin-top: 1.5% \">\n\nFrom mathematics you are already familiar with variables (e.g. $v_0=5,\\quad g=9.81,\\quad t=0.6,\\quad y = v_0t -{1\\over2}gt^2$) and you already know how important they are for working out complicated problems. Similarly, you can use variables in a program to make it easier to read and understand.", "_____no_output_____" ] ], [ [ "v0 = 5\ng = 9.81\nt = 0.6\ny = v0*t - 0.5*g*t**2\nprint(y)", "_____no_output_____" ] ], [ [ "This program spans several lines of text and uses variables; otherwise, it performs the same calculations and gives the same output as the previous program.\n\nIn mathematics we usually use one letter for a variable, resorting to the Greek alphabet and other characters for more clarity. The main reason for this is to avoid becoming exhausted from writing when working out long expressions or derivations. However, when programming you should use more descriptive variable names. This might not seem like an important consideration for the trivial example here, but it becomes increasingly important as the program gets more complicated and if someone else has to read your code.", "_____no_output_____" ], [ "### Good variable names make a program easier to understand!\n\nPermitted variable names include:\n\n* One-letter symbols.\n* Words or abbreviation of words.\n* Variable names can contain a-z, A-Z, underscore (\"'_'\") and digits 0-9, **but** the name cannot start with a digit.\n\nVariable names are case-sensitive (i.e. \"'a'\" is different from \"'A'\"). Let's rewrite the previous example using more descriptive variable names:", "_____no_output_____" ] ], [ [ "initial_velocity = 5\nacceleration_of_gravity = 9.81\nTIME = 0.6\nVerticalPositionOfBall = initial_velocity*TIME - 0.5*acceleration_of_gravity*TIME**2\nprint(VerticalPositionOfBall)", "_____no_output_____" ] ], [ [ "Certain words have a **special meaning** in Python and **cannot be used as variable names**. 
These are: *False, None, True, and, as, assert, break, class, continue, def, del, elif, else, except, finally, for, from, global, if, import, in, is, lambda, nonlocal, not, or, pass, raise, return, try, while, with,* and *yield*.", "_____no_output_____" ], [ "## Adding comments to code\n<hr style=\"border: solid 2px red; margin-top: 1.5% \">\n\nNot everything written in a computer program is intended for execution. In Python anything on a line after the '#' character is ignored and is known as a **comment**. You can write whatever you want in a comment. Comments are intended to explain what a snippet of code is for. They might, for example, explain the objective or provide a reference to the data or algorithm used. This is both useful for you when you have to understand your code at some later stage, and indeed for whoever has to read and understand your code later.", "_____no_output_____" ] ], [ [ "# Program for computing the height of a ball in vertical motion.\nv0 = 5    # Set initial velocity in m/s.\ng = 9.81  # Set acceleration due to gravity in m/s^2.\nt = 0.6   # Time at which we want to know the height of the ball in seconds.\ny = v0*t - 0.5*g*t**2 # Calculate the vertical position\nprint(y)", "_____no_output_____" ] ], [ [ "## <span style=\"color:blue\">Exercise 1.2:</span> Convert from meters to British length units\nHere in the UK we are famous for our love of performing mental arithmetic. That is why we still use both imperial and metric measurement systems - hours of fun entertainment for the family switching back and forth between the two.\n\nMake a program where you set a length given in meters and then compute and write out the corresponding length measured in:\n* inches (one inch is 2.54 cm)\n* feet (one foot is 12 inches)\n* yards (one yard is 3 feet)\n* miles (one British mile is 1760 yards)\n\nNote: In this course we are using [okpy](https://okpy.org/) for automated assessment scoring. Therefore, while it is important to always carefully follow the instructions of a question, it is particularly important here so that okpy can recognize the validity of your answer. The conversion to inches is done for you to illustrate what is required.", "_____no_output_____" ] ], [ [ "meters = 640\n\n# 1 inch = 2.54 cm. Remember to convert from 2.54 cm to 0.0254 m here.\ninches = meters/0.0254\n\n# Uncomment and complete the following code.\n# feet =\n\n# yards =\n\n# miles =", "_____no_output_____" ], [ "grade = ok.grade('question-1_2')", "_____no_output_____" ] ], [ [ "## Formatted printing style\n<hr style=\"border: solid 2px red; margin-top: 1.5% \">\n\nOften we want to print out results using a combination of text and numbers, e.g. \"'At t=0.6 s, y is 1.23 m'\". Particularly when printing out floating point numbers we should **never** quote numbers to a higher accuracy than they were measured. Python provides a *printf formatting* syntax exactly for this purpose. We can see in the following example that the *slot* `%g` was used to express the floating point number with the minimum number of significant figures, and the *slot* `%.2f` specified that only two digits are printed out after the decimal point.", "_____no_output_____" ] ], [ [ "print(\"At t=%g s, y is %.2f m.\" % (t, y))", "_____no_output_____" ] ], [ [ "Notice in this example how the values in the tuple `(t, y)` are inserted into the *slots*.", "_____no_output_____" ], [ "Sometimes we want a multi-line output. 
This is achieved using a triple quotation (*i.e.* `\"\"\"`):", "_____no_output_____" ] ], [ [ "print(\"\"\"At t=%f s, a ball with\ninitial velocity v0=%.3E m/s\nis located at the height %.2f m.\n\"\"\" % (t, v0, y))", "_____no_output_____" ] ], [ [ "## <span style=\"color:blue\">Exercise 1.3: </span> Compute the air resistance on a football\nThe drag force, due to air resistance, on an object can be expressed as\n$$F_d = \\frac{1}{2}C_D\\rho AV^2$$\nwhere:\n* $\\rho$ is the density of the air,\n* $V$ is the velocity of the object,\n* $A$ is the cross-sectional area (normal to the velocity direction),\n* and $C_D$ is the drag coefficient, which depends on the shape of the object and the roughness of the surface.\n\nComplete the following code that computes the drag force. ", "_____no_output_____" ] ], [ [ "# Football example\n\n# import pi from Python's math library\nfrom math import pi\n\ndensity = 1.2 # units of kg m^{−3}\nball_radius = 0.11 # m\nA = pi*ball_radius**2 # Cross sectional area of a sphere\nmass = 0.43 # kg\nC_D = 0.2 # Drag coefficient\n\nV = 50.8 # m/s (fastest recorded speed of football)\n\n# Uncomment and complete the following code.\n# F_d = \n\n# Challenge yourself to use the formatted print statement\n# shown above to write out the forces with one decimal in\n# units of Newton ($N = kgm/s^2$).", "_____no_output_____" ], [ "grade = ok.grade('question-1_3')", "_____no_output_____" ] ], [ [ "## How are arithmetic expressions evaluated?\n<hr style=\"border: solid 2px red; margin-top: 1.5% \">\n\nConsider the arbitrary mathematical expression, ${5\\over9} + 2a^4/2$, implemented in Python as `5.0/9 + 2*a**4/2`.\n\nThe rules for evaluating the expression are the same as in mathematics: proceed term by term (additions/subtractions) from the left, compute powers first, then multiplication and division. Therefore in this example the order of evaluation will be:\n\n1. `r1 = 5.0/9`\n2. `r2 = a**4`\n3. `r3 = 2*r2`\n4. `r4 = r3/2`\n5. `r5 = r1 + r4`\n\nUse parentheses to override these default rules. Indeed, many programmers use parentheses for greater clarity.", "_____no_output_____" ], [ "## <span style=\"color:blue\">Exercise 1.4:</span> Compute the growth of money in a bank\nLet *p* be a bank's interest rate in percent per year. An initial amount $A_0$ has then grown to $$A_n = A_0\\left(1+\\frac{p}{100}\\right)^n$$ after *n* years. Write a program for computing how much money 1000 euros have grown to after three years with a 5% interest rate.", "_____no_output_____" ] ], [ [ "# Complete the code commented out below (don't change variable names!)\n\n# p = ...\n# A_0 = ... \n\n# A_n = ...\n\n# print(\"The amount of money in the account after %d years is: %.2f euros\" % (n, A_n))", "_____no_output_____" ], [ "grade = ok.grade('question-1_4')", "_____no_output_____" ] ], [ [ "## Standard mathematical functions\n<hr style=\"border: solid 2px red; margin-top: 1.5% \">\n\nWhat if we need to compute $\\sin x$, $\\cos x$, $\\ln x$, etc. in a program? Such functions are available in Python's *math module*. In fact there is a vast universe of functionality for Python available in modules. 
We just *import* whatever we need for the task at hand.\n\nIn this example we compute $\\sqrt{2}$ using the *sqrt* function in the *math* module:", "_____no_output_____" ] ], [ [ "import math\nr = math.sqrt(2)\nprint(r)", "_____no_output_____" ] ], [ [ "or:", "_____no_output_____" ] ], [ [ "from math import sqrt\nr = sqrt(2)\nprint(r)", "_____no_output_____" ] ], [ [ "or:", "_____no_output_____" ] ], [ [ "from math import * # import everything in math\nr = sqrt(2)\nprint(r)", "_____no_output_____" ] ], [ [ "Another example:", "_____no_output_____" ] ], [ [ "from math import sin, cos, log\nx = 1.2\nprint(sin(x)*cos(x) + 4*log(x))   # log is ln (base e)", "_____no_output_____" ] ], [ [ "## <span style=\"color:blue\">Exercise 1.5:</span> Evaluate a Gaussian function\n\nThe bell-shaped Gaussian function,\n$$f(x)=\\frac{1}{\\sqrt{2\\pi}s}\\exp\\left(-\\frac{1}{2} \\left(\\frac{x-m}{s}\\right)^2\\right)$$\nis one of the most widely used functions in science and technology. The parameters $m$ and $s$ are real numbers, where $s$ must be greater than zero. Write a program for evaluating this function when $m = 0$, $s = 2$, and $x = 1$. Verify the program's result by using a calculator.", "_____no_output_____" ] ], [ [ "# Uncomment and complete the code below (don't change variable names!)\n\n# from math import pi, ...\n\n# f_x = ", "_____no_output_____" ], [ "grade = ok.grade('question-1_5')", "_____no_output_____" ] ], [ [ "## <span style=\"color:blue\">Exercise 1.6:</span> Find errors in the coding of a formula\nGiven a quadratic equation,\n$$ax^2 + bx + c = 0,$$\nits two roots are\n$$x1 = \\frac{−b + \\sqrt{b^2 −4ac}}{2a},$$ and\n$$x2 = \\frac{−b − \\sqrt{b^2 −4ac}}{2a}.$$\n\nUncomment and fix the errors in the following code.", "_____no_output_____" ] ], [ [ "# a = 2; b = 1; c = -2\n# from math import sqrt\n# q = sqrt(b*b + 4*a*c)\n# x1 = (-b + q)/2*a\n# x2 = (-b - q)/2*a\n# print(x1, x2)", "_____no_output_____" ], [ "grade = ok.grade('question-1_6')", "_____no_output_____" ] ], [ [ "## Functions\n<hr style=\"border: solid 2px red; margin-top: 1.5% \">\n\nWe have already used Python functions, e.g. `sqrt` from the `math` module above. In general, a function is a collection of statements we can execute wherever and whenever we want. For example, consider any of the formulas you implemented above. \n\nFunctions can take any number of inputs (called *arguments*) to produce outputs. Functions help to organize programs, make them more understandable, shorter, and easier to extend. Wouldn't it be nice to implement a formula just once and then be able to use it again any time you need it, rather than having to write it out in full again?\n\nFor our first example we will reuse the formula for the position of a ball in vertical motion, which we've seen at the beginning of the document. 
", "_____no_output_____" ] ], [ [ "# Function to compute height of ball.\ndef ball_height(v0, t, g=9.81):\n \"\"\"Function to calculate height of ball.\n \n Parameters\n ----------\n v0 : float\n Set initial velocity (units, m/s).\n t : float\n Time at which we want to know the height of the ball (units, seconds).\n g : float, optimal\n Acceleration due to gravity, by default 9.81 m/s^2.\n\n Returns\n -------\n float\n Height of ball in meters.\n \"\"\"\n\n height = v0*t - 0.5*g*t**2\n \n return height", "_____no_output_____" ] ], [ [ "Lets break this example down:\n* Function header:\n * Functions start with *def* followed by the name you want to give the function (ball_height in this case).\n * Following the name, you have `(...):` containing some number of function `arguments`.\n * In this case `v0` and `t` are *position arguments* while `g` is known as a *keyword argument* (more about this later).\n* Function body.\n * The first thing to notice is that the body of the function is indented one level.\n * Best practice is to include a [docstring](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_numpy.html) to explain to others (or remind your future self) how the function should be used.\n * The function output is passed back via the `return` statement\n \nNotice that this just defines the function. Nothing is actually executed until you actually *use* the function:", "_____no_output_____" ] ], [ [ "print(\"Ball height: %g meters.\"%ball_height(5, 0.6))", "_____no_output_____" ] ], [ [ "No return value implies that `None` is returned. `None` is a special Python object that represents an ”empty” or undefined value. It is surprisingly useful and we will use it a lot later.\n\nFunctions can also return multiple values. Let's extend the previous example to calculate the ball velocity as well as the height:", "_____no_output_____" ] ], [ [ "# Function to compute height of ball.\ndef ball_height_velocity(v0, t, g=9.81):\n \"\"\"Function to calculate height and velocity of ball.\n \n Parameters\n ----------\n v0 : float\n Set initial velocity (units, m/s).\n t : float\n Time at which we want to know the height of the ball (units, seconds).\n g : float, optimal\n Acceleration due to gravity, by default 9.81 m/s^2.\n\n Returns\n -------\n float\n Height of ball in meters.\n float\n Velocity of ball in m/s.\n \"\"\"\n\n height = v0*t - 0.5*g*t**2\n velocity = v0 - g*t\n \n return height, velocity\n\nh, v = ball_height_velocity(5, 0.6)\n\nprint(\"Ball height: %g meters.\"%h)\nprint(\"Ball velocity: %g m/s.\"%v)", "_____no_output_____" ] ], [ [ "## Scope: Local and global variables\n<hr style=\"border: solid 2px red; margin-top: 1.5% \">\n\nVariables defined within a function are said to have *local scope*. That is to say that they can only be referenced within that function. Consider the example function defined above where we used the *local* variable *height*. 
You can see that if you try to print the variable height outside the function you will get an error.\n\n```python\nprint(height)\n\n---------------------------------------------------------------------------\nNameError                                 Traceback (most recent call last)\n<ipython-input-8-aa6406a13920> in <module>()\n----> 1 print(height)\n\nNameError: name 'height' is not defined\n```", "_____no_output_____" ], [ "## Keyword arguments and default input values\n<hr style=\"border: solid 2px red; margin-top: 1.5% \">\n\nFunctions can have arguments of the form variable_name=value and are called keyword arguments:", "_____no_output_____" ] ], [ [ "def somefunc(arg1, arg2, kwarg1=True, kwarg2=0):\n    print(arg1, arg2, kwarg1, kwarg2)\n\nsomefunc(\"Hello\", [1,2]) # Note that we have not specified inputs for kwarg1 and kwarg2", "_____no_output_____" ], [ "somefunc(\"Hello\", [1,2], kwarg1=\"Hi\")", "_____no_output_____" ], [ "somefunc(\"Hello\", [1,2], kwarg2=\"Hi\")", "_____no_output_____" ], [ "somefunc(\"Hello\", [1,2], kwarg2=\"Hi\", kwarg1=6)", "_____no_output_____" ] ], [ [ "If we use variable_name=value for all arguments, their sequence in the function header can be in any order.", "_____no_output_____" ] ], [ [ "somefunc(kwarg2=\"Hello\", arg1=\"Hi\", kwarg1=6, arg2=[2])", "_____no_output_____" ] ], [ [ "## <span style=\"color:blue\">Exercise 1.7</span> Implement a Gaussian function\n\nCreate a Python function to compute the Gaussian: \n$$f(x)=\\frac{1}{s\\sqrt{2\\pi}}\\exp\\left(-\\frac{1}{2} \\left(\\frac{x-m}{s}\\right)^2\\right)$$", "_____no_output_____" ] ], [ [ "# Uncomment and complete this code - keep the names the same for testing purposes. \n\n# def gaussian(x, m=0, s=1):\n#     ...", "_____no_output_____" ], [ "ok.grade('question-1_7')", "_____no_output_____" ] ], [ [ "## <span style=\"color:blue\">Exercise 1.8 (\\*\\*\\*):</span> How to cook the perfect egg\n\nYou just started University and moved away from home. You're trying to impress your new flatmates by cooking brunch. Write a Python script to help you cook the perfect eggs! \n\n\nYou know from A-levels that when the temperature exceeds a critical point, the proteins in the egg first denature and then coagulate, and the process becomes faster as the temperature increases. In the egg white, the proteins start to coagulate for temperatures above 63$^\\circ$C, while in the yolk the proteins start to coagulate for temperatures above 70$^\\circ$C. \n\nThe time `t` (in seconds) it takes for the centre of the yolk to reach the temperature `Ty` (in degrees Celsius) can be expressed as: \n\n$$t = \\frac{M^{2/3}c \\rho^{1/3}}{K \\pi^2 (4 \\pi /3)^{2/3} } \\ln \\left[0.76 \\frac{T_0-T_w}{T_y-T_w}\\right]$$\n\nwhere:\n* $M$ is the mass of the egg;\n* $\\rho$ is the density;\n* $c$ is the specific heat capacity;\n* $K$ is the thermal conductivity;\n* $T_w$ is the temperature of the boiling water (in degrees Celsius);\n* $T_0$ is the initial temperature of the egg (in degrees Celsius), before being put in the water.\n\nWrite a function that returns the time `t` needed for the egg to cook for a given `T0`, knowing that `Tw` = 100$^\\circ$C, `M` = 50 g, `rho` = 1.038 gcm$^{−3}$, `c` = 3.7 Jg$^{−1}$K$^{−1}$, and `K` = 5.4 · 10$^{−3}$ Wcm$^{−1}$K$^{−1}$. Use `Ty` = 70$^\\circ$C for a perfect soft-boiled egg. All these quantities should be passed as keyword arguments, with the specified default values.\n\nFind `t` for an egg taken from the fridge (`T0` = 4 C) and for one at room temperature (`T0` = 20 C). \n\nHint: You do not need to do any unit conversion. 
", "_____no_output_____" ] ], [ [ "# Uncomment and complete this code - keep the names the same for testing purposes. \n\n#from math import pi\n\n#def perfect_egg(T0, M=...):\n# ...\n# return t", "_____no_output_____" ], [ "ok.grade('question-1_8')", "_____no_output_____" ] ], [ [ "## <span style=\"color:blue\">Exercice 1.9 (\\*\\*\\*):</span> Kepler's third law\n\nYou were selected to be the next astronaut to go to Mars. Congratulations! \n\nKepler's third law expresses the relationship between the distance of planets from the Sun, $a$, and their orbital periods, $P$\n\n$$ P^2 = \\frac{4\\pi^2}{G(m_1+m_2)}a^3 $$\n\nwhere\n* P is the period (in seconds)\n* G is the gravitational constant ( = 6.67 × 10$^{-11}$ m$^3$kg$^{-1}$s$^{-2}$)\n* m$_1$ is the mass of the planet (in kg)\n* m$_2$ is the mass of the Sun ( = 2x10$^{30}$ kg)\n* a is the distance between the planet and the Sun (in m)\n\nHow many Earth birthdays will you celebrate during your 10 Mars years mission? Write a python code that will calculate the the period on Earth, `P_earth`, the period on Mars, `P_mars`, and how many Earth years are equivalent to 10 years on Mars, `birthdays`.\n\nYou know that:\n* The average distance between the Earth and the Sun is $a$ = 1.5x10$^{11}$m;\n* The average distance between Mars and the Sun is 0.5 larger than the Eart-Sun distance;\n* The mass of the Earth is m$_1$ = 6x10$^{24}$;\n* Mars's mass is about 10% of the Earth's mass.\n\nHint: You do not need to do any unit conversion. ", "_____no_output_____" ] ], [ [ "# Uncomment and complete this code - keep the names the same for testing purposes. \n\n# from ... import ...\n\n# def period(a, G=6.67*10**-11, m1=6*10**24 , m2=2*10**30):\n# ...\n# \n# return P_mars, P_earth\n\n# P_mars, P_earth = period(1.5*10**11)\n# birthdays = ...\n\n# print(\"Periods: \", period(1.5*10**11), \", birthdays: \", birthdays)", "_____no_output_____" ], [ "ok.grade('question-1_9')", "_____no_output_____" ], [ " ok.score()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
ec988a1597440c1c3106157383a5fed3b04d5769
144,737
ipynb
Jupyter Notebook
code/run_ocetrac.ipynb
hscannell/jupiter_data_science
d0092eee04d04015d7fbe6bdf4abbae8491e4673
[ "MIT" ]
1
2021-10-14T00:48:51.000Z
2021-10-14T00:48:51.000Z
code/run_ocetrac.ipynb
hscannell/jupiter_data_science
d0092eee04d04015d7fbe6bdf4abbae8491e4673
[ "MIT" ]
null
null
null
code/run_ocetrac.ipynb
hscannell/jupiter_data_science
d0092eee04d04015d7fbe6bdf4abbae8491e4673
[ "MIT" ]
null
null
null
198.541838
53,720
0.875284
[ [ [ "## Use `Ocetrac` to identify and track marine heatwave (MHW) objects \n\n![fig](../static/ocetrac_steps.png)\n", "_____no_output_____" ] ], [ [ "# Import libraries \nimport xarray as xr\nimport numpy as np\nimport ocetrac\n\n# Visualization libraries\nimport matplotlib.pyplot as plt\nfrom matplotlib.colors import ListedColormap", "_____no_output_____" ] ], [ [ "### Import the preprocessed OISST dataset\n\nThe data are available on Zenodo.", "_____no_output_____" ] ], [ [ "path_to_data = '../preprocessed_oisst_mhw_stats.nc'\nds = xr.open_dataset(path_to_data)\nds", "_____no_output_____" ] ], [ [ "### Import the land mask for the OISST dataset and mask out both poles", "_____no_output_____" ] ], [ [ "path = 'https://psl.noaa.gov/thredds/dodsC/Datasets/noaa.oisst.v2.highres/lsmask.oisst.nc'\nds_mask = xr.open_dataset(path, use_cftime=True) \nlsmask = ds_mask.lsmask.isel(time=0)\nmask = lsmask.where((ds_mask.lat<65) & (ds_mask.lat>-70), drop=False, other=0) \nmask.plot()", "_____no_output_____" ] ], [ [ "## Run Ocetrac", "_____no_output_____" ] ], [ [ "# Calculate the 90th percentile of anomalies across the entire time period\nt90_global = ds.ssta_notrend.quantile(.9)\n\n# Find only the sst anomalies that exceed the local 90th percentile\nda = ds.ssta_notrend.where(ds.ssta_notrend>t90_global, drop=False, other=np.nan)", "_____no_output_____" ], [ "da[0,:,:].plot()", "_____no_output_____" ], [ "#### Set model parameters\n\nda.load(); # load the DataArray into memory\nradius = 8 # radius for structuring element\nmin_size_quartile = 0.75 # threshold for object areas\ntimedim = 'time'\nxdim = 'lon'\nydim = 'lat'", "_____no_output_____" ], [ "Tracker = ocetrac.Tracker(da, mask, radius, min_size_quartile, timedim, xdim, ydim, positive=True)", "_____no_output_____" ], [ "%%time \nblobs = Tracker.track()", "minimum area: 2485.5\ninital objects identified \t 13862\nfinal objects tracked \t 686\nCPU times: user 9min 53s, sys: 1min 54s, total: 11min 47s\nWall time: 11min 55s\n" ] ], [ [ "**The basic output of `Tracker.track` provides:**\n- the minimum object area used to filter out MHWs that are smaller than this threshold. Area is computed for all objects at each time step. `Ocetrac` looks at the global distribution of object area and finds the size that corresponds to a certain percentile defined by `area_quantile`, which can range from 0.0 to 1.0. Larger values for `area_quantile` exclude more MHW objects and will increase the minimum object area threshold. \n- inital features identified have undergone size thresholding, however are not connected in time. \n- final features tracked are the number of unique MHW objects tracked in space and time. 
", "_____no_output_____" ], [ "#### Make some quick plot to check resulting data", "_____no_output_____" ] ], [ [ "# Event label over time\nblobs.mean(('lat','lon')).plot()", "_____no_output_____" ], [ "maxl = int(np.nanmax(blobs.values))\ncm = ListedColormap(np.random.random(size=(maxl, 3)).tolist())\n\n\n# Make a quick plot of the labeled MHWs identified with Ocetrac in August 2012\nblobs.sel(time=('2012-08-01')).plot(cmap= cm)", "_____no_output_____" ] ], [ [ "#### Save labeled marine heatwave images", "_____no_output_____" ] ], [ [ "# Specify path to save output \nnew_data_path = '../data/ocetrac_labels.nc'\n\nds_out = blobs.to_dataset(name='ocetrac_labels')\nds_out.attrs = dict(description=\"OISST v2.1 preprocessed for Ocetrac\",\n threshold='90th percentile',\n climatology='entire period')\n\n# Save Dataset to netCDF\nds_out.to_netcdf(new_data_path, mode='w')\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
ec9895115dca43b880c77350bb1c53ed0ff19f6b
13,051
ipynb
Jupyter Notebook
alphapy/examples/Trading Model/A Trading Model.ipynb
falanger/AlphaPy
3d46aaaea60599242ee54478b6c78eba60b04cfb
[ "Apache-2.0" ]
559
2018-09-13T00:14:34.000Z
2022-03-31T19:17:12.000Z
alphapy/examples/Trading Model/A Trading Model.ipynb
falanger/AlphaPy
3d46aaaea60599242ee54478b6c78eba60b04cfb
[ "Apache-2.0" ]
24
2018-09-15T21:01:50.000Z
2021-12-30T01:39:57.000Z
alphapy/examples/Trading Model/A Trading Model.ipynb
falanger/AlphaPy
3d46aaaea60599242ee54478b6c78eba60b04cfb
[ "Apache-2.0" ]
116
2018-09-26T12:05:46.000Z
2022-03-11T10:23:24.000Z
39.668693
6,176
0.712819
[ [ [ "### This notebook analyzes the predictions of the trading model. <br/>At different thresholds, how effective is the model at predicting<br/> larger-than-average range days?", "_____no_output_____" ] ], [ [ "%matplotlib inline", "_____no_output_____" ], [ "import numpy as np\nimport pandas as pd", "_____no_output_____" ], [ "pwd", "_____no_output_____" ], [ "cd output", "/Users/markconway/Projects/AlphaPy/alphapy/examples/Trading Model/output\n" ], [ "ls", "predictions_20170425.csv rankings_20170425.csv\r\nprobabilities_20170425.csv\r\n" ] ], [ [ "This file contains the ranked predictions of the test set.", "_____no_output_____" ] ], [ [ "ranking_frame = pd.read_csv('rankings_20170425.csv')", "_____no_output_____" ], [ "ranking_frame.columns", "_____no_output_____" ] ], [ [ "The probabilities are in descending order. Observe the greater number of True values at the top of the rankings versus the bottom.", "_____no_output_____" ] ], [ [ "ranking_frame.rrover.head(20)", "_____no_output_____" ], [ "ranking_frame.rrover.tail(20)", "_____no_output_____" ] ], [ [ "Let's plot the True/False ratios for each probability decile. These ratios should roughly reflect the trend in the calibration plot.", "_____no_output_____" ] ], [ [ "ranking_frame['bins'] = pd.qcut(ranking_frame.probability, 10, labels=False)", "_____no_output_____" ], [ "grouped = ranking_frame.groupby('bins')", "_____no_output_____" ], [ "def get_ratio(series):\n ratio = series.value_counts()[1] / series.size\n return ratio", "_____no_output_____" ], [ "grouped['rrover'].apply(get_ratio).plot(kind='bar')", "_____no_output_____" ] ], [ [ "#### We conclude that the model does have some value, especially with more training data.<br/><br/>1. For high probabilities, we could deploy a breakout or trend system.<br/><br/>2. For low probabilities, we could use a counter-trend system.<br/><br/>3. Mid-range probabilities have no predictive power in this model.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ] ]
ec98a625f023776a7f85976b67b5a8d1da5abc13
338,039
ipynb
Jupyter Notebook
first-neural-network/Your_first_neural_network.ipynb
victor-cordova/udacity-deep-learning-nanodegree
2f0a218fca3c8e43d42011b2a60f1db201a39f00
[ "MIT" ]
null
null
null
first-neural-network/Your_first_neural_network.ipynb
victor-cordova/udacity-deep-learning-nanodegree
2f0a218fca3c8e43d42011b2a60f1db201a39f00
[ "MIT" ]
null
null
null
first-neural-network/Your_first_neural_network.ipynb
victor-cordova/udacity-deep-learning-nanodegree
2f0a218fca3c8e43d42011b2a60f1db201a39f00
[ "MIT" ]
null
null
null
324.725264
165,034
0.897435
[ [ [ "# Your first neural network\n\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.\n\n", "_____no_output_____" ] ], [ [ "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "## Load and prepare the data\n\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!", "_____no_output_____" ] ], [ [ "data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)", "_____no_output_____" ], [ "rides.head()", "_____no_output_____" ] ], [ [ "## Checking out the data\n\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above.\n\nBelow is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.", "_____no_output_____" ] ], [ [ "rides[:24*10].plot(x='dteday', y='cnt')", "_____no_output_____" ] ], [ [ "### Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`.", "_____no_output_____" ] ], [ [ "dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor each in dummy_fields:\n dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()", "_____no_output_____" ] ], [ [ "### Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. 
That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\n\nThe scaling factors are saved so we can go backwards when we use the network for predictions.", "_____no_output_____" ] ], [ [ "quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n mean, std = data[each].mean(), data[each].std()\n scaled_features[each] = [mean, std]\n data.loc[:, each] = (data[each] - mean)/std", "_____no_output_____" ] ], [ [ "### Splitting the data into training, testing, and validation sets\n\nWe'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.", "_____no_output_____" ] ], [ [ "# Save data for approximately the last 21 days \ntest_data = data[-21*24:]\n\n# Now remove the test data from the data set \ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]", "_____no_output_____" ] ], [ [ "We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).", "_____no_output_____" ] ], [ [ "# Hold out the last 60 days or so of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]", "_____no_output_____" ] ], [ [ "## Time to build the network\n\nBelow you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\n\n<img src=\"assets/neural_network.png\" width=300px>\n\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*.\n\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*.\n\n> **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$ -> **it's 1 of course**.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. 
Set `self.activation_function` in `__init__` to your sigmoid function.\n2. Implement the forward pass in the `train` method.\n3. Implement the backpropagation algorithm in the `train` method, including calculating the output error.\n4. Implement the forward pass in the `run` method.\n ", "_____no_output_____" ] ], [ [ "class NeuralNetwork(object):\n def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, \n (self.input_nodes, self.hidden_nodes))\n\n self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n self.lr = learning_rate\n \n #### TODO: Set self.activation_function to your implemented sigmoid function ####\n #\n # Note: in Python, you can define a function with a lambda expression,\n # as shown below.\n self.activation_function = lambda x: (1 / (1 + np.exp(-x))) # Replace 0 with your sigmoid calculation.\n \n ### If the lambda code above is not something you're familiar with,\n # You can uncomment out the following three lines and put your \n # implementation there instead.\n #\n #def sigmoid(x):\n # return 0 # Replace 0 with your sigmoid calculation here\n #self.activation_function = sigmoid\n \n def debug_tensor(self, name, obj):\n #print(\"The value for {0} is {1} and the shape is {2}\".format(name, obj, obj.shape))\n pass\n \n def train(self, features, targets):\n ''' Train the network on batch of features and targets. \n \n Arguments\n ---------\n \n features: 2D array, each row is one data record, each column is a feature\n targets: 1D array of target values\n \n '''\n n_records = features.shape[0]\n delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)\n delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)\n for X, y in zip(features, targets):\n #### Implement the forward pass here ####\n ### Forward pass ###\n # TODO: Hidden layer - Replace these values with your calculations.\n # signals into final output layer, (1,2) dot (3,2)\n \n self.debug_tensor('X', X)\n self.debug_tensor('y', y)\n self.debug_tensor('self.weights_input_to_hidden', self.weights_input_to_hidden)\n self.debug_tensor('self.weights_hidden_to_output', self.weights_hidden_to_output)\n self.debug_tensor('delta_weights_i_h', delta_weights_i_h)\n self.debug_tensor('delta_weights_h_o', delta_weights_h_o)\n hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n \n self.debug_tensor('hidden_inputs', hidden_inputs)\n self.debug_tensor('hidden_outputs', hidden_outputs)\n\n # TODO: Output layer - Replace these values with your calculations.\n final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) \n final_outputs = final_inputs # signals from final output layer. 
This line makes me suspicious :[\n            \n            self.debug_tensor('final_inputs', final_inputs)\n            self.debug_tensor('final_outputs', final_outputs)\n            \n            #### Implement the backward pass here ####\n            ### Backward pass ###\n\n            # TODO: Output error - Replace this value with your calculations.\n            \n            error = y - final_outputs # Output layer error is the difference between desired target and actual output.\n            self.debug_tensor('error', error)\n            \n            # TODO: Backpropagated error terms - Replace these values with your calculations.\n            output_error_term = error # Here, we only have error because f'(h) is 1\n            \n            self.debug_tensor('output_error_term', output_error_term)\n            \n            # TODO: Calculate the hidden layer's contribution to the error\n            # HERE: Not sure if using * instead of dot is correct. Just compensating for the strange (1,) shape\n            hidden_error = output_error_term[:, None] * self.weights_hidden_to_output\n            \n            temp = hidden_error.T * hidden_outputs\n            self.debug_tensor('temp', temp)\n            \n            hidden_error_term = temp * (1 - hidden_outputs)\n            \n            self.debug_tensor('hidden_error', hidden_error)\n            self.debug_tensor('hidden_error_term', hidden_error_term)\n\n            # Weight step (hidden to output)\n            delta_weights_h_o += (output_error_term[:, None] * hidden_outputs).T\n            \n            # Weight step (input to hidden)\n            delta_weights_i_h += hidden_error_term * X[:, None]\n\n        # TODO: Update the weights - Replace these values with your calculations.\n        self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step\n        self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step\n \n    def run(self, features):\n        ''' Run a forward pass through the network with input features \n        \n            Arguments\n            ---------\n            features: 1D array of feature values\n        '''\n        \n        #### Implement the forward pass here ####\n        # TODO: Hidden layer - replace these values with the appropriate calculations.\n        hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer\n        hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n        \n        # TODO: Output layer - Replace these values with the appropriate calculations.\n        final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer\n        final_outputs = final_inputs # signals from final output layer \n        \n        return final_outputs", "_____no_output_____" ], [ "def MSE(y, Y):\n    return np.mean((y-Y)**2)", "_____no_output_____" ] ], [ [ "## Unit tests\n\nRun these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. 
These tests must all be successful to pass the project.", "_____no_output_____" ] ], [ [ "import unittest\n\ninputs = np.array([[0.5, -0.2, 0.1]])\ntargets = np.array([[0.4]])\ntest_w_i_h = np.array([[0.1, -0.2],\n                       [0.4, 0.5],\n                       [-0.3, 0.2]])\ntest_w_h_o = np.array([[0.3],\n                       [-0.1]])\n\nclass TestMethods(unittest.TestCase):\n    \n    ##########\n    # Unit tests for data loading\n    ##########\n    \n    def test_data_path(self):\n        # Test that file path to dataset has been unaltered\n        self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n        \n    def test_data_loaded(self):\n        # Test that data frame loaded\n        self.assertTrue(isinstance(rides, pd.DataFrame))\n    \n    ##########\n    # Unit tests for network functionality\n    ##########\n\n    def test_activation(self):\n        network = NeuralNetwork(3, 2, 1, 0.5)\n        # Test that the activation function is a sigmoid\n        self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n    def test_train(self):\n        # Test that weights are updated correctly on training\n        network = NeuralNetwork(3, 2, 1, 0.5)\n        network.weights_input_to_hidden = test_w_i_h.copy()\n        network.weights_hidden_to_output = test_w_h_o.copy()\n        \n        network.train(inputs, targets)\n        self.assertTrue(np.allclose(network.weights_hidden_to_output, \n                                    np.array([[ 0.37275328], \n                                              [-0.03172939]])))\n        self.assertTrue(np.allclose(network.weights_input_to_hidden,\n                                    np.array([[ 0.10562014, -0.20185996], \n                                              [0.39775194, 0.50074398], \n                                              [-0.29887597, 0.19962801]])))\n\n    def test_run(self):\n        # Test correctness of run method\n        network = NeuralNetwork(3, 2, 1, 0.5)\n        network.weights_input_to_hidden = test_w_i_h.copy()\n        network.weights_hidden_to_output = test_w_h_o.copy()\n\n        self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)", ".....\n----------------------------------------------------------------------\nRan 5 tests in 0.023s\n\nOK\n" ] ], [ [ "## Training the network\n\nHere you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\n\nYou'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.\n\n### Choose the number of iterations\nThis is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.\n\n### Choose the learning rate\nThis scales the size of weight updates. 
If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\n\n### Choose the number of hidden nodes\nThe more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.", "_____no_output_____" ] ], [ [ "import sys\n\n### Set the hyperparameters here ###\niterations = 5000\nlearning_rate = 0.3\nhidden_nodes = 20\noutput_nodes = 1\n\nN_i = train_features.shape[1]\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor ii in range(iterations):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']\n \n network.train(X, y)\n \n # Printing out the training progress\n train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)\n sys.stdout.write(\"\\rProgress: {:2.1f}\".format(100 * ii/float(iterations)) \\\n + \"% ... Training loss: \" + str(train_loss)[:5] \\\n + \" ... Validation loss: \" + str(val_loss)[:5])\n sys.stdout.flush()\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)", "Progress: 100.0% ... Training loss: 0.108 ... Validation loss: 0.240" ], [ "plt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()\n_ = plt.ylim()", "_____no_output_____" ] ], [ [ "## Check out your predictions\n\nHere, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.", "_____no_output_____" ] ], [ [ "fig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features).T*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.ix[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)", "C:\\Users\\pc\\AppData\\Local\\conda\\conda\\envs\\dlnd\\lib\\site-packages\\ipykernel_launcher.py:10: DeprecationWarning: \n.ix is deprecated. Please use\n.loc for label based indexing or\n.iloc for positional indexing\n\nSee the documentation here:\nhttp://pandas.pydata.org/pandas-docs/stable/indexing.html#deprecate_ix\n # Remove the CWD from sys.path while we load stuff.\n" ] ], [ [ "## OPTIONAL: Thinking about your results(this question will not be evaluated in the rubric).\n \nAnswer these questions about your results. How well does the model predict the data? Where does it fail? 
Why does it fail where it does?\n\n> **Note:** You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter\n\n#### Your answer below\nThe model does a very good job of predicting the first half of the test data. The second half fails because the training data didn't take into account the days leading up to Christmas or New Year's Eve, so the model overestimates the number of bicycles needed for those dates.\nTo solve this problem we would definitely have to train on a whole year's data.", "_____no_output_____" ] ] ]
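To make the mini-batch sampling used in the training loop above concrete, here is a minimal, self-contained sketch of the same pattern on synthetic data. The column names and array sizes are placeholders rather than the project's real schema, and the `network.train` call is stubbed out with a print:

```python
import numpy as np
import pandas as pd

# Synthetic stand-ins for train_features / train_targets (hypothetical columns).
rng = np.random.default_rng(0)
train_features = pd.DataFrame(rng.normal(size=(1000, 3)), columns=['x0', 'x1', 'x2'])
train_targets = pd.DataFrame({'cnt': rng.normal(size=1000)})

for ii in range(3):  # a handful of SGD steps, purely for illustration
    # Sample 128 row labels with replacement, then select them with .loc
    # (the label-based replacement for the deprecated .ix accessor).
    batch = np.random.choice(train_features.index, size=128)
    X = train_features.loc[batch].values
    y = train_targets.loc[batch]['cnt'].values
    # network.train(X, y) would run one weight update here.
    print(ii, X.shape, y.shape)
```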
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
ec98c1b8a0a4f4d56576bcf192391c040b03ab34
7,281
ipynb
Jupyter Notebook
docs/tutorials/text.ipynb
timgates42/hypertools
9ac3dc11123419f2f00d596ac5920db2486cc0a1
[ "MIT" ]
1,681
2017-01-28T00:28:02.000Z
2022-03-11T00:57:13.000Z
docs/tutorials/text.ipynb
timgates42/hypertools
9ac3dc11123419f2f00d596ac5920db2486cc0a1
[ "MIT" ]
170
2017-01-27T22:59:09.000Z
2022-02-12T03:47:46.000Z
docs/tutorials/text.ipynb
timgates42/hypertools
9ac3dc11123419f2f00d596ac5920db2486cc0a1
[ "MIT" ]
180
2017-02-01T04:34:42.000Z
2022-02-22T15:46:23.000Z
28.778656
487
0.606647
[ [ [ "# Visualizing text", "_____no_output_____" ] ], [ [ "import hypertools as hyp\nimport wikipedia as wiki\n%matplotlib inline", "_____no_output_____" ] ], [ [ "In this example, we will download some text from wikipedia, split it up into chunks and then plot it. We will use the wikipedia package to retrieve the wiki pages for 'dog' and 'cat'.", "_____no_output_____" ] ], [ [ "def chunk(s, count):\n return [''.join(x) for x in zip(*[list(s[z::count]) for z in range(count)])]\n\nchunk_size = 5\n\ndog_text = wiki.page('Dog').content\ncat_text = wiki.page('Cat').content\n\ndog = chunk(dog_text, int(len(dog_text)/chunk_size))\ncat = chunk(cat_text, int(len(cat_text)/chunk_size))", "_____no_output_____" ] ], [ [ "Below is a snippet of some of the text from the dog wikipedia page. As you can see, the word dog appears in many of the sentences, but also words related to dog like wolf and carnivore appear.", "_____no_output_____" ] ], [ [ "dog[0][:1000]", "_____no_output_____" ] ], [ [ "Now we will simply pass the text samples as a list to `hyp.plot`. By default hypertools will transform the text data using a topic model that was fit on a variety of wikipedia pages. Specifically, the text is vectorized using the scikit-learn `CountVectorizer` and then passed on to a `LatentDirichletAllocation` to estimate topics. As can be seen below, the 5 chunks of text from the dog/cat wiki pages cluster together, suggesting they are made up of distint topics.", "_____no_output_____" ] ], [ [ "hue=['dog']*chunk_size+['cat']*chunk_size\ngeo = hyp.plot(dog + cat, 'o', hue=hue, size=[8, 6])", "_____no_output_____" ] ], [ [ "Now, let's add a third very different topic to the plot.", "_____no_output_____" ] ], [ [ "bball_text = wiki.page('Basketball').content\nbball = chunk(bball_text, int(len(bball_text)/chunk_size))\n\nhue=['dog']*chunk_size+['cat']*chunk_size+['bball']*chunk_size\ngeo = hyp.plot(dog + cat + bball, 'o', hue=hue, labels=hue, size=[8, 6])", "_____no_output_____" ] ], [ [ "As you might expect, the cat and dog text chunks are closer to each other than to basketball in this topic space. Since cats and dogs are both animals, they share many more features (and thus are described with similar text) than basketball.", "_____no_output_____" ], [ "## Visualizing NIPS papers\n\nThe next example is a dataset of all NIPS papers published from 1987. They are fit and transformed using the text from each paper. This example dataset can be loaded using the code below.", "_____no_output_____" ] ], [ [ "nips = hyp.load('nips')\nnips.plot(size=[8, 6])", "_____no_output_____" ] ], [ [ "## Visualizing Wikipedia pages\n\nHere, we will plot a collection of wikipedia pages, transformed using a topic\nmodel (the default 'wiki' model) that was fit on the same articles. We will\nreduce the dimensionality of the data with TSNE, and then discover cluster with\nthe 'HDBSCAN' algorithm.", "_____no_output_____" ] ], [ [ "wiki = hyp.load('wiki')\nwiki.plot(size=[8, 6])", "_____no_output_____" ] ], [ [ "## Visualizing State of the Union Addresses\n\nIn this example we will plot each state of the union address from 1989 to present. The dots are colored and labeled by president. The semantic model that was used to transform is the default 'wiki' model, which is a CountVectorizer->LatentDirichletAllocation pipeline fit with a selection of wikipedia pages. 
As you can see below, the points generally seem to cluster by president, but also by party affiliation (Democrats mostly on the left and Republicans mostly on the right).", "_____no_output_____" ] ], [ [ "sotus = hyp.load('sotus')\nsotus.plot(size=[10,8])", "_____no_output_____" ] ], [ [ "## Changing the reduction model\n\nThese data are reduced with PCA. Want to visualize using a different algorithm? Simply change the `reduce` parameter. This gives a different, but equally interesting, lower-dimensional representation of the data.", "_____no_output_____" ] ], [ [ "sotus.plot(reduce='UMAP', size=[10, 8])", "_____no_output_____" ] ], [ [ "## Defining a corpus\n\nNow let's change the corpus used to train the text model. Specifically, we'll use the 'nips' text, a collection of scientific papers. To do this, set `corpus='nips'`. You can also specify your own text (as a list of text samples) to train the model.", "_____no_output_____" ] ], [ [ "sotus.plot(reduce='UMAP', corpus='nips', size=[10, 8])", "_____no_output_____" ] ], [ [ "Interestingly, plotting the data transformed by a different topic model (trained on scientific articles) gives a totally different representation of the data. This is because the themes extracted from a homogeneous set of scientific articles are distinct from the themes extracted from a diverse set of wikipedia articles, so the transformation function will be unique.", "_____no_output_____" ] ] ]
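The tutorial above states that hypertools' default text transform is a scikit-learn `CountVectorizer` followed by a `LatentDirichletAllocation`. Here is a rough standalone sketch of that pipeline; the real 'wiki' model is fit on a selection of wikipedia pages, whereas this toy corpus and the choice of `n_components=2` are illustrative assumptions:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.pipeline import make_pipeline

docs = [
    "dogs are loyal domesticated carnivores related to wolves",
    "cats are small domesticated felines that hunt rodents",
    "basketball is a team sport played with a ball and a hoop",
]

# Word counts -> topic mixtures, mirroring the transform described above.
pipeline = make_pipeline(
    CountVectorizer(stop_words='english'),
    LatentDirichletAllocation(n_components=2, random_state=0),
)
topic_vectors = pipeline.fit_transform(docs)
print(topic_vectors.shape)  # (3, 2): one topic mixture per document
```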
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
ec98cb58d730662b518db23ffe4fcc2464cbb69a
3,301
ipynb
Jupyter Notebook
String/1014/819. Most Common Word.ipynb
YuHe0108/Leetcode
90d904dde125dd35ee256a7f383961786f1ada5d
[ "Apache-2.0" ]
1
2020-08-05T11:47:47.000Z
2020-08-05T11:47:47.000Z
String/1014/819. Most Common Word.ipynb
YuHe0108/LeetCode
b9e5de69b4e4d794aff89497624f558343e362ad
[ "Apache-2.0" ]
null
null
null
String/1014/819. Most Common Word.ipynb
YuHe0108/LeetCode
b9e5de69b4e4d794aff89497624f558343e362ad
[ "Apache-2.0" ]
null
null
null
27.280992
126
0.470766
[ [ [ "说明:\n 给定一个段落 (paragraph) 和一个禁用单词列表 (banned)。\n 返回出现次数最多,同时不在禁用列表中的单词。\n 题目保证至少有一个词不在禁用列表中,而且答案唯一。\n 禁用列表中的单词用小写字母表示,不含标点符号。\n 段落中的单词不区分大小写。\n 答案都是小写字母。\n\nExample:\n Input: \n paragraph = \"Bob hit a ball, the hit BALL flew far after it was hit.\"\n banned = [\"hit\"]\n Output: \"ball\"\n Explanation: \n \"hit\" occurs 3 times, but it is a banned word.\n \"ball\" occurs twice (and no other word does), so it is the most frequent non-banned word in the paragraph. \n Note that words in the paragraph are not case sensitive,\n that punctuation is ignored (even if adjacent to words, such as \"ball,\"), \n and that \"hit\" isn't the answer even though it occurs more because it is banned.\n\n提示:\n 1、1 <= 段落长度 <= 1000\n 2、0 <= 禁用单词个数 <= 100\n 3、1 <= 禁用单词长度 <= 10\n 4、答案是唯一的, 且都是小写字母 (即使在 paragraph 里是大写的,即使是一些特定的名词,答案都是小写的。)\n 5、paragraph 只包含字母、空格和下列标点符号!?',;.\n 6、不存在没有连字符或者带有连字符的单词。\n 7、单词里只包含字母,不会出现省略号或者其他标点符号。", "_____no_output_____" ] ], [ [ "from collections import Counter\n\nclass Solution:\n def mostCommonWord(self, paragraph: str, banned) -> str:\n banned = set(banned)\n words = []\n idx = 0\n while idx < len(paragraph):\n res = ''\n while idx < len(paragraph) and paragraph[idx].isalpha():\n val = paragraph[idx].lower()\n res += val\n idx += 1\n if res and res not in banned: \n words.append(res)\n idx += 1\n w_freq = {}\n for w in words:\n if w not in w_freq:\n w_freq[w] = 1\n else:\n w_freq[w] += 1\n \n w_freq = sorted(w_freq.items(), key = lambda x: x[1])\n return w_freq[-1][0]", "_____no_output_____" ], [ "solution = Solution()\nsolution.mostCommonWord(\"Bob\", [])", "_____no_output_____" ] ] ]
[ "raw", "code" ]
[ [ "raw" ], [ "code", "code" ] ]
ec98d02ba2b6e66fb83d7d8a59168dbd191459bb
745,689
ipynb
Jupyter Notebook
experiments/torch3d/06_optimize_single_tshirt_mesh.ipynb
alexus37/MasterThesisCode
a7eada603686de75968acc8586fd307a91b0491b
[ "MIT" ]
1
2020-04-23T15:39:27.000Z
2020-04-23T15:39:27.000Z
experiments/torch3d/06_optimize_single_tshirt_mesh.ipynb
alexus37/MasterThesisCode
a7eada603686de75968acc8586fd307a91b0491b
[ "MIT" ]
null
null
null
experiments/torch3d/06_optimize_single_tshirt_mesh.ipynb
alexus37/MasterThesisCode
a7eada603686de75968acc8586fd307a91b0491b
[ "MIT" ]
null
null
null
1,202.724194
648,156
0.954643
[ [ [ "import os\nimport torch\nimport matplotlib.pyplot as plt\nimport xml.dom.minidom\nfrom skimage.io import imread\nfrom skimage import img_as_ubyte\nfrom pyrr import Matrix44, Vector4, Vector3\nimport pyrr\n\n# Util function for loading meshes\nfrom pytorch3d.io import load_objs_as_meshes\n\n# Data structures and functions for rendering\nfrom pytorch3d.structures import Meshes, Textures\nfrom pytorch3d.renderer import (\n OpenGLPerspectiveCameras, \n BlendParams,\n PointLights, \n DirectionalLights, \n Materials, \n RasterizationSettings, \n MeshRenderer, \n MeshRasterizer, \n TexturedSoftPhongShader,\n HardPhongShader,\n SoftSilhouetteShader\n)\nfrom pytorch3d.renderer.cameras import look_at_view_transform\n\nfrom torch_openpose.body import Body\nfrom torch_openpose import util\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfrom matplotlib import cm\n\nfrom tqdm import tqdm\nimport torch.nn as nn\nimport imageio\nimport cv2\nimport copy\nfrom PIL import ImageFont, ImageDraw, Image\n\nimport wandb\n\n# add path for demo utils functions \nimport sys\nimport os\nsys.path.append(os.path.abspath(''))\nimport pytorch3d", "_____no_output_____" ], [ "def transform_node_to_matrix(node):\n model_matrix = np.identity(4)\n for child in reversed(node.childNodes):\n if child.nodeName == \"translate\":\n x = float(child.getAttribute('x'))\n y = float(child.getAttribute('y'))\n z = float(child.getAttribute('z'))\n z *= -1\n translate_vec = Vector3([x, y, z])\n trans_matrix = np.transpose(pyrr.matrix44.create_from_translation(translate_vec))\n model_matrix = np.matmul(model_matrix, trans_matrix)\n if child.nodeName == \"scale\":\n scale = float(child.getAttribute('value'))\n scale_vec = Vector3([scale, scale, scale])\n scale_matrix = np.transpose(pyrr.matrix44.create_from_scale(scale_vec))\n model_matrix = np.matmul(model_matrix, scale_matrix)\n \n return model_matrix\n\ndef transform_node_to_R_T(node):\n eye = node.getAttribute('origin').split(',')\n eye = [float(i) for i in eye]\n at = node.getAttribute('target').split(',')\n at = [float(i) for i in at]\n up = node.getAttribute('up').split(',')\n up = [float(i) for i in up]\n \n R, T = look_at_view_transform(\n eye=[eye], \n at=[at], \n up=[up]\n )\n return R, T\n\ndef load_shape(shape):\n if shape is None:\n return None\n device = torch.device(\"cuda:0\")\n char_filename_node = shape.getElementsByTagName('string')[0]\n char_filename = char_filename_node.getAttribute('value')\n obj_filename = os.path.join(DATA_ROOT, char_filename)\n mesh = load_objs_as_meshes([obj_filename], device=device)\n verticies, faces = mesh.get_mesh_verts_faces(0)\n texture = mesh.textures.clone() if mesh.textures is not None else None\n \n \n \n transform_node = shape.getElementsByTagName('transform')\n if len(transform_node) == 0:\n verticies[:, 2] *= -1\n return verticies, faces, texture\n # apply transform\n transform_node = transform_node[0]\n model_matrix = transform_node_to_matrix(transform_node)\n model_matrix = torch.from_numpy(model_matrix).cuda().double() \n \n # make coordiantes homegenos\n new_row = torch.ones(1, verticies.shape[0], device=device)\n vetrices_homo = torch.cat((verticies.t(), new_row)).double() \n \n # transform\n vetrices_world = torch.chain_matmul(model_matrix, vetrices_homo).t()[:, :3]\n return vetrices_world.float(), faces, texture\n\ndef mitsuba_scene_to_torch_3d(master_scene):\n device = torch.device(\"cuda:0\")\n master_doc = xml.dom.minidom.parse(master_scene)\n camera = master_doc.getElementsByTagName('sensor')[0]\n 
camera_transform = camera.getElementsByTagName('transform')[0]\n R, T = transform_node_to_R_T(camera_transform.getElementsByTagName('lookat')[0])\n\n cameras = OpenGLPerspectiveCameras(\n znear=0.1,\n zfar=10000,\n fov=15,\n degrees=True,\n device=device, \n R=R, \n T=T\n )\n \n \n character = None\n tshirt = None\n ground = None\n \n shapes = master_doc.getElementsByTagName('shape')\n for i in range(len(shapes)):\n if shapes[i].getAttribute(\"id\") == 'character':\n character = shapes[i]\n if shapes[i].getAttribute(\"id\") == 'simulated':\n tshirt = shapes[i]\n if shapes[i].getAttribute(\"id\") == 'place.000000':\n ground = shapes[i]\n \n tshirt_vetrices, tshirt_faces, tshirt_texture = load_shape(tshirt)\n character_vetrices, character_faces, character_texture = load_shape(character)\n# ground_vetrices, ground_faces, ground_texture = load_shape(ground)\n \n texTshirt = torch.ones_like(tshirt_vetrices).cuda()\n tex2 = torch.ones_like(character_vetrices).cuda()\n# tex3 = torch.ones_like(ground_vetrices).cuda()\n texTshirt[:, 1:] *= 0.0 # red\n # person\n tex2[:, 0] *= 0.88\n tex2[:, 1] *= 0.67\n tex2[:, 2] *= 0.41\n \n# tex3[:, 0] *= 0\n# tex3[:, 1] *= 0.67\n# tex3[:, 2] *= 0.0\n# tex = torch.cat([texTshirt, tex2, tex3])[None]\n tex = torch.cat([texTshirt, tex2])[None]\n textures = Textures(verts_rgb=tex.cuda())\n \n# verts = torch.cat([tshirt_vetrices,character_vetrices, ground_vetrices]).cuda() \n verts = torch.cat([tshirt_vetrices, character_vetrices]).cuda() \n \n character_faces = character_faces + tshirt_vetrices.shape[0] \n# ground_faces = ground_faces + character_vetrices.shape[0] + tshirt_vetrices.shape[0] \n faces = torch.cat([tshirt_faces, character_faces]).cuda()\n# faces = torch.cat([tshirt_faces, character_faces, ground_faces]).cuda()\n mesh = Meshes(verts=[verts], faces=[faces], textures=textures)\n \n optmization_input = {\n \"textures\": textures,\n \"t_shirt_vertices\": tshirt_vetrices,\n# \"vertices_other\": torch.cat([character_vetrices, ground_vetrices]), \n \"vertices_other\": character_vetrices, \n \"faces\": faces\n }\n \n return mesh, cameras, optmization_input\n\nclass Model(nn.Module):\n def __init__(self, vertices_t_shirt, vertices_other, textures, faces, renderer, body_estimation, reg=0.1):\n super().__init__()\n self.device = torch.device(\"cuda:0\")\n self.renderer = renderer\n self.textures = textures\n self.vertices_t_shirt = vertices_t_shirt\n self.vertices_other = vertices_other\n self.faces = faces\n self.body_estimation = body_estimation\n self.cur_mesh = None\n \n self.vertex_displacements = nn.Parameter(\n torch.zeros_like(vertices_t_shirt).to(self.device)\n )\n \n ## loss functions\n self.reg = reg\n self.objective = torch.nn.MSELoss()\n self.regularizer = torch.nn.L1Loss(size_average=False)\n self.zero_heatmap = torch.zeros((1, 19, 64, 64), device=self.device)\n self.zero_paf = torch.zeros((1, 38, 64, 64), device=self.device)\n self.zero_displacesment = torch.zeros_like(vertices_t_shirt, device=self.device)\n\n def forward(self): \n cur_tshirt_vertices = self.vertices_t_shirt + self.vertex_displacements\n cur_vertices = torch.cat([cur_tshirt_vertices, self.vertices_other]).cuda()\n \n self.cur_mesh = Meshes(verts=[cur_vertices], faces=[self.faces], textures=self.textures)\n images = self.renderer(self.cur_mesh)\n \n #\n body_input = (images[..., :3] - 0.5).permute((0, 3, 1, 2)).float()\n paf, heat = self.body_estimation.model(body_input)\n \n ob_val_heat = self.objective(heat, self.zero_heatmap)\n ob_val_paf = self.objective(paf, self.zero_paf)\n 
ob_val_displayements = self.reg * self.regularizer(self.vertex_displacements, self.zero_displacesment)\n # Combine the heatmap, PAF and displacement terms into the total loss\n loss = ob_val_heat + ob_val_paf + ob_val_displayements\n return loss, images\n \ndef get_body_image_from_mesh(cur_mesh, body_estimation, renderer):\n images = renderer(cur_mesh)\n \n rendering_torch_input = (images[..., :3] - 0.5).permute((0, 3, 1, 2)).float()\n \n heatmap_avg, paf_avg = body_estimation.compute_heatmap_paf_avg(rendering_torch_input, ORIG_SHAPE)\n candidate, subset = body_estimation.get_pose(heatmap_avg, paf_avg, ORIG_SHAPE)\n rendering_torch_np = images[0, ..., :3].detach().squeeze().cpu().numpy()\n canvas = copy.deepcopy(rendering_torch_np)\n canvas = util.draw_bodypose(canvas, candidate, subset)\n \n return canvas, candidate, subset, heatmap_avg, paf_avg\n\ndef wandb_init(name):\n run = wandb.init(\n project = \"mts_tshirt_single\", \n reinit = True,\n name = name,\n config={\n \"steps\": 100,\n \"learning_rate\": 0.2\n }\n )\n \n return run \n\ndef where(cond, x_1, x_2):\n cond = cond.float() \n return (cond * x_1) + ((1-cond) * x_2)", "_____no_output_____" ], [ "ORIG_SHAPE = (512, 512, 3)\nDATA_ROOT = 'data/radek'\n# Setup\ndevice = torch.device(\"cuda:0\")\ntorch.cuda.set_device(device)\n\n# Set paths\nmaster_scene = 'data/radek/00008_mesh88_animated.xml'\nmesh, cameras, optmization_input = mitsuba_scene_to_torch_3d(master_scene)\n\nraster_settings = RasterizationSettings(\n image_size=512, \n blur_radius=0.0, \n faces_per_pixel=1,\n perspective_correct=True,\n# max_faces_per_bin=100000,\n# bin_size=64\n)\nlights = PointLights(device=device, location=[[0.0, 0.0, 3.0]])\n\nrenderer = MeshRenderer(\n rasterizer=MeshRasterizer(\n cameras=cameras, \n raster_settings=raster_settings\n ), \n shader=HardPhongShader(\n device=device, \n cameras=cameras, \n lights=lights\n )\n)\nblend_params = BlendParams(sigma=1e-4, gamma=1e-4)\nsilhouette_renderer = MeshRenderer(\n rasterizer=MeshRasterizer(\n cameras=cameras, \n raster_settings=raster_settings\n ),\n shader=SoftSilhouetteShader(blend_params=blend_params)\n)\n\nimage = renderer(mesh) # (1, H, W, 4)\nsilhouette = silhouette_renderer(mesh) # (1, H, W, 4)\n\nbackground = cv2.imread('../data/backgrounds/indoor.jpg')\nbackground = cv2.cvtColor(background, cv2.COLOR_BGR2RGB)\nbackground = torch.from_numpy(background / 255).unsqueeze(0).float().cuda()\nalpha = torch.ones((1, 512, 512, 1)).cuda()\nbackground = torch.cat([background, alpha], 3)\n\n\nalpha_mask = torch.cat([silhouette[..., 3], silhouette[..., 3], silhouette[..., 3], silhouette[..., 3]], 0)\nalpha_mask = alpha_mask.unsqueeze(0).permute((0, 2, 3, 1))\n\nfinal = where(alpha_mask > 0, image, background)\n\nsilhouette = silhouette.cpu().numpy()\nfinal = final.cpu().numpy()\n\n\nfig = plt.figure(figsize=(30, 10))\nax = fig.add_subplot(1, 3, 1)\nax.imshow(silhouette.squeeze()[..., 3])\n\nax.set_title(\"Silhouette\")\n\nax = fig.add_subplot(1, 3, 2)\nax.imshow(image.squeeze().cpu().numpy())\nax.set_title(\"Pytorch3d\")\n\nax = fig.add_subplot(1, 3, 3)\nax.imshow(final.squeeze())\nax.set_title(\"With background\")", "_____no_output_____" ], [ "ORIG_SHAPE = (512, 512, 3)\n\nbody_estimation = Body('/home/ax/data/programs/pytorch-openpose/model/body_pose_model.pth', True)\n\n# set defaults\nbody_estimation.imageToTest_padded_shape = ORIG_SHAPE\nbody_estimation.pad = [0, 0, 0, 0]\n\nreg = 0.001\n# Initialize a model using the renderer and the loaded mesh\nmodel = Model(\n vertices_t_shirt=optmization_input[\"t_shirt_vertices\"], \n 
vertices_other=optmization_input[\"vertices_other\"], \n textures=optmization_input[\"textures\"], \n faces=optmization_input[\"faces\"], \n renderer=renderer,\n body_estimation=body_estimation,\n reg=reg\n).to(device)\n\n# Create an optimizer. Here we are using Adam and we pass in the parameters of the model\noptimizer = torch.optim.Adam(model.parameters(), lr=0.02)\nfilename_output = f\"./gifs/texture_single_tshirt_vertex_{reg}.gif\"\nrun = wandb_init('torch3d vertex placements')\nwriter = imageio.get_writer(filename_output, mode='I', duration=0.3)", "/home/ax/miniconda3/lib/python3.7/site-packages/torch/nn/_reduction.py:43: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.\n warnings.warn(warning.format(ret))\n" ], [ "font = ImageFont.truetype(\"arial.ttf\", 20)\n\nfor i in tqdm(range(run.config.steps)):\n optimizer.zero_grad()\n loss, cur_image = model()\n loss.backward()\n optimizer.step()\n wandb.log({'loss': loss.item()}, step=i)\n \n # Save outputs to create a GIF. \n rendering_torch_input = (cur_image[..., :3] - 0.5).permute((0, 3, 1, 2)).float()\n \n heatmap_avg, paf_avg = body_estimation.compute_heatmap_paf_avg(rendering_torch_input, ORIG_SHAPE)\n candidate, subset = body_estimation.get_pose(heatmap_avg, paf_avg, ORIG_SHAPE)\n rendering_torch_np = cur_image[0, ..., :3].detach().squeeze().cpu().numpy()\n canvas = copy.deepcopy(rendering_torch_np)\n canvas = util.draw_bodypose(canvas, candidate, subset)\n \n canvas = np.clip(canvas, -1, 1)\n canvas = img_as_ubyte(canvas)\n \n # addtext \n pil_im = Image.fromarray(canvas)\n draw = ImageDraw.Draw(pil_im)\n draw.text((0, 0), f\"it: {i}\", font=font)\n\n \n writer.append_data(np.array(pil_im))\n \nwriter.close()", " 1%| | 1/100 [00:01<01:56, 1.18s/it]\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[32m\u001b[41mERROR\u001b[0m Error uploading \"upstream_diff_55863cc0c44c4cef5f7d6efd3922044c7c2bf7e1.patch\": CommError, File /home/ax/data/DeepExplain/experiments/torch3d/wandb/run-20200504_095454-25sawo2h/upstream_diff_55863cc0c44c4cef5f7d6efd3922044c7c2bf7e1.patch size shrank from 74125174 to 9052160 while it was being uploaded.\n100%|██████████| 100/100 [01:53<00:00, 1.14s/it]\n" ] ], [ [ "![SegmentLocal](gifs/texture_single_tshirt_vertex.gif \"segment\")", "_____no_output_____" ] ], [ [ "(canvas_orig, \n candidate_orig,\n subset_orig,\n heatmap_avg_orig,\n paf_avg_orig) = get_body_image_from_mesh(mesh, body_estimation, renderer)\n\n(canvas_noise, \n candidate_noise,\n subset_noise,\n heatmap_avg_noise,\n paf_avg_noise) = get_body_image_from_mesh(model.cur_mesh, body_estimation, renderer)", "_____no_output_____" ], [ "fig = plt.figure(figsize=(20, 10))\nax = fig.add_subplot(1, 2, 1)\nax.imshow(canvas_orig)\nax.set_title('Original')\n\nax = fig.add_subplot(1, 2, 2)\nax.imshow(canvas_noise)\nax.set_title('Adv')", "Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\nClipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\n" ], [ "filename_output = \"./noise_single_tshirt.gif\"\nwriter = imageio.get_writer(filename_output, mode='I', duration=0.30)\nfor rot in tqdm(range(0, 360, 4)):\n R, T = look_at_view_transform(10.0, 1, rot) \n T[0, 1] = -1.0\n cur_image = renderer(model.cur_mesh, R=R.cuda(), T=T.cuda())\n \n cur_image = cur_image[0, ..., :3].detach().squeeze().cpu().numpy()\n cur_image = np.clip(cur_image, -1, 1)\n cur_image = img_as_ubyte(cur_image)\n writer.append_data(cur_image)\n 
\nwriter.close()", "100%|██████████| 90/90 [00:05<00:00, 15.50it/s]\n" ] ], [ [ "![SegmentLocal](noise_single_tshirt.gif \"segment\")", "_____no_output_____" ] ] ]
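The objective optimized in the notebook above combines three terms. Below is a distilled, standalone sketch of just the loss arithmetic, with random stand-in tensors replacing the differentiable renderer and the OpenPose network; in the real model the heatmaps and PAFs depend on the displacements through the renderer, so here only the L1 term actually produces a gradient. The shapes mirror the notebook's (1, 19, 64, 64) heatmaps and (1, 38, 64, 64) part affinity fields, `reg` matches the 0.001 weight used there, and `reduction='sum'` is the modern spelling of `L1Loss(size_average=False)`:

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()
l1 = nn.L1Loss(reduction='sum')
reg = 0.001

displacements = torch.zeros(5000, 3, requires_grad=True)  # learnable vertex offsets
heat = torch.rand(1, 19, 64, 64)  # stand-in for the pose network's heatmaps
paf = torch.rand(1, 38, 64, 64)   # stand-in for the part affinity fields

# Drive the detected pose toward "no person" while keeping the offsets small.
loss = (mse(heat, torch.zeros_like(heat))
        + mse(paf, torch.zeros_like(paf))
        + reg * l1(displacements, torch.zeros_like(displacements)))
loss.backward()
print(float(loss), displacements.grad.shape)
```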
[ "code", "markdown", "code", "markdown" ]
[ [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
ec98da1df013151a785a21620432bc00d55a2e85
855,151
ipynb
Jupyter Notebook
sklearn_iris.ipynb
ayushman-rayaguru/Iris_ML_Algo
f3ef8930b8346de0ab906a90aae1c48057152b00
[ "MIT" ]
1
2021-07-16T11:37:50.000Z
2021-07-16T11:37:50.000Z
sklearn_iris.ipynb
ayushman-rayaguru/Iris_ML_Algo
f3ef8930b8346de0ab906a90aae1c48057152b00
[ "MIT" ]
null
null
null
sklearn_iris.ipynb
ayushman-rayaguru/Iris_ML_Algo
f3ef8930b8346de0ab906a90aae1c48057152b00
[ "MIT" ]
null
null
null
1,785.283925
472,953
0.957735
[ [ [ "# *Predicting* **Species**\nusing famous iris dataset", "_____no_output_____" ], [ "**Data Set Information:**\n\nThis is perhaps the best known database to be found in the pattern recognition literature. Fisher's paper is a classic in the field and is referenced frequently to this day. (See Duda & Hart, for example.) The data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant. One class is linearly separable from the other 2; the latter are NOT linearly separable from each other.\n\n**Predicted attribute:** class of iris plant.\n\n", "_____no_output_____" ], [ "![d8f9d10a-7695-0cc3-bc06-0a3211e78c24.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAArMAAADyCAYAAABefE37AAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAAEnQAABJ1Acd9LjkAAP+lSURBVHhevP0HmGXbVd+Ljp1zql1V3V3V8WSdrHiUkFBCAhRMEGAymODrC7aBZ2P7YXwkMDi9d81nY/vC9+zPCRnrXTLmCoEklI/C0cm5c6pcO+fw/r8x9+ouCcFp3e/5zurZa+0VZhhzhP+YacX+7q/+L/N5Ytvm05bFxwUrxe+05fytVsoULD7v2SyRtthkaLNZ28bZTevlrlojtmn7zR3rtLrW6mxabrZq1lm2xsWxXTmzbaPB1NbXTtqpk7dZqXTK6vW6zedTGwwGlkjGbTgcWqffsVQqZcNuR2lPLJFKWrlYtGqlYLV83oq5rOWzWYtlknp+ZP1+3zqdjrWV52QysUKhZJVKxbK1NcvlMpZM8lzfBsOejgNrtRrWbDat1d5T+jOP8XjckvGUjkkjzKZmLYup3m0rxMxq2SXLJksWz5qV1qZWP5a0vWTXZpmJWWxmuWbGyrs1S+2nrdNt2P5wR/m0rN8b6/dEtDBrDIbWT3csf2RklXWVycbWH5+zRudZmwxGdih3p73k8Fvt5voDVs2uWiydsPnMVKeZ6ji0brcrOvVsPBlQQkuna7o/tsR8bMnY2OY20VWztGiUK+QtvXTSyuWy5fM5r+NoPLBYLOb0SCRiltBfPp+1rGiZziQsk0mpDWKeR7vdtthUv/VcPKlyzOc2HE2s2x94G03nymeeDPSbiwbKOZGIWzwx8zaYTEZWVGFmIuRsOrapzucJ5ZgtWSpftmS2ILKl9O5I5VLb9JqeL+XL54tWyBWsUCx7vpPpyIZq426vree6NhqNPM9Moqg8ExYT38Ric5VgrhZT3eIZ8Uza2r19Ty82i4lOM0uKX9PiKwpDGpaZO595/dMZFTCu66qj6EwdR92mv68Eld/cer2edZT/RI0CDW3QsqkIMR5Plb7KoWfH46HtN3at0dhT2cZWLFStUl2xfK7i+ceSgaaUt9cWP06mNhnPTOxmP/XTP6L/XzzAqwQl8VWDars4quwKoo4fCTPaR1WNiY9TanM1os3T4v+MKMd1JZqWvCEAyal4RE2bnCmFkdIcicJjEknZdBja+csD7RBTm5uNRWPaXJTxI9csrnvKL5GPe54plSGpMsQyM4upWWJqy1lC9FSyU9FYZPX2J10CZdMV8WNcfKp34+InXZtPxfcURWWL6TiXeIit/DgbhusIRlKvx5QU5Iup6knxuqVoX/1OiX/ScZsmVWJdm+seZZnrNvRL6kWyj89EH1VIKktlox1VlljQGdQ9Hlf5lJYUpKdLOrqhdJSeUiI9+GwuWsQohM4TuogsTlXx2XBss5Fkeaz3h+IN6ctZVzIwUPlUD/JN6j2ni9qHY6ARBRD/U17nD9FUP6CZ87COER/EIR5H6Tp/Rr9dztRI4Tj263PqAq+qmE4zT5xrylNH6sKRa2J/ryc0o50nanmJnesJkcyDqupppKClE5PyiN7QQmlEckHpyS8uvpuK1rHJotzUlwShvY5xHeErjgRJvqcf0iE/dBflVb7oCMpInrRtSNJpFclTREuq5DSVXuf43J+hb188/MI//R7XB2SEfphKR0lrWqYg+1UuWr48lM3LWaUsPSTdk0hMxTuBFuioWKpm01FMNmNmzcbU9hp922v2rCXbMBqPbWBN10tt2cd2tyUbtmstbKSEMpFMyy7npM/Sls/kLUvM5mUn85ZJ55RXwnLZtOvwmGiSTKGzJpadL1l6dNymnbr190pSBzXp2ZkNRjsWL16wce6MNeYX9Ft2Kyn7llRa0pWJeFp0k1yIdknRNgV9F/RHdvsj6eth0KMTCWXcZVW0lqxwfzKaSp2Eth67DpWNmNDe4nnplalsBjp+rmfBAM6X+ovaKArYH95xOzQNHC7psrTKk5EeWyqX7MjaYVtdXrETK0nLFrLOvwnRoJhPWFEYoiC7Vy0WxBMwknhjLj00TahcCWEG2W7aoN2xlvTfUPLZVRzqvDeQbe+MhHda1u30Za+l313okU+l4UoJnlf7iqliY9k7+NKvwfNoBHSL6KPjfNZ1YZlKYcVUllwuZrVq1pbq4plS3gpZ6WS18zwh/TtJ2EhyMRonbCi9MZI+7sxawhw9u3plxy5f3LHtq/q9r9TEvujFbEYKF72gsqRUzJx4opRNWTmbs3w6pXYai1fLtnbqhB256ZRlxasIw0R4YzjoW7bcVXlKtlwtWBZsQptI8KWmZKOV8rgouuWlF6rSfXnxYtJlD7mFnpNBVzquY+Phvk36+ypzU7a2ZT3xM/ivM+won4n4e2Rt0bMlW0v+1ZUlW1Ks0qZ58Jzs9GRsv/r/+JTT90ZC4lVvefmDSSn3VHqiBLJWzi1bvbyuhhdQzOXEjEEgbN63aaJr46TAhnWsL1AyFHAbC/jYMGPzbs7GLRG+N1UjJS2TLaqiEjyBGri5L4DZF+OLHUWUiQMC9A4AdSRQBCBKi3h55QkwK+qYzaQlkCkxhhpfCm+sZyYYGSmylIQ2JYAC6JhKMQPiBmoMgCzAtynm29/f8+sonaJAU7VSt2q1biXVLZcrKaoxPK+8VXStWlgSQCxYpiyBWJKwVKbWSXaUvsrXm1i6nbJcR++Mkp5ufyrQ2e2r4UY2EB36qsdUQpQs6d0lAdGiRHMMgNtRo3YlVFU7eeg+u3ntZXZs+RarluoO4lMCPygChAxGDMyI0VSUNYlLkcXF+MKbMqJ6PpORwBYtly9YUgAqIzqhrFEQY5WBADN4vQUac3IMsjnRS7RMKhEciqm3wdjajZ7onlV7
iamFNjC+SSEQjPigN5JAAIpJS+0gxqbdxmIyyptGsCWYahH9SVT1HIo2hWJVXgmsje4DekcAdL2L4goGljKoPio36bphkrIaj/Ws2hAgOpGyG2PsoatoAI9MxYsTIaDRSO3dUxwLHGOg9ZcQvbyOAFP+5CDMUWoSaMrl4FT1Ig8cJPKYKw83iAcM3pT0iJRJz+rUlSjtQ1oEFDjCOVb+MHhMdUpKAaWkiGLiTxTvBDrpbyp0MlXZufaa177C33+x8L73vc+PcMRXC1yXqgw/FFz1ozx1mEmxSs86eKStETSwWFwG3o2+v6Y3RHeMeoJSqrLo+RnyJXHHIDkNVHeUcjy8BGvqgr+qFKScifqt1/xaHF5Jif4p8bDnSTngDT0J6ZTOXOWb6SXaDBDEe54aBmGRT0J8SnMkaBcuwSNePp2TFzZkcQygM9TL3+Y/8iWqPEKSXncAbIyj2scBqI6qunLmlVBHf1+FSoLKvOEX96DPImnKzckcIASt+eGZUyeO+qfXF/8FuugYUxqUfy6+dFA7Uf1xGORESMU4WJ8B6gTw4GXkYoahJ0eVg0ApQlb6j9wos06vlX0RAk0J3Kdc+nNdqTRVr7F0BfI0gS/hTzWggwXdd/7X+xMBEK7xm+teDv1NvVwcF+0esvFyUEzeddnXHfQYp/AIR7+sQJv6b27wHPRW4Dd/DmhJmPQ5LtqYtoEu1MVBEOfeSKF8HP3Pj38+hHcpl34o8gzxb/7gz3H7RcMffOg/uP0iL3RRb9iT/RpIF8Zdz+bl3WdzKcvk0IXKAt7zyoZOi3m84KCk25laqzG0Zkt2oyf7JaSAzsXxoy29PREA6iUiD6WrRtJZGSEUb3uliWxAKyqSAEhKNwHy+Y0uAwDHlGk2XrH4aMW6ezmbCKSNBZx3d7ZVhobsiPRWGh9WQE1p5mVbSItUKHPQGUqPo5QFesFBm54VZ4gvBEqxO1MBWvF0XCAtrjIj764pHczpoP8QRf2vSBvQfvAR9dZxwWPOj97GtFWIKsi1c8SfE1rbeVVGJOLloeyZcKvuyRZiO2Q3KEdGep8OHBoAW8Z90pypfgm3b3SOcJRNAxOo4agzOojOL4owwi65/eJNIunpf+kAmiPoKf2W4o3DXKLPHGWrI7+hSWB4Likv2aO0wCUYK0csCO/IgUinSV3ldycPHqAsAuduM+UMQwccgjFR9ND5XM+RsLISPZF1l0KvZU55VIQB6hXhm1JFNm+oPPJW1G9iXHWFz2iHiegXS8gxo1wC/0m3dbRRJOtqsylgnh6RguiTc5tKhwMlgDY4h5RgOhnK5nflNDUlI23x95AWE1aB1jgBc/Gz2k6yhI3KF7NWEA0KafhR6Yn+OETf/JofVqI3FhIvfc0rH0ynBb4y8ubEeJlU0Ur5JStma8o0I29NhVAjTic9ATUVKtWywVReCuBE6Dol1J6aly0zXbJsTARKl60AUBT6jwvU9Kcj6/Tbtt/YkfC0nCAwFSAIoRwLlMCMMHdSCqGQzwpsykOhgUVQPeVC3BdQ7Qk89EUUCCsKilkEKnhfwHKoewAcAARAhV5HAC3MBpBdWT5khw6t2VJtVWC5KqIW5dUWVcakA9mygCy/UwV5BRWVr9yxSUbAPSWw3ZOg7grs76Qt1xWzy2MajvvyMlpm7ZEAF+d6dj60eClmuVUBzopAl4B/t3Peuq1dGa2YHSnfZnccfbXdcuRlVi+ty1vMB0WBpDtZ+A+6ICBY66llZMFTCQFjCWNSZU2J4VPyxHMCs5lCSedFZygUAgAbgAYoAGhmpJiKebx3gVUcA4QTMKs8gyJQnRICnnpfp/JQBcrliULvEcZW5YnByFKmDgBFX7xWfiOgeJAOwhfgFMBI76eDc7hb7w+lUAZ4Y6IRZQQU85xgp26HXk73uhfCRB6Dfs+BIsrDLYDqk5SShj9SKiu6YiYmcJCLwlJegPe0A9ZgDKgbvU9INHlCD5QUCggQj9MTAG2koFDaACi9QIJ+RQpPwJly8kxCioWyY5ygMw4MQuu0hFbKzMGuXo+cBdp16OBZ+Y369oY3vFZpv3j4WsFsFLjmhtxBpMosujiowxgpoty994xnVSfArNhNNkflxlK41tJRD80BOPpN3SWIfqQ5PLpV4bLy8/pzrnQwAgKv81QoAz2j5I1RCzaE80jpoig9Gb8cgBlRiljtTVRy3p7oCx6mrJ6R8vdzegR0jCgBbzh4V90Bq+RNjx1ODSB2llSeOqe6XAuMyvsYA52rkhRVIhfqqPqrRjpRGl5WWpmTUH7OYJmQKeCCC4vShB8OhP1BpQkwmQFcHITrfViEc8CsDBi9/zzrQEaEDnkqcE3JwJ/IGucHj/yFc+inxxfvhWe4wTXaSXUXwcYqA3TTvwCwOVJnvx9qhhFbkD3cJ43oqBNYhfeVhUecQq4TIYEbdbUl+UMC/fP/KA5g14ulG+QXGEHX4AsVBJLSDmRIevBqFEiHa15e/wsXwiO0i25QHv+tK54XZTuQCO8pofD+3P7WDYLZD/zOv/TOlwkgChmfDtQmc8sXUjLGaSuW45bJJhSlHxf8FfKGVjHpwphs2cS6shsdYgenmE6I0NZxRjEovNqZXk6OlJv8JvANzreeI82k9C76F6DjjjTABHoDLpQvnTDwUC5Rt3l32TpbOdkR81HLra2rkp+pVZaWJB+yHbGW5UpxAV/PXWIlGi7ALGlTR+juPbN6AECrzFQnlct1nXThaKwyDlS+qZ4LtgNdRGfDTOkhN3Olqdc8hhYK/Bj1vPI8wc95X5HzKCakx0O+oTy6Kj6ks2DutiZFftLBPuoj3Q49ctJHMguB56QDvOH1NpINQGQUwwEmoxeqb3AkSFt0ULoAWewfjiW9hV70a4HyhpI4n3GuS+hYh5MqP3epO3XmlwNTtWtStMZeZVSndJaRRno5eY52xFZhV2UrY4BHlVU2cyRdEde1VDIrwFm0fK4qrFQWIC44JkARwwc4w4y+lfI5q5VK3jmYEl2Gsll0ApaWalaqVSypfKkrveTY3elc92VjcZogFYJHmV3WvbmyirKBiYJ4gx5UPYdO0q0gUGoZKbTxqGv9Lr2yDdGvL9oKAmdTVhQmSYmfkPmR+B56Yk8dyOIMMnKsvON0K0v5f+OrfoCUbyjEZ115jEOGkvFmhtYb7VhntCVw1pTgydDroYDMBTjldc3pIdM5w8riLQlKwaq5mh1ZOmy3HL3J7rj5NrvtltvtyJE1H/4ejru2JyC7s7dlTYHZkQw6wKMEgYvyGJZXfbpAUmALQXEFChuJ+cWiDljGAg704DabDQepvYEIJUbtCiAMBgClrso+8HIRYGAAHh4PeZRLVavVlq1aEZAt1OU1V8Q8JY85AcOCwG02W7WYCG1qq3hROeeUp/4GM3nenb6NtwR8thWbAiYAPjxlcRY9d0wDmMfl2Wb7lq3NrFATG+u8P96V133JpipnObFq6+U77VjtLqsXjlg2lXOmTknxYCpjEkj6E0WaEMUXDK8W0vKsVSwUJD2yyawUUg4QzvAAw9l4OWF6gQ/3iF78xuilpbn
wSD0ugJpHcS+/UykESEwlBTS1kWglj01xDFBv7ameiyEv0Xxvb89azU4AaCopShRDmcioHpmsTylIZPKWVJp4oTO180iM3O+STtv5BZZPCwCnUwz56x21EXRkaN/bVceJHBuUgoNKz0m0FKDH0UpLcHK5tDs7pXJOUeWVEORzGR9eQ1hS7tUFpUUMnneoOwHaoDiJPvVFfBTFUDdTPgK/ij7ErbLGAedSHnGALEBcwojhxqjThggmdGIopSUB7nTl2Ai8k8dA7QJgp12INxpEwi8z4F8ZXGl+lYCSRwlfC474onA9QWi/IMn1gCHTRVfqC0PCK+TkZQHcKbnrRiYc6VE6kLQHN6q6zJGXaFOCP0q6XxHcEByIegjFo/YnfFmNvDyeDlFllj1yhaymkGFWxN4BqHVOD3UEXmfiIbRYqAZ0WtTRC69f4gcMHnnSU+ikw5EB7B2I6D0cOs5JzMGgCuN04G0s2SJg8wmBToHnwnmIoZdT9+nxAii4sQ/DeyHybkjyQLLX6AT/XZNrxXA9HAlRnhEYiNJVCn6fuoT6XD8SpdauRdkbjw68F9HrruehkSe3CItsPaDLCSG/64Gffu1ghRRC2SgjrKh66LZHpenpKpISckf2XoRF+UMMPEl78DvKlyNtHaLooUJf/x3ijYb+rGmtwZ7tNq7a9t5Va0q/TW3oepQpPAx9h95R5eH5SNco/4kiQ+3YrZ4ARbuLzpMdk20ZSXdAUHfcBFhScenIRFaAgql2RSsVylZWLORLSkNONLpV6Yx0jo6hFzL0aAbdFvFhxJPzifhpnLZJP6PfcWu199WuDelR2ceswOxE9mSW9mHlDPIPzUWTa7yjOng9oiNYQNexUyk9TETnBxDDMLL036AlfddTtQC3YU5QIob0CWDG6Y0T6JKcptLoaHg25Oc99DTwV0Suq2ROS/KnzUI7BqeiO+haUzbk0saObWzv295+y/abbdW1ax3Rm1EGmGgqegC15TuovNLh+u2jkdglQKVsBZ1o2TT6Xzwm2zgShsExiCew0SEmpUsAozS1qAu1VUaYUMynckW0Q3FyDLSTXZZulaDqOnVJqexxpc/I9Uw0k6PTpyNGNjROJ1VZ6eZU2rTwgOx+gmkIFQHUmlVLy7ZaX7dj68fs1E232K2332a3v+QOu/3O2+3m22624yeP2dr6EVtZqVu5UnS9NhDPMdWFKUzphNIkMpJIWcYzYZWh8mdkm+NQNg3nRHWCm1RkOsEivePxmr7id7gX5Eltwmi58CRHPWJZOXuliux2IeMAOy+8xxQRbGwaG633oaFTSnSk2XGkvpbAtFib9tU8gySg3oazrnXH+9ad7NuIyWiqAGAgDFurwKqUAyYfJlMDjeTNJfO2KqB4/PBRO7V+Qsc1O1xfsaVq1efAZtLyVgWsChIeQOzKyoodWjtih/Xc2tpxq68csUp5ybv4EwIKWCkq5MpIhSK/kcABYKcv7wHQ4eBAwkyAiBhn90gcpOElFwRgayqDGrMssCovZi6mQFGr2M7EMzFjNl2QJ5LHuts0pRs5gdJ83+YZpR8fW68h8NpQMRpxGzfnPp+UXtiha3iGA0T8+EjviLkrRF3PqRGnLYGaXTHqWMJRtPX6bXbz6kttrXaLlQT+nYnQGl8RfI4NUQZYVZGSROgFTB3IZuVV5RxAMjY0kYD7XFMptzDkHXo5ER6YD5qEtiNGSmORDxBa709nAl2KOBmAsaE8KsZzUTYxOS948e120+8BmCc+pMNwTcJ7fFP0+EoJxOj1FOhDUYxHU4FYtVW7JU+w7YAfQcdAYWSYcxbKGHjLQZ/KDyCkzelRzufzHq+BT/Eb3iMUA6SX5IXWKlV3hOApnuU92p56o5i4hgHICWgjuCiv8URCqrpyhHYII4qfciG4RM4xAhzhLfekF4RzwyilHcWRNFGgB89RN3m4TMFRXRiGRLSRGe9l/Crt/T8tHNADqoqCKqMQFA8nki8d0fFjAQA3ht4uB15U4FmPotGCBOhnJapIkhKZCDSEZ2UGheACuNI9wGb0ooeQL+9E7/1FAeeO58nPe1X0i3Ne8zIoeFqQVfQNUyt0YQFkmSs7E1vOHNSqJfQckRkjRJKgJ4hcoqJ4uTByogm2CF4lzsTTTG+Zwh8T3VP063o5gIZFAtCJAnmhABRRPcMxMnKo7Wio1YPKMRej8MvvykCHtL8iQj+IfiCENg3xOqC9TveoDAdjCBBP/5SPA0Cf96YyTGTsUW+KYucAYCfIR4joUJLwZBYx/KYOShWRWgSuUZ+Q7+LigRCxm7+3eFf2zc/psfFrETkVFlUK1xeRAEWulUERmSQe/M2RAAXJL3BXiDcaUlK90m623922zd1LAk2bclxbKvdY6g+dDe9TZuVBzclbZ2MRmfmj2DUceDpnGo2GNVv7sml9J3Ba79C1kRSQTQsZp5UZc2ML+bJPjSuVKq7XaFdsIOkEWzjxePAc3oFotO1U7TkbS4fNcjYaJKzX7SvtmFWqJQEbAdypAIlIM5m0LKEX4h6hpXSCEkKnoYN9mhe62M+hJ7ZKtkm4gOloKbUXzDKRDRkOmEsrECgwOHMGUhQInUyxN3Q8MdGdjgvZgQT2LsSDbRa1V2QniFyDnwiuW2TbGFGbqgLosO3dpm1s7tjVrT3bb3Ssq7r2ZVugCelEaQK+rp8HuSF97AfzQOkUoR3p4Ov3u7J/LaXTVvlG3q6MnnLf9b7bVdKipQH6qitc5ddC+pEssp4iLoxD/jO1DzLWH4qjehM5OKMwl5l1ABM9Y1nZzKwgmOysMArTA0rFJV+bkc3QGxs66sBTR44K2J44YbfddpvHW2+9xU6ePG7r6+u2vLzsthC7SJGoK+cRPuB3ZIM7HTlZLRytjuMteIw2cfoIQ0S0+otC1G5EWBD6QM9sNulTCdLSzUxhYEoFtM4QxdNEeAj+plzkEdfxawnxoUBHrzkQahcBxdRzAaneSGBW3mdv0lQlxWBqIrp9meNBowEsEJw5Wk0kz0jomI+xXKnbanXJVpdqtrKk37WyHV5atuVy2VZrdVtfPWyHFVcEXpfrh4XUl6xYrjmQLetdhDaVzioPlUMGBUML8ICYEBFPif6LpAiV4CYT3Q92yaMKnCEDGALMAnZYaBTXMwAVJsePZCgkklJAahwBXNklKRsxaaJrs/SeTZIC8vLGBn0phUbOkr2KZWY1KaiMdQX8GhL6njxjhjiHotc4owSK8sxrImie9MUI/aaNuh0HzuX8qh1dvdOO1G+2Sn7Zu9kZCpnjkahOdOszJJCiy148H67hAIjJkAEUmIBjgnrIU4+lc2qRpEAji8YikN/1NkHxRAoNMBWAbOiFIyJTIqfTAmPcB4zJ6E5EhGajay3xwszn48wEZDu+gA4hJt0A3ugNDw4FbYJC8yDFEoZkJJg+XUHgVMB/KmlluBqPECAAyKOsDvgmUuIKQWEGJUmgDWFqB6eybDC256XbPBOXEkAQHKxKoLOiB8rfPU1F5utmcjkrCORiAADdpBH1AkMzyo9yzikNYgb60iYIoRrNeZxpCCq/Oy
xSxEwloGeEYUZ+iwm9lx5eDeCZ3hQ8awxNAOc4GNANuWGqxA0HtM5fFl2h//nof5wuslLRFspFR85lpaAlNIcPDsYwJz30IB0MPB9FmT0d1R7iGUKUNoAj4jGCP7/gt7CAiPyv1/96/uH9cCQ69FE+oZxUldyifPwd8QFH6M51wjTKQzT2nljpQeavA2pDr6yexdMHaCnyFrRRMp6uR6ULr3pcAFbvjWRO6+L3THKB7DjopwxeLq+kUqIAHAlSxtfOw3P6X3/idRnwqRxFnYlAi6jy48TiZJAwj3v5Fklfi/z2rAUoeF//+/O859ktesT9eXILACrQO8SIZhFN/Sd1UbsTw7Qaoq5x3aNn6sWjuNDKi7oIPEubfGXgUpSHl52LKiAlj8pBeeEbiZ8MpqJkiAWpKEMHt7pHpNed3veDkTofYCulyQ/4O5Qnqm9U92udJP6sjqoENu9GQ22laNW6dEtZuolRPOaGK7IgMKkj88XpqfQyQX+lj36l88SHwqUPBop9FiujA+mcGbDuQoALXSEeE2ywjPQZei0nMMti2TJrPqTLqrKpLOINupjpS/SgEfuuq6ijO+Sw1IJALMSidzYVy9tkmNc9gQvpVuSqtd+yUX8sfcp7W2pnuiNUYgkezqTbEwCq8uNIT1vodZPeUytik91WyX5hw1gWOBn3vRODOZOTsewJz7LuREeA7IS1JhPZq0nPwSIAkM4bOssicBXpfOLBwAIu52k4CJm5JtPiV5V5KLvXaLVta2fX9nXsDaTzpcf72B2mQagN0HEScaUT8TjvKg0JDTo6gDYlr/S8x1hPQl86dVgkRVsxvY2eaGhFRw1HFwqOKC21O+/6+wvdRDm9E40fMLYEeiIDPxjOrN0ZybnpyeZO5OCMrNmmBx+7I7omcpZJ5C2bLIjORUXZLJwc2T7sW7FSFn6Ss1MpWkX8sXJo2Q6tHfIFVaWy3smkLZ2hs088IHsaFtVCXwiHHGBXWQ8ztYb4odFQ1BH7HwDtYiqgeCu0Syh70B8hRmGhJtQuykt8miJf5sn6tAFBGR3p2MxnMz6tgFjKF6xcYASi4LY6Cc4kiy9v+hcN8ZgEaCJmng5hpJKupNX4HdttXREzXLG9hoBtlx0EgkdCg3mPk7iBeSbMuQAI5IgqRFG/qwIHdQGJeqlgVYHTqgi/UqnZmkDsoeVDVmcRljyKlDwN9zrSzP0oWb5QEcGzasCEAzVWvU3EnKD7clHp1ZYEmAVQWdRE97SKAwAfDgB1DONiEPACRCDlX6mUvFeW+ST0+EZd4c5kSQltEmZk1SA1G0opdUTthoSV6QHyxNojK0yOWD11THU5ZlmVr6939sZNayOsnYkN0xIOgdl5UQJYFnMmh3IEGnq366vD44mMyrBmy0snrFJYFTNJ4MXLGB9yZwU1DOZMpsZjnpEqJVlgyFMtJJAUR/HQa83uAJmcDHNSimuqfK4D2QCcApDlyNxTQJv3oqu9oiB+9Ahz8my5eEh5qt1NirJ8VArzqEStKLRbsHy6LqBHeaXUpIx8TrU8LHZF4HdXNKLHnPlEpOVAVnEgh0FY0AVXjel5sUhguigfPaIoMSICAthmSgjgFFCIQuNa6KEtWiFT0HWGXgRxlB7KGnBMLwbDJbQtzorP30ViJGjcZWoAaaFJoAn0iICsHvK8WEiHQiDmpRyYAgEPuTEXXzDX1XnfQRbGQu/G5CTIf/KFddlQZsC1L7YSb3lvuehCmwBqMQjwHb3x/7MC6uS6SvnzASVzLQDsFoYCpe38qGteb6dh6NG5ZvRR9q60SETvyCBERibSY/6TdBXR1RHw4tpBgxTyTLjC83eJ/oPf5KHzRcAm0JJKKrIP/ixHYlQ+LhPIk6kEGI+pjtH0AmcJ2XV+A3YXuvjLAkl5/g7mOCqdoO48ei8sYFaZ0YMt5l6UN6JNiCGQOMreCXDtemgDSkvPjuqgckJ7B2sJXU/qt8q4eF0P6N8iOpjTkXL7MSovhXPikAdpq9wQh3ZYhAjIEQ+WlTS8Pgeil1kFlTZz+ZIUhT9dj3uHgSJ1pov7QLbh3RBDmaP8eU4GfHFPd/wq51EPG8/SC4ushsh59Ft5L4ArouzTSKQooUdEC+jqNImyVLheV+XtlVvUecHL0a4OB8HtjYTDK1VbO7JsR9dXbH1txVaWq9KbApwFFtlKKahgAbxQoGD4gxMHsEzaQLZ2CKgiX55Tu4/nzKdnUXVLOpJOkpnsYtx1Wzoloy/9hx4syuiXinXpnKLXdSy9MmSkSSCWHVOG7I5BveDdMByhugmEyEbOJnL2F7LodJV+7QpIb2/L1gnsMbqKkOjNBT0c6uma+EZpB3rSjoyQRSNTuqdy0DPqaQrM8j76dTSQfleUEjQwhgngmkBsbNb140xglnU4s0lfPDSWrZuqvtKp9NzRqaP2BzjhsNGJAJCOOmqCndMdepzVtA5L+C2jw44PA4H3Vqcr+zhwgB926JFd7oUeWgdmqofvkBJNmwTTKDd6pd3+UqY4C+OFENAjukbnRn/Y80XHQ+YIu/0ijZAmdA/6bUFnUZAjNj4C6BEoDHpj0QHEgsDe0DoCsPTMtjtja7UHOlc9VTcwGdPbmG/su5DAG8JZRQFVACxTKTPZYDcz+ZTlSwKJJdmlbMpGKl9v1LP2UPSg80iOos8bhvcUIt736Z0ql+8wIOzS7ih/xb5oxtoSaA79w/RCveNTRuSkiL8gGdM3lKinhR3x6QNqC6bK0OMKHUifeqRkL7HtTA0sCqsVC0wXDOt7MKEqjBwwFn/f+LQ8QuJ1r3r5g/G0GKiiRJfy7mUyEXjUE9SSB9dtM2zcVyYDi2W7NpjvWUtgjUVCKDeG76vZFatlVgQwi5aSAPtwMmyoQrEifjwYW0Gg4ZDAbLW67ACTFe/sTDBQPvT+EhgKR+B433v8VHkm0TPEDnDICBjTO5wRsIlW0PUlpGMfgqbnS6CI+RiKeK+ADHrs6KllpwZWnPtKb2JcaEsClIqxNRTAtmvpQtuShaZkoyWQyITomK3G77Cl5KoV42VXHnujbWv1923cFajpxWxUEohMjmya1+8iA/PyBLsCut2O2ntiGYHwYyt32c2rL7PVylHLC4z6bgKq5xCjIbAThm/w/uj1FDAFAM0Ycp9YSt5KkkVeAvuMcc1jaXSdygaj0bOpRhejwSjOZAoYSBjbAaHoFYwCczthqMC0vEN8+pkr9ge/98f2O7/9Ifv8Z5+whz79RfvYxz5ply9dFR1LduKmugMyehFwEpaXcUQEdsXK7B4Rl+FEHpSsG3kWcdEbq//0BOpB0ijD54tbKBuC7R64hEoClYxnPN1CUQZBgDYpzw07QJkZnqBnjHrQ3l4xlIArCQSEdKS0qZOus/CBuvkqbPEQAgQYop4DKdZeT/wrkAm9fYhDfJWAP5i/I17xBSvUDMEWT03Ufj0pASUTyo0xAIBQZzfFY5knVnSqHFLALJJgkUYY1sKLlUcqgMJK2Whx3KseeKXn8WLhFx98n/P3XxQxIuiPKB686b/d6C8iyktHtjACyPEMVBQZZEz0P
FGGcKrIbxa90ajMr4vrBeSRY+iRJYSeJtozzKf3JnBdxpAbXtk8JZnCtgPOPF+9KzrNMVCSbXQD5XRPH7JTJvIRjX1ullJPQGvdYkFQjCEK8X3oFVQ7UAbKSjrOWLrOgSN19HqHNL3+zm8yHpIRtI2rHOREkTzY+okOOo7kwXZlOEzKWqysOiry5+hNabp+UposeGOURQXXOSMg4ZwS8ucLqjB0Tk8lDNGVDrwiEfE6MC+VujJigwKHBv3B3MrVgoMfsaH0Z8YNof55gcVeyk/vKX96dgrF9IKOyk9l83LRXh5Vj0W9A3Ms2kvBVYaiU5o21UPQNArQmXvUIwBneMFv6HpoH/7zvBftSHu4U64QsnPK8eq1wIgRvwOlRI/Fc14wHlYD+SWecZrzI/wm+vu0KffIVxcpK0dep2eIVdNqFV1avOSVDeXjp7cJz6vOP/NDP+/3Xiw89ex/t4yEKpugR6lo9fqSrawsWUWAlqFUAAQLIL0oRADlNCXjbAJ4c2vL3rUaXekinHk5/gKjDLfPY9JRArRxAVgvEDQRP7Gbj+tBXUO3zhN5JxHTpEYCJxMJL++kM2VLpAuWk+5lmkIsVlHNxtJrA8tM1i3ZPynenlt3vCFAtqv8m+ItkdmYuCprnZa9SlIOGAuahWKwkIhpZOhgp5XyQ47Qf3RqIf8+DxQaK9JF0+/I5neGlpdureZls9GXM3Z9EfgTjhjLxvnWk7rOa1NAi/RmgrTGdJKpXZBZlYOFaN5OOiYUh6p8UoolDcAzRmTV6qKxd1DpOVb/T5UWzg/T0TICo3m1CdPSctL3jIL6GhIYTrJIxwNgeSb5c77BFquxhgBW6X7vFJI+bzQ73m5NXffpDbwvAnn9xfwJlA3iLfvsuhLe4xoBQuo5L6vzIrIITZS/g0Gcq6BrYBocrrCLgQCsrqBfAZ84C5P50MuPHYJ/Rorkw3aU2NFUjq28AMJq31FXPNKzVrdl+72GdYXjWIxVFs9Wl5ctywJytS2OXbvXsU6nZTuNfe+EwUayoBo5YnE29jorDDYei/9gbDqQREvqqR+qHuWXfsWYqExz5TXpd20i8M9uU/CTCqomTQtHqaJ6FpBblAPImpdsFufNvMfWd8cQf6MH3/HKH9SLNxYSr3nDSx8s5gXWskfFJDK6ia7Qs7zEScu2WLi1cUYM0ZViVRUyIsxkxzo9JpDLcxHYzccKVs8csnr6sKXiJZuoYYdiOhoWvMjka04y8lzLtRWrVI5IQEsO3Fjl3xpuS1nLQ6PCIiqLvRiCYegnmZfSEKHYXoqpAmwVITahuUUsgTE8r57eV6MJzlpChEiJaVPyZHMpgSPmyWbaInhGbzD/UtQCuAohxUVMVnAn5SnO52tmlQ2bLD9qPQloY18MIYFMD+p2ovpalT9tI5VHMM1X4LVbA2uLyTvpkXWqW7Y/37BkbWK5YsqaTDzf3lMZh2qgsa1lX2anii+x46VbrJQpiZ8l6FM5BjJAGYFlk9fs9RiIiUb7YkAWXXWcOdKqwyRTs1xBkd0l5ikf5qSxGWIKCwE6DrowQJCdqQpZVtNKQdlsIOao+7ya/b190SHuIL/Z6YjZe6LzwD72wY/Z+c8m7IFj32XV1JLySdm9d99l3/Nd32S3nTLbuLxhhfLYLp9L2/nnS/Zr//4/yrAetZe+7Osslj9rk648TQk/7UKP3YTpFwpTeW4dKS05gi6QkgaRUYpTQuG9+mL4lG7007tWShy2zGDJvvCpx+yxL+7YC0+VbPdy1TbP9O2mO45Zbalmly5dkKeftPrSbRKukm2KN5MVUVmO0kiWfSi69OQ4DXWO4pvOmUssvlJxcM7Y644ebLiHedLpVEF8kJFg4uRIaFlMtwC0KBqpNMkBSkllR0lLIWEcmbeUTZctm6paQU5cLC6hk1DSOwxoHY9U71ZLeYb5YmxfksmEfSCl9uyBB14ONV40/MJiN4O/KIg9/sLg96TM1Siqg4qPvnEwG85RxP46yjNgK+cdwJvPa0ExOULFOOmo366Puc9PjAsp+M8FSOIneWCElCd72wJmfTcBbCPKn7YnS72v5vdzAukRHHjqMdICZPKKA2l6uLw8uiidQhkBtD4crutROthg6oiyB9BSVy/P4hxAhx5e4GJY0ukguxBArI5OB2gidQR2A3ACJoOh0XMAVb1HOr47hArpPYSeucqk8lIf6hIFgAhAEM7iCLgVe0iRy1iP2DpI6SuP0YgeWxZjSB2VJQN7HTcipXLRWu2+j3Ys1coyrkPp0bzzPrakUi64PgXgsTMMziYk9V4Ur6HK4IdQLmQAZ8LDtUNUXtXDfwUQ7A/wjD8nWkePcY9/epz6wxpRlf2gGwFIhivOA345PMSz0IncuOFzokVDX4AXbuoZRUAGP/WYk1jvBweCdKJE+Se6+3loD56NOOPadl8QhYeULmXzlfb+QMx++gbB7Nnzv+OACFsJ0GRE0rcVKrG9ofSu9Cvt4HNnvXzKZ8LCnpkNuiNrq93bAnqMOlKe4HjgrOtZlWaiZ+O67hNtkDnK6fzjxVRyGS9zXPrVZRldJxkYy9li/n6NIVrWY8TEA4mBeLRjyfG6xTonbDoYWbO3IdCyJ9sBYJH+Y0Qro3wSbTnusls5ttdSGVQelz2nt/gIYfHrKq+uRfwd9ThCSVgKvcvWXzPFlGidEy1ELsmkgKuEbDLTfRYcS6C8VagrAFa/06pzGj2KPlWaM2hHNgJWvj2ojqzNAAX4riCqt8ur8g1TAlQ+1Qft7QuGM3G1jzCKwBK7IzHcnhLfBMdcNAWMO48oE6+DzpQHva0sKPcpFYyySfBwKsfDsbVkU7yneKHAsBchCvQD8FhUrSM7TSgTryMFxKn0fA4E8uN/6uh0VL2ZXhKeDTT1V0hHlwCVxroelZ9yuww7gXTQKRigN2bxHfau7bHZalij3bQuo7eqD30LlVrVSvVlyxdwgAT+xV8sWm7puf32vttwet+ZEpATpspmwzQFwHTMquIV+NqVvYoGXyi6fgRgg+NUxrHSED6ZMs1E+Ya1CXLYfXQy0Cypevl2mkKxOILk4/OflfcY+qpqb3/lD5HwDYV4pbIixF3Ri2Ie2kfECYwKS/RlA/GInFW9i5sVlMw5xbChuNm+y4dS9B69YkRXGmKWiOEJzmgLxg89hSE6My8iPQ/0ouEZMOeS1XVxgQdLZmXH0pREZUr48HyIajw9x6bSzGdpR5vtw4zyEHxvs5HKgCcgAMsbMBabHXucJSXA8jiWBGSTV23Qk+DJ+jcbl2zj0rblU+tiiI6XK5nICJCUJBDsgqA6p2f0wdpg2rZ0PifvteRzcnoCo1OTxlKIW8XqAu/5XE35J6XApipjXwqtK+bpisFYVMWiK3lFYraOvCPqTa9XMp31IXCG3+lhxmg5Q1yjMcYxtFOoE40faM4G7ygEeqQnokGn03TvHnq3WqKRhLJcqtjZM+ds2FuxO+99idWP5O3QkVvt7d/4LvveH3291Q6nxfSn7PKlHWttx2ypcottXN21z3/+8/YTP/HX7Gd/9h8IKIp3pMwBcbQf
xoc8ojbOsFBNZXSlrTL7EIsPWS16qlT81dKabV/q2r//tx+0hx86bV/64vOif8/uv/+kHT12SMw/tdXlqm1v7dpv/NcP2qOPPG0P/vw/tV/6xV+xXHrF6Uza3vs6lqJUjIZyiK2OhFQR8I4CRwnRS4ghysjTBLnQ88wCN5+WIcFGGYXefHpTw5AI9Qwx7TFfIIbeXXotyAu+9Q97tJWnAC3H6CMYE4EMb6MbDrTtXxIh3l8UCWjKxemXBZQi/KJ7ABDHp+hElKEu+mpfxWtzy7xXElryjNoSAMlLi4CIu5ijfzGuADylLxZY3AtGJgqRrEchnGOwD947eD/wj+slWF15h15aXV8YMxeBRTnc0KocboQVw/SJBTDjtx4Pz3nyCqRPPjpdpEVkcAEQG0WxjUcvC2WQvnSgIcGkt9kNj7dLuA4/hHKTIDVcvCseIKuc1BrbMdHDClfIB5J+cVShYqZsd6ejc3rD0rYveWBKVm1pybZ3WzIqKR2Zd2jSf8zDlLFQ3eD/+sqSdEtH9cPwUL5AAwL5u1yoDBG9iATKd40kCqGeqoODBuRWR9HEvPde96IITaiA6qFXrgXeDzUlj0UmCoEGix8HQlSWAJ5EX2wlvERb6ohTFmcubUq6Ja32TCr63FrVDwst0Mv8VPL0vFVmd3QUI12AfYuY3susqMvh/AZDqRKmr1XqZavVKlapFq0IkM0wkiSbJiIuOMMjmJseOPbZ7kkPjbqyX33AGI6GwFU8p/amNwr7EBfgHcvRoXdTdpD9ZwV6Y7OE5bMFq9dWrJBiL8685VknID4Ju+CMpYOGsuN6dyb9Pu/LNgmMzWVLaBgHP7hSkm3aDE9PdQ6LVwUwIK7uj+Qc0ZHECKR+LuiPPte5IqMA6ER33A9EH1Z2XZmVvWLKBVtGLj7iI+Ghdy8pkI9NQAadH5AL+Eu2OppzCu3orcsqvbxsfikeYk62l2uU03eLgE8UwSV+zn8K5KfmdsqTIjah15dOxt72BNaZZga2EEZgfigLX12/ibHpoWVfWt+zVsRBj9Gzy5zOckGyp3auL1XUDlkv80C2mr32+XDEQPbDh//1TpgeAS9TNhVGPBnpzGu6E29LETr46J3o4qOPiowYqkZuS7Af/uEntyXCC6oH5fYt2khHIcg0U/zC6Gmnya5PAqX7+7azvWc7O3u+ExF2ib2K6aTxbdIOlBUB8CkXTCOQPaRnuk/9FKENbZsSyGZaqUf4x+UMuaIjDSSmNBAk13eivh5h/jh7xpKHfwuADjjVC2eAaS4ZdiEqyBHMyyFULDCingw7CbHDAeT7WkK8XFuzTL4i4yaPRgJmc5hO3gVzRXxbKBYiUXEIJwZQY/vUHClxVuUx51XQy4kLM/lcICqpxH3IYmHMIoVGI1+3QCLI4j33kgRAAaGu8FyaJPCpsk2Vx0iKvS8j1pNS7TE3U8Qf6rmR3gWktB3QtsW4LQGX4JW0u00cBHkHYk55jPSWMtSiBJUfZVadpYCG2Scslt2Sks7KiGxZY2fbZr2SJUdLEoCWhBsvGJoU/OsZhaWypcsidr6vMrQsWyypqCWBagHw8a5oI4CbydtS9TY7uvoSqxbX3KNm0Vij3be9Vtv25AHt9/YEtDat1d+VYDSdId1jVUOyxRVD4O4NiRsiBc28lTmWFgpJon24XgwYRzvpGj0bPoSusmZxBOTBs2IWZsLjAbApC2eeRx55wq5cSNn6ieN2fuNZO3flsvV0byrn6eL2aTt9flO0S9vuZtIe/uwZ0Zc5ug2rrxbtD37vD+0zH7/gi/ZQaJQvArG0LUwP0AM4ukFF6JQ/ZcA4Op+o7TfONuwD//73LSlgfGz5pda4GrMLZ7dED7Ojp+pWLE7siccf9ukq73j7e+zf/dv/bB/96Oftox/5ov3h73ziGnhkiKTXb7sSmIhHGeZn+kkUXNE5/VAGYY4T0XsL5IX3EF45FDyHh81KYvbyY2pAJhOUNGDWvXsZU3oaiOyagcOBIiK9g4sywupQeoQFkmk30eD/thDE7VpA/K7LoB+uhSCN8NiineABQByAlojIeAxgYaFHrwV6DYICFwfSzQkLLH677HNOD5vSjtK/HnRTITwbnud1zgkkB5+4noTFdQ6g5ctR0TQAnnG1onci4MqRKNvneXs5PIZ0/VkRiXcRHVdVpO95KOo64If0Peqc5ouMSJTH9XR1XbwOv0OL0JtGQiE6iPVIhpTTD5623ha/sYdkTMZILvJgZvnSsv2Dn/tF+0//5bfsF3/xX4iX5jJKXclXVnw+tVKp4EPa+Vxa7YLjgWExN2LlavVa/a+V7UCgCFzyy4uj6+UDATjgdFWaEe2d/kTKLL7wV8QzPBe1gR+V3kFe1yO6Rmbhh+evVkZ/8TD3AllVh7hotTC2Dlg9AqR0GwPM4I6MZABaek/Xr78f8gWgRXbF220Ban0Ymsy5J6Ymen3o7LjBwJQiel/d+BYz7tSyETxz6F0fq80ZqqYx+JKcjJpkSaBugKPbc9vEvtsAAEBCJl2WAa9bOn7EErM1y6eOWCZWt+S8ZvNxyabdnE26cnKGau/5kq1U1u1w7Zgdqp20ldKaFdM159vxoGODTsMGdDbNhrKXsiWypaKmCgaR5HALQDGagcPk82Optq6hu4IOjwv0CaAJzKCfIx6KYgS2wvMBgHGM7qHvsTt8nZP1D2ENA20XnqWdePY6L2CXlY8qkNK1hNqeqeNpcV9BmKIim1uTDq4oFmULC7KJMQCXTfT8TDITly6mDCE99O91HSMsIrr7VmiqU2/Q9zUm/aFA1ZBhbOkyeEDPRTbBdwOii0zvU05fhe9bQeasWsaZYN1OxbIpdkWKOt3CguKebPeIRW7IuKdxvUMlyoNjVD4CdIhoAo0iWjo/61nSZccLwCgAFVvXlx2k/JQ3CqTtQFb3eW5nZ8euXr3qcXt72+0jdsnrKI4AehKidqA8AR8w1UNRQixYpLKEZ+gYY5om8WD7BTqD28JaEo4EnonkJJvlQxD0xopH5Vx0+mEPefIKNA5OoE8vVH7QAF3o00EwHF9DiGfyNSlFsY4Ei7HI+FRGPCbvis+flVNWrcgzKatAOXl38oB9ZayehPl9bo4AIXvUIQjBGIL9VUmOKnBUYRbRuPejCh2MThZ/Jih8mD4t0MCHDhgen2eKAs9p6wzn1uyPrdkTcBVDdtUw0u/GvqsMX0sknEgo8+3tLdve27Tdxq4YjqEbldh7ZiNmp/dJeSaUX71nrelpswyrLPt27vRTNheQPVZ7wEataI9QmF91l7uRlJDmq3nL1dUIVb3PLga5lA3GM/f+hrO2lO3EauUjdsvx19r6yp0SBCkqgSOcgIGAdVsecAvmGzRsX2C2N2xIp4iJmLvJNIkcQ0QpKRV2B2AuFUPmocfRBUFWAyPNxsZEvFX0BotIGGJ1MJtijjHtyuKn8JlcmB1GhB7o22ajbc+9cNa+8MUn7Dd/6zft01/6Y/vUwx+13/79j9r5y1ft7MVn7NDqsm1e6fvXap54/AuWyc6sXJmrPA27eGZfRiXrtKc3L/QCAX6
CU8P1mOrhvUS67pPrnf4IkxSrmHX/qpTw3tyOrd5pTz18xTKJJVuuH7G9/S2V7Qv2gmK3u2tvftM32PZmR0YqLXoetrXVW+3f/uvfsEuXLtrG5hV/HkDq+w0jYK5ARCfxBkMpbOGCtKDLKRsr9xlh8M3ApRiDfIayhfIFQQ4Cl3LAioIOnrPqp3zoVY96brnOs1EgDdrLtwAT7/M7UgI3Enjyxp++HtQM11/UkTzJliPtE34v2krXqSnV9fLqiF3HACLjImEAdG6bdVXECzJO5BlF0iA/VR1v3HvS3MFycjtf8oP0PGW9R35ROEgTTn24+dq1oFO8IAASyUPoaVOeFJbMOeeoEEaIvDIykNG16+k54PWzkBch1MVfCcHTDs/xDNHrQf38/IAxVriW/uI+/C9VqPRwMGVwxFvXFko4clJQXsyTJRU212cB7FTOdXtnYEfWT9kv/dK/tI985LP2nvd8l91++/32oz/2t+3Z5y7ae7/9++wb3v5uvZW2VptV4CnfzifFolBlXhGILZYrtt9sSC6ly2WM4csoolsP1uVgDEF0EwF0e0Er8azOVaVwLYqiEVM/WBQmE+R/EW2hBX2OUSBFP4rA3lxR1B1vq0XgvYi2lM97jaTLohhArZ4XPwUaLyKF1/NhGDNKN+RHD747PAsedvWjGAAtP/SMg/JQxhsJbtuUF2Xy1dpZtiYEiOC0U1vSIw+lSd70AErf93oD6bG+7e69oLa7LLsknS/9k4gVBV7XLTW/VWD1Luvulq2xmbfNC2aXnhva84/t2+Of37BHP3vZ49UzAjVb4ptJ3VaLp+xw5War5g8pjZzNWTQ9CqOn9BQyHxNaI4SsF0FHA4SgCSONuTQb6UMfvSOQ1GMF/YA93OmdpWNJ1OR91c3bQ7SPAOxXgi9iuIeuLHjvbJbdd4TyPQ0Psk/+joqkKBKq/sQ5WTigZVpaRseSANSSPJclgdiaQGxZejav9DN6Ke3rEGg/tYbKiWzBpD6/U+1K09Le7Mrii5IZMR0OvG4DYYcB6yF4zksU+DAsLhM+AZQhpypPsKVJn6pQkuNSrdBDy36/9KYDzqCXHJRh19PujtjlJyy6Zj4xqatYrtOCvIhWon10TkRWmLISjtIYPnIZbBf7vPL5fnYWAKTCP0Phn1Ff6Y9URi+maiAasMNCnw48Ade9nV3b2Niyra0ta7XajrPgWwIOjq8ZkhCzqwqdiE5DRegZj42tkEv6lqqlgvhDjQ7oD3N65VW6bIs4BOiOMhCx2a3CFL1daUvmK9PzqoiMeKeb6By+BEunX9/5FFpjM6S5RQ/xhwjLXr98tY195b+WEGc4FYOMt5GJZeQZ5YXOi/rNVgk5ARcVqCjGTVGhkTy/sBeoD4cBfmdsvJeW4lBxZAm5B3P55GQZchocD8a9F3lI9JiyV2zwFEauTOhp5LngrcorKxRcKbN1VyKdt4kEhB7ZrgSxI8bsqyFHYoQZE9/xAks1eclVCVTcuu2OezJ4NAzj88UVhktDLwnGkQYVIyeUX7ptk/x532Sdr7JsbJ22fqdrS/mX2krhfjEUG9+z2lBemMontWSzlJg+I4bKyMNIdayyVPOV0+3evrEdF/XPJEu2Uj1lNx19lVUqxyXYS2pc0UiCMRO92Y0gzMcR48/6gQGkEDOsjM8X5TgUdAHwy3BD8ABhSAKLx/BkvHdAIBqQBZii4X0TZzFNOF5XNPQo8j404egeXn9qd911lz361EftiSefttpKzZ49/2f2n37j/20f+ZOHrJhalhd4zrr0HkuQtnbOCtx+0XrdHTt75gWl3ZKxfZfoA5MvmFshKIbI00UxwPxQPihD9iL0BVMoQyk6eIKhk49+5JO2v9eW8X6JPf/sU/aJj39IPLJj08GOhABvPG1/9onP2erh417fqhydvW0B6osXbXNzQ8LekCLvKe9Ap2BSE9c8UsoFPVC4Bw089IH/c/kgB/AJXx7jPeqh236d/WpLxZqcLLb6Cnv2RTTmSFrRbwIKifoHeQgxuvd/S1C5KTv182FmaVXOI544GDA2EUAlMhcNYwCwlR4L8qLfbiiQJfTX4nkPvK+qEenhQ1GSBYaUvKKeUX6TFwFpjII/c61MIU3ajhDKozIoXxZJem8Kw2wAWx1R6HrIDcPiBX+H6CkCVoh+MwQMDPzI/xQEfQwAoYBhpT5l0l3dJqpZPbJA1RcW6cdX0pCAYSQ4XWIAWQxaMLYA2YheUZqSBGMnFmJ7u2+veO3X22984HfsLW99lz351Bl74fRVe/aZ8/aFLzxhVzf27Z/8s//N/tN//k37s499WmlkbdCf+Erm3Z2WFQsVa0j3MY+2vrTi5Yt4M4r8DjHUL4qQ4tq5h9BeUYB9oE8AH5IhXeC+f8JTL0fPQjfofTB4W4iuQGR3TBT0RtAReh6eivLzNDF67nhSfugUrgEe/Z5HveKNqPcUSeFgIJuo197Td97VdR096lwpisPCX+T43EgInbiULdIjQfaT0mXQ1tedxKUL9MeOD+NRTPp7Kjs0cTuCDu60ACZ0LvRc5+1sdeSoD2zr6sSunB/YpbM9u3i6bRef79j55/bt9JO79uyjW/bMlzbsqS9csBceuWoXnunY1oW5DRo5S0xqlk8cslJmWXmxYFvlZG2FChubUs8wdA3NOdJxRe8pO/4A1th5pYtN4GtkI9nqgew8m+aPQqcA7RLxzXUeCrzFMQr+XEw0SWY9/Vyx4NO5SCPo0kX760inG/NaAaXwlaC2otwzneeVZlllXEqnbDmVtbr0b1UOGyvflypFq/KxBz7PnlDbzwWIpPNJk3YgwHPOX6r/RLqfHQ3omWX0tjtgvrJsrzAEeiWSyUhGGRnljECaLODlYxKFfNrKpaxVCnnPn6lmvOPD+wL/OABdPuDEoqsxa4DwVul9hIfFDQtahfmwIe2vDJQFW4/twXZQdi+/ftNR5z3MXY4Cg8JTXIue5Ug7XsdZTG+jtzWkHeXHFoXsse26WPnRNtQDOcIhowf60GrVjq6t2vJyzbEDwXeOYET7Gr0iOddRei447Nhe+AWnRXY1E5dDE3q3sZeUgbJSR6Z8cITWdH64jKMTobloRe9sRjL2tYQ4iUgeLSuOyiVylo9VLBerWjYuQCumTBVU6xTzQ+VZCixANJggIZQeF4CNzzMSWoZYYSBFAVK6rANYC6DG5yT2wrxW5n84mJInQ2VoECe896qx8TRfeeJbxWw3wWIuVUgVxFAoeVeK1JqVfkkxuqVVTgHZUmVZDFeRMKVFMBjYMb8NYWiUGuBXka9usFhslm7arLBp/ek55VO2zY1tu3r5gh1eutcOV1+m8oiJMwVXDHh1/XHbhnMpodmO7Q6u2v5o23rWktDWxMCqW/eK8mzKw87YcuEWW6/fY0fkNWdySxZPZ22eVmOpcfnSWTIlBQRzi9kdiGYzArNyIlQf7vE1rVROgEmMBBBx/ang83BoL4FB9mrzraEyYZ9VB1cofDxW95akiOQs0AsL+CI/6Az9m819a3f27TWvfUAGdNUef/qjtt86axcufsGeeOIhO/
v0aRvKGywlu3byxCEx+sQ+/Kf/zbabz9ktN99p3/e9322337luW9unxZT0uNM60BaPV0yt/DEoAB4HQboLk2IEKTveHhO9MViPPv15O3PhadvvXFGbz+zTn/2w7TcvSMhaeqsppdyxcX9gn/zEQ7ZcX7Nz585YfbUkR2VHdY8t+KklPmL4QlECDV/BdyhuhCco8aCII6VCCIKM8KCAmdYhQVf5QzqsZO15uQGzbI1TEoB2QCvgUJSDUizw5TimIKDcmIIQemgjYBvyoFcfBRW2hflaAyooqKG/PEBv4rWgc1dgihxDDO0Q1Z/bBwMKKgK1gFiRx9sP8jF0G6IA7SK6MuQ90rmWFnkERfeV6Xt5FNCDkS4MeYZ4MFx/dpGvyuBARAVDz8DXbMUXyuaPXgsk5U7ygbSJYX7+4iHxxvV74R0Cv8FJZE/0OZyoHAdTwSgRfbN2RV/4pQcjOYXXWbwaaO25eAzOdER/th4kzbyA5xFrXOzae77ze+xf/cqv2d5u3x55+Fmf3sPORnw2st2VE98d2yc/+Xn7tV//T7a8elTO3qddvgYCSmxHxBd7StKDjLINBEZUKG/niOfDoiR+wwe6G4p9LUbhIN0XeNGj48fF/fC1HqUjfcpv3qCXFrWjKupd8rieaETng4E241IUCZ7WIkJJMvaefmjvUwugueolHYLu0IMh6hxfJopc8vy+DNCqZdz5wSHDMug5PUI9RBnP/0YCNmQu2EVPpwiqf8g6cwzo9eaLToz60c+U1rNJ6cOE9COVkI5PFayYOWKl7GHLZ6oOeifSC632rjWaWx7TsZpsiCI2OFm3XHLFsrYsE7wk0Fq1Sbtg3e2UbZ+f2ZXnxnb52YltnYlZ43LCOpsZ6cqcTQcCWsOUosoxUjmngokuP+wnqzJ7WQEKsuvSc+1Ww4Y9dixSeSVXzKVlb1a+cOmytWhnno/alSO8xRFaO425nlDa4sd0hgVxedFIYJnOqilbbKo8FETpuIz5kah2dkOhKJuV1DEvClalk5elV1cEjOvlki0LyC5XKra6tGSr9aovhizmpXNlE3mXtCkT5QmLTOmAM59SwNZcLeEOtudiKzK2MmMKFaVAf5EEUx6Y5ui9l8IIlFPsJmAlMJtLSb5SnufK8pItVcs+Kupbdslm0HHX7csW8fWzYVt5Mp1v4nLmOkDMhtMEvSIaEiLZiCLTOL13XRiGcqVEx3Q25/xGPQCyOEHsAdvkC2eyf3QQYrPohGF7MpxH5pymmLNKM3sbqTZkqzK77fUyUP9QtpToRifT6mrF1g6v2BHFpRof6cAZMfEJDhl6X+kouryj40QjT4deXiEreopRBNxDVpkr7TwHWNUT2GIH/qxRYacD0Rrxw/a6rlVSLPRn7jTHryUo76HyltZUgzChvJisWCW1YvmkFGNMYFFAdpqQgZ8K9Y/Z6oEJ28GrTUpImTOL56d2c6DrnoJ3/QfFzSRqGGyi9/AYoknNRL4u1djbt6a81Xa7pSJMHAjk5P1kM/R+SSgdnAECxPTMkUllXFhYeEaMC4CnMyUrCmjw8QWABrsAYDBETjEnJHa2VJSykXKZJgRmMwJLArOzWN92dwVkr561xLxq67W3CBzWrDe+rDaqiDH0jg9ViHGGm7bdOidQd85607ZlqnkxT1bAV8pgfFV17lohvWwnDr3Mblp7mZRWXa0kpaZGmgoQMq2BoT+UH0MKvg1ZgvoAYgOQTQmcZnIlSwsg8QEJgBI08d416iFOcnCGEhDDMC+W6R4oCsmNyiCGc/DE1iISKHlTE3npKCRAAYCW3kvqzHY//+gX/ldbPda0x07/rjPmicMn7fLZp23n0tNWUN3Z0uqLn/+c6LEhAZnaT//k++zf/uv/an/6px+y9WNlefAMoQfFRtloZ84BgES+eMKHDGBoBEy3nDdwOPCQ3/yNr1VbdOzs5Ufdwbi0+aRt7Z6xlUN5u+nUmj379ONuPC+du2LPPfu8eGsgwybXCmdisOl5EQCMgNr9/V2PDMn4AgcXNIyhhEOCiUHkN72NPtlfNPEeP9EUOgUvlykEop+87KB8JShSJoLiKnxKR3Y9kHES79E21JMjZSHiWERxIkUeOWzE/3+FUOvrQazwZcF1lR+vK9Ao/rmgd8OnR6NUZaRgGa7rF04kQ44+Z3YRlbLzFO96XAQUMErTFSePuMKLejJ5PuTB/54lLaT0OA8Rpa5y6k4YquW3DspSzeQgm+zpeULn0DyUlfo7DbxMwcAS3aFSbtyi1ATSc4CmCx4X74aSKW1FyuB00I0AaFULZEwP+UIkFC28hHEA1AooUkfS8LLo3GlwLahOPpol8COHtywnuLnbsQtnrtp3fP8P2t/7mQftycfP2XPPXLZedy7wOnLnnI4CdGy/h/FNSM/V7WMf+6wdXjtlv/Kvfl0yjrNatJyi93QIJVPOKMKD8GYUI778amwQggyR/o9ocTBwLWhV1RP6cQy3FvQOtOX8K4Nf87aHTsjJl+cRtRdt7MBo0aBO04VcscBGxPB4fZ9kIs9RnhBJ91o5ACw0BEZgojtq3LhPP9A7yoZnADE3GrAHTO+Yee3Dgj10QlhQnJLekB0E0AJi/XpGj7BItGCpbMlWV07YodXjdmRlzZYFiqq1opUrfJo7raPsbz4fnOM0I0VJ76FitGAqMMF2mb3dkXX3ZjbcFxDuFs16NbPBkk07VRu3SzbvFfQ7b7FhRlF2YZSzhByjmIP5kfhWJABQ9PkIQNv2G9vWauy63sNusm5mOqJHWc+NxMOqSzTqAa2jYxQJUdsRWSvBdB9sHLZaP3zOMG1KG15/1l8VDXVOo+gCi9n44MJ8MgiAVtmWpLcrAqy1Qs5qpYKVcllbErA9tLJkhwRoGQ7PSv/CK+xNGpUJXEye6Hf2ZB+qQj3ZMnYCAcwy/QA2A/hFdaGH8GBdlIJfB4hlWHEv+7Hkc2fLVlMZaKOQHT26skmyGdHn9cOODYw2sz9/QCDBlkQVv85zX55nVO5ALzpJov3XKQsjtQFHhY4p2pDfyH4Y3RbSkXx7h6BikPXr9XLZkS7jWhRCHYOuKCkvvtRKzzPglndwavr9sXXaaptFGaPg6agu0VQqADxHfvvXTKVgXdb5TwHeAyNiE33BthqKNFzGxZucE33//S/TEC8eEu945zsenA4HApJDB7OFYtmyeWbZDwQcZMxjQ1UqofOhteV19ARoUUYilWWm+j8tLzNRkFjTQwtbquHQKKoODIVrMlD67Ifm24qwT6oEZCBU3u00BGT57GtbDT8RuBLD1vlqV817wlJqSPZ5Y4U6X6GYCHywhxwMGnhChFwIDV8hY9EUhFCz+XAUTJjOCpSn6cmUYM3yqoc8nFhHpxsWL22pbdP27LNP2t7+eTu28ipbzX+D02IkMDvtLSt5GW8xJDsUsLhrq33OdjoXTXJv9bW6jRoJa3YuOfjFK1kqnLR7b3qbveT4a6yQWDL5S/LMwxdPUMBzKThWBsfluSXnYj4pKobbmVzNlIpkWspH9WAOMobbP6snGlJXtiPB8+JjA3pAddV9KUnvD
Uf7qJQ+z0eM6YvGxNjZfE2Mzg4PEmh61XUE1OE8THV+4sgR+7b3PiBl2bK9rbhA5ZY19p+y7//WN9vVFy7YF585b3/4B5+SQu7a6uFVe/hzV9RGS3bnXUfsypUtlcMb2/P1XkjRjgnjLJrCoYBJ2XYGwWCYlrmOfFRhIMXD11oOHSva7adeZl/83OMq19zy5Yw98/yT9h3f8Tarr6glhzv2yMPn7ROfeEZKKWMtOQ7PvfCIFNOuHT6ybK984G6ly+IGFJb4bAjCQTgAnZQtgCVoBSDx/V5FX+9N0RMA2TCsPnWFzu4PrOYM/DYST1FuGaPFQkfSHzJU5cpDYJm5s6JrGN4J85KZRoPQwouIeRBW5ad2fMtbvt7p9WLh/e97cHEWhaB8lJL/z5H/A7cjDkHJ+P9444piNQEt5e8gQE+4UoHfRC8pc3SrA0J0rZyeGUNJYjo2IOcaOUI3jmA7jzonwF84ijxA75lUh5hRR+wK5+Sv68hg6L2EH+nB8VfCUY/RW+G3de44Rf+xp7AoLhmhXVDsKp4i6x7VxMYWlTN2/NM1Je/V9XICahxYql2hPBfJXzJE3X1PR/L0/GR4VR7ArAMzbuicOz5vV/SFN8A58C6LTXxlbjqkNyZzHU3X/etjege543Q+Gzvgp03abelP1YdtAukhmw6m1tmXc7MvRd6N20/9xN+3v/b9P2lbV7qSv4FtXGnZkcMnZIgKzpc4tL5fNzpaabOAh3noGxvb9spXvtK+9Mgjdub0aZ9ylFTmCTmc9CQBxhfV8Uj1DkYAnhs4PeSPqPxup/ymaBIOHqAXX7Wi/kxn8mFBxoOVny/yhRZqBPl7IqSi0qIn1eexLujg7ex01S/RNeqlEjt6+vClbxmnH+5AKA/fb5QH9Bz8g66EDxgKpqxRDAu7OCou8ALtAAEoXgJ9IJuDV8YWbzzvzogeYHiVOvztH/1H+v3i4alzf6w8Q2Vxaikbs6OhJW2DzmMLSKY0jibikzFlRkBw7uV0JJi+B2BNK2YFbg/bcv2QipL03WxGDIf3dsVzssd8kEd2I5mc6VkBY4Gj9Dyto/horLSHauOx8naQLgA6jDvISUzTVkjVrZwTMMmkLdFft9FuwYa9HTX7yO0tnUjoKeyB26d5R7UbW34lbsPOyIb9iUBk1fUki3sKxYLXET3m9le1D04JOh5+UxFcVufy8SS7ostM+n2uBvHpOeIReGEie4g5Aymgf7AHtA/8OB3NLKP0M6JnSfawlM35lloJ8d2M+dJKZxKfGh9pyot2/iEJlSvHzglKm209Ad8qiYoXeAjbD+34mib8xhQI9nP2zhbf+zuMogHA6NlkjQUcS+F4PolSUl1FZemFhK95mfgcUxVZSoWPVjB1YaprLAyeyi7A1LwFlejtzkqGfc9WpeegTfdd1+ncp+o4QUQvmFvPQFfslH/mlZ5Z3nV+FW7qCQT2B26DsFnhPdld2VPf1pQdoFxkQucSdojopSFt0XJlecVqteUwZ1rOEtuPsZ86cpwvsLBOcu6dkEwLmAn8i7ZOVzrhMqKhMJfKhdMC3ZWR6kldqY/oiT4QbrSYlLR4CjmbgOEUWWA4wukWn7CtHWDd9xNXpF1wpVVMrxPxNfd/t96/sZB401vf8iDDDKN+13cuKMgL4rN88xhzOcNXOdjZoDVu2X53T4RRg8wArgKHAohlu0tynXeFPpTxB0ClVSHmDsFUyDHM7ZGGY2PmOXM7WgIEA2v1GwIATTV+3JYqS1ar1OWdVp1YYwFX7wWSUcOTdp0Zxp2UH1tCMz0iq/RgP+ZpwHIwG4wIQ+SsmovLkDAfpygDWLNpYt/i1SfMchsiasFOX/2M7WxftpXUK+22yndabkJv41U1ngkcxSyX2pVxFvgToL3Yfcq2J1+0WGFXgr5i2elJu7j5qGi1Z7NhxwrzQ3bHoTfbrauv8R7uhKBsR6CRLVbGAzWUPORRT3Xv7AqMNa0nUMrE+3S6rjqpbGrERF7KoKB6yEGI6dp40BNzhq+U9cTEY6EPtu1KZtgjNS6lKSET02flUflHJUQbjHHSpU1ePPvOqWyALXh6PGLx1sSZztPsXpKg3GL3PPB2e+DNd9m4v2MXn9qwV959txTFZfvABz9psf5hqydfIYD+gK3Vj9mv/ZvfF4Nn7K3vvk9ATg5OF3QhwYiVHODF4nw9hSGRjgQ8b8Vy2VqdgQxTWYpalZQwTGYtlb9hsdxJm3Xl7eZLarld++OP/4nombB//P6/Ye2dj9iZM1l7+EsXVMdVO316w/doZIrED/7Qt9kP/OC7RLuEwBmgByArYzBsS4n2lf9YwiEFQ6+1FJvvLyj+wJgynQXFi4C7D8lEdL3PjhfGRuSDlsXktAm66tmMKxPmKE/1XLfPDhQ7TtOJEBXTLPzzyK4Mg6FotfZEYzYJZ1EfjoUUjtAh77zzm1nA8+Lhfe9/H3pK5Y1COJOoXzsLQbLh/x8I6D2ApWIYnqVXS78xKgs6oAjd2XQgq3tMpNLR56KyUwAyR6Ikrhjl4Uqa33pG5incVPr+dS1EXnkCogGy5IlCVGPoBwYftQZogh6LEpPwIqDEXNErlySZiLd9s34Mnc4B3mLpoB8VKZ/3qqKkPR8dFX2rJmhAtqo/11mcEwAuxRENlDH20RcxoXupCpHy8b/3RvkFd9TCfNlFeqrvXLpyLkDHnov+ERBX5KSjPERYpqsAzHJZOdCDuTX3+jbqyiVuzu2mY3fapJu0207dYz/yAz+hemWkf82ee/acVQo15Zeycqkc+EZpsf8k+IFtbSASzkhfuoB5+Pfde4/9zm//H9IR4mTxbz7PNCsWh0EAHr9O4AC4VD5ozz8dUa8h0C46KPJG9Bave7uQr2gYPsYQHprJPmA52YEAWquZwlHX0MkuYovgfOCBTPgNfZXe4nLIhx/wM6CJi1yDqH7i5YNloYV7J1yAX+FfXYzakCP1cv7VkZ5FXwymtiYPL4seAmzzEH8/9WNf6Tx+9fDM+Y+pzHSaiEvdawNkKxvRVf/EsuIL+FV5+XSEyLuLM8dTADaZtnxWYGGxryagZdCf2uWLW3bu/EUVPNjHbEEOkNqSHi7Ky5A2w66FlMCY3kF/IylBLsTZckCZ14g97DcH1tgeCFgNrChAW7JTNuuUpac3bL+3b/t7DZ2HKVTCoSKhbMNcOlw8lKslrJipiB8ZgWV4PYBYpuaFDw6Enj4CbRidO08pwPPIgvOIBFYwCPFTPmofHRmh9Hmp0pnOezSo6x09LnsrallOtCoBxGU7eGcgfd2TrkXHznNJ0SLp0xgApCwy8ml22EDZP0bj4AvsJG0cfRZer3g7M8JG5wef882xUw293wK0sJnzNNxHXbxw6FrqB68FuWHRMnppJJDHtmls9QUI9kWBkgM+8e86Vu9FYPMaiKUyEVOqLMEZCDT0tEUX5Itroac0jK4QoNdEzgG7+vh+swqepqI7JHT0KT9f3I3s+fnY74WRr1D+ZDZlq3yFVWCW7d5In1HNtvBYXxgwX7DFdLmwmG6gOk5kG+iIZIQIfZYEXEtHQTSn
qY6u+1QmqoguijmQlU1V+0edQ0ydoFd8oiNOjzsT9ACrDP5xDtUFnUE5mRJEx92rX3rjYDY+o2teFWYlobC4zkVQZcTWN4KkC68FAMAKvYCyIT7X+dyeEK3aWaBKvEN3Pobdv4oh5c+XlehtzRfCxrt8Gs6H1lVZr7CUYVLMDaOlZRhYAUfjBD2sxlUEVGJvAW5sBUbv68HhMgzNdWbgSYxPGOamUWiQySQnQzC37mTLpsk9S2TpTZlas3dBSuQJy8yXbCn/Eld2/eFFPbspwWK4TqBLvDeN92wwuyoP7KozTSGzJCEoiAmY+9vya6mk6pnlc7w5XVde8qCa3b41mrtiLBRS3CZqyJ48516H/eDatre9Z91R1z8uQFcCc4yYMiBT7uCLz+lBZxiT3j/MMelQ38gowXQZMSgeLAvnGJLA23FmU0rRPFkWHoQVkWHvU4YnGo0922u07clnHld8zNZPrNrP/aO/a//5A//Bbr7jZutLaDPpW+2Wm15rR9fut5X6vZZJ3mqvefl32Sc/vG3/7l98zpZrN4lGAtzphm3uXFTbJ1WOJdGF+VhZ299v+lAWG7kDdLM5sbfagS+RHD18rz3/9Bk7e+aS/fiPfbcNunHLxvL2da96pe1uXlLbXLBz587Zn3z0Q/apz/yx7bXPWqk2s+/5vvfY9//gd1qlmg/eeaHsvfm1at2qcoiYIyyWFpgOe+8xChA+S4uhQJkArKChhAZeRXfpdxBHFIn4S4rRtzcTLVEqBIe+ej7wWlAYbmwxYLzvACjEMPdsIsXMljJqGxmhvNr2f1ZAiVAzjkTEiKLBK9cjCpQ6Xg+QI8RQr+vx4L0DUYbajTXyqd8cef5gIIvr8bp8hsgbahyKoej2TK8DhCDf9efJT88DsnWT+fkH1lQFQItOVwK0nyMYfkRHmiE0mwfSIl0uKXUdg8HhGEV+Y2ucHwBoelis4LqM6L2+8II0NzrIy6r0kEk3OvCT9B87p/C1Ij7nOZQTl0kUbdSO20rlZvvAf/yQnMHftn//679rf+PHf055ydHb70sndKxUoFeo4/PB+WQ0abIgJyo7K7DZmQO9x2jU5saWL5j8hre93ZrtjvfmMq+OUQ/4M9IREU2jGOmQrzzXP+cZAI6fUznotTgeDKEtQwy/r0eCsr4W/uL3w3MhiisW5b2e5vVzAjwSBa5H9Qu/FRcPcC7z4gAJHuHIPXRCeB7+CjwWQDHMcmOBqWqs6uab+ROl4fZJvDgZywbKxjCvkCnLY/0GXGIb2bs8J33CDkGVikkXxq1UjFulnLFKqWDFIg5/xhcYsa3jaNawVnfLWr1t/R6LF2TPMox2pSyVl04pzPSOKcal91n5LV4BUKiOk3bShs2YdXdmtnNxYGee3LDnnrpk+7sNB3XYEYQDWgzZ5kk8qrdlcTICvyk7XD3mW34NmzO7en5XOnmiuxnxqXQza1R4e8HvEe9Ev6NzehEZXWSuJzsapAVMmRroX6rSPUYwHPzQXDoCIFmk7Z/kHYyFIegJlO3XbzpwWuLtXls6vC+bLA+Zr3oCREuiG7sLsOPO8fU1O3XimOzUEVteqlmOnkOV2dfxKFJfRjp8iy4WSMk20Knj98R/6HE64JB91yfwkiJ84rsOqA2ROd93Vm3GnNn6UtUjHzhhxwP/cIPqw+geHReDPgsyW6pHT+Xma2wCk+gSz0BROoIeySBvgRjKyetI5BzcypffRmKqYX/oOAweDrTH7oTyU04cLMdG2C/XEaEHmnPSnsj+hQ9R0HZgLdospMGoLZ83bsvJ4SMR7MELfRhRp4C+zyzevGjhtm5BG2gcdEcA1qqe0wHwDB7zObGqIziN/WNxHmh/ZJdt03yrNPJTO7DOiaQjPvKvfn4NIfHmr3vtg/TiZQWKCjLcCB0EdmQdGwr8SSGrMRpdAa9u04Gq2MkS85wYM2+Z2K1KRhV073AiQkrwcgKSeQBBEEIKp+op3aghRQt3xeQ9jPle81SAJCfGqFu5XJG3lXOjQa9sbwjD0VoBROs/r7C3Mmnq77riDkceoxGZM4KRilktNE7iisWru5YojOWhXrbzm4/a7tWztl56s63lv86/WDaZbKhMfG2H+U5JtVnXxolt2x0+YVvtFySkCQnLUYHvrAzIju02LkhQ5WUItOQTh62cPin6rCzmHcmoSBkx73TQHdhYx8kQw7VvnfauvON9S5dyep/vLNetUKqJb8QUzswowqCsAYHEMGzCNhcwowywjjm5Ug5k82GeC14rQ7v0Do2lZZsCrp2OAG1TCkHC4J91FdPyMQiALR9RqNSXLZkRYJ401IZjO3nsmD32+EN2+tzT4t2X2Lmz21LQM1tbPypmrFq/kxINjttDn3rcjhxdspc/cMz2BTQTKRnm1kygvyE6Kv9RTgon5wYawRr25bBkBehHF8RvVdu6WLTNvS17y+veaZ/72HP2Z3/yCbu0fdFe+4r77NUPHLU/++gH7Hf/6FF3kpqdpt129832j//J/9Pe8rYHbGcXp6MrehQVGXYSHQRs2XUgk5F7qZaHb+hNgNfgDR8+Fs1w21CerGj1Ldq8lwBekpLQMwwN+ecB5fFnigDasGMEK5aD0gnKCMWF84IgY4yZDuMrTPE+kQelXRYgztM7nSvJCSraG972Jr374uH9iy+AkVsI4QxuDiE6qvz8v/jpxZMOiONnSvf4/n06iqUUJX/eTSIwCrBHrBYGH4DoRh+xVHTQqGqqCiFER4JfV74c+YmuxHda5ONH/fbeWeXHfFIedBiLjOqRKJAGQbor8LQrab2j9vGyoDTpbJlIAY5EcwEGfgNWvCq853nqHToLvBx6jmu6F7SrMkHB6s+f5xI6mjZXYaItnDBa9NRCWvQIR+n9Re+25ApaKrJZv+snaoKiRq/pnJ57trrxaTUqHMZx0Jra1qWuvfsbv9N+/h/8cyulD9mzj19y4EDstAa2vi752dt1Q9Xvt+2o5I8FrvBpt8cws4ya0kQH4KwqJ5dfFnjMpkO7/6X32h/+/u9YWzKSzUsP4cxRN9ExGD2VVzEYvgVg1P3wOzRAaOdw7s/7Gechhi3sdA4BRUtakd5uGsE/Bap72H/nPX+Ql3mPGPInRnn4Mwr+GM8pct+HXJV++M25ogJzVDGeABLfqUCGj6PzLtcY1vLfyoe6wb9+j+tKhntkzUGRLn3K7z36yuKnfuwGpxmc/TO9D31UUaWnVFQu0YI89Qc48l598lpERj3gEL54lHRnC9nBTgWQkJF+4RPsS/WSlSoF2U1018DpwhA6C5H1ttpK7c10AARX5We0E4Dr+9wm5WBh/McCMOivZMnYX31/f9v2ruhd6eJsQUCO3n3KLkCuhrMUo7EChQUBtKz0Zy4zt73Nhj395HO2u7WnvHNhp5csgsUoYACxEXj1NqORFwE+CjIMqWeh/fUefbs+ZSjOELMcvSkOl15QZHoTUyYYNUyMlJ4URyoGeEq53WkKgHYEtpiWJoAivmdBrsrlHQ0CR9LL9O5lpYvLpaJslJwCeJs8lC+yCNjMZtiTmYVoOicNAV7m2yZ
lM5nfSZkpEkrJ5YG2U+l9KoBoy3v+wREAHY+Jt3AwsdPXPj8vfeNyJRAIj9MZwo4jAO9sLuP2h1yinlLn8QVTkmdEU5dRdOWCVhPVnc4T6I4tjRYb82wU/F2VlXeZChH1zEa9utjDTDFrh1fWbLW+IlIKHym94ahv261d22vt2aDX9LVJfJ7d5zqrXNlcWfSuCMvl3ZGgZ5bRo/lCNh3ISmYhStgbVvnOu4JowpDzgcojTUnbi7g4T0MBc3q0KSdTHtlGlF01WDzIF0yhGxJDeq+45zs8jxsJibe/4Q0Pqp2tLI+jICXIKjL5M7rFsOvUBomuAGXb9rs7QtIdb8TYVI0yETHnArOJW5SplI9XhB0Akt4Tm8vTEwuQVQ10X82jc282JzA9GL7J7ngg5R+zshp7ibmyArVsTM+DgAOQu++D5jnrMhnp3I0NjSbQQM8nDR/uozzpOaHRZYCU0HiSt1F8JCB72VICs71p1y5tnrELVx61lfgdDmaz82MyLk2lKWEbFwX4pjaO7UjmR9YcP2c7XYEq27dqZc2q5XUBqb4A1TkBraY8Y9V1XrTkZEXubs0GnYTx7e2hPI9sPisg21el5ZklleaoYd32hkras2o1a5nysg/FlwRk+bwcCAAaoWNnMm5Y5dADGJPSY46PPFzVj6ERwDoglg28mUNEfQErCBcLrJhD0xWwYq859qrr9vpeJsAs86WoA/tbglzYfm15pe4MzIKS+nLNvuOvfre94lUPSACm9s3v+Xq7dPV5q9ULMuoxu3L5nN188pT9nx/6vD3w+putUCM/ypaxdm/b2z+TkWMihyaeGCivgRi1oPx3VLmW5VPH7ZMf3rNv+ba3i25JG7YT9oH/+N/Ec0N7x5tfIQW0qXQu22996Cnrjgb2Pd/33fZL/+x9Vl+t2Nnzz8nhmKicMtqiFyCTr41Bv5zAbFrnfDMarnOn1HlHESMpjsA34rOzDBNNEXhdgH+gOQDJP1fpAiv+kXD5Js5EriMfegZeRujYxB7Ww3Gg99fn0orutBmBxW+ZXN7yRfZdLNpr3/g6v/5i4RcffJ/zwfUQfiFHoS78AlRQFq5yXefoStV5vgCzDmQBX7ruUfKBMWeuGuXGJ3TDTxwDjpQORafnU8FTDUl78K2rlDnqRlTTBf2DxqRNfpI7zxdgqVOm/jhgIFllSDnBT9AvSGwIQW6DMnYJEB/yQNhbVjcpjxxEk9GjrKgBn7akBgujuCoV81lTeh67uygPAJ45kQwDkie4WrdCL6/SYIoB+fjnrVW3hU4Wj3BTeVAX6qCTuHg55vXTUdd8mFrpke5cxgJHkLT5JCeLdSaDhO1vjuyX3///sh/+vr9t/abk5mxLoEPGIV1XXmmfosOIDeDUezJ80QgjLkUHtKwZCO1LmLtuxGGil4Oe9suXzttNp07oesIe/tIXZTRLArVtGQiGwgN9IkDrRla6wY/oX+lhQBV8oFr4deriz3rkaqhfdOS/sE+l4kIQcFocK0ILnbvzAm38Ni+FQPqBuoGmNCK3SUZF9PaPgK8bRw/hfSTXXwdowBoCEs67ftQzHHUPWXZgC88IJMEr3kvr7yEvSp9kBF5CtXVdfz/z129smsFT5z+u8lFY/VBC18sPjYnSH2I8B0AiYEL0dt5SbgkqMM2rMNRFCagi2DPmxhbKcVtezdjJk3fYymrdygK1dOyw13oymVMdsaUp2TtGAgXspGMAT2CjjHQTTiu6qSzgQbrDEWBE7SuemvRrNurLcRpesIkai97WXLpoRw6v+9fMAD2Mei5VDutd2YqdpmJLPB2+8w+4ri2XVd0wTM1nvQF2EIA/fjsxaB+nBe23aEfxH2UM+5nKVgtP+BxP1QE97PvTTxOSAYk3ncYDOQQCsMwW0GPS/RNrylbxWXz2ckevsQ99uVyQ/RPgFR/R44t+ZkEWNpEP3eB4UjaAFLyUFPink0fU8KYDAAN+cQbSst9hTQc9hiqzK0iaWE/S1rSr6kiEocjPyy1nAJ0Wdk8KH8YJq/+Fb8RrTGWoVapWr1WFb+gUARPRQy2O03uua+EJ1zsBxLFrQQC4PCM+XfAteQKomc/qc2lVRwA8dXS6K3LOqCJ1pu2xkUy5nEg38VU31siwXdqRQ0dtpbbi9Sfd3lhgtknH3L7t85l4pcucWW9n5VXIVx3MskgdMIszwcivclO2/K/yul6RyvS6MRWCz/4yfWWoulEHhHbunWnYSXZfoYORdKgLeo0F/0wZdHkiMcnMy+76GsDse77hzQ+WBT7LpfChhMCYSkhAE8+qnegIxPZ8bitzBJN0+QjIzkd5S875GsYxb3ymkuUEZIsFphbIk9M5BcMwUrjQQChOiMy8iYGUvzwZhFkv10oVCZaUvHtbYh5Vhm8ET8TM/m1fEcO/rgETiAiRUsbrQGjwMBBojkExkh9VESCXtzfPta1wZM9ixX3b2dkWGLtoreaWncp/mxWTJ20mg9Lv7slDzFink7fOeN/m+Us2iPVsu/OktYdnrVQt2uryLaJL0Xb3LyueljCJaRNly9iqpUYrNtG7jZ2OAGRTDDSUAmMKQFrMI884rUYet1SmkZXKZVtaOmSpwproVRbQyYmJMFDQH2VHfZlaELwvPFDALPUjsniAfQILbEwsmmHcApDFqE6cYQYAWRnJZrPlYJb9A+nd8T1z1Q4Yw2r1kNpprHZjwv2qpWMSvPKSnTh1m7z2mlVq7Pkbt1c+cI8Edmwf/+THbXvnsg+vlAoVu3ox4b2kb3nb62yv0bR0HuCneqoc9HTRS8kcbHgKZYigJWZZe/axns17a3b3veu2u92w//4bv2Wnn3rBVoopO7Yes93Oafv4Zz8pJr/LfvmX/7l9y3u/zTa2rwhQX7KlKg5AUPAY48hrRshQatfanhzVBsEzhYfgG3GhgBJTQdi9gHJSJlY0o2xJz/kMPpW2HU1UTl0LhgoltpiPtEhzPGa158gXf7H1HKBk5FMamJcrcCJlm8CLzhUEtlL2dW+4MTD7C4ue2etBQnTtfwXxBAGRhz9EWf/tYiYWcmAne0OvIpvN+3ZGXHdNAV1UzwWQJcZkPDiKHNcAAMA1yifK15XngaM/gl1j/iigEkUAkEX2lR/GxoGRHhGFPaKneC8KZCGW9najLt6zpqiH3aACSjjGfL6sjjIC9LJjNNn/OgaQVf7yKD3vqJfYe2WVrgMt3UK1SURUL0X0ktIPPbRKSzHoT5VBZaQXGUcAWaMXFqfGv8anoys0yT0h1Ez5i5g+X06JxOZJ+cRpa+9O7af+xs/aW9/4Hrt0dtc2LzWtlF9R0cQPMgqMqFTKVfEyvSMBnGDQcG6TCYFZ6VqMEulDL/icAJil1wnAxBQjDPI9995tH/rQH9nGxlU7tHrIKwxdqSw0JaAzvRNAIQAdfkOMUBdqw7NBfhb1W9zypEhHFygLQNYfE1/RwymyBTBLFJFp5wWLePA2X6TFEWDBfR5B5TlbkqYngEELfBLKrjz0pKdBeQVUWaRIwwJcw5e8VAZ4l9+AWcllcISUhgrHkTKF5PQf7Qc51cbk/z
M/fmNg9vHTHxPNoF3o8fIOGRUMvUCIA6ZVB/ZMdadMNPbN4FWfNM+JqaCbO0j6Q9f4AmPFmDyspBzfYjlrRXY4EPBgXiGjet0OekjV03PoLTolsKNcgw8YkmWrr7yAgSosOuUdfEG/+XhZOilu3fEl6b24DXozW5btufnm2/zZM2fPypY1ZVOqdt/dx62YrVohW3Iwt723Lf3Wt5Ujy7IFJXcC0LHIBYH0XUbIR4VBv3jPrP4gM8Pq+H9Qn8JOYm3RLWzVFYBaXDY+pvpIj/ZVmV5M/D1VeVmwbNbuDWy/3RUQp3dSdS3GLJvPyUYVLZfPOihlQS+jc4UMiyUnOgekCkdI38JTSDNKzUc6EXjxCpiFTiimK2QBtETRGr1IXYIdlr5XHRgh8VFlOROJOO1EfVNKJkyPQNez1yzbrOHI+ei1Xq/IEanXl2xJYJapJNjslI/wISzwMiRRBoqkidxFc5L1z0OgL0Cbub2y/9BX9I50AQG6B0dRLeZCGX4j36FXFzuFTZ7JaSoLzK7Zcm3ZnW7S7wnf7QrM7rUAs/veacP+sPT+MjKQz7FFV0E8K/uqiK2lF5WyU3x0ltcHO6mollQbBzDLbg4BzBLpmZUjJgwCFmHEFaUcwCxtSdrSoXAP1VF46Z3vDSc3EOIFNSIbArPwi5V/0MInykshx8T4vlWHlIMDVlUeZI57xMcSEvOCCkhh6cpndRob+TPXkB4svQNDI2gAMTUEPVxeUCkCelbA9ZVS0eqVimLRano3rxtZpZkWMdKTrmVtoHMxyVge3YhvEwPEACAoJjyItHffw9h06fu8IuWD8ODB9qT4h7ENSxT2LJ4bWmcAmD1nw9bQarGbLdY7af3mUMpi04fddwREdwRyB7Zts1TT2rMzArLbNlfjVSpHVMei9Xsta7Y25Vl0LTmTMTF50Ml1KyYO+95+jd1d296+aI3GFRu02wKIE8sk+qJXx6qq500n77NK4V7bvLIiOh6ysgAt3uRIdXYLK2XI3LeYlBP0g6kYXuQYnTN04R+XyBfcAcCo0aselAwChcwygZt5o2HFPb2Gg0HYhxUAR+91r9OV8Vuyk+unrLuTsa2LSSnP0KP6uS/u28a5iV250LQXTm/Y617/Gjt507325LOP26tfd7/1+l29d9gef6hhf/r7V+3Y2q3WbFxSO2flnBwO37Sul1Wukper1bls5Xxd+Ryz3/nNh239yGEryJikUlMB1Resb3v2+tfdZ1/3xrvtiWcet7te9mb7N7/+7+z2u+60KzLSHSm2jAx9pzdUG4ylhIryHEEcYbiHXRJ6iyGMEUKsOvocLEVowFxhvjdNbwOjDxkBIBQtq8ABtb7DRjSnWOAUR4DVxWPikDhQlEepSG8aX2EhLZ9PrvyiCO8hR8zdxusl9ifiQynw/6sB3bbQb181oE8iBciD/htZBhioMCit6PgXBVeIKBFXJOE5lIqD2q8IShXzhCZzhUb4y9I+GPy9KKB7vZwo8GAQUYe+El5H/5Kc2jYs4LleD4xULKnrgBKxwBzwrCMLTr08pKt79HxF1aF4HnWP9gmRfP0JryvBdQtsJTVCr5Q714gVaS/qG4CgHBt6ZGVoWYSDUmeudL89sfa22bd+0w/Zj37/T9v25ZbtXGlIB2RsLIM3df4TT7LXonTJsD+wTqsjvUivDauzq8os5jJ7kKacT5QfssSm9OSFod3e3vWtCX/wh39U14QH+EKQau1lBMQRdR50Zkjvzx1Fp+s04QrPciNEWCN0IoSh9MArPLiIIorzil6K0owa2h9dRBwLp6N+eBqLwDtOV8qtE+AHkfZHtsMCLxII9fKEFKkbRpItCIk+T1UiOAPMCgyxzINt3ACzDoKVtgMIL3OUL417Y6HVaVpTTivrDcI2gKw72PNPhrfaDWs3963balq/07aRdMisL4Mu/cEKv5h0QCy2r6w7asOxOzF8On42SYkvpHf7Mdvcf9q6owsWy+xbsTa26mrcsiUZ/8mO7XcuiSfEmHHZgVxJNq/o9oDRH+azMr0kKYeO6QqHDx+SrakrfVb64/SHEaluZyQHatlWV9ZFj4Q1d7t+jVGETnNipUzFbjv1ErvnzpfaXXfcZ6eO3SLaxWx7c0d1Eu/i8EN/haj9oGEAWEE+uY7tYQoXwkavL/NNceDYFePLnlVTAAqxd0PiUGUU2G52hra137FLW3t2eWPPdnZb1mlL7yprRtXQ6ywQcodOTiZTBjICo4xUMg95WSBy7dBh730+tHzIgSUjeQAPein7sgcdtQ9bOLKYciJdDo/43GMJv88pRQnQA6vyhS3+VE+Vlx09iMzLZUS5WimJpkXZ4pyoMZMtByuU1A5Vq1bLDmTZdaEqrFMoMu2gYGXhgGKJLbfIK9Jryk+B3/RSgmPATtDrz9GXjpJrbcF1V07+Pu+BEQ5+ap13CZ6ucAXPe71cwOlhlswpKfbeZQS3JeeJOcuAVnR81MZRuNb25L3Inz8wl9tB2dNwZKpDwGsEepaZ4kGZyJ/5sszxp/1pW58eoiIFZzmU+UaDdLk8SL1IFwVzsSbKeCINQY/oWN7RZCwFKsWA0ggEE1EF4OIzgUdW7EtzJITEWekKroCxMqp0UoXiO9WR4oAYVBpvlf1G85m0wI4AbKUqL2HJ6tWaVQVIi6psQWkVpOxzs4GV4hPL2VDF60kxicDyIiAYSo806a0ExAJqMToQHONEI9MLOZoOLVNqWrbW8qGArc1tAdYrPjdnNX2P0pTH2t+RAWgqyggNNmyafcbixcvWE4huDk9L5odiwDV5q2tuLBrNTQGmhhSHWT5VtHxyyQqxQyr3iiWnKYFuAVdBs2x+JvCmWs86Uha7kiOB9HnJRu01u3r2pL3w+C3W3JITkay7UprNB6E3WwLPwoJ4KmydQf2I0XlF4L8sDwtG9Q8nLOqPYKDY6D2CeRjG8OFuMTnMA3247vRTOzMp+9DKsuXocZpkbf9qwj7xJ+fsP/zvf2j//b+dtdMvjGWUk9ben9szj5+3P/voRbv/pa+xbCFu//Lf/KIdWi/Ig37BMtNV++3/8iXb2ZB3rzrbSGUYrvieu+kMQsLQi4xLvC2jbfbIJyfW2lCb19nkW953a8Ne/fp7xB1De8Urb7Wnn3vICrWSveu9P2K7rUv2yBOftYtXzvhGy3yqEyPidRGPJAAsam/qGe2dSASUst8f+z8i7xg05weBCXpVeN+nw+RQtHCl+F0ggx5t391A9EExRfIEP+EEAHTZfuvg3n7eo6Zy0NuEw8x5WL2rI5nTKyzPOEZb/M8IPg8vBImEx6D4OA+KyOfBK15DbAcCl9A1RAy8Rw9/sTKJ0iWQuyugxTkiH5UoOgY1gPL88yEqZxScd3UEkFE4jteUp9dFJ9RF0cGrfnNkZNnzXsQDSV4LXOJViZgbJlfF/iCJBkNB9E5dGRfXJ/6w4iJBLx/tqsgnRNGV3qOL4y8A1WuP7BX3v8n+zk++3x57+LS1dsbSdUvKV8+I97IZCs28soGMmpwz8ZNMhcrBCFldx6zu82Umphx4lpTS6w6fI+t8G
IHpNRXpzma7b0899Zy9+z3fbqtHjks/MZR7vZxRjOoW0frgMdA1OD4R70RBr1yLlIdjaNAFLa/RkKj3+dP9KA2ORGSJCE0pRwi64EY4HKMyAlB5JHrMn+eaGpY8RbwAbnXd58MqhiOXxGeyW04DmmbRW0uanpcYiHpgyDHg3pY3GDqNnrX2mtJB+7a3L1uyt227+1uKAptc221Yc79lvaZ0SWtkw66AouzKVM4v+6CyvRajRULdqvHcARhzFzHwWTqLfGsNPSddmMzOrbKastKywEZi3xr9C9bmK5MqR6FYtKWlZYElvkqILcAJEkiqFQWsuFZSmsSq2xY+fDMcMBoVs6PikdXVNS/r1atbVipWbP3YCSuVq77lWy5btptO3GyvfuABe+c3vtXuu/t2S0Oj/thiI9l+8TlTJtzlQHkg1xImIu0U2R2iO6OqKQuPZnTOCLzH2FvdeQwGUSvoHRwkqUmBGml1tWNfmKPVn9pecyja6rwt3T1kBDiDYnDZYHEkOxy4L8JUIp1jF1l0jm2ko6da5eMKdUWB+/Kq7GNY4zBSO7CVY49pd5Il0kKXX5OFA5EQ+HLqn6RXUV0n4Ohmsxnll3csUy6UnQczkkv/HD+9xwKvqZzAby5muZKAZHpmWXZYKqRkK5M+Z9l3YKE3nToseJF8nUaLurkWxCbpmorijoLLostVeA7bw3P89k6+LFMuwh70Pj9cbZWMRzYt5KdW4kfQTUrMtwGUTuPrgsFZBk8BpMPCO96gNJGMEq6VVcegawCv8LCi003X9YvHKYtjkRRaArmUbMguM7+bOb04Ev55X8kCTtDXEuIAG4z3RCiNeSzB4LMnGCvoVSExmEdVjK2crq3+M4FHGGMxJhnXMWyvoER1NbAAiiQwNxHlQ6WD5yChK8lrkRAWdV7yhUzyLgXEhH88qv0l4FOdT0VS5jZKEaicDtAIInLkuZAuIVLcPENdVBjLVcaWLbOp8dD2d5rWbe65QJYSxy2Romet697BUI9bqmn55auWqbKidCSPeMOZtsjwYLxs/d5AnktD7/RVD0B5xXKJkhqgaImphBRBV8PlxahLUizZlBhi1rWMmBjhau5P7Euf27anH0nZqHmLNbYnIl+YyO6T0HHOREc+ZcvCMJgSZRR5aQDXqFc2XGN6hYqtI7+J0GIuzkaRMCQQGJtea/ZXXSgcaXmOXGe4c9AZ27NPbNhTX7xiW1dHduFCw4b9oj32yGNWkqf70Oc+Y8++8Kxt7V6xtZN522o8YW96x712/ITZ4eV1S8eO2G//f3/Xbr3tlLUaU/vUR5+wRx561p57/ilrygDQHnwa+cLZDd27aC+/5+12ZL1qwpN25fIZKZUde9e73ioB69mjj33Obrr9Vjt7uWEXLz8jXmz6IsTdvS0JDHOm0qIw22Dt6ighVz1IH74F0BJRdvSaMpTigoHXK/5zZSGllEzhCInXsgwvIYjcZtiQaStz34mDz/C5kIqH8DJJE+BB7HaZd8yHGdgrOLzjgFFHlB4CzTsFtUdObZCSQCfQgjcYKA/R/UxF1FSIIVBnggNUBa8Wv/Usw1K+UMHT4KrkQQ8g49Q/0CvIsR4Lx2tJ6yU/R9lcD5guoh71ONaLvphLL/KuqufRkYZHpaRyOC/qgSjfKJAnYVE8v+dKUb9dhqGj0gt4hXxET0X/Qg9/kpVEVtpHClz22RgwomfWnUAdUbRerkWWET25Tj7oNOZZo0BZeMIQVwSq6L2gV4bhaxS57/mqwsQFONCV0BRgQo8qBZwOlEZXPKDCDHtq+3HSvve9f80uXxC42epZrcSQXtafBwAk2J5I/EwP7USOr1rL80Xpp5LsxBGMKLSLlP9ooHeVFwttWbRC/txrNtriU74SNpBT2Lfv+d4fEtBlaBEeZB5i6N2N2gDeYMhRlVX69LaQF20YEQqzIyJCK546QD8Cv139Ki1sQojKi1dUPiL5uNzoJWIUeJcy8TwzA+iJCT14M181zQbqnIey6lmlqeYOhk6862UkXzZxpYAwWWA63cOQhvewUzzKNZ/bqCOiR57M00NX+BeiVEPwrwqr/24sTBoTa2627NKFi3b2/At2Xk72pc2LdnXzil25etU2d3al/3o26qo8vbTNOmkbtgUQxB/YmEFLBRzKBsrJZvZVUhXMiekz8PNsZOXUkhlf8BpnBLJkr/pbtn5b1d7wTS+3O19x3KrL0mfZiRUqOenPowKuK15nehRrSxVL5bO2eui4nTp5u62tnrR6cd1uOXHEXvXyW+2V97zGXn7f6+wlt90je1vyNmIUqyA7/MY3vcHe+o432cmbbrLLVzeNL2bxMYd77zlpJ9fKlhzJyd/t2c7ZtsVUt7n0HnwcT05lIwUspe+6CJd40nvzAK/oC9oH2UrnZaRyNokz/YGpELhAamv0EMKohgC3SWt57+BAQt2XXM6M/ePZbi5l2zs927ggGyBHDxkd0SkTk9zJ/jF9ICN9DdiioyIn/V0C8FdqtlJfE9ZYla2rSRaPSd/nlX7btttyRJgaNhTvUQ6fe6S6SQZcb6pAgMS58tATQjiigS6MxG8TZFN2I1NIyLHg62ACtHIKquUl5V2yfK4k+tUsV8jKXqm8ckzmcTq4ZHMVMxwLArUCuLmiMIzAzly0ZHSVKYPUA7Dt+k46ZzzqikYs3ssIe/BVOcmWFB2Rr27SCcmors8VVtkk3TqmlbdAfaUsp7cinskKa4R2y6nMSTnV6DN6R9F1Y2EAACsfZugyR9m/+gWIFr4CgfFJZjp9FGlXdKPvAKF7AW+xx63kSvwwZpoBQFUs4VP6RN8pOFFlQ+9QQtZkzdX26DJGOZkWNJIDE6PzS3XsDZDzGw/x4tGjlqkcVkZFGeepgBo9lIo9eaCqXLfVlVd0RQSYWiFb9x6u4WhDhOqJtiUJEtMOwhZWEJ9P3nbE+C0RZK8npSnF1Wt3rNlq6F7fe1AwIuVcxZYKdavlBGqZT6j32YOsp0bpq5IDXYhJMBkOFpa0qph3SR5dmsnW3jPWFWEHIkDChYEtn7LygBzMCIRipZksn1sRM9Uwf0O7fOVxu7L5sC3n77NTte+1fpNVj2wPVhEwlzAULlipNpBBXbJuWw0RE5BSHbLxVasUT4rZe3a18zHbHX/REsmRQMqtVpvcbrX0bWKCqjznjvWnbTt6dNluPXrCyipbXai8kJFwKI32pGh7OAjzmwTQj6iMl22+e5cUF3uiJq1WucOHpHNVk1e3LCYDEUgRiAEA1IVi2oqlhASFLbPk4WXEsuzbi1KQznBFIOaeJ/Ly5OnbLlksc9zi+SMWz9VE+KxNAW6iJ1ucsVCqUp1Za69uv/LPP2GXLzHP95I1Np6zm6uHbOOxx+3clUu2eSluK/nb7Qsf+VPbOvekffEzj9urXvdGyx4e2tJa3iaiU2J63p74WNGeemhmn/3Mh2zrqRXrPHW/Pf+poRySroRKdB3dbX/2u/vW3d+0i5t/aunS2C53z1qyn7fLH+vbe974Muuk/9i6xZGtn/h6eeKnRXNAvRwEGcVyXspwqHZX2zNtgGG2
+LQk+kl5TvWceAgViqBkJJRlNv0W+YR95XSoHVA+GFg9NxLD9piPKXrNFFmtzk4ItVrN6rUlq8jTZuiKbXVYFNcX3/UZnpLwd5lyIF53wR2xUEL1S8ykzHK2LINSr9d9PjOOQlKOTiYtYyD+ZT+//xlB1Qm2fRECiDh4HuL1i9IwXxnc8HP01ALI9PAihp4kYVMd8f4ZHjoIYAiAkCiS7iKLvzTgh+JcApyIY1lsf59GTuHKiJ2Loq/Uj9SCWl1HlSOndmY1ND2DScnOuKd3O2ZLpYpAp56RAkrHMr45+6Az9J5UdiLxRWZCwxg6hvsZcSmJf/jCXq8rJS0eQWEDINGNGBx6EeiZzchoxMRLzb2uSX3aO9/xLVYWgN3fbkl/Dq3R6DgflcsSbNUDpyhyMjgHYAZQGegWaBWmGZAfv31xp4qIY8roAAGQC/HzLHpUuXf2mvYtf+W9Vqsf8uFTyotzC7jEsSeSFmmQ7pfHAGyDsQnl0WWP5HuNdXTklB7S0Jb6bwFiA3gEVIYOhVAPYniX4O8v0vrKwLs4FBwCz0Y3Qn5YRuY0kmDYPQfmUF46UlfyjDpcrn3OdBG559/q13MAWl+J7VEV/IsK9FVCe7th7UbTpxPsNfdsc3fDrm5csYtXLkp/XrJOu+sOxv4uC6lk93Y7er6va31r7LfFj2oHjzL4AqsztT+dNIxwpiSXEz76omqxRSPgLCa7NE/2bOVIwV7x6jvtTd9wv937ctmWalz2oa16Mqw7tXana/uNli0trQnUrtmhQ4dtfX3NTp44Ybfffspedv899vpXv83uueultlRb1v1Ddtddd9mdd94tHskIhG87CFxeOSK9WrLdnaZtbgrUtvd9286e6rR5btOunD5n5557wfY3tgWrkpZLydGTfqSdGeliCysYJtI3Ir3KqfZR+8XTWV0TUBf69I/a6L7YzHyRGEeApJoDfYJV8xFieBeQM5S+7k1E957oKfsv/oafgzNEZwLZ4aDBs3TwhE6zXCYve8n2Z8xbrVo+LVlROWj2KQiCnlkBd74MxhQ8hsXFOaoPMfA0dpWdJKZTdsEJDjyyG1fbME2NL4Mx0sz6lVKhLJzE9I8wqsr+t75+KK36A1bZnlT6i8iopR4TzpcTK9Jgq/yztMy9VV349D2yiO2AH9BRglDu8CZjWf8iV3O/4zzPgk9Gavh8cTYh4CqslkvVZANXbKWybkVAhfDIUrXu5UK3QLtkOuzQMBUdwmeBBWpl6/B3Ac3+8SbJF0B1Mhngc7iuDfIJKKXn9rru8l7ZiH4wMtf0lMfwSHjPeURyidw6mNbz8D27WEivIuYjusG/hhA/evQWW1pdF5CTslcL07DNdssaXQHaYU+/O6qEasbQqQoAo0AMDDVd+czNQDl6TwYInQndKhTMR2xJmbNhfqfL3E0JrhgfLzIlxmXeSUQEwnUldj2StvdaQkQUpIDrVIQdCVDQ04Ex8B6zlpRGm7JKIbiHP5ZQyijVWmKsVdvaxnt+worxmwRm71a6E1uty4PKxsTsbHHFir1D8pqK8qLE1MktGyeuWDZ1VMphWfUfCAhvCPjl5CseFZA9JAYYi4kxUGooWd6UPI9SIWlLZeYBl8PGxAKR8bhAZPy8jPO+dbZvsvZuyfcSTKamdvHyZfv8x3dk7Gpq7D0xVNHpli9NxKB6TwDK6SCBiHpXoQcLJYgSt0A7/bE/35CNnPsMyaBgJNkKvM870NInvEdtJ6+NYZjf/M3ftI9/4iP2xYc/a51+2724F84+6/u4nnlhbo89el6AYWaXt79kn/rcb9nd99xsO1t927wct5fe/U0+fzURO2zFwnH7kR98n/3X//iIPf3CGcuKDiull9rv/ebDlk/eYp//xAv22MPPiu5LdvOp43Z0LW0NGYfPP/R5CbQUc3zbPvPQ03bbHa+yTCkjgQvzhVE+lB2+y6QZOoEnJPmI0mJ4H8GgnoE/6c2mjkXvdSgKRDCEliuUPA34iDmwwy4f7mCRIVMPwmcAw3zksvgAvs66ECP4fBoYJYPhAAwCqhDYtEBPMi5nzNhKiQn6fLyCvW/rVinTG4BCkmKQDJHGjQak4rpkHDynvalrOEYB+eChMKdT9FgYBVc+iqHXOMjUtee/IgAAXQgjrfNl4cvzcyQDeylGw/CuoHT0/BW5HYWDwMaLvgjoTzdkJLe4F92HblE5ec9X3gPEdb1UTUlH8Y1vOSoFPrssZ1igNSlgWkgWLRvL2bQ7t5KMbVZAtLHRtpyucb29rfYexK2ckYM3SoiXh7a90ZGx7tvlc03pFRzwiSXlqMcFbEuFqhsLvoXOF5ZKcnTY93LMPsZtAWLJWnKesWk/ZqtLK/Yd3/a9ro/Y15nRGLajGQ+kQ2XZWfDCVwfjMur0bDD3D1kM/I0+DL1CPhVIihRdyZHhaFYZsziDee5cp6ekJJ7e2doVb42tJUM/Udu999u/0/h+O6BXbK20MPbBgHAOD0e8AH2JYe5wRPMgUweBbBSdP7w9QltGAJZzDKGDTQe24T6B9zCA/qr+i3gwCtFzBExg+I2ZFE85UyuNa2kCTHVUimEE5DpwHgNWde5gVpiQnl//nDnRz70JbMxvpUBeetPByg2H2MQBCTvIAF6ycjLYjQdxY0Quja1SXrT5QIB11Gdh8UBOTc8ae0yz6nrsylYNewJkjOyosPRSsq0lag2QlEywDsXFSgB3oOPMVparVl+b2Ylbc7Z6NGmpfE+mR2nII5sLHFZZUGx18Yb0nnT84bWKnbxpxdbWV6wkIMd15pHSGIC8Uydusvvvud8qxYpdOn/JGrt8TGEuR2zF523HpzmBJaZHJFUXhqDpXGrZxsXT9uSXHrfTT16w/v5YMibQJFsVkwDS2yaG9TZHF9PWLPaCLRg2Zo63L9hVeYlux8Rk/qXFDL2FIjEjK1IG9Ery8QO6ojgfqZ57chC3NxrWEogbD3T92lxX+Fvvoud0BF/gAOakf6M1JowEh3mkLLBMGwvpQqcdC3fliKrNGHFlwflMjeh8PZPDOgVI5nx7MwddrBMSEKaHk/3xAbNFphoUC1Yr1yST0jmysyyWolPFdwCg7pJ9p0FCNFDbJjNzn2pQzLPWSPY9Aw0Az6I5dVd9B5JrOgzj8YrKW1B6JUExjhXlddiWa2uOQRgNGnRnVkocVjsUbdrOWWZWt3JyTW2zZivFU3bq8D0OcpOxgrALDkRCGIA51ziDbdm/kfIRjhNoZh438/dN4NJH9ebSmfG2y5lPSXB603aB5sBVGMtH0IRZwlSDIK8uNnrebRZ6Jqm81TYIDWnzbQIwm8u2ZIiFYXxVjXm7X0tI/MOf/+UHWakG8OGzp81O21e18enagZB4N94SyGhaViAtJQGYyMvC6NXyJ2ypeKvPT0HhqNlVMakftg8RMfhyRK87ce+BTYpJmzkZWXkPrLhkE/mMBJ/u9Ei5sWIwVFq/F9fYXYGJyP59YAk1C7p6SouV9XRjC6KJ+GMZBYaWme6AShlbPDWyck3gZ31HqaTsmdMftc2rL9ip2nvsePXVMmpDWy4zQX4
mhhQIEciZ0+OTEOCcnbNG/7y1ew2rru5bXZ5uv5O3ixe2pXz2pQgEYuQZ5ebLtpq+zed9jnsjgdmx1UoJW10WiKuv+Kr7HIyWkJDnNwSeDtv2mTdbc0duWPqqz51r8fGEnbTdff+yFZcbAqF4rQyjC5iP6G4fC5jxzW4ZYpSnPKmw44RIpAiroEoZkmN4kcn83a48TQFaRm+Y5A7w99X8YjRW6fNWgVWzEnB6IVn52m4whythu3tXbHv7kmzOVMa4Yc9dfMpe/or7bePKll26sCdv/xZbPbJuTz33uL38gbslMDkbTs+qfQrWF0C/tP+03Xzbffapz/4PGYxLVs++3D79mc/Z+XNX7eMfeswy0xWB2UP25rfe7726D33qE/b8o8/bPfeUrDd7zp47v2vf/O0/bKV6RcK8WDioCIjl6yNM4geoRooSIvgXvVx4JFDiY3aF4Bn2SOTThxnxrW/XonfwqtmOy4ctcdIUJj6ECR+wGLIoZcfm4CxKmFq7KaAgNxWlwvZb0XY3KGzoOZXQu0EVWSeylhSB7V3c4OroyhFFPB7I++/aO9/9Hs/zxcIv/rndDEJ7XwvRj8UxqAzpiqQUjAwiQ28YBVbhA2x9oZTP4w008xXg/FNViHEWzqgODPWwItwXgSo91IuHRT7hqp+EqLTpjPGhPskPR37zTW56HJFjV1IoNppHr6DfJOr+PkeK5ECYE92DdnozgFvoqWbiA0XyT3VPRz02lCEqlJmLFuoEEKgv1XwxZ08O9Lg3kbyprYYq4iwh0CnnVUBzf6tpNYHTzdMdOVJ9y4hXTh49brecvNXuvPNO79Ha3Nj2DeVpv45AB732ODTsx4mh7EhH8iXCjIwVNIuzWFYGh3mSr33Nm+ytX//N9uxTF2TAsnbk8GHXXRgypijBh/QU+QpjOcPwXrFSkYyqchCEDEQgNiyHd5nXjp7LyegwDWnIFBopekYQ+BAJI1QYFHiWhRvQ+t5777Lf/a0P6D0+RZ70nhaOtAf50aPljjBtsohR8DY4cPQGU+CXX+I9P+gIcFCafHHNp4XQ3ALB147+vCJhcQzpKup5TkP7w28c4QNsSGh/bAq0gA+8iMgU6gua6y47UOB40U4+lU33kUP8TdiWd3jtWtTj3vMHr2JIVW4vuzL7+z9+Y/vMfv7h3xffZaxYy1qxmpWTnJeer/hwdk0Oc0W8lUtK36jtUzi3+sNgD+QIY7tm4qOx2hqHQrkHslBxhIMrGYBfMOjM7Y8x9C0kOJVtY/5tMt2zSiXv/MBXsPJy2BnuZNu/48dO2aHlk+6Ml0s5K1ey4pOCzmuiV1YgmjbiG/t8mZGtqdJy2qW/BagvX73i+dHBxNSFvM83LQlk0kGi+5e2pYMr4lVsyNgXvuE8TccxATkWQAkbDDpqImwRoAb5l+4Rbb0d9Jt2Ho97ej/oUL4MhR6aTsRDU/S5QJ6oAKDxubPSS7wMeeA3eJaFQnxQJF/KiPYCppJJVsOju+mZZNFvCOIJpyl6BZ4NXDCRQhhNOtLnXckfcik5Ftj09ROysxmcCXSR+JByu77xVoLPKI5kyh1qgVlPU3Il+We6Eusj/LOvusyuBP65VtnsPI6q5JgpklhZZBBnlTIhuzi1zv/6S8sRD/vwAgqVlMB0NllVW8nxnuSly/J29eKObV7ZtthUtjGR03XWKQlz1Natszm386c3beN8w8Zds25z4mtZ5mqnw/VjviPToZV1lQvM5xSxdnfP9hsboq1wmvKmzhXxzOHVdfHAkuwoc33p6QZoi68TYdFhgig60T44ZNRpwvQp/1ob61VoQGoV8Iq7jboU5GFsfBCDXnvsNV+aQy+Kgqo7c6RZH9O0t7/+B/3dGwnxXLEuplix2vK61VePW7W+aomcGFxM0aNgMQkQjUzvgQAon1bkiMD452fLFaH7ghqPIfuMKpMWkIpJ4c+s3R7750f5KgrTAdh/LwKzaYyuqhe2JmEORhBeByRc1zlKnH1mfUV+X6BMYGDCZ0SHbZsO6Bnp+LSI0EsrsCylhheWL4mJinErryRtIupd3v2U7WydtqLdYccqr7TV8hGrl9hBoWz18qqUgxqoKCL67gVXrDXYkudSsdzk9Zbuvd2VT3P8BzZOfUJ171lqtGrpcUlgWILeTVivoUYRYM+l53aonheQLYouBf+UoPckJVBgSzKot7hxLpbMvTiGd4rZmupRtIc+vm9DAeZYrC+vMmHtfXoLGyIFc5ElMqJXWieu+EUhIoFfbLNFzwxGE4WONcEDpgfIQQnCovf0v9/zL4yIgcLk8JS9+13faHffe4u8oQ3FPdvau2zb+1fs3KUnrdG7YtLL9p8/8N9FpzV5xHP7+Ec/a+9+5zvt8KGqQEXPstWmFPWmPX/uS+KVpFWqx22e37JPPv5B5Zmx++56nV16oW1H6jfZ8cNrqlLXTp1atycee9I+/dGPSxEKrC817KMP/a695K43WrF8VPzXF3CturJm+B/gDc9xZGI7CgojPRBP+KKtxdYjTh/QG9wlxYpywTQmpLCSbHkkJcMzYiIZiJ6vKGeOjxtQDLQkHCDB1iFEPEaMPzzIFBamCqCIEdy++K4l/mvx8QlFtrAbCLQORBMf1eDLYzJiXfFpq930T93eaKCIxK8MXPoqlz1E7wQaELlKL/L1a4SD4OV6wIwQ//JwbY/RoAm9MBI7BwrX0ufy4pzwZWDpQNbeI6sj3OrK/EAEsDB0H7bi0u+gFrxOxEIxIX5n0Z8eGbM9XtGuntuzzs7Ivvdbf8D+xft/xX7hZ/+x/Zt//mv2zx783+xo/bhdeOKqjfantnexbd/89rfZ/+dX/4196k8eso/+0Sftj//gY/bf/8tv2Qf+4wft4x/+rP3Lf/Kv7P47X+EgdTIQuBjNfOgecELZ/KgyZeLSiTImU7YVknH5ute81fZ3Q4+C9yYJtPJFpJTAR18KnN4enLPREN4VD+q5qG0Y9vORAgEZwE5VIJeeIAAfhpNez363K32blrEp+XA379UF0ulRqQpQMTLF1/COHTvh1wjR6BVpIiOADAyLKyPF0JsVYmi9KP75QNugR7yN1HJBXV/nG5LwHn6F0FYH+Y6IbuelEOgp9ST0CElyP0wpoL5KS0jT86CovPAVIeThmXo6EatE56SPSfWoZz3q3AeTdS88w9M3FvjMbKWas9WVihyVuh1dW7X1tSM6Hrb1w2HVfFk6Ki8Hmt589CuODJ80Zl0CX+IiNvakD5qyX3wRUoabHsI++mgkQCigRPvMpGPo0SujA3HOAWvjptq4pbZqW3UpbjffumIvue+YrZ8U4ClOBECW/etUrK5fqlUEZmXnFFd1/cSJYwJrSbc9czmDu9t7ksG0veS2O+342ilr7XUFfqa+q8F0kBSf1e340TvtzjteZavLNwkMT23UGvgXDU+uHbVV8efupcv2/CNPWXujYct63j/eA/iRnvUGV1tieeBF1nGk4jkfJk8lBMyEGfRfmHsK6Cul5STkFXNyGJJMsXWHPDjjRPjCZIN61lSEnnx2dyLbRy89TEkPNxEOgMt9lE
ECxC4KPjopGvom/bIH2Al6BfvS0b4ATLLDjjhjHA1PTw5mnM/684XOjsrf9vUtJM+nzwH1YI74nLZRmwsjlQoqu+rC54f9Iz2wpkpERxG22NciyZ7QicdXUFmvQadWnvor4uwy6shaF+bespVfbFqy1m7MGjuyZ21hrXHdioljVkweFaauW3c3bbuXp7ZxbmDPPbpppx/ftRce27Znv7RhT3/hsj3zxSt29sk9u/JCz4qZI1bJr4unpDPmKekTefvCDORNu/GpZb5EyI4LuUJR5WNfebUV2E+0c723CNAZeeWrsRAFCYx6ZenAiUZMokiIHOAY+E+OEyOJyB/z5dGVdBp1pdQBs3vNlr9zoyHxs3/vwQdZteueu4jPlkaDSd9GoGspunGmaZlCmHAek+JmWCyt42r1FjtUudOFA69Ezobui2HFIPRQsR3SRBeZEsB80KKUeVkNXS3wzWUQvUCGmEZVdmVHRMkCKAg+X0bvM19t0OsKCCDwfD4WUNGTMRDzMcdW+QTGjftclVq9Zpni3JIlgcva1PYFIJ458wHr7ozt5tL32Mnle22pGFM5Kj5EF4+VbJyQEZxdtN3JBWuMn1P921aYv8YOJ37Y2qdfYen5CeXdUN7n5MF1rKK6FOaHrRS/1Zo7EoDeUEbNBGSLtrZadSDLV0h8nl1cNJy3rL1zp+1euF9esVk6N5NQl0XLid5bkhLr2NPPnLGl1aIdvVm0lKLptzMygC0xGF/fCENa7NnHPm4AOQ/iDwSwLyDPgqShjHvooeA6BlHMJTr6Pod6xV/TOUOKFSHqWrWqI6s9D9lTTz1tn/vCQ1aWhz2XA7C7v2FLK2Ux8XHb3Lwo4W/r2lO2u3fGwd2jj563+17yFnmEh/V8255/aqx6xuym2++QN3e7NVobUsYTu++WB6zV31Jdu3Zk6ZCde/6sveLld9rK0aL99h99UM7Aht1ycsU29j9n569etHd+608JyMasO90VjapSPgwXhWkDCDrTBDCejBDwJRHAZVjsEow0SoDenWBAFV2gnFpuzJnsPhFP0jurAorPeY/VnzI8EjCsapgjBGCVl0rPqmjKAgE8SudVxJaeNQHovjz8wUC82W+JRxT7bfFqSwLZlGB25GSFaQrQjPBt3/6dfnyx8IvvX3wBjGp8tUDVons6UkWepQ58pcdSUsYYANodvkEHXWObABTEGm7ZcaDZho9rRAzG9Z5ZpYxG/mqB9MiD3gyahZkf3isr5Z1gqJrMQyF9f9CovJwubnEJmkY9szznQ8p0mgt5xMYqlyK9q4A6/453Ss+pTAwnV2T8+F55e39oS5Lp7/r277X3/8NfsvvueKktV1ZtZWnVXvnSB+zooaM+3PsjP/Bj9k9+4Z/aT/7YT9rL73tA9w9bvzW0C+euKL+E7/08EXC9+8577Fu/5VusvFS1j/zZR8RyU+9l6XSa3gvMtAGMaXKasNkorrSncmKP2Xu//XsETmhrgWCBbKZf0aNUyBdsd3dH9Z3b2tpa4BEBU/aXpnOAnjgWWlCnMGXLfNeSIfNzMYzKk/2L0YGs2IZH0YOlcsXY35gPoTBKRkCGP/7RP7ILFy5ajnmXTl/nWpeF4NAFukfB+drjoh0iA0RBCNdYgHTCX2h//Q4I1ube7npfPOE9oKFxeen66wpcutb+iyKER0M5yYEV7n4tikqA4jgrkp2YPwwDy6SqjOFcNXTeDc/wvJ8qAfJzzA2+F4/Cs4wiCCs6v/6DH7+xntknn/0fcrZzlq/QqxkXfVNWyuUFPKSf4iwAlm2js2Cqdld7+hQ0FXAkXch8XQAQouB7LwPOVF9pL//j2jBBe9NrlnQA7FuMKQKIC2n08Uy2RYBQAoeNYTGgO9fSU0xFiMv5Yk9bFu0ypA64gJ+lRV2Psn1mpVj2MrWbDC2zjVRFxDBf2IqTxQd/IB4fJQAMk88Oc4WbQ9mvTd0zW1uv2623nPLRhY0rG95RVRLoTpXptWOkQjpTOo8eRtqVhco4db4PqSIjjuhPX8ynBgKDoIMdQMlZA7iib+nUmqnh3OHQ77QaEmyQFd0rLLDOM22OtgAABt7DgYAf9Z+KSt1DGzA6NJ4MPA5GjFhiB+bGImH2gs/5QmvKSi8v7wi/zJnmp1SkGH16jiu50KuOXSAX31daeXBPT+lZ5any47QC1P0rY+I975zjHe4rTd+PVwVOWNaSM7VpLCddhhMgPSCdrFKp7TN26YV9e+rxS3b5fNOWC8dtdem4nI/bLJ9e8p7X7attu3huy1545pINBHqTc0YGigL5jAYxAilMJn3akpMdzwnUK59qqeb0ZtcBlP5IdpRdgLAbTMNYXl6x5fqqQG6Ybuc6Q/VE0UPfMEUEt1Z1EI0nUxbSYzM7ohk9s3LoqafTRaBVtMdBw38GmzBK0WERtfQoTcsCXL68ydoIFp3tylHfltP33e/8ScTuhkLiJ/7m33oQI4HCYN82visMA7CyD9TcmV2xTGkqT4mvLbGB80RET9lq6Sar5E6JZwRcJQAAWHGrlIYAnApLbytKSWQUk2es6qv95LWI8QC3VA7hZvgABepzZ8TsBFfUYjT2f+u0Bz6/iE/pMkxLD+1AAMG7qEUQ9gNlWxiGzdmAvFiVR1yUwsjt2yi2bVc2Ltnp0w9ZbnSrveTQe22lIo+1HPPPsSKsfGu7PT9tVwfP2Xb/gu0PzrjRKA6/3mrjH7Kt5ztWnH6jTXfebd3tE/JcZnZkxeQZVWy8V5f30JTyiFtFimNtZcVWlksyXHwSmFkkAIu5NfYzdvX0fbZxftk6o7MCHAOBh4q8yrktyRsdy3E4/cIezWGveN2qlMm+PDsUV0+KB2+XofJgzFVV0U6PosQFVvnGNEDWvyQkRoCWQWgZeh+rBIA7KQvxYUoSpds+RFpgKxEZwmK+bEeO1O1//I8/tk9/+pO+un9njz106SVo2k7rsj388MfsnjtutkPLq7YjD7xaDgb/9/7g9yiy8t2zz3zys7YnAHtl73Fr93ek+JoyxA1Lz+Thl7N25fJl67UE+jq79vZ3vM4u7jxvn/7iJ+x1r75Zldq0x5/+rL3yNd9st9/7dbbd2ZJCVHnjRdU5gFQHqGSG0VKER4goBngpKEw5ZRI0aBTmALLFEsYRgwdNwhDrREefF6V3WTyA0mGqCfwHbVGePuSlPMXNuu4WT9ek5ERI//KKFABgvS9gDJBotVve84rT1RWw7bM3rdrAtwVSmkmBcD5b+Z73fIvz+IuFCMw6TtAR+STS9B7gg8VpdI4fyLSCuBSytKO3DUo1ArMAQOgkigXgqkf+HJil60rXvxzMkolCdCREhUK/8YEGovIl8nnnsJgIIxX40YERhdQ7tE+UbGinoBKvDX1L4cfokZUSno901LkDW0dIoc3h5UIpJ37qOYD54e/9Iful9/8ze9c3vMeeefQ5u3Luqu1t7fvwbF86ZH113d729W+3b3/Pt0r+2V1Auqw1su0rApjKZ6V+RMahJAdp6vqF4TPKcu9L77eMDOaHP/ZhiZyc+kLWWo2+8T18QAZDyeO+jEF/Zm996
zfbTTe9xPZ2OwIARRsOZXxlCPggAgYC/dXptW392JpPO2k0931RmJNGhoKRFHyeJAtdxY/oTwwgIBrZ74jH4D2GL/nM7bFjx2x3Z8eNfC6fd/5jZGp5uWyPPPxpe+65Z5S/dJDLD6ldlxV+EziPjh7VrsTQOvqf9qDdiBwoK3+67VHP0izYCuc33gXYXmPckA6vc7r4qZPrv4m87mXRPzCeWJVZMUpCPOHoT/9IRLzpfMu5X9TDHjhS+PDLeVgHyoNcUDbnVcVYhqPuOpiFX+P293/053nrRcMzZ/9Py5dSliuxhiFsDVlgCyTpCF94OGX/deZZit7iW1ad+1eYJGRUj/ZktT5zQbk+FKijR8p3B1H60yQr1+NyzkbWaYwFVnoCi9u2e7VpLGasleo+dUbqy2nk4HjW14+u9FrDhixqFG9lxX95PnyAY+SgK0wPwKnKiidZeAOMQifS6YFTReg0u95bx5esGPFET+KM45hBMBzHgRx2mQ9bP15TmsxjbdhkEPPRxHjJjHnD6Qwf/AAXTKQ64UGVQ0lIajwfdCPzhSdiePQtnRRptigrstYh67vJAPR4J7S9/2fs1Tzoy4lUu7GHa7YQejR9uqMYhk88h2l4IrZspPO0/oRoXK/whT0+ZoSNZ+oHnRawB3NcmapAjyplAcxiGwDTzlfYW+nJWVztyPz4mYCmCjeLC38kaIy8HIll+cNNtbXyUp2DYyc3wu2uWkB1HiodHwUkwpQz1XMivKD0EuwxnZQeypSkUwQQ1d7DTsJeeOKqnXtuR5qmavn5io26cZ8idf6FDbt4ZsPae2Nf+BWf5n3ufjlfEw3LSl/gPCnayMlAt7BTwFZz2xfR4UDj3LDOiA8bUc1eZ2Spwtj4JPDK8rKcHNaYQJe83s+JVrKHPs0xLH6FxrgLgNSJaEoHD2CWLQf5AhiGhRFi//yv2o7eWuSSjza1e31rKm92TRiPMTw4DyYdyUhm37Z29mxre89+9Dv+rvK5sZB473v/yoPs80UvKQ0cvArZDgkdXe+N6VmBkZhiThXOCFj2jA3Mi6lDlp4fsZk8sy6LuxhCk6bwqklAmS9BL1EuNr4OZAXI6O5nbizCS9cyBiwK4V16HNi0N+wXyjYnKPFutyWh6ssIiBFV3pHy49v6Yk0xYsrqNRbcVH0CteUG1otdto3ms3b63Gese7Vuq+nX2q3rN9mRpaItV49Ypbpks8TU9ntX7ErrUTu/d9o2BSL7k021Qd+y3fst3/4GgVg1HNMnhmLg3nErTN5kS8lXWUZ06vUetX0BUrzyZRnIZXm4bKLM/BI0MkMJzCW7eua4XT17TM+r0WdtV8ZzefCsTsww6TuRt/2G6iWlctNthyxfbcjTFGNZ3XsBUCy+yAlOgE4SSoTQt9CQF00PJUojfHJPClEKiHmkvpiEeVtqB+baZlESrlgEBAS+y8WSLdVW5H2X7MMf/rB9+jOfMvavZBEYn609d/E522k8al//xlfZsUMnVY+mHT98u91x290SErb1MvvI5/5IyvaiDGjSbrr1qH3hiQ9ao/eC3XXTG62xsynlU7EHXv06KbakFN7AlmsZqx/O2JmrT1t1vWi1Yt/OXPy4tVpT+yvf+jO21d1Rm1xWHWFCkVGgARALzagjvaVOAxkx76F2JQdtUGISNBkpB0i6xbEAT+oZVqEyfISHjhAGA8pRYFnK179UpTTIi+9OJxmGkgLOS+AR6IQP/bAyNSg9nqRQI4GsMI+W9ADACC4GOJQxGQ9DWuwByX6Ob3/HO7z8Lxau9cz6/9cDv9Hv0XU/Ln54nR1cSmf4BwV0EeMN7+gcxY6M+T/IgG5R5KM4nMNe8BYWBA/an8ZiKhwsh6qllxaR9AVkpQx0FAUd9+sd5eeAQf/c4eBFzok6JepUkfbFmcVZUCRfgcv4WLSU8qSHdo5uGeuGlN6c7Xp0jX2thz0Zh4bZ93/v99u//pf/u+8B+uQjz8q4VTyNblsOhWRk4+qmDLSUqxrp6uVNnyNLXRt7Aocq8MqqdIGMTGOfz1mrSiIiRowpI7migKTq9Nu/+0F3fjutlkCCDL2cerYETait2ZMxmSjY//rX/5bKlZd+Ejkk04BiFluyEKxa4VPLWdve2l4YiZR/7ACHkqFN5zsBIkZr2Ceaj88wHxZZJV9APPsrs0sG7ZiX/LZUFt/VQ8/ytSMWkp09e9pOnli3/b0Ne+hzn3CdwHw22gD+gO95PwKzBOSDRR2BN3hIzgjP6S84i5wvghqQtnOQAb+pnQGzpOuv8iLXot+8EzLlzPMmciOch1tBFjmXHHOiTBm982d17mUQvV0Fwq/8pgxIojN6eJcHfcoEz3l5dEQmKONCNuLSuTOe0W/XFyrzjYLZJ5//Pcvk1FrsJCM+D1+4ClMJ2PWEr2My55ypL6zhYA9i9joHvEKz3qhrvXHYEaUv+9plWpJA1VD20qc0xWVzdwa2eVmgssWexRO7fG7TLp3dlC0c2dnnL9oXP/eYPfv0cwKWfZ+revjQkhULyn/ctn05cI1Gw9dOoIdcL6qMLJSVcDutRFGFuPR/VTwmp6qxb6UynTAFt8c4SjgArWZDchDALrRlBHA6Yk1MT3Z2KEAfEw82rNlg4XXC9nbaNs0OJR8MtxedB7yDZcFr2G1APzzFiNqIqYKiD20bfckrxVaJ9GTqHLDI+z4XWmVHFyUF/jpdgdnMzD9KUJRDW5CNxE76dJ409ROPBrZRhEno5VUapEdbqBx8IAfgBOahYQDfTD9gBDTNoj5FnHD0AoBsNqWTTkwkvaqndV2vxUbiLwF4tzl5ZVMQWN13O8VuGt77rqODWpUdnMXnhKExFGG/fpuIIUc5YSqBz0nRjh5+iffAd5uy6x09NEjb1kVhrkHO7rn95TZqJW1/u22bcsJxPNiNhc//MmUgmxJoVcXZatAXjqEvkR/VgVKj1ybCS+19nBQWKU7squtGtUuqoOeylsgOfL5svV5ZOAhyNGS/+PjGZKxz4SvsLLYOuiKDopI7CeC+6Uxg1lgnMtRdhFDtIT4M7aBSiNY4R+0uUwnCRysmKq+kUffkEElP00G3I128J/38N7//5/TejYXEW9/6hgfJFC+NIS6ojE7oDbvWbDWtbecEZAV+BNImMjCtRlfGZCKQuiLiL7uBYY4Dc0HUYkqJLmcBKQmnfxVMnkxRyraUZ7IylQ+Gzb/sogpCDAAGlWVYAUZj/lBTihowywKyvoSLYeoZqzZ5DzCHIaa8ObZU4osfqwILZSlQAYt0xzrTy3Zx5yl74ewjlu281daKr7ATRwe2fmhdYPZWNYo8hFnPLmwIyO4+rnjJOiJqTO/mkmMrTe624uD1lh/npHjOWn922tLKq5g+bqnZus17WRt1ZADtBdWvZPWSPJl8yee/SC6l4FUnaVThHLv87F0SdHk3mb4UdVmNh2aVd5QoSxmckSdbt2ZHDCt65cVAd9yvRkhesWz8Fl/gwhCNeEpB9YVOUi4sqBvJWA5ECB/+EO0d+OlVvH8+hcuCkWSceS9JN6S5PENACJ7aMycHg+FKCQOT
vF//+te6cby6sSUl1rKNnYtihYm9+93fan/4B/+HHV9bkkd3yWrVnE8+X6qeskO1Oy0hL27SL9n21ct6P2a5srx/AYBX3f6/2Nve8E47/cIZO39hS6pTRl3GuFZOCMym7Pzms7ZyfMW2rzxuG9tfsOPHXmVvfMMP2fNXvmTjWNMFOyswyV67BHgED44jCtmnxXhPqnjO+cfZb2Gk1Y76gULOoTx1fyRD4QpGF/WaPxemI4RVxABZ0gpTbui6IW2zYpadE/gtAOu9sqFnNoBZtYWUCF/xyWQCWM3LI2bOk3ukMYZjSI/dP9j5o2BveeubvT4vFn7hQYFZmlvnRGXIP6WrMiqqForeb3W97g4spVrU7nLKdS42ordUyhaZdlC7AAiAVjrPsHFxGV/XO4yuRMCWND2/Rf7RkQuElBIkYzk0sbzaQDRFphzAKF/hfkqGOycahaSlq9zIRAnLkRd9VA/Rlqk1DMsmGfsdJS09LlhKDt+AL9HIYQQ45lW/Eytmr7xL7aK8Eirn3XffZX/nZ37JLl/dkM6SkzuUShIIlgi4wp4wxUNGC4eXz2PT7jfddsoarY5d3tywlSNH7JLeRYcw9SQjx4/FGlnpO7YaZH7k+Qsv2Ac/+AF3YpzqODDSgWnx4bBPnnM7tn7cvuVbvtM2Lu+oraUPJFPQh94nVjqTdrVSdaAR5qqzuDEv6sBJ8GHcnXv2i6SNkGWUI5/oZjHmkPmUAkAAYYA2kBKwW5BDirPb7nas3WnZyiHpQclkX87/7//+71+TExFX9aLXCcPMh27EB2oTOElnDmZdZvhTWVicA1N4U3m7qY1pQ0WaHZlB1HTqbQlYhP84KgnxnNLjmcUF3uNBd47gP/8d0na+1m8ALdFBrMoLr8BDnglR7Sn172VyPiXAcBz8D9qJr9V+yt71MOzkDhdlY4qmItsiwZ8e9SD1+Hs/emPTDJ459weSdYbEKbgK4fYMh1a8xvS6PkO7LNIby5GaGl9JpCeKPVHpOWdrv6kyZHpdT9c7smvM2ezrfk/tS492u6m2bUpnJ5YEXi/b1uV9O3X8ThvJH9u+sGcXzlySXAysVhKYE/iaM49/0hdNBIrF6wCGnu53O31KJn5UeXHApfvQlQyv86EGFvKgG+EReolxjAp5euHicpy29Vs2iTZRGzLvvyN7XCwettVDVTt76SkRj564qXT8eYHXqvSf7EDrqk0GIx+mZhgbQBhNp6HRWUFPby8txojmiKlaA7aqA2MIjPkiLHoNaTxoikOAFpGkqFxxgcpESr8d3E58AV5tia+hAbhoF/SJotJjdMmZxRmJVqK5ALKs42FYfORrGthlhjqzAAn5zkkXpASoQ4eJyjGlM4upEwWbi7fGU9E12bOM8KuIEDqpBGAz+ZnsMXZXDk42ozZXGfUEttftl3QR22HimLPTATsL7G5INyUOWS51RIBWNkTH1dpRu3Ruy5597IxdObtn7d2JVXLLlorlrd/B3gccxLzkfJ4PRDByI54CLAtUwpHQC6EaC4fhkIxmI6cZjpcvrhMAZleNvT3xZFdvqI6xWd7ylYwV81UBWWyarLYwD507c//wUVLv43jQ4ysHGsfIiSqIPJF+YluxcUM4jk5Rtq6E/HREIe9QAqw4kEwMhXfYSq4tnCfnjqmk0t1EeKEp3dwSyKXz6W99LWD267/hbQ+22ttSPH15JkkXxgHIebRp2+1nrDF/XAKzZpn4uoOnyaDhwyjZpBogvSShlCCooKB+n2RN7ehy9rGQkcAg21dlBfJYmSvhkVZBMdKjgc2dooxUWXqC6WFsNZvG52DbUvqDTlsCGbdq/B9ac+/rbTDpKo+27OiupbMJa++tyTPdstXKLRLYW22cSltiqWGT0nm7sPWIPffcs1ZpvsIOV1dsWQp+tXKnHV65z4qVJRvGhrbResGebz5iT176sHvKmXjBcr2T9urSb9jXrfyApcZ71jQ4VuBknLWpylMtMSwyEEgeW674DQJnZp3dnh2ps9djV2y07aBm0KYXtGx7zRMqe83mo6oae1+iojqxnUl6WZ7arjhj3aYpKSc5BInBrXbm9LN2/2sLli7I02GLlYJoBQjBwEh39mQ83XvBgdCFUfyKDbsrEh45JPPDApRNy5b2BchPSgFt+7AUQ/VuMsV8AAd6wcLc0qTKf8R7gzBy7/jGt9j3ff+32Xf91b9i3/kd36b4nfaTf/1v26WNC1aoT+3i1lk7c37L7r3vdRLIvgz6ph07ftzWE6dsvillMT5vE9E1mTpp2cLUPvbJ37FupmVXLr9gd9yybmfOPWf1E8t2+NZVPdOx5x75hLV3nrQreyl71/f9uE3kFTLcM9zti5/EH1IoGb4JLRZhWAZFFOa7SmGI0kxXYYh7gqCKEwPgl6ITL2GW8fZjDAFhT+cMx43FOzEJKdM2CipDNmwALrQ1VX3opcTrhk99+KqYlHJi/jjoDgMJQsPzddUqmheUlwyDFFdGgIIyIcAA3uB96x2Vz/cZxGiqHG9969so5IuGX3zfL+h/1xT+G7Fy3YyC1W+l5te577YeGslgM62J4VN6E9HDPm9VgoaH7a/wMgF0yYtUDb3n0wx0AZCg335NBwLHUAoFfhChqdJPlRhyUyJc028wWD7vENuvRe/6azoBjAiL+ev0wDHkWZBVyNL7rUrQ29Tdl3HflxPbUJv0Z3boUML+1k+81X79V3/Efu7vfZN953vutx/4/tfa3/7Jv2Z/9Tv+jhRg1jZ3t2WA2z7ExnfdIQRGmqlTXYG8mfQQjtxSfcmOHj9mjzz+mB1aO+KjTzs70jeS7bQYjR5U5qLymeREmvl5ffulX36/Pf3s05Jr9qCNeY8rTgBOJeSW6rCbb77ZXvfaN9nOZkd5wmPMRxOQFy9yBOBgeADY9ExgEBhZYToLYHMmWjAqxvArfEJPEecAO74b32k3XUZZpIHzisOPgcRwM0oFb9DLC2hgVIxV47/7O7+lcgiAwxdKGxlhLiPtBbhxLhLN3egTFg1F2/GlMn5zyXt39IzbIz2PYeKSwzheFV/RO4uchikH4RpACIHw5/UuzztP+EsKOkBHghs+yQzbXHEbx95f5zGdwO8sLKOs/qwcMP/jeT/qWdERx400yNCdO8q1kAnZccmjfuvI84hE1DP7sz9yg9MMzv2eOw/JNAKCY628VGl6YkcD1k+Yb+HWabOtJTv5tH0+Ip9eRn2kBL4yAjfpQtpSecWc2keAgR5JOi2YGzqfSDdZzTIx2deBQF66YoeWj0rfFe3q+cue382nbrGbTtwECb2+8MnO5hXJwkjlwBGhRxZeY71Jy2lTq9YFNljQRC+u+EFNTM8902Bw3DI51nkkXF+xiwJy0BXoYGExI6LsrT0ayoYkp0qbrS8z4mvkKmNLS1Xb2Lzs9GY9A1/G4rv+Bdld+D3afxow5VBLPIij11H6jMTSwnQCsMk/c2Z513u89Q4uMR1kOHqCq6LRWFFgWB5uvshe6WXL0YuotoTfsWvUUS/rXTGYeMb3GHZwKcdQugAQjXyx3mEkRxGGYCg+n8oLHOY
la6qInqV5obGaxdMcTJV+SvKW7AhsASplo7IVY5ut0WzP0omK5HoifdITn/KJYjkVAmzoOepDJ0QhXdRzRQeybAXY3J7pfGinn7lizz550S6euWqf/Ohn7LHPPy0gO/At/+Yj8ZjsNFKEg5JOz1VXbJxqKHqMx3zqX07URCDdQS29pNjFUOeYaIWsMBWGnlDkKZmUrpBgjIfIdEb6r6Jn4j71cHW5ZpWaMI+NfJEi+wTnC2XXoWHfWzpsgmxjj4fiMxzuybQpcsvmxvm0BE9ECkEHpcbIb1+OXrPTsYbowvZbvo2o5FlY20ep4GFoOJrM7Kd/6B/6uzcSEm97xzc9CFBgkrLYWMRXRv2GPKzztrF72gbxbXlrdTFX2T2CwYBtHvRkfFXgbyUYSzS0SkI6rPJjzs7Udx2Q4DCBWgRAYfv2QHocQw+XuHITl6BcYXgWN+C9+lZeEiSuzdLnrFq+y3rtE7Z/+YjNW3dIwOs2nG1bevkFKcPbxUQibm1sSze3LFZ91i7vfdouX3lOZRjbHaUfsoJAZVagpl4zW5FhpId0p3nWzl951h4+91kVoWjDxOds1ktYaeeXbLJ5l0DhZVHjmO1KKTH8AUNPxCzYAPbefcntd8iz6dnGU1JWjbKVi0tSSGIo23MGsoQ8t0LDdrZPWmunpDIzGV8NLBrxWUFWkQ5Fo0x+Scpj6tucMEQpfGS1QxM7ccshZ0AYAmEMvbGh19oXe7EATkw7lLvOzgvZomjX27LqUk5tJGHaT9vaMT4ywFYgoqM0F0qb4Q4EwhWxmiEn0BlLMlwyEr5hWw0BPinrWq1uh48cEQ9cdEOPcr7vpffar/7bf6ym3rBXveJ2AYP/H2n/AS7ZdZ0Hon/lnOvm1DmhAXQj58AgkwRzEkkxSZYlWx4FSrJk2Z5nyAqjp7H9xvPm83v2WGPLkhVsSxQpgqTEBBCJyEB3o3P3zbFyzlXv/9e5F8RoPI/gzLl9+lSdOmGHFf619tprR9Cpx3lvB3OHZrGV10zdIprtHZy78grbglZe0o1qIY/Dc/vJjE3K4Q5OnT6ImZk4/vJLf8x3dXHspltx6vZ7WCbFpJIuBFipdC0vLC1Atb3KvBfDpBpoSNbNcgqUi6icuDr9IuHDVpPbkXXVBEQnvpaggr8oTsgme/E+MeJIyySDxpebgtJCMSImiD1upc9p8JkaseC7d73CzisEDqlECFCdIVwNRzmKzeicx73PGq5yDAedd+Ptb3+7lfEHbb/56/9s9xNL7cgC2/bArOqo//U7Tznygk0hRb13fAPMsqym9Nm2dgs3pcSRgLZMAdoVKyvl5pCGkYgjjJw/p2W56f7dZ+i5AgMCy1qwJEzlLCUoL4dfCoUC0KX3yBFDOawianRGs7LNO8GCK4F8g6C1xb1RpiCu88Usx2QyiYfvfxi//A/+IX73d34bH3zfwyzrOkqbT1IYX2f52qTLAQqFCHIlCvegM+Fy2B4gTAPIRBIL2iLN1QVmyQMaNovGY6z3wJSf+m1rO0cjPoDXz57DocNHDXSqnaR8k5k4nnr2Cfz6b/1TG22wGGjHxSjSonwaQqmMms0Rbj11K06evJ2GW5FyYmhhPAK0ojvRgeLSBK4Vqy4elkGp0RXRo4FuPlfXCtDKKyU6E99KOWsou1wq7oJhZ1RGYNiGZUnLisWTsaJwAxVsSCU2Yvv8+Z/9EUsq+pEClNIjjZOYRE+acCEatc7c61PuAhq2sY3sZ/vBoTXZOns/G+/wp6F+4i4QK9CoSU17YJaVdG7npuday+mU8QePvIa32abLdM5u0VHl5jnRoAFevsxo0/64ibZ2H6LnmKIm3em4N/rkTBYkfCAfyKgUkJXHW+dUVjuy3fTOX/nbbw3MXl5+DH4CUHl3bbiA71G8c4e00CQtt0iTecpAjSwqTKXHvlAcuZYyjSZCCKVjiCQiiCXDFrqnIXKtEiV6lIdNq13FQmOYTh9BOraAqOZOuEIEyE3qkhp6BKb79x0k3YVQJ88oI0Cn2bFl2jXULNBWK1dRKpdIn8p60GbjCHC4CUyHlp9ZdCJgpLABW3CDxr30iC31TT4Nh5SNwGWe2EKxwuvarCbpkuBJYQkjVxeZbIaYIYZiucF29FnoxXZhCQGvvIfS33oveSiVtEnlyhErwG3xrGw6ZQtQSjDpeuV3FcErlEtOL8WLK9RAMlNlF61ZXDGBmjyaOqfctfLcacJWLBa2XXpL2RQMFLNM1v+Sx/wzeUDekrzWCF+XPNobKMSD/UQQJv4TMUeDGdZFoWWkV5eAPw1qUK8PanbstqmvaciEtaCZJ0SMQcykVf9GddIXsUs5iMnJWbaj3uq3MD5nIqDicoM0UHyoVdpYurSBc68sYXWpZJPIF6/m8OqL5y0TSjlfwQoBbaPUQTKURSSYIo3KOyqeo+xQDmIaEy63dFuD5S9TJpRZrzo6rKOljHRRL0m4k3lsMq5GI9inWjBGI+ByjDmxxSxPuYtajYbugHQYSPFIkyHgosyjUcN3CbdIpmvSnZbc1kijJiCyI0hX8qhqTpNyFJM++1puWWUTmNVGRrOJrXwOsUVTk9VbHZQqpNFKw0B/uyX9LGODLczfOwohZR9JAvzST7x1MOsOBjQ0rkBxCWutwtVGpVZEobKNSmObyojEw8qZMNkV8ob1qaTEIIpEt/WmFSMx6MBNK9TDo4eVJHeg3W+i1qqhWCuhWK0QcDVtNSVblYXPNAuNgENAVrFlGoZTeEGT5ejwGa36iNcvkRAIyprTZOBJtIo3oLZ1J9y9W7H/8ASm9yeRJDgiF2KzMMDyFXZ6dR+Ojb0bKT8tLUrBCEFFNJSg4opSyDSwnlvHyuYOiu0yWi6C4tYExvqfwIHkw4jTcqoUMsiXJPBcLIsClevsDALuRgVzkwR6KT92li9jtDIJX+4UqhtjKNdoYdV7fLZAuaxaDREE0BYmMroKsC2lnJS3cncmn34YUkmRAN2+MokpgpeeZgcTIA/6ZceSYedrF4jdm+xlw1jdAZVTmve2yOxVKpsqleYIicghGh8JEgeFBAnaUSSy6hSMrxhRyRkRNAlHQ6Yk6E7fQ0tpiM3tFg2BJlbWKri2mEeu9goGblqhPVnhHvze7/07XLp2Bo999ctG9MdmJ9H3byNzNI5b7/sR3HLsATL8FpKxPiJjYVp9RdvbrQIBrOMlXliYRY5WfDJBI4I0dMdd91oMs5S5ViHRQg4SZooBjLCtghoeJeNpWEqb0SCFq8I4xOQarg0FBXxDZDoJIlnnDojX0NleWiIJNj1XvzuKnM9SDJS8JmwTTbKLa5U1LT9IhhzQau9pGIqAqKXUcPIwsL+c1WWcUAcBFj1LAnfvu96hiTiqRyyastADxd1KNf9wmwOQ/+ZmoOBNm+n0/5OdOv+/uf0fr3XAgXYDBQ6M4JXfL7M9yoAFP7O9tAlohH0U0vJo933wK7F6i0C2xXbuhRFxJRF1UWH2fDT6gMbWiLxCAbhJK7xMyqxRyBMJHV04gI9/8P34F7/zG/j2V/8Sa1cW8aX/+mf4ic/9bcxM7UOjRi
O7WEK5chlbmy9hffllFLevU5GXyFs0Rti/SodkFebuCxKskma0WIZmf0s4KoyqUa3x/k14+dvW5rqlLVpfX8X9DzyAtY0N5MpFyqoaas2a8cmXH/syaVdGn5OP04b4WWf1txSUY8TIG618n/LA0sgkX4ruRAfaFDerWFfNFlfjif5Ej4oxs/hOPkDf9UwzgEjvkom6TmBW9LtH0/Ji6Z2iLyl0PVN8oyFRyc1isWzDuhrN0SaD13k2y70LxPUOA6O7+96299nOs9+1U2BY+WRWixb0+f98c36TB1XleeP5bzyXV9hH1dmhLB339jdvAh8CVhot0GeJST1LE9IUk7pHq28uj5SodQp3GZpe6gwNO6sNxdtKG+gLOrwr76riK22iosj8LW7qAz3/ze/XMKqtKNjpU08QjNSqqGjJc5eWUfchng0jPRnH+Gwa6YkYMpNRZCcTmJzJYmo2g5nZcUxPyXmQRnYsaXJPoSjsXtLTAGur23jxhddw/vwlxBJJ0kwA29sNXLq0jcsXtnDl/BYWr2wjv0XaXt8mPa9jeekCFpfPoFBeJJAsoFrbxvLKZdL4Feod6pmIgLwmEFcoz5Q5Q8vbEuyxCdXOotlkIs0j9SXBhpZKljNFQ8ptxay6Ywj5JhHyTxEjhG11s2g0SHxPpmG5dza2cfHceWytrfN3hXKRxqn3pe/VZgJEmljk8zkymyRqoHnQY38bsJHzgUYqAbTy+kaSARoDQfgiLFvci1BUbQ+Wq4FioUJdK48kX6379wzO3f5Rn4nmtWszHUEjwHbxFc/baAcBfpN1azYcvaFheQFBA4PUETJgBh0fdtaG2FlluftzSNHgCLIeSkEZ9o1jIjkJdGjYe6ImCyvbNWwu7eDci5fwwpOv4olvfhd/9ZffxDceexK59Ro6Vb675aFsq1u9JybGDBsptC0ZG2cbxYzepctcI43ANE3f94cN68daY4d9SB07qBOfEUDKAeanjNEoAG3bQFAjO4SUxDHSW9Kh0UBInIw+6ysDyKMsVTTIS4U2WuUA1q838fpLG7h4dpt4a2RgWqES/W7D2vPNm9pNHnw5DBoEs8rqY0YRmVZ09Oaj+lSAVXRkO99vfUZ60WhaqymP7IAyS/23p3/e+uaOhjNE4PIOajjXa7pAQ6tNIX1aG2RFVpitIoRtd0gwipkVl6KGEKEqzKDPhiKRs9PDFBpKxRWjBTgksTQptLUG8k6xgB1arfmSFEaFDF+3iV7aywS6pUoZJQrkOglJgLdDK7NZD2JrZ5EWf4sErDhXKjAK98rGKWyeezcuvcrKszESk1v873HUR99gh29hPHQYC6EfgVbqaTZbFDi0ENuz2C76cG37Ci7mvonV9rfQ9WywA8oYH/4CjkZ/HdlUiwKFj4rHkSdoZOVNQCmuRnKsVi3i8P4prC8torB2DVPhIbIEyT2C03qPYL0TpDUbRj4fxtZyDO0K36u5JyRGi6ki2FS+2KG5q3gkw2roRowWjHQpLLzYuBYkw7CunYIpKw3F7A3HKHOENrM+NXThiiMcYR2rtHbWs/jaf9nE6pU+ZqZ9qJRKJjxMutBSc3K/OYpNgkLfe6MaXD4qyxDBXoAGCa1LV6BITsij795AvzPOfp3gfQT4BCOTEyfwyCMfx9nzr+L/+7/9Di3IZ3DX/TdjrbqJqQNTmB2fwq2Tt9Hw6OA3/sdfxg1HD9EyHaCwvUPS9KBTb+GF517EE088YTG6GVqxC4eOs4RU4LxOBC8PlOJ5rU2odJR0WumJFAdnaVJYJ9WjR8mr63WdEzoQ2t2dVCLOdVJuai+CYgpo7bpe56Xoe2zvIa1MLwFPICxvGhUJBYmGg+RZ1PCusmloosCIz5N1q2eJqWVtOkcBE0pjbnuCU0BG3rlUKksgk6TBqLzCsmbf2iZ/gva/uel9f1OgaNOpN+//rc2Axe7RACsvfPOzJD7e2GgwSBZYrOTuKdt0CfuBbG87GxAukm2r3EF1q45OacTdhVbejfrGCIXFtu2tHVCSpHFs7gTuv+VB/MLf/0X8L//Pf4PvUKhffHkRL3/rHP7jv/oT/Mxnfh4PnH4IWhygQbmQL5SxtVWg0PNgamoOMwQBHhptfQrwVjlHWcP2J29qlSTFC2YTKfMa2Ixb8sjM7DwmpmdprDgAUkpK7arJJxLsudwOkqk4rpOfl9ZWyL8dnLl4nvqDNEtef/rZ7yJEfaL4Os1cV7vJM0ryM3lgqbTYDPLGyuDXxEyNGpTLVWtbtbXoTLSgo96v829uf/GilJVAkgCWfpOMlQdNNGMxh5GwY5jwemUu0GQhKQTLfczPezRpIIDvFCByvJUqp/MubaJ77Xs06ygahyacz87v2vbOSVrsbSqD0QWfLRpx6qBi7dLJSO/6b4Pl3SI4JLR7n33f/bB31KbbbH9T2Qwg8+VvfvZIiE9vpmGr2804sJ3gnUePMmtwN2VO3iVr8ruUu0C9zuv4/ff+oG2vjCqL6iqQzzfR0CGQbXeQL1dQblQtNtYXoXE8FkN62gGwiWwQqSxBSiaIREZprwLIZELIEuyOjUW4xwzYCjSVaVRt0Ni6cOkyLly8bI6PhX37MDE7h41cGWECnaNHbyOPeEkDPoyP76MeG2FjNYfcNvUGjbY6deraOul66Tx1ep7gOYxrywS5qxfItm3qfU2mKqFY2jG6S9KwU854dbDkXzqdxWR2xsBOiWBrazNvI11aWKVR66BdG2EsuUBAN00dR9DkJoagPEjReFfWgcKWkvvTaqW+0bwZpQwbEQuor8QjNkpGxCXcoXzypUoNWjHNlvRll/rYWUotF02GkEiHEUuFMQp04Y1qhcwAAiE5LPoW1qEZ+v02eYK7VsZTGIV4U7mZpe/UU4JGAfKT5Lt26VAtiKLQFIV3aBK7zdOhzlVOaPGzcuL6vQkEPQT23DvVIZbPuvHSN/y48oIfw0YIY5Q5/B+19Uk0iz089Y3vEbi+gpefOYsn/uoZPPcEPz99Bq89dxHl7YqB29xGFenoNDoNl2WI0upk737XO3H8xgN8dw2lahmah1QiPeVpoDbbDZtIqHAQ7e0ujQseFU4gp47CO7TkvbCFMq3I+SfdqZEjJ75dbUFOJr8ECGr95AH1hTmPaMjENOrDtuzUeug2PCjtjHD9Qs32aoENN2Bbkc6t3yhbJVO0OQ4jOdjqFlKjuH7JSnPScXuDX8glkjcdykfVS/2tFfvsKl4zYFsLxPMUr7U3kV5+SDCrQB5NYBGI0lKI8pb2FYOBDtzBISLeCfhdKT6cHcdy6fXOkpIaslWOt6B5CSSsFUMjIRqNxRBPZ5DOTCCaGoNH1h1vKTVa2CwWsb6zhbWtTWxubWF1ncftnAmBEoW2BEKbNeqOPDyyPN42lnaeRrn9GrwEXJ1mgHuMCiyEjeUyll45jtWXb8Di947h2pOHkD93HOH6aUyHj2AmmUUgVYQ3sQFvcgdNFLBV2cBy5SpWKssoDdrotdYw5fl7ODb+M8jS2qsXhiSePLoUhn4yoFJ/WfA6wVRXHcQ+LBGU57aWyYgCUzkyVRPheIPl4
/eRn4C8jly+RABHxdufJO8qCLpjTCbLT54Azf5Tm0ogK7RDiwMk47T2XF7EQ9Oo5NklJNQSAalWW9GMUyk1ueod8EaQR2tJXiCtytIoJ3Dm6TT++s96+E//4XF4AwMLaFeP6XopOw05m3KTQOc5fa4Wgyjn/SjuuFHYGdBCVNyXlIY8yEFEPJp44CbxFcgwbTz3wiv42Mc/gQ/+2EPwpC/hiad/D888+wRuvvlG1n0bhw7NIB06AtcggYmUB6dP3kFCdbN8XdapSQYigGB/J5Ufkwx78x13IxCNmydbk6X2lL0WRxBx760Woxg+HcU8MiwU6qGVmOpK2UZjQ4yioVsp/z3Frrbd23Ruz2vqgAU+m7vLJfCa4G8JdnGUDK2UNCMKRvZRj/Wu0dJUeARNSCk9xZXL6yOwIabdU/x635t3lcM8ctGEeeVUH2U0+L+z8bFvbKoZa7B7/N9vb75Om/Tu3rbXJnvl3Pu8B5z3zr/59zdvOiXvmjCElvLsNAe2PGwskMZtJ+/Gx97zSfydT/4sfuEnfw3/+Od/A//Tb/5b/NG/+yK++l+ewDe/9DS+8RdP4Wt/9jh+8x/9c/z4R34Kt5+4DxORWT7Qx2cN0ar0qITbNKLZj+QFWyQjErUh1ksXllAvt7EwPY9sUiviDHhtFVOTmvwZRojCW0pMMX6Ki2tTOLuIViYmpwkEDiGTpiyjAtNysFqMIEUhfvnSBauT1tk/cOQwGt02jpw4Dl/Yhz/9L3+MpdVl8jefy35WezhDruJnxcuSkQxcKW62y98dIKrdDM+uzrmMb0UPe2EHMoLEg3sKwYwqggRdKzqV4BcQ0zWidQ2D6lL1nUPDQcuEIB6OyuijcpGB77yPxjJpfA+o7m18HDeHJ/Sb6FZAUeSrc/adx71d51lj6oTdz3YdHyGwarDg+xt/2r1P9PJ9vhOdOO91Nqe2Dg1p273sjev3Ns1+ttjY3dP6rrv1vL1NoT97ZabS4m+OgWYyzuIddbF0FY8aPrD2JPRUu+vIR1r8H48Ct2910zv2+GKvzlLOdU1cabSxUyoQOHbhj/gQSxPAEqAmx8n/6SD8UTcSBLDRuBdhAl3aJ4iGCXgjlPm7eyyumH3JFmXC8BNkUm4FfTh09BBO33aazR+CL5hELDGN8ckjmJzQJOEklNQ+Oz5GcJyl4Rxl3QjCfBGji53cuoV+JTNDZMaiqFTzWF65wnI3oJzEqo7yJ2vtfWVssSxBGg2g/E+nJm3XqpTVqhZVaFAWDglwKiiQBsPeCOanDyGbnsOwS7BJ/gsHCO1SaYQIVqvU6/VK1YwXZ7lmyVxHtggYKeRL7xn0XVAazlK+Qv7sQOv1q68CQSd8KRINIhj20kDgcwhX/GyrCBtQPKAZ8aVSmeWT04k6WH4iUekbes9xFmkXb+zxkD5r1zXaxD9KR9lq1xwvYcuPVjWOaj6JwkYWOysTKK3RkKzOoLaZwdnvNfG9b23i3PNFnH8pj+89fhWPf/1JhBHF6pUtPPvt5xAYRZEOjKGy1YO/F8K+yX0EseM4MHsMt5y8Cwuzh6FFTg4emMedd91CI6iCi9fPQUvsRhNKzynA7YwqdAaUBS15Qrvmse91paep6+RoVI5bC20Qo4r3xbRyWPEzG8TkDnct1CDHlhyQwiCtZpnXdIwHNFoc4ntmxqcxO3EU7m4Wq9dauH65xD7UwHfK5JjaUf0n+pfsknx7wwiQcU36fzNP710ruSPZLB1v/cTySJ7szd8hR/I68ZWz7+nWt7p53vfBTz6q3GpuSQ4CSIoHVOQR7K9RWzWQ9BwlMaUtUL07KltcZq9DkOuZRTw4B60moiXxBGIDQRJJSBYTFUtEcZwx+KjMladTKS5shQ1WRBWvVAjSzBtL4lGAOYFLj13hksVEZiCFURR5MOhGsJI7RzCcg08B8eVJ+EYZCoMB+q5NWoFjtHZCuP5yFpef2o+1548hd3ES9c0AKttNguQAlUERqXSIQi6AFnYIYl/CaukcusMADoQfxk2Tv4oEAU251EGtFKKlSmZtrlguwQqZV17rHVq7hcI2xsbjaFAYKD2PUpeU17uIjkcweaKIcFaJ8nm+scTO2kLMdxLB4W1mrYxGXYLCkeXw0+pFAmAKL6CZQ2uSwi8g4RZiG3lpBPiRmW4hklmlIJJHVkzaJjGwhVzOMJnigxQDOyQg93ppRZ2ZwPnn49SM47h45Rl84EcPknCl2CV45alwAN33mVeCWOeUFFkz9KWktFNg8z9H2IvQchh6VtBzryFfWmO907wiZh7HZ577HkKeFs69vsJzPhzfR1DS0eplQaysr2Nz4wx7MI1Lr13FvvFDBMld0ooHP/uFH8fXv/6HBLQjPPKJzxE4JC2bgyzJHkGqJuKIiTU7XZJJ1qe8sJp5qhhIrXuuIHcl7NaEBgl+Ma9NumLZZRXahAECX1mjTptpWDHAeu3GwMoEtLo6CkNeALID+l3SaKtjlrC87bI4NWylmaNaTlf5D9ly7CcaJwSzpsn1HJZZ7SrG1ec98KIJFpa4XIzL++6+507nhh+w/ZbFzPJN6hDuqpe2Nybr7G6EGs410vfsVhv+564YRn2Wd06KXOWyz296juSdmcYm3/gA8qi80fLKKIexLjWh+OZt937KV/z8z/0CfuJzP4EPf/Bj+KWf/1X8vZ/8Bbz3XR/BOx/6Edx9+/245aa7cOzgjVR2BygnsjRIfGjXFc/NvvaFbTShXm3ZuvXah90Ri06B1mf7Uk5oMsCAldBQcSYZw1SGNN9ZxuKV1xHwpKjMDmLgPkwjchZ5yhEty+1DyIyRJstva/VTQWmt9BQVg63MQxor0hg9cfwonn/+eSSSCaxtbJKvJ3Du4iXSGUi7KzY897v/4jextb1qRjqf5ghgdrli8UbsTrWTvAdSLAcPHMUtt9yNnZ0a25ml4Ls0eqB+l8BPJdN23GtADbP2+DDRiWJetZqcjB/RopSB+lu7vM3ic2VhkBKRh9kmFlGBhyMxe46yGog3ZOyJNqNRP/qdKv7sv/4Rn7E75M779RzFvolgVG7RxBsdqjulcNjfAu52hgcnPRF30QvlhdSlKScRvm7V44zGuJMmhDvlzdcysZI5Ds1pFw3yev6nurwxFGz/7x0JOHWpvomf7chvQwJUXc/PvMR2ZbKxcAa7j5seoBcYrTvF0XtUhr1z5sHSZ+k6OwrUsjz87Rc++9ZiZq+v/7WFnqkZFS+sxVRa7QHyxSpyhSrKxRL7MYDJ6XFMzUwhmZJzR5NJNfojIDZkH8jBMJR/hH1D2cKy2Gc+r9mnhtqsod/049C+GzGenrIh3um5abb4AFeubrCeEcpAjTT4DNDV6lvYzl/j86lH1ObkM48nRlqPU6YSLGqiUGDAspYxPrGftfAgt1WiXhnw+yxlmx+5Ys5k2qBNsEZQq3hS6WtbCYW00unXSHMl9BvyWEruKrylxfcAcRnrNKq6NEYb1R2TlRHiAsXJ1jt11FtV4gGC0VCQ7a5/7Dd2iLPghYaf5alrkYa7cPPxKrNGQRz5xTKwYTSvZmDOpF1jbCBD
hPLc6EMTJn32/HgsDMUdq820FLpNfGV3K5+uJpmLxvuUdZINqmOnX+F768QlPN+Vx7JhRoRCL33IkgFnUd2ZRXFtCtWtBfRKM4gFuVNnyYO5szbA+mIL66tb1HeLSEUDeN+7P4i1lS2M+Lx3PPxudGodbCyvIR5KIEQ6aFT7uOv2t+Oh+x6hzIpRRw0xPzeB8ak0Frev0MC+ilM33o5773yYAFRZT/xIZuMUz1r6ldezZFInAxGgGdDSScRLCr8h38gBpDayfP88WqiOkbwLARKugyOcMA6bnEgGUPspLOfI4UnKKtJNMEoac5G2FFrlYv9qWWRnASe+lO1IPNfpWq7s3M4G8sRGtXqJL6HhbpOeWS4y34hlZNeRXijPqbsrtS6BujIZVEkXPetHlV00yWKzbHq6KIT/8x2/+JP/WGz3ljbPBz78Y4/K+tHMYmelDaDc2kGlvUqzqI6s+xQFc5wKgwQ9LKDW3IKWkAthAhHvLDs1RiKK2KxFAVnFqvl1Lpag9cTKa2JUQLExUuzOKjYCsvJ+lsqaLEX1RfAaJvANEvhGkymE4rQA5MUiUXr6Uyh3dlCoX0SEHZ/y3QA/KxmJtWjJTmB6PEqQ24ZvUKVFRLA0YB2qAZQ2vFi76sPK2ZvRq8cwEb8ZrdI0Njf53tbLtIDWMDUxjQfH/z18BJXlvPK3eQmgBgSfLloZQwKsdYJPEnlvZJ7DSrVolrPXO8CUVk4gYxSomMf2+3Hg1irrS8uO7+p0Fwn6C0gFPgBva795Nd3k0o4yRbADU5kABYNiX6IU/kMehzZhRmlAtNxePO3C+HyVAH4RtZbWL3Z2MaIpJnMliKn7bNsWNpbcuPD8DMEXBbzPiV959wePklRLJCBn6FH6SeAqyn7SMHyflq/NyvZu8rcaCb9GolfOPV7IXZkClL5FTg0F+kfDE7TMsiiXW6z7PszPn8Sf/sl3sDDjJrMNsUIgcP7lRdx15yPYKdYJShvIXd/ADgH/qOnCbJaAo5Cjkmvg7geO4p//v/4p3vGOk7jtwY+g3pYnje/0koSp4LU8nqCUBoeUW9hGAribQKPit4k4xqQUSCMlCxeYlYIWmHWYVdea3uY1OmeKm0cJTpv9zXoJfDrDnV1er3x3YjilCaHxQGHdHTSgSYQGlMlsGpKyZxBYSwiYx5g8I0AiI8EBB+RnvljfdV6zZh1PmFAjcOedt9vxB20/GMzKBmcluLN1LDWWCX+RBncDFXtDqFQAUh5S3HqWbbxJEwp2DXW4ldeXQsuWZzQBJPC993Rn06utGblL6PzWb/4WPvTBjyCdmMDE2ByNlRGuXV5HYZtt1yLPVEgHVPTywNgKST6CNY9GglgwPl40qGExTURQvmpNzhCwVuqgckmGM2mI9LO8soTl65fQodJOhshvCa2mJVkUIMeTvwZRpCYmyVM0FP0J0j8NDdZV6XWUHUVrx8tQUh+kk0nHC0VgKVm0ubVJfaA+HZKXS4gTdEoQX7t+AX/0R//BhjM1RK18zmxSPsePllb+0je2nyaPKZ73yJHjuOX0PcjlqtY2Y2MZAttt41nRg8CsvBaiP8ezoeWYe6QRr53rkE6kgDVhUDQo+pGiV1y7Yq+7XZ3TkKybykPx53NGb5pMpt/iCXn/IwaYFLvY65TxZ3/2p3y+h+0uY1VgmKCQ95gcYZvr/j3eED+ofSSL9LsZOzxKH4h0pQxFblI0RguiC9IBr+T9pC+SDzuShMSTpDfLNcyvAs88Y3fog7NIjkM/2va8rc41zu18Ii/nD3oE6dEIT6/TPSqIaFQE7rxdl+3e79yjiy19nt7F3X6UDGG57OX6rjLqEfqZ+89/5q2B2aXNv6ac0j2CAvKgEsxSH+bKdYvd7Dfb7OsEZmcnMTGRZp+S9oMybgQw5XzQxBi+m+WhuGM5ZWioXyjtVCZfEsWtFoadKO489RCOHSGgzUwQSCSxuriG3HaH9DagbNaM82nqXmcicblWwBZpuVjaJuBIWqy49SOfK+NGXrh1gq0QnzOZzbKsZWxt5JHR8vWUawpriCoUpkP9RqBtBlaDBhLBhuhV80bK5MUOwUhXqze4NXcA5Lmqga3Z6TnWJ0h+umbyNRD2o01QvJPbws52Eel4FNl02p6lPpA8Ur+qB22FRfKCvLqBnhb0CUHL26qLLVSEDa7wLqMztpWcGYPugHo/SvoJGeDzCcxGtECAstQ42RACKiCJd9AeGmZpU5ebo4MgUwabsuD0+nUajC0+kzpDOJ1lVqabUDCNiG+KsHEW/fo42pUshu0xnstaKEY8FmF7abJUjM9V2FKHBrYXd916C/smg2tXNnDvPW/Hwux+vPLSS9DCUco+sG9uDuFAFA8//HYkE+O4fHERFy5cwNzBcWSmgpjdn4XfFcbHPvRZPHTvuynnysQEHszvmyGPx9leWlxBIzykP4pL8bT0mHk3KfwVzqhRI2ED6T4ZoyJ4M9rE8/yuDC5uzfsIEADTKldbj41nkc2mMT2V5LNplNRaaFJ2CxNKPtukaZcf6bEIn6s+E05qGL3t5DZRIJ6r19vsKzn/nFh/yQrZQ6IhjTBpIaEqcZUmk1eqTT5fHlzJId/uMwXExRvqa+4s+i/9MGD2/R/61KPKjSpErxgTrVVcqG6g0FhE30NLznurDbd7QkN0RnkqEKWC6iE4nETYO82OjZKhwtCycpoV6aXA9fK7PxjjfVEqsZgRlxhGMkRr6RdLOZQKeVasilgiQ8ZLIEFCT6TSBmZt6IzE6CLhR6LStH5s50pI+I5jLnEvCXMdA/c6CfeAhQUMqUT7ZLJ2icBGs/KkxdkYPVJnt52nUBniyPydqG8ewsr5cTJsCtPZB/DgLf8Es6MkLl7YQamiiRURMnwXza6WnXRj6SIttp7c8G4CuFkSfosAfAMLC5NGzNO0vKvdAsb2dbHvZBFa5aVXnWIbbqJa7iDt/QkMGzG0ulskJC1pqjXZh0hnw7aWvxsJy30oz6jjDdOElRB80SKmD/TYD9fM+pdgEXGqg6X8pPCkTOR9dHtLuPCyD9fOjCM1XsezL3wRkeAc3vOBebh8FRKOMkU4gEwxeDEKFSlKDZcqjdfZZ904/0oLLz5Vw/NPNPHc4w28/OQAl14JYul8Elde7eP5x7tYWwzxeiCaGqDeu4ZYigROgfXVL/0pThxKYz6dxfWLfVSbcdz78F0obW/YJJ9rxYtkTs3QjaLVpOCLDfDUC1/G+EQLH/jgfQiPnSZgd7IXUMSAIoyEzXYkYcsI0gQsWdd7OXRt6MGu5TlapT1X08CA+kuzu6XAZJmKE6TIeQOpzmEO6UHFNGqXEjcwa3HEAgQUanyXLbE4IBhSkL1WMSH52Qo6UgzysopPJCBlbkoUkzb2vN2m/MnkEtiOgGGfypug69iPEji33HqK33/w9pu/rkUTvq98dbBN1eGm+jsbr5HQF4Al2Wv1LykBNz9TEpsQk+I2EEFlqueonAYGJOfUVAKxfbYbPytrgyZq8Izp/r1372Fo6SLlPtW9c3MLePjBhy0EYHO1YMolHCAPh5IUZvIUdJB
KpawNJKBkZIhutR45OZZPo0FCw0BgMUJlr8VGFNeoCVBxTZ4jAJxbmMPs3AxlywjN2gaKm1dscks8FUCLSmzg2kfjd595cC1UqkOjLRwmwB2hYqErEsZuA4QCtAIO4iEeaJwF8fh3n8DE5AQuXr2KEJU5iQ2TsxP4wz/4d7hOAK3RL5VZXo6uHPFsH7WZ2kYeDwHzFg3U6el53HzTHaRFD4W8QmHaBLRZCvgm2xtIsx1EG7UajV6WQ5v6IRQOoUbGkoc4lUrayEOHilFqXuVU42uCimRkoVi07AvpzBh/U6hB3mg2lU5hfDLL9vNgc2MV++an8Pxz38XTT3/HctwK7JL6+bvXDAXRqwwJJyzAAZfmXdU/2x3DUL/JI2sbz+39OSBJbMiW0WeRg2EU0RmvEErbBbO6zv7T43h0lCu/7v4mW8t+ftNRtKLfRWNqa5vFTeWt3VbU43dl4xAg1q4iKsRDD7B7BdD5QvNoU3bINFYfWjYPPZIyQat+6X36T6D0C5951H77QdvV1b8wo1uTaUTXFmJA4ylfqKNC/ZMI+DGZSbHv40jEtHiA0j35SS975R+a51b9ojAa5aKV40DeLHkIu/UZC+3rNv1YuZq3lZ5ym1W8+OwZLF5eQ6vM+31abCZljg8Za/m8MhfQ0AqOUY9t2pyXRMpH4BGmvlXsbI06IYKZ8Sms5a8QbHmQTmdMT16/tkxA5sK+ffup1xvsVALbYsGMNk3sVHYadUSI5VWcbKNBMEteaCvEaqSJiWxnGfuUS9OZSRSadUtr5wt6bFQjGY8ZwPeqn3icnJ8xZwxbQ9xvdCE+GpGHRmzLhCcuwqLs9yMai/M5u2FhlBcaJu/11b/EEcocUfFS3iyQJ0MWJiSgFSVQDIYSlBchK5NboX25OnJrdVQLfWxsrZIH1H9B0o0MkQbpSTHwAu8N6iiFqhG7eOPsu3GMpebh19LzDY0T06ht51huYgUZJaTxDOXK9NQUZV4Gg04YmrT+0ovX2PdzOHH8Fixdv0ZZ2McDD9/C91URGGrp4CbmCVr71MuKnU0Q83jibWQOeBB00zBeOIkH730fFi/t4InHnyDeoW7iLidDMDVB2olTVowjxP6v1wssP0EkZaYW3zAnj9qWTCHeFS8LxDoeaoJL4wPnu3hexqscROwSC9WamJw0PWUr+Ilz+LuMxz7bTCuNhaMDjI9NmBe3UNpEqbqJjc01rK9TFrUpp5KS5/IUK2yBRjrrKPwhZ2CVALlQ7BND1fldjjNiGSUlZ72kw/UuiQ3HxOEfP//S3/kh8sx+7NOfftTDzrGlIwcBlPor2Gw9TYBxHZ5eDLHAISoAMVwNXRCEtrZQq7fgGYYRVVyOm5YCmVvDwvJwSIGrkcMUFhqcU7H6IkJ5QkisbRKX4nEkq5SGYyY7bhabYn2S8ZS5t72eoAUZB0mQSrBbLXZJLF4kUySy4CxCyXkM/RE2ZAOlzjp6BNq5TgmjOMFDkkpsuAN3jMQWprWUGOAd7/ggQXWcjZ+nYCRwKSaRHNyGcXcGZ5ZX0CnPorDTw9olNwExrazBCyjvbKJbi1u8UpUKJjqRQGQ8js3iDhLs9EwwiGkqoPikH1M3LNIyp1BrhZCvX6c1oo4ioBnOINCfojINE1Q6Q1w9AiQfhdFo4KVwKCKcnkGusGVKPCMlxb5NjbMjI6uoD64TOdTgG07D3ZiHt0NB0D6C0to4BU4fykWbiF7HmW9Pw9/PYGN1C1ev7mByfwQf/vF5XNp5CR13C8Eknz2u3HY+rF4u4NLzJfzR//wUvv2fl3DtSQ9e+dZ1jIoR5C510F2PYC58E9rrPgwLflw9s4TyslKQUWheHGDtNeDii31cO9/FwYW7cemJl1FmWQNTNbhyTaydYRsnErjzobtw9bVXsFPJY2ZhmjiDgsaVQdpVQyn/BBZuOIT7P/gPMaBQ4ptAmE9QJQFJUmb/u0lb0pBDLwEtiVogVorSJzDCa3wULUGvmIZgktdbAmd+Fs8qLYuGuqTYouGUKXm1q84NlSaE1r82fkVldIHAR4tyKJaHDNWjYKZwddOkHGkiQbfFlyslExmbR6U9kVKUpark3nFfxOg0wLIIICvBtMIZdFTuUL+GgCQPZCXzGW8VzO55Zk3jOjrYtjd7ZiW0TE1LTgn3sFrKtajP9l0728wZWuVFehR3gRHeaI93E1MKxLoGDpiVwpU7zsCaXu9cZrs2YRwpF1nv27kdfPYzP0FQVSKgrSK3XaZgdNuoQ61WwubmBrSa0PWr13ntNgWYln50o1Ku2INlrccSUZZpaF4flVXnVKuNjS0srqxifXuboLVOsOvFFGk4QyWtdG6d3iYqTSrX1hilWJYKlMaviwqKdehQMFaaLfOyavQhT9CnRRE06UP1F6jVSJS8Ca++8rINP84SmGsC2AQV0+r6Mv7Dv/83Rh8C5RK8uk/DwD2ytnIVq43kyRE47pJulILnzrvvR7PZp8yIGGhVq2mhDMW9C1Cr/yX7BCYV5qJ+ME8sga8mdQm8KqG76m8Kn7+r33Sf4tEU5iDwrEwylUrVjFOBEuUFXV1b5ju10lgMk5NpfO2rX8SZMy+z/KQ/KQl1v3WmADifrT7nCdGsOkO0pPKqz7XJSNP7ZWQavdjNpmKMnnROdG2bns1dablMUbKhBBR0Tp4je489X4/hc3Tt7iP1due8syt9k8CowzTie35kWfVZR9mpmjurMA92m/1utKzfxRt8rt6k8pvX2fiFZdl9gT1aV+jIMipfqbZf/OxbA7NLa18x4KC80ZRI7K8BjXwtTdwkEOpinEZHOpNAVqtTRRWTST3ANiCeMOM7Gg04oIzoU2vme6g/A16tCOfEjqJzmMApQOOwhqUreSxf38G1S6u2UIJmvCcjYzbCoQwdqpkWytBkwzaBg8sdwtR0xGigVs8bkI2EoyiXqpS5BaSpr9KTURvVmJqcwtzsHK5cuYpivoipiRkzcGq1hgFtybMmaa7dbhn41wS3AWWEktpbqi0CZk1ilkfZx3b1s4JeCp3kDHVRgIiiXeZ7chjLjBPVkOZI73Jn+eMKN2DZLa7CRV4JkX/Iz5QZcY3wImwyQmkXbWVBoht5ZjWyoJ5rtdmn7OtO3YXy9oignwboiAYbZXhvWEM2No5WZ2SyvEbQVNrYRo3GdquqEAh5YzuoVxSHzvLQiFY+VMUOaw6GRtEwCpBnleNW2sRtXtSwL4069fvq0jbcPmKRaAgT41nq64x5IBUHqvjVgDdti5cUC11cv7KJ189eIm2UcdttN+Po8X1YpHHs6YZQb1Mn7ktTvtBguZ4zeTGxL46mK4/N64t48J73YthJ4ct//g0sLS9ifCJFnh6jgTSOxMSk9WmTGEwMkMrIWG2gWC6wLGxX8ZX+1B/UQU58sOPkEdF72K5a/MfxdlNnEfHrN80LUHjFzNw8PyvcyWeYrUpDSHlmbaBuSNmDEj/I6woC+yq2dtZwdfEatin7+wO+018l6xIkSw4TxDqTVInh+Iwyjb1qfUQZqcm6Mqb5EDnxZJxSntpGOjFe3dVzv/RTPwyY/d
RnH1UuWOppWlxD1HpbyDeo4NtFVj2IsGuBlaUC92vorow6QWOPxBIkeIq4Jmi9pCQ5WCASO2sgT5DWZtYwFqldoJ6MphgvpZWqk+FJdB0CF77cEqZTiAdI3AKxGhY0Vzh/7FEhaSiwXi+a0FKSYL8nSQtuH9An4NUMVQGkZBjTM7NkoAC0qpg6QS7z8amMCZ1jJ09gbm6SSo2NT8tADSiFoTjUQr5PYnIjVzpnxBufOoee52k0ihriU8JyDyKxgMX79HijQEo2lcY6O28ym7Q4PleInYZLqFOxeqloNPu9SitR3uuw+0a4OhMmEJTqR0qlS6oQccn7os72BydoZWmyFhVch4YBrbjI+CL8iU1eF6ZFlKVVexQDgtl2jZb3jh9XznvJKC0KOYIDHEWllERB4IEEVS178dC7DuDut7Nsvg0kh/NYu1jF+aeaeOz3F/EH/+oVfOnfv47qGoFB8Ajv7RDwJWwFFynqyYl9tAS77A8F1pfQ7JUISOoWLyOwLcs8l6ugVu1hhwIX7SgBRR5HD0fZT3mWo0Bw3qCxk8Ktpx7G2YsvmkILeENIUrH3+qsod6/gHR96BPNH7iBYpACxjcrdFJIUjxQAQYssPMOdDrhS6hjZ6NrMs0qmpBgxYKLrtRyhhrycdba9vF/PkRVKVUpapMwxJiGP8wFaTrLMfinyN9Ic9yGFtdKjiEhEx+Zh5W4LffDNEhBaPzwSiXFXFgkqIikhAgYDi9p27xOQ1nNsqIcELAWm0Y9Tt/xfA7OqtXhmD8w6raAj//RqA7I8+lhOKgABWTvH8gh86DkGbK1Y/C5QwKoqD6yFF7DuNoyr7AR6uD7zqI/a98qgZ4qvBELXVtfN4/nII4+wjh4K3RnSR5u0kqBCYLuwL9RWE+NT5NF5S+ovq12jMLl8nkotSgWiyY9acUdD6DR82V4Ssho2jCcICBJxaNGDS5dexYXXX8DG+mUK7iW4AjsIxykztDhIg6ZQ30slnIGXfdEif3spT/LFEovN/mDDCNRqJvPkxLgZNXPzU1hfW8V3v/u4KVcJ9DyVfJJle/6F7/FdryGR0AQaJw+iwKAmvA2oBNWcShWn5OMK+5FilIK4996HUNbkNY0qsam07rkmx+7s7Jjhns1mKXdaBKKK71V6ODYr66zRgCjrIhCrrCVqaanSNvlQoQcCv4qRVb9JBilmVm07TeBQKhaxurpsAELDfhr2HR9P4vf+3b9GPr9FfiBtk28Euh0POVU0P4sftI34XZ9UFr3XaENndnlRl9k5/tM14jVSitGIeVlFFwKGokHVR/zKXeFTZlwZ/ekdqh3rxc88pcfb83a5xvnMH/Sb8Y5eqH+7dCjlaLuA7O5nAVoNkFjxVF4VVopUD7fyc9fDWTC1qP72yqEX6zfbee0vfe6tgdnFza8bWNdIo4bKW5SHRSppLR/rIpgby44hkUohnopa3KeIRcaaPF02o9xFY4Xl1Cz5oD9JPkvY5JpijsCBgCm3niTo8qJWUn7uIaqlHsp56k3ylSa4kuJYYLYTG0qLCGgGuZYflzNEq3QOB5TV8RjlrHKHUg7zetWvwWvqjSINvEnccPwG0hnleq2ImekxLC6u4ByB1/zcITN01EpaQl4Gl+hXM88rBDWapKy41T4VhZ4nECXjXvVSn8nompqbQiwuvqlhWyvrhbRiHRuO12g1NE2sTmWyNnqi0TfpWFsOnEBak8j9Iz9FEEEnn+vslO/kO41eymtHCIHiThP5jSZ21giS8po70eeziU3YkO5+gACrapNI66U6qtRVXsr2WJSyQSk3yyVsrO7Q+KhBK3oGouS/EbFJt0Z9rAm9NMSbBK2eOvmVsp517jV9aNaC6LfDlDEehIJOVhwf+0NzgJRXV3oikUgY4NcIlSYT56mPV1c3CNbbrCNBOY3taxcvUu9NIKT865QPHl3ri6DNMvQIyN/zzveTV7P4w//4ZTz/7Cu4655b8IlPfRgPvv1e7Du4gImZGdatTHrJGRj3B0gjDYVk1Un6CvkjdmE72QRJHo3eyQh96h85IOTNd5hPfUYmMPokPcvoZF9YWBDvU+5ghTJo0QyFB4jv1QfVVsny5Crzg2R3jjJmZXmZMtlZxldeZx0Uo+ykFe2yX9oWVqD5R1qKWODWrjWDmQfuArTaRqQFyRud0/bLP/VDLJrw8U98+lEPmUKTujQ8VuyuIt88TwIukw2UH3KCHUsWCvQw8FbRGVHYUnHEXTNI+mYJgoIEsZpJ22YBO2wMKQ8JS1aIxKHl+jRRqrSbP7ZN0KeCqtEVeyIvl3KqCQCQi8in8gYQ9FEhNVtkmmaJikNLsZLwvBn4BkdoLY0TYMpaC5G5tGIELWN5Qni/i30lRR4goYbI3OkJ5WulFVessIMJimn9yuJSGZokwkY5hK889u+xunIV7/loEH/rwwOz4HJrQ2TScWEDm43eoZKXR3UiM4HcttKNdPj8ADLjtMY6F2zIr9GskrCr6LGz5LoPufbDN9zPe1sW26YC2hAY28cmt4jLtbztaIfCo8Y2SeLQDX6MH1rGwL2BQS+JYLSHeHiawo9GAwV1m49ZWx3Rystja6uEqytVlmOSRNdHaZtAPhvDT39hjsKihCe+UsW3/9ctfPUPLuJ7j21g+VUSZTMNH63DQ3MHyXRuglIP0qmsTcRJxBQPlORztyisScjNApmsbsaHYrZctFo312nMtF0IutIobraQzB7CiEbCoLuB6AIZe1jF9pUqf2tj/01HkNvke9fWkOH9Wiih3r0Gb7yOd330R+GLzlBAtMhMDkMJ7JvgYztJOEiJy7upGFrFOknI2rAx+09ZH0xJknZtmM6ttbVpXRJc+qVs9Ew+z5QqKblHIeOs3EUi8JCuujsoVlbJWAQqyrVHQSPkbKmWqCycpT9lXAmo8o0ERIpZjBDUx+NJA2zKTqBVomzIRu+TqlH5pIypbRUuoSEWC8aXG4/bzW8RzCrMYFelm6Lf20S3zlcNj+5yvBCB2Id0ryHeEfvOQAW/S+kNWTYHVHDXzRQcb6z2JZnCo3s3zMCGc/kbbzHQtve+3W4hf/E/vkNGBHwjnDl7Bnfdew+OHT+Gs69fME+jxWvxHeGQJqdQXrD95N3cyRewtLKKEBWEDKPXef2f//kXEY3GkUqkCArK7F+v8bM8ihLCAr9jYykcOjCDI4fnMDGZIr3z/a4iLOl2WZM1ximHJi1ncYdCUuFNzbamk8pAUVfIQxrjM6nsSUsRAsQOFevM9DRefOF5KrEmn1UmGFDmhDC++8R3qPCWWAf1XQfJOK9vsy/bVL5sBq12qCFi9avatUtQK+H9wINvpwxl27G+mpyo3+W90SbPfJpAWec0a1oTZgUqRTvyRKT4m+MJIbJhyRXna+EwfL6Eu8KC2ItUomHKS76f++bmJpWzwqD6NCooq0mL2UyKPJ3H7/+Hf0MZ1DUFpM4UXSosSaMW3wenoliHmgTx7KRt+t0BJ+pH6RnzcvJ6k1+iA+32EF7OnZc7hqeIxuiNB+3mUdPGZ9k9znm9Tc8Q6WpznumAWaNTntsrjTxxu
sGO7E8d5eAQkFX/2sO4S3Yo9k4Gn4winZO3VtXVw8QHqgtLopfxpF1i2y99/i16Zrf+mmX0st8UNgOj60qZ8o+0J3kQI6CJEMyFYyHKEJaTYEwZaJxURUPqtAr7MML+iKFedmNtuYGrFwvcd7C6XMX5M3VsrddRKQ7QJnBrN/uk1R7r6sg/zblQQ6r/1UgKM9HqXAKz8tTnd7Z4TdP4R3JP69urvbQsabWSp4xMWrnaPa0oNyDg9hKc6rlDGl0lgr4YZa90uFNfvUv2vWLK1XFK/2R54eXJpI62JebZCRpVkANB9JSm3lQO8aqWJeWu0BCNqlZrJXRJq5q4KN2sYW7NfxBt+SnDRW/+gc8BzPzTyos9yXfKcxlvAr6tlh9XL21iZ71DkBmG0l4qjEbzGPR7W/moa31+DoMmMVyU+WPJNGan9hMsJ1DJlbG9VSC/1012B+I09vwD6oeuObm0LLBCiWLRiOV57TUCqBV9aFVCoHqnAZlkPTXawXahEapmceYBaM4L9XrTw/7TwikicoHdLnXmFg2GRV4XxnjGh6kDxAzdCtuRADg+ZxlhfMEBTpxcwIkjt+FP/ugruHZ1HZ/85Kfwqc+8E/sPRZEa82BsIkxAOMKzT32PhoJio0PstyrCLOv03Bzbme+UzmOhbMIbaUYypEtjxNKUkewlf3TO+EPdy3MOeFSYqWhgm2V2wv4EwPsCnayXTTr30tDgszfW8tjOFSgTWwZi8wXiFxpVmkAoh9OABKMQA+Xc1lK1coDJuNCcBq3+pXJpI7fb/wKywotWjt3fRHf6/g9++q3FsmvzfOzjH39UaY/kZWiSMcq9ReRbr7PCDQQJHgODKRK8h+CIhfQ30BnS2hsFMRE+iLnUYTaoY2kLVKgRJfwUCyIPRJ1mVKlaME+ELDt5TEQEfo9c3yFjfhGuKVx1AhtTAcp6jhZv0Eze4aBF4glTAQmojBFEneB9Y2JrMmeTzDGC8tQq72IoGiLT0mIi8JUwkaXaGTh5ZqWMAgS/ytOqAGQNQ8kzmsp2cO9tP8pOXMS3v/2XiCTXMHeAzNC7AWOZWdajYF5fL5ljNPSzI4CpyTEsrV4jQXSwb/YgGUJLtm7i0uVXbchCHsN2P0+F5kPcf4eBellwkqReKhMRjQwHCQgtrynlpolZR29IYeowwbD3Alo9gtsALXevVmAbI3OGzEqVSGs2lOC5Bbe3S0upgiYtJa0ssjAxjygZtFm9jD/5va/jm3/ow6WnSpRcYbj4zjANkmiU1OvuWB5dzVJMEsAqwFypfk4cuxHFgmbvC/SQwfwubOTXrP4CcpUyrftyFcloGqloEv0mrbOAD3PBSbzy0jOYPp1ERMPAFS/W19dwvX4Od934dqwtrSNBkJDKDrFVOYu5E1M4fffbMCCQ75EJpMTMmGHdZD1KqEmdmfeBdKF4JtGElL9ArGIIWUQ7p1mZ8lhIuEuoKN7SaJCCRkK3M6Cwp6IZUkhKifUJzqudLRSrqzSAtgnoVf8wGVVGDp8tXwYZ14a2+CwXDSktOarJXFq9SUnLFbOkEAN5PQZuMic1qw0bGhpkmVguvV9e5xYFrSbWUDRwH+Lm06cdzvsBmxMz+/2NvWbbm5X83tFkgprPDDm2HJWVg7r4k5hTnlq2sUCC3aV/Eh4srnlj+dktQcjPCrPQdw93He0O3quvGkYWkPX4leeRvBb2sx1L+NrXH8Ptd96Ge++/kzTdt2H9dr3D/tDKQn2C1CoFnrMAiC0h7HGs/u8++SSuX1/BPffeh7m5cYs7lHzYM2xsRINKp0gBm8uvU4YUyMtUYuS7UNQLrSHuHkTRqvZ5Xh5O1sFFxeIXiKaR5Jb302tDXfL2Z9NZURXajabN2I1RXjz8tofw2pmXceXqFXzgQ+83QPDYY18hj4kuJeDlzaQyJ5ANsD395LOh4l7ITxYzS4AkmSB6vOueBxGgolbOWY1MOZkzBuaxEX3sAVvJAk3cET0IUwl4JNNJCnvFkvG9anXKJxuZsOuprGVg8GIZRy3KjkKxzPaokCbDBApRvoft0mvjyNH9eOwrf4Hnnn3CQIs8LqJLORnU6eITlckhHnlr7YOVRW3O5ucXnmOHO0rOoQHdbrv+2D9GQ7qH52Q4sar6ke/jCdHg7nc7ahOQ1EEV5me9VR95iW36rE3lfOMavVtFER2KPtXIAquKsWMfGJjlbrTMXcDWWJBtpXssZMmeqoIQfPIi6RmVQ0aEVYznVYdffque2e1v2D0CxpJVWtHIjBzSupwe6n9NxFMGDJvwxT+rDq/X6zSsrSwD+e0uzr6i5UsLWFukobfZJ4B1Qys3lnINFHM1lIsNVClvFXpDeMCyK2hboQvUhSE9X5MIleGlQfog4KV+kFFfKJaM3jTaKQO+QiOxRz0YUrqwcJZ8tkndIedTnaDpMuI0JG+68RbT2Ts7BRpGUUxPT7CNB9jezvHdbjM49V3OBHnkbAlc26nX2KZBModGY/rktRCB5/h41ryaO7mi0W8qG7fFmDTxSPpX8enKICKgJDkuutRIbEAZfmTgcVcss0CmQKpG3gTIivkhli8X0K1FEFR4EfndybqjEWGlBvOxtBEkohOIBmIIk1+nyPfZ7LjpCS0LW6XxUWS7ttgmLr8mQJFP+VenwUhrmGKTOKEeQmGd8r8zhUNzd2Mye4T9TLkXGJnujLM9gsQwcshp0pkWeyiXKpRXQ5SKNdKphunZ5yRiYY5CvsgyhvGpH7sHxdrlkOYAAP/0SURBVEYBVU3aTC7A3ac+CURw8y0LOH4si29841XiJxc+8qGP44GHTkgkG/nSfqZsGOGVFy/h3Kuvoy6dTx2k9osnNeIVZV8RIBMvSf9popyySOleyW+NJPioxzUnQrwi5hI9Gg/zeo2qK8xEHnkbNWaZ+wSkhsn0LPEJ7/MHEtjaKGBru2DhB+WqHJU5GkcNllHPAcugcCp5y7Wqq5avVXip0r3pd8kxxzDeA7PaNWpkoVeynCj7TAZw++HA7Ec+9qg8SbKeFDtY7F1BsXmBwqBPIDaLhHe/pWUIRUnErjIanTJ1pg8zscPYlzlOi0JC15nsohVm5JFo1AkYajXUKJiV+LxNAneEMRuUHaAhN8V3yOsiIjKCJiDQ0L4DZkfoEIgoUbSWFZW3MBaLUCdn0K8r4DsC5YNTXjkXe1lejmQqxeqoE0fIZBNkZK8xj4beZB1Q1lPBBC2uSemxlItQFkG7s4yI5xBuOX0jkuFD+N7TKyjVrmF+XnnzMlQWfKrAHeusOFcxVDqdMtd+oVJCOjKFw8cz7Kguzrz+FDuLCtufQKObs1n6ae9DJGoyvRQSiUX1VqiBAsVZIDJgl4KLZR6LYf+pGvyZK2Jhtmmc17G9q1F4B8fsWMi7CeC1ewlYRVxulPO0IikcT984RirawZlnruLF7yzh8svraFM4jk0B2YkQNnOL5lUL0jrVAgjbOVqwDS8yqRGZYM0s20Q8i5WVLUQolEcEieVqDlt5fieIk8BXag1N0vERHHVoEYJA0UfLMu1OYXP7MrrJdUzO09okqNzZ2sG5
XyTbyx+S1kvSuYST9iKbxkZcpL5PW44I/K0m2iU+/Dw06VdaN0T0pZU6k3zJIQ+HaTCaZjY0iRcVwECtrtaofv7IVoocbCrCOZ7rXPI3HrF9F6/XlsXBniXuEH6MRvWm7IsPsY9trLuLjzHaQPxOCLzOLeXg9pdxjve6aHC++/g4dPhlDeLuHhR6cw9G2g4yyQWU7gxrIP23snkDq9hRPH40gFVjE1qZhedqp3PwevB7+XADCkuBoJ2gamZ5NY3blkXoSHLxzGJz7zESR8KWSbF1kuQVeHILxwFF//T3fZl2wHhWL21lk8/swUfJMXUahR4VJZeYIFFDeUs7GEflvB7xEyyybGJpLI7Tlx41KWhkaGivEeiXqM40PB29R9kyT8EBlPiySiFGgUCpWeWdCKXN9rFM375OT9Sksij22ERqSzz3tdcbQo/MudXQroEjIEjwfHz+IjM/8IR9MHcCj5EEHlFC1vL26sb+Oxj8Qxe/IOrt6+RAUZMUMlx3Fz8F15ytFAcoiT5x/HgYNBFCi4Bm4yqqfDOvk49kqHQpqitVptL2G7toqV/DWUh1vwBSnwKCQijkns4/uPpp5D1BfhM1QA7RIFUotGGBWEDKuu0uVQ2ZB6xWqafpSXVYrdSQAlNBgU6uD1jjI6KIRAzEmGoknGfuLJMVBKICl9Ta0JkAqgKvZQWTq6rSJPCgla3rSkMCDoknKl6meb/ASLpF9bXUz+IG0/+siFEef9mONf/3NtmiDRcR+EGogVIKRAYR1lzf7lSQU9WqnNk82xPLMjXcn7+beBzlE5vPyXYFaAk001KSSQawqZPCqvk0Cd0n0pTEFlKaeghS6oXJ4KASGz8x385PEAcKs4gVn7ou86rd48+RxrZtf1nbLLDoUlhQJh7O5k8e6ld/ELP/dLiEdonFCOKC9rk32aSMRodGrLTBfSlBdKW3Tn9m2cOHnYQoyUgieVDiO7t4lcYYdgdddAW2ZsjMa2H7NzY6jVCjh7/iiV24sEP30sLd3AI4+dN++dLaQhfWshq7wGXRojAcWjUbnIAyiPdalUwNh4wsCjdhaSV7NQpLK4D6503L27hHKlAOpr7N/vo7KgbKJ8VMhGve7EN795lwCApNJWT7hw9NgxXHjsGUQDk2hWWCZ5JepPk3o8BLJxRHxJDLt+FLJKc5XD3m6RAF7pq/xsX4BAP05QnjRvm8/PMdNufTQwNd4yegOhIK/7TIHKuaDE9fKuKHdzr08Zy7bQYqNilkEn8Niw9rvdmu6+hxWC2anxPsaTPoynPEhQ8TYI5qNpGtfk2Q7ZKEfFWO4oJaL0BMH33BkazykMPLNUomPwRxaxvttGtenCnbvrNCyi8NO4nV88QiNkhkZBG5vbe/juy99FkAosEAwjHZ/CwvwBvi+D6ek5yqsA7i0v8/mb5MlRfOpdAtzbt+4in8tTIdMw5zVNpy/fu4laO0eARZojf8jw0sIchbhoFsP4QkwgGje+eu+e2avsD8VD2y5XLfZRi5KC4FPbNGsXtribsoh6TAa0LDzt+qg8oZoulqNgQNCjkBWLhiJNiTNZRXjJa8psQ6hgfGRTyhpDtxdBMrlyuyiHcTiiaWTNGBAMREPkkwDCJDalcItQD05NBHBgYRxzUwnqH8onjpXAKmtLnalMQgRUHCs5nJRGyQDSgCf7QNi+R/kpI1/pt+o0QCrlCsLUGalogiDEi4ZonaBLul55mwV8a9UKxyzA8VyggZOkjGzwukBrgHUKw+vwUn9rJzPJTuIA8puMJllyfhqc0YgyNrTYXxX0VAnKbBdlrJPv6dMIqLcCHN8UaXuC4L5BwN1mf2iqnrRLoyWkFJ5sb4YgNpbg+2io7RZW8dY7L+PKtdfIb3kCI71HOVz7rIt0Xop9QGOrsA1vkAZcl6C828HsqcPIzE8gMhGFO+ql/ma/k38ViiOZJ8eUZvJGJ+lKZKSmkJZ0ar2HCUiO95CySmFnlsVJxnhH+ZP5O40ZEaKyfYQ4hiEaeVovozFolavIsd83aBDvFQjYpevCbBONU2WQcfWoUVpyrrA+HAM/ZYCMdmV+0exDhOA0kQlhdmEKB48s8lzA4qF5zO2fwvz+aaOVAMG5HCAsjPxKgxZVgmGB1wEq/auU1QSV5g1zUEaIR9RWeWH5DtLh6Ljfdsp1yfdRP/A36j7dr3zByiSkPpZB1qhU0KGc0EYdET8ND58fUa/bZpDk6exrO2OOWzodMgyi2dsIjaZf/MRv3n/fjz9cp5+bfEGpIHpksqA7AP8gCkebHUdrUnE/UsCKe9Hq9qZjj4CWwLdNQnXQAgrtRzKkTRPYRLNSZM2xISRExWMpRKFOwav0JcpN2ydIagz3cDf7FjZqt+EMDjCfOYoQLQ+vJ2CdISaRAOr2NF2rweTAiSAIoDV9W6PVYnFVfKdShERDaRJcA+WmAzulDpzhHUxF4oiufhq1F/8GVt44gFsXO9i4twOnr8k2VLGTpfBrUJVwbKrOWxT4cdxaU7saOLboxfufD+KnfmYfFg6EsbN5C+cfj7BtJCRn1LZ7e+1Hbfzv/3kbv/G3/wf8L7/9D/GNv3gROxt7JAiCowyJadyHYmELfrap29O0kJ8M5KQA14rQjqWl+alPfRjnHj6FWplMJHKqu0xY1bMxXH6ZCpV9IQuoUMljejGEI2ciVJItlMvK6xhDLDDPGkmpxWldUizIQ0vLKuiJ4vtfpwVdSiDu34dSk9domrsgoULF7pqDz5mhGFGMXR8Bv5OMIreehKffVow6eS8xBJU/gSOVWNfVZ78XSZQ1LMYX8Ozhz+GJmV/Gfu/H0SDg3C2sYW1rC3v5EkoEdvnGHs4+ncCRx+pYu/0WaYoWIAesTUZtV2iBk9APpU7gE4/8KkqyHClYLV7P8hN6qdAlxjUdV0OhuYrN4hZ2amuEhUUKFQqzrocGzCKmkxQ84UXSDOmO/0kRWxiG3IQsQVOqAu5iNh2a4htNo4+Ahw7tBGYhAASyWkxh1qZK46eEdp+0rgUNoj95XS1vH8epy3doIUirpXyQSlnGvtKYKX8jhb6DQNYC5aW8WA8dAqLvFcz+1j9TzCzrfV/ZGvAkjym20b7zugCh+MAQKs/R4hZdZP3JQ1La1nQ+ILWpf6UrpMc1fSiFosVf6ispYB3qOlXXnmUZtgKcQlJbgUpGS/PqU9a0Fn/pBZqyVRvNULfS/upQXWWNq77CvxKIepVAtkCz/U4Zo7hTeQo2tzbwwx/+AD/1U5+1VbqKO5bHW577Gum/Q/CgzSfkGVPM2Ttvv4HJyRTGx2OsRQvnHz6C9Y1bqDUr1l9BKmKtOG+382wbFXPESQAVwmuv3eDzwMbmBh5/8gKNHhpUpLlQiIYhZZcAnpSusnOoHMnxvdwOZVXA6LVDQykeDxPg5uw5O9iWpXtr2FjLIxYCjh6JsK01yil1JI3f7RbeeK2CfJ5tJmB4/oOfwIUnnoIW0bppkEb8Ywi44zTCU2hXqcBrPhT2tGgqD+2XL8UtcBxLUCFQ0crLHKTy0tpNzUa0u1pF3aIyoUBgJ1u2DlkMslQ0n
hx8GaiKF6yTn0MhDWibYESxxjS+qRS1Pam8avFYD7n82yx7HeMTvM8p704T0fEwYtMxlB11OLSLVzCN+jBCkBOj8TaLXDGKUn0Cpf4CJg9+ENMHnkahnsTY1BMYn3yacjFDQB6hcXKcPBRGJE6QQhqb379A+d7G229dx0svvUEQpRCOIa5evcW6thCN05hJJszoUIyschIr1CwaS9BYCRLgV6ydCkdZXruBXGndvG2KbdRK7wd0rBAD8Ykp//tjJjL+++8RzF689DUaHR3bPlkOC5Kn6UrauzSeOTaaVRSIISMpjrtNedOifGlQDnTYx8rxLGNYxg+HxPiB1YOPjKwpeZmbkiMKKwtQ4cvIC/sIwsTsHEZlNxDY1RoAAV3tembP07hSvtZpgvmpVAox3tep11GvVNGjDLOtoNlnxIk2bW3Li/mpOroImBT3OeyqXwhvafCrbrJeWzUaPe0h6xewXeNaBLvKnjHaqGTksRX/P9gJLExgqU0HcrsFlHINPhNBOBSgDuczlPUDOV9YuMurUAPF0mrmsotIjPo8QJnfV5gYwSpBl3IjUyyQ9rWQJs22xtCoZjnMmpUljzralv1han4M+w/O0WDVbpYdbG4u4a23X6Ghc4VldWiApggYQyavleNemzhF43Fk89u4tbKEhUNTCJEOxybmMXl4GunJDALxAFzUjRbTKSOIRGLrgQTEOVajNUHsS7GQZJo87BzTHj81srIP5IGX11oOQW0Kopk/6SZ55Z0uLTIOIkigqm2ONXMjfVOnjtyrUW/uZi3zhXSIwiEU0yoZ7CK+UtaTruiHMsQfCNL48FG+jsK4tMhbdZIBq13QFFLkp+EbI11kxpM4fmKW4HYKM4sZZCbDBNPUDB6C7V6OhrwclTukJ81gaoZgJEesLBpPpl/uS3eTKzyE+0agfnTdDDQppfs6h/9YHLdlNRBuo2EXYb3jBOdh5RcWHXh4E+nXH/IhlUkgQV7XqV0gP/vc37Zy38vhOv7k2AuVJgU9mY0iF2h6Maxr0ZF2p5AbXK5mAk0PFbczh0p3g0KnjrBjEmOR/YgF47yP1Wa9FY+hRTnKAyhgIGAh4imUKuysJhAkkGxewd2919FwNJAcm8ZM8iAZd6T4bRUr+0ExpprK09a38hRpilbJpStizpo8X10OkMeYJBCN25RysUULvX4PvZ0z8N/6W/C+/WE4392P9IE4UhT+0aibDJOBJzBhCwx6BLHrpa/wuRIF7UNo1qM4cnQcX/jFCRw9s4J4agvV8g727RsSbBJM7VURix7Gj35Uw598qc6BfhSFHTf+5t9/Hvvm9uPLX/oKAd0axuZpvQ6WSKDa/s5PoNghUe5S+bVIkF4KtCKUr+2X/u5HkZ7wYXerRbDYw15W0nCATiGBOxcJtNmhZgG7ushnBzj9yDQZTFvFylMdpIWjPmM/abEAaaE3rJEh6pgcy2DpJq26VQpC/zHeU7JE2sMOQatShFBZBWgF9Xq7yJVvcOxCpFdNh3iIgWIUfhxHAvAglepEbBrr2euEtmM4ve85vP/M53Fg7GGUqJRv37yG5Y3LuLT9BhrOVQTSBDyBEvbqN7Ceu47DDztw5JES7q5fY9mKHyb05viRLEgjBCKuNOaTz6A6KHGMZQlSqBKByRI1Gug3SDs5LFeuY7Owgspg1xYnSBgEKdhm4icxTYUY9iZIq2wP6UN0aQmiBVjFRAQiUujGUTxlXUvo6ncpEdGsjAYdArhSQAK2+v0+z7KvqbCMBuWJIbAlLbblheE9PSmrppQurWSWo7RBPgkXKiAt9lFas6EUg2ibAMzHcTh37vyo4B9zCMyqTAmJEfhTv4wEluougaWq6zSBylMgU94DyzTgZh+oaXqeH6Mmye+jT7VbJwW03sHryj+r+0RL+tSN2kFNHi15aOWt5tDYe6wQjpfCDEZAls9J7ulZvYdlP/i0+Cje7+YLlWNQh64T+xsY08IQzQJpmlJeq1AsiFs3b+Py5Sv4lV/5m5hdnKUC7tliIMVnanGepe6ScOXYiq801S7jN5OJ0ajrY3H/DKoEsPfu3bUYWj7CcSvS6lfGgGWcPnUMK/euY/me4kNLHL8ajhw7aF5Z9U6XYFW5Ylv1KpXNKJemwIcMiUq5RAAVpZFeRiTsI80RQPapXDX9yDFWntvbt9cxOwmcOjnB8aEAp1xTyM+bb64SnIEg0oGf/cIv4+z5xwgeI6RdF2LhSQp6L/I7ZVTJ49UyAVNNXjABbD/SY1Eq5bitfg4QbGofeTkYBKprZFrFBCv0SmOpha8KKdAUtsV787qm5uWF1Y5/Sn8XCSu0o0YeoVwKeKkQ3aZ4tRFNu1MmsCqhWLxN4DxK+XPi5BNo9vx499Yqtlmvh558P4HsBF55ax2OwBxBxSHWdQYLC08iOXEWP7y8hesrlK3ThzG9cJJwP04Q2sDswjmcO/8BbGyX8Z2XfkhQQ1DO8je2tzA3vx8L84coi3u4dOVd3Lx1E+nMuPHXlavXUOFYVtn/4oO6sgiQbuTgUIowPxW3ZgcVklKu7eLe2nX4IqQ/A7IcVQJ5gQHFCY/ArBSwro/o8e//9HsDsy//4A8IZFto1mgUtEj9PRZAgKHsMwJ/PhA0ST6QkTTF2iPAbnEMOwI7BIrUkByhEXPISFJmER95W6BUvOghsNH2tcoGpC1KtVhHifa10YvaLfkkZ5IWn/Uo6wTuNXukEBH1wXQiwvJIb5ppYt9IVyq2uVwp2wr2ARmaEMfknlNTLqyKAJJCHGwvfX8H5ULZ+lyZIjQLVpIhWSYYU7hAWNuWyhut9QVD0k6Q9aO8Y/2VbzcYUAiWm0C/j2KubAJlbCxB40uLjEsY0EgjVRK4hGzmQ15LmwGMOyy2etirctAUmqPx9BHM+ti+JOsbZ3uoY/pl8rIyAShjkRv7DkziyPFFjE3GCTD7uHnzbVy/8TbWN2+bDp6bn6CcCdkahnKpStqq28IpLXws1rPouep46OHTbDcNSs3whWnU0RL1kj8UkuJUTKcAG/t0tNnOX5fD7AIDshrvUTy6IJ688jIO5LFX3LR0khwiIyOBMlVT9gSM2pFTu0j6ONaSqfKCKgtGvlrD2t4u8oUSL7tsS+SAPLN8n7bxVd21y6IMYi8NHoVNmCOFdGQb/4jvOf7KVV+v04hoK05YqfckiIsWTx5Lkb4iNKD80m8VlKrbxCDrJh8U8qWF5tKfTsqSUc51tpl843IQH/LdAu56n8CsZt1G19g26VtD+brG7uF/tFo5dm0afKTFSoO6X9se03CmzNGivx5lpyNA/agQqBjrRVynU4twP/HkL4vt3tPhOnQh9UK1nedb+whowGhhB50ZxCIkAgLFOE+/L2yMWMcuyq01EkYdUdc4xiMHaDVqsRAbzcrLK+UmOJIGlBDVVn+a8m2yMe4QCTVYwHr5IvYa9xBNTmBq4hTCnsSoE9hmU8oOMmhPsVwELlrXyE611BNklAoHWVYOeYXgNGRJ052Kdy2RcctrCOVPI3Dj36B35ZNAtkw4XsVKfpP8lEWtdwfl9hoGVNCJ5BgJIUKC
KSJFa0VTGB5/Ck89u4Bf/tuzOHRki1b/TVqVtDQTRbzySgNf/IVPYGc3gP/1t3Jolc/ikYc+iNU7IMhdw8/8/MfNvX/p8iWU6zuIpEALU2k3ZqkgdlCnBR8nA01OTWNr+x5/G+ILLC8UDSO/x/rz3nyha4QQHs7jzltkWlqfLSrVBAf35pU25g5GMX24yrJqBjqbrT0SIglh0GJf0GKm5eMPavGWtvoM487NHZSrZDCChSDHKOhPUkGNvFSddol94MF4OkpA26cgpUAKpjE9sUgyIFFxILrtmk3nPHHgCTx65FlMx/Zje4NKYusyhhMbmH6qgrnnSvjgJ5/GhQ+n8cTzi3j0/ft4zuPYuSkcPUdaSLyDt7buot5ieUQ82l2k7/aj5+2h74rQCnucrEqAOVTMGAUCmci2USUtNtp75l25Ub6EQmubBkiTYybLdJTBYDF9HnHfPAJkNglzs361mpZC1oQL6UYC2WhLEpufKlvMbiuPyW8CJ0oUP5q6pyBg/yvNnFjQhANBqNMEhZXAg2XoNxKg00NrmL/pihKby6usRQU+CnYPlZBb1jLvU3laaCahZdupHj9uJf24Q9kMWAXWS3WhSFBdWekHUzr6Ln4xpSy+kSUqg4BKTZ99eeH4VfUV4HzAY/xKiQgqUd028urwDRwHfuE3E876puss0+ll+RTqWvRg4JingVqBZZVtfcbvLJOv+MtT8WP6b+R54qmiJSdUNn/X7IuArBSEZISPgkupiwaONiZn07h27Qq++md/iFNnTuEY+2xlbY33aTq/QzCpHKcyft2IxVKsh6bwKRfYKK3iVhqiuYUZ5HK7liczqOlmGsgedweZVJCKpY7z50/j4puvSeZiY2MTZ84+hGSCsoidM1r8pVg3zXx4+E553alsCSyyewUkKBMt9zCFsLxD8nKKhm3aLxDFddY9GgAWFzR9qBJdKBWA73y7Yn3zMz/zczh1+mGWKzDhhc8TQZu8Wi3QUG+rVzWd7EKSCkdKWknefQFNUVM2KjSAwL2tEBrKSS2y1ayCYtrcnqDJXynLGg07OSik+LTTlDIhNAh6tYGKUod5CfqV+1sxgD6CQe3dnoglEE+mCYqAYuEOAf4i0okFXH43h52dOB576pdw8tHPotRJ4Rs/vI35Y8/hkad+jkDUhWzWxzJmce3yHuKpeTz1oQ+jSND8xsU3EKSBEknEjC7XCFpvLi/h/KOPITUxhctXrmM3XyQgSiCXL7GOLRw9dgSZyQxu3LrB9g0wN7tgG+3cvnOHhkmA9xUxPjZp6w+0E6R2P4olaODT8HJTtiytXMV2dplKW7wqHhnRm0JnZDRrekL0bcQoeuSl9wpmX3rx92nkEEwQWBEJsSgvC6YcIHix/Lc9GgkChpQ1o+2ehwSyNCio1+QZHsk4Gr98t+LulVpu5JV1kSddCIQipFMCHY6HPG4KcfETsOg+W2RNoCIZp3FVyJMWQbMiFheuPLIhygNlYJEjSPH82j4+XyhgJ7uHSk1T3X1Vg00nIbKO0jkK79NmMfKgNQgWledXcbs+D0Ed5bWAerPWtbSaUYU1EIgqJVSPRqY2KBHQ09RwOjPG8chS9mpTHi96fFelWiStOQw8OT2kve4kdZdkuRvxdILXtClNk3yjuFWF2VQ5Nuwrgsg2+7LdSbAeM2x3kEZekfUtkt4b8IRIF/vHcPjUIjJjYRSK67h94zJef/PbpOsi6dhP41bp9DrYoxzQBgNdtjHsjxIohbGxt470TBKLR+fhiYVx5fYKllez8GeAANvoixBM+yi/yL/K/qKdLeU5fzCtLrzDSpEnNfMmT7sW+lJGEuxJhQkMatMIgdlRjCylFMdGukqCdEAhKkDr8VKWESyO+sFh45qvVbGZy6NYLpNlnIiGQ7xHscOUSc2e4Sq+gvKOdCOgyXppDDQDJvkn4Ck6N0POaJ10h9G6o1pnm3RJgD3QIrICMUKWhk4W2bxCs4hVSgT9xTINZAJgyi29d7SDHumEZbqVw120w+OBftLf0i+mY0hPCqmSB0Z6QZNC4oUO9bOtSyEfa9c3RVX42B8KdWDN2M+jrC6+sJvymoCW39XfH7nwN+xd7+VwHbgQeaFOcKOVxanQGCZCBzGZPIrJ8f2IUbDJMvRRSPbZAdXOFkqtexTy2s52EhORBUQCGRskDa68slqZq8FsdORdJVNxABTU7Y0SzDg3sVm+TmDixOz0Q0jHj40Yib0yWsVGK6FPq9fiDhUcLg+vEv1SYMkry1PiIRT0IZ6IkrFCaAd66Cr11Y1Hkbr2b5DIXSCj3kNteAelHgUBBXWl3qZwa7IMBweug3pVMaMziMcforW6jWbHh7GZBJ7/8CwOH85zkN8gM28hEQ/g0pW7eOSRDxCsPoSv/NEWlq4dQLc2j4nMHDpNB66/lcXTTz+MZz52jsowjHtL2jM8S2Zwo0sBFiLBdUmAmiWMsq+UWP3QkRl88jOfM7DpHqbQJnDt96hB2PYQZnHl1QqFUAkuKrA+QX22FKDl6sXBcxUqM3mYFgjAlXqLQEDJmtn3lCu0msKCI5iYjBIs1hEk0C/tiHAFOGgJBdIkzyiFU5TYZwrtehpBWsPK99goK4WaYgVrBMdUpEkXJqdDFDAJLG1d5NhdwmMfieLTf/sYHv3oDI4+MYkzTx+mwpvD1OIQkTSBBQHy7IEMgcSCbee3mXsZ9U4FwQ4NnlIPhY0iutpnm0o44E3izLHPWmiHpfEQE4jJOb7dYRXZ2op5hde7SwRPdQp2gkwqDF83jun4SSykH0LQlbBpH9GKmEUWqASsvAyiG50CcobeJHgodBT/KtD6wABTMnYtppO1zQrwPoHG0d/6dFPYaEtbhUHYFrdUMLKGfVQyOk3ZmHWt2MQgwYvyigpZjgDwsK/QGS/7PkRlE8XBg/t59ccf//qfKWb2gbDgpwHukSAdCZORQn6glAUw7bsJCHIJPwU+dbdaf7+YURN5QfJGQNYAvX7g9Qcl2xeeAgAWbyivLG/SOwzI3j/lndZmCQJo7NK/OlQWhauKMaXNU9Or+q7367oWP2hrT9VX36uNFoJRJ3osqFxtEMAFkM1l8eU//gMK9TyefErx1fstRlRTUApLkAHHAqjsNQ40olkX7Sqo6eeJiRSmprT7TR257CoKhW2O4QDJOJUwFb0U3dRkAhffUN5shfRXbBtI0WarWWVZ8jARGPBvDwWt+kn0WS03oQUYmhpUKjutQK6RV3ukQVIVxzqAty6+iwjZeW7Wa2l//P4I3n17C6srwC/8wudx6qHHUK5JgUdI0z4CtQb5gsrcG0QqppmkGHkpaMap8iW3WjVUa2UChwrr16YyGinPEVQiX7BfbXUxaVS9q5RBTsudO5rilCdP9JtOxdkvaQLBOK/Jo0wDmMpPO0PJO15hv5N1qEw8OHlsH3J7eRq8+/HTn/t1LC9V8dL3ryMUm8PZR57HoRNP4Bvfepvy1Etw/msI+CewsZLH7NQhVJttXCUQff6DH+QYzNCwXsL6xg6m5/bh0GH2Md9xd2UN07NzlIXH2McDbO/kyLdK4K7
0Z1R4BEAzszO4dv0mrhDwHjl8nCAqYsBVHq8cjYpSkcDH+tzFfi5y+FuYmIri8vXXUK5vUzEa4ZmxN+ITebOMZIz3RYgP+OK9gtnvfv0/8/1sQI8PisHIEAJ+2sBGK8nVf5ZfWh5RjlGLAKLFfm71le1BXlICGnnwSF8mUyi/fPz0+zRz46OuVH5rg5rWNv1vxvRA8YuUk7wuoKQxNqPduHvA8aZBSkbrkHbrLTlCOigRvK7t7uDexgZylbJcQ+wO6mKFDgro8DmVwBJZLwINyuJCrWC0pHc0qDe1Q18ikjS5p9mvdkubAxDcCTBRRspLK3CuVE9a1V8r75phqbZoelreQc0MKHF/KBRDyDmDbDGPPYUQyQhmWT0aZlqUpo0Fgm7qeFcXTfab9LLDOQ2/a5zGwxB16lVqNPIcddN8ysIDBJKz+XVcvvQaLr/zJvulSHAbRSoVRoM0vrO3acac+kbT7eloiu/totKqYv7IAtxRD67evYPbK1vs+yjGFwKIpQl4EyH+TVKRgSTeIgFp4ZfoZiSHJSJGfa8+kG7R38Ohn3JB4I1toBwS78mbrnETKOzSAOmTZkTzfRlDJEjNNskxIiKV065EIJkrF9EkP3o4XhHqGL2zLYdNa7TugxUxEKy0aAKyNvvHa5pNMz3He2T0CE+NwDbHieMcTfosjl7qTpsnVSpFFAicd7Z3kdvNolVpoVqSV7dFvhGYZd28mgkScHZT1IiB9PpRX+gYtds0B3/nJ6/rL21MJIcGf6SRTp6gPu5XOL6qP+vGIjj+Q/JIB13xJ98Bn5wJMq7ZJpbzscf/ppX7Xg7Xvsd9LzRpQYv45lJHsH/8ESyMn6LSICChRaDAdCfReKffRIUAr9haMeEadqaRCS0gHpwYCQu2i7TJTlSuPKV7oIWnzRKoTJwBDmBgD7nmHRJRnopqBrMTpxBxj/HZEeJ38VM7NnUUiE2BPeoch02naHMEAVlN8frYofFEhCBb4Q8OFNn5nmwYnot/C/VXn0Ax9wNUe3dRaiexN1hByEFlRFQXGzuM8al9lGNtPnMXtd4myt0dJEP7MH/gAPYdjeFnf/4YxsbWyMRXaA1RMdYbJObjGJ8+hqvvTuD/8o9eQ4nA8qnHPo07l/N8/x4cBLY7hVV86GPHcPjIPiq8HrbW18kBbeySiZwkXj9BfiXfsamkJg2H5z5yAo8+eYJEu4OQ+wiZhuDaE6VCqaBXCWL5ah9bhTWEyKhaLVuvsW84sE99NM2+pIXZjrJuVIZu7a+ufIba3CBEJR0kIfgJZKqYnAvS6kzh7IUk4pMNtF27JJodFBubBItdKo0xMlmYQGHJAvwnM2lMTWSQjPkI3Kn4MiEsLEwg+f5NPPG5aTz70wcwfYxGjRbmkTD7BOL1nTKy5RJZRIsEg7aopkLrvduS1U7FiBXUbl4F7vWx+8Zd7L5bRcLhJwgliO8G8dDcx9GgYCFXEzgIKHooaChQW9tYL13HWuEqmp4626R7+nA2vYg4SDsEs2PReQTlOSBDK/2VaEO5YAVkJTyU2UDB96LNked15BHRVI/As4SQWE4hLiPGlGdWFDdSCgKuBmoJXGX5ylOivbTlhdUCCQl3nf4AjT2CWe3lbwKJz9t0kyw6MrlWrFsA/X0gfOjAPvHdjz1+SzuASVbwlCCTgNCnQKFVlOVLQYvhpVMfnIZMCUK1WGu0OEsqa1SG9QOfHbL5KkZOKnlopdYkfe4Xa4LI+s3A7Oi7nIgCssSLBmxlQInfCant0x7Sc/c/lelBr1d93TxJ+tYcVW90ixPBcAT5UpP95zaAJrChNvAy+0+rpmWsdfDmmxfxO//hf8Wf/dlXSF9VtsNNoLiI/Qf2894EQdoYDdM0KmUaKorPDsext7uCmekY+apJJUCw2aaCpYAs53MYI5CFo4F9izOsZx93724SJDV4/yyFvaYQKX968rh7KI/kKeqbwtCq7EGfwLtUp0EdYl9pi0ktPCxznBXLS8O168LtW5ctzGDfYtIWsHW6Q1y73MWhg9P40Ic/T55rIpUR/zlRyGsGxI3p8SmEg0EMSbea8m9TBjZbim0lP8n7RktTU7nytsobq2wIUo6sLMtRHDdB64DG3FAxhy1s7W2bl0dTvvI4zxEYTk+PGXjVoqAvf/n38O67b2JzYw2JZIJnGhsb2zS2ywSwE5QlQxw7fAY3rt7B9au38ZlPfgrTk9P43re+jzoN04XZ43jm8Y9R1hXx5mvv4pGzj+Ag5WiBBoji9g7M78fm0iZlXwxHD5ymYh7i4pvX2Ed+7Nt/CEpZde3mdVPIi/sOIJ8vUGb0yGua1nQhr81nOg0aMc+yjtt46ds/JNgPcSy1IC+Exbn92NraMQA8Np4keJSHuoJkOojL115Ds0/QExLxihZHoEReKov15qeBklH32fkb7xHMXvzBHxF8joCnFkkJcIrXFaIh8KCFybahDE+NlfxOclb0SN96H6nKIKT+0xStzdpoylXhXwQtukmGq/hK086K85Q8UYy++kx9JI9fX/xMENx36HqHwK1ptNImTdZ5T6FRwwbHYnWPerdeRUtygjTjJi17fZR3NFCVz1s7bmkhWE8LXKnXK1qNTgCr2HSlgJJDwMP6CPwGWUfHsGP9RQ7lNe3SR0Oe5ar9xWIJTspcu09OZN5rK9udpOtemM9RJ++0sZMvYpfjXZZBaunNmhz3oen1VIgDqrU1NBS7wzC8rjn0OxG0qkXySYkytIGJ6SRmD08hmPRgdes23n3nVayv3YKDbZmZS3FcQL4TUNNGSy3KiCCSmQSSyTj1rc+81J4I5XXUj/XiDrK1KvHBFE6cPYvMjHanS1qaUPa0gT6Fscg0V7sfLBx8cJo5QTmvU2BRW9ZrAylt6qQd+Rp1hUtqRmk06iqDX3mNpk1H/Sh9I9DuJg3LqK7bAvsagbAAoJ/EG/YoTppaloZBk91jtMN+VZiB+EW0LA+uPLNKfSYDR4uZzXsv45f1Et0rflZp4uQBZuM4xn3UaEgX9irY3SqhlK2hT6NBzmPV74HxJO+swsBG+kapBKmHOPYjnXnfoFLfsLMUYjGiX17nb/zKFtLgk06WpdegQcS2tZsNlky6ZrvqXeI78QzpfOBRyJ50hwy4Pj71zN9VJ72nw7XvCccLYoZIIInFiXM4MPEYphLHEAwlqbxYJTKGBbKT2Ku9DRL7qsWJBYdJJHxTBFRpE+SyTuRBrQl4stNr9QJqWt09qGDo30F5cINMtQQHB25m4jjBiJJmkyAISOWLVpqoNhWHbWHIipklyo7XbiOycHRKIStxubyyPgo8WR+V3Co8S19A9+bHUC0soTa8RGW9SF6b4rsIWrNB1Pp1rJXfwb2d72Kv9DqJvY2xiUV4QlPIxFMUNBxZTwFPPBumMruCTKZIotTitQCi6dMEsEl85b/U8cZrPiqcecT8Y9g/vQ9eKsQYwbzSy5w+dx7z+4IIsJxYzI1X3/kBQSQ7OH8Qpw59Aif2fRgLk8fZV3XMLWaQnkgiFInD0VyAdhRSOrFBr4x6lmCoM4dLt96AO0QCpcCrdr
fZfh/e/8kj8IZz2Fxhn3UnsbXixcxsDJE4FZ2HgsMRJhG6UKIF26Fh4KZV5U+uYvF4Ao89ewCnLhDUT5UQHi8ikumgiTVMx+YxOZZGNOhF26YhScgkrjDrlh6bxbNfmGY5PZSaOZQ4FtUmBXeDRE5mCbjJJL4oSZUM0guy/gIC4jYHCpXbBOvfx+6bl9FdH2DrUgPNHeDY0YOIj4fJuH4cOfRx216RHMF2Kpasj1x1Fxul29iu3UC+u8ZxETMQKLQdiDgnWd+TSAf2IeAKU5BqekUzAaI/MgoZYwT4KOjJEAKjmio3YSQ0pTeIqUivD47Rjmps9P1DDGr58+waGYqCREJbikvlKCULOZk/jZhZIFbbTQr0SmvKStV0kKZWVB+XO2TlaPZB59HDB+w9P+74V//0n9hYqD06JX8MjBp3yLPM62yTTSUJXMqDKvDJOrIzTcBI6MjzITEqSWTCR32gqvE076y1hUXyFgOl/ENCSGUKeQrMKk2XeWb1Zj6jukiwWU34h4Sj/tOhT11XH+oObVWpn/Que43+4Snvj/bk15SYhKUSyusR6k7KAbKvyIj9J4er0k1F4gGCwCJe+s638c0Xv4Wv/ukf4+I7F3HzzlXLXtAi8FF7U2nyVjpKPmUtqEydpI9YJM32+i3Rfo/yTFvlul1UllSYEwSRG+urWFurElhv4bEnz5rHwgQw5YubRrZWYmtrbjeNUq+DBh0r6WV/O90sn6eM9i4V6WgDDQ9KhW34aEQnEz7LpLC0TCO168FnPvt3LDY+k5ljncsoV5rkuzjGCMY11vJ8BFm3RrdunjaqE/b9wMCYxX3KQ67RlNXBiw9mEGwaj3WVMUdVZP0uxT03PYmF+VmbMaoUc3jj1R/gd//338Hv/Pa/xeq9ZRSyu1i+u4rtzVUC+Tk71W4BmAOLB9juLh5/7ElconwT0FWC9n3zRwhaGrh57S6NVuACf283WnjtlR8hnUjiwqMP09CuYXdzl4bxGDbWdpGKj+Po0VMYz4zjjTfeJO/3+K4pzBJgv/zSD9juAOLRBA3DEAHqJvuzgQOHDmB3L4sdlrN/30HKpAHW7q2P9IKMaZaxOL9gNK08xFqPkB4LY+iu4wc/+pbtwe8PqOOkXKX8SXQETE55fzQFJXrkJTv5/Tc+/97A7K1L37B4PnnmtcBGsmEUaiPFryWITd5F8ELQqHhLuDlefL/4U1PrI2blKIkBOYaSK3LQKEZWuXs1tOZhJ8OIjyitWDbHnafAhVaxa6bNwXZ4KHMIa1FvtlEhICtT71aotwrVKtZ3d3F3bR3bBI4tPtuhrGh0FHYmECF4Q8AjmiGPiR8N/Ci2mv0rwGP5c1lFLVrSVqjipUyatJrWAh3KfJan+E7JIa22Fx1qe+h2tWZ5jZU2bXNnj32rDU2i2N1WfuEOrt4kfpDDQQQ/VAgPpXunTbk6RCIaxvSYl7Kmg2pf2UXi5LM5NMrEAeU9BH1VGo9OTM+NY3w+QxBUxhvv/AhXL98lqOvi2KEFG/NSuYBiJccx10ZGXhrNSu9HADxHTNDu4+bSHQLhFEps2269hIWjB3D4BDHJ1ASS45QX8bDRisJ55GhTCM5onklDRgkn2Wt/GmQzfhx5ZkEDQIuC22YIKMuD0tzJ0aIfpRsE9CV/lVJOaQklZLVrKq0ak8l9jkGb42z9TwNB+XBCmk7gNeXA7lKe6eVak+H1SzdRf+lv6Tv2qTL2SHeNQtLYx6ohB3LkRR4i6I3yPoXFkFbrHRr3VfJqEXubBcqD1ihyis8pO0ddYYF8TkBWp1ob8IeN52Tomz5Ru9gH+m5/q3Gsj13lb+YtZt3MK0vZ26k2UK3VLDGAjEBJrCr7p8H3dXi/W+sBKNcURiZd8tkP/Lo6+j0dBmZl6SlcYP/4GSwkzyIRmjNGa8pFTYmllCLyzNb627ZpQrtZYScnEPdOwzVMUDgRVRMQt9j5jaayDmiThQotgwoZmJX2bKHQvo5Se4eCIEOCPYqwN0IroIG+hDEZRR5ZeYg1fSZPlvpe9dKiDAX5yxuoDozFIhYvKwClVEno3EDjnX+Azs4JpKab2H/gHC31MYLsHLwlgr94FT0/O42Dm4jOYyp6BungcSKcFHZoSQ66eVPWTz53Gh/6+ASF0PcoVFawfHuDYOv9cJL4vvFnXfzxHy1jO+fA80/8Kq69+zY2N99EaDjN7lb+OC/Wd7bx2V88gZ2tl3Fv5RLytQLarho+feolHJv+RSwknsfhheexMHueimaBgNOLUjFsqXeULqRaU/JpDnLRTaD/ML732jcNkA5oJg29eSqWNM4+PgNvZJeMnUYydBLf+foKYsk2gfEAhdISIqExWoJOI7xcsUz6DxMsxCik5FmnhR3qYGrBhYcemcL8gSjHcg0b7wxRzBFArt1j3TcRjyfJxLxWaGNzq4y3vroLH63qiZkger5tFPlMz0frnEJDy018JFxtH6zpB6UXGToLHLsqKs1L6PtetRCDmfRRjhOVrLuH+ZOn0Ax3sUeBdvTMT1OuU6yOkAuanTqF7zqt5dso9tfQcY8SdcugcvU9GA/vw3zmDILOtKQ8lNx7oLyOUuIS8FQSYigBY3l6FXMmQCbGEO3oEJAVc8uaNe8MwcfoU4w/OkbAk6COSsDB56SwPKQfTTtL84mBNVWoT+3cJu+KBeHz3WLQkedXQI7vpvDgQ2qefR4/dlBffuyh1Fx6hLrFhAL1oTkDVC6rRuVIZmd95DGVJ1ZgU65W/aepGyltTWXKu2O1Yb1ZZbtmxYxkm6rE8lgWkaSApi7LiDUgy1OLv+S+tVyz/KqydI/qIEPbZJchVBbHx6xI3qddvjQeApjsFl5QHUafD77rN1PsAo1sg56/f6v1nf7m0Nu98kL5CAyDMRoNNNzy1S1cv/MWXnnje/izb3wFX/rKf8CXv/of8Ad//O/w+7//Ozi8bxFT6UWkkzTMAwtIpg5hZv4QtrMb2N5dodCt4d6922jRQFPar/XNLWzsUp64m7xvwbJYdFo18zQ0Kg34XPKaDsljYfJtlv3YwNz8GArlTQSCHhrviq+Vt9aBUn6LSqlM5R+k7HIQlDXx8KOfIOCaozE+z78p9/oOJMlr2iFI2/oqNKQ7bFrMYptyoGPePMrTtrav7HEsqPT5KRkretMiIYHOnWzOFp0pTjMYDBsgjWnBSyWPpZtX8P3vfsPA6x/+3n/EtSsXqbTW2JYhjXi/rcZX7FqZcvDll162maNnn3kfeamPqckJ26v9ytXrlCchyvUuZUyNQGLBPICKY15bX8HG6iqefvJJ6g8/vvOtbyPEe48cPUa68eHy1Ws4fvIElleXoEwj4xOa/WGf5bJYWVpCPBzDQQLVW9dvoVKuEjj1CHAXbIpTIHZhZp7vWGN/FwlUjmNne8ecDIqnK9CA0ap5vU/ZRRQ+tHhwGndW3saNpUsIR8UH7DeCEq2YfrCYccRI92n7/qfo8+997r2B2fWl71iuW6UOkvFhBkhUuXv9BFL8m++NRP18v5LyBxHlfSkCwFQmRSMmTfqN2fS7rfiWPOG7JX9cM
ups9sdrvwnsdji+iifX2EsmSRYIDGvxj+I/LVML+dNFOTfkPbp3O1fA+tYeNvf2UCdwcxAIddjkOummpb8pb21rXQJgTQLLEG0op62AC3m53ZU8HwE0gW/lWw6HtEW9m/KjjfF02mSoaFGzE2526urKKrK7eWSS44gHAhb3G4ooy1EUlYobxbwPlYIXezsE0X4P6ZzAh8ahT2tx2Cg5tXzqDkqPTJKN9A1Qkxd6GOb4zsDV86DT2CVgLdNgSRKI8tmYF7fu3cJrF9/lPcCRw9Pk2aaFqMjQabS6CETcvF+LJkM4eHgRD505gd2NLRSrTQwCUVy8cQvphWnMHJhHLMPxJJCKxpVRgjLcRX0pvaHxIcEoblaODIuN5/e/BLCSf6yynCQCsa06LJ1ZMZ8fgVmCeoVZKHODxlCOFoFNPkqjj4Zwk/iK46NFmzYW1RL5uQEtGPNxrIMSxPwukChQrT05XPKwW2iC295tdMxSTd4KT7FAgUfz4LNeet9ooZYTIW/S4rZ19sj/+d0KNlYUYlChsKUWFzNoeoyfMtJkJHeI7zRrquwxWi8iYDxasyHHh0IaCKBZCWXgEHON9KcAtb5Lj47CHqyz+F6FgRXZN9oZr8u+HFB3tln3RofGGt+lesq40yZHn/3ArxnfvZfDtf8Jzwu9Zg8pfwr7Jx7CROooFYsSFDcJRhXjqByMFdQHW2i6N1AebCNXyoH4BQHtn0/ApDx2RBtsjDxVWtBA5E5i8Dq7qHt3UCYA3s4vkUQ8mJ44TkA7btMIA1/TCLhN8FPrVC22SKumFT+ruLY2rc2WgshJHPKKKV5OSb/1DlkMxVIJS7QmMsWfR/92FqXt21gpZ/Hudh65SgTNwSUs5W7Tcgwh7osSgFPwkXk6HJDt3B7CinNMHMDcSeCnf/osAu5lpMZvo5urIV+fQS92EDt3P47f/X9fw6VX9sEXvoNi7S0y10NoOq+iG1ynpSVPZQD9egofeP4wQhkfbm29iztLN/DUvt9DrPk+1r+JzdwtMlgH0fBROFvH0S9No7lL4sQrbAut2cwc1rYKBP5V5PO0zIoRbG+/CO+s5iImCNyXEegcRoRj4+vuo8IIYHs9hq/+/m089tR+xNJDMo+Ih6DKm6VgbVBZpNBhHyq37ID9t7qzgkCYTLcHfPnfrOLmnxxBubll8V7alz8eTbJPKED8MVpwYdY1AW+th8r2mHlDxiadSAYPEnB7SZBbBBcNRNtjKCmPV8yNfIWM08hgQOU/aN5Fp5yD58BhuFoUoJvrGIY53of9uE1BM5Z5GPsyCyR9xUF7Ue0MsF1fxmbjdeQ6l8248QyTBC67ZOYIkr5T2Dd+AanoLDwGQKkQhhxPKnyKCTKfAAGFOkFZn4ykDIpdwm0tfFNWDlugIeFPgSEGs8VgZEKFqpDa2McU0BRUHTK/TsWcOsnAHt4nASQmVYC/BL0EkRQ+uZpggHQvQcf+1dSPptUsFEAAu9dmPQN8kn+rfF47dfIYv//4418JzN4XVKovxYt96pu8rXbBQCdL16fJIIFHCiF+KqOBRVTyJ4UbmIeWj7E5Vm85xCW3ZP9qSkft0yFvqISrebLUB+TpgZnrEvH8nZetXvwu5Sf59FfPqo5/7ZRM1E/8QwrALrBOvO3+D/zOch+0Uz/oNzVHf9o/fGYULqHC2PcSnhzXMAFDKOwhcPAiTNpTjlVLH9hpmWf1D7/0dbz4rT9nG8LYd/gEAeo+8z7NzI4jEQviyttv4O3X7sDnbiOWiJjiy5YquHp7F4cOTWOcIMRHgKH0RxFtCkOaU8yawjS0oKVNeRWOKAyCJh2rZsqHwEbhUo3aLvuliEQ8jpXVAmbnzmJq6iHWPWxhFfKuTU6NUQ76CEzJ60pDSNCqMIGuo2HOAW3tKs+rgK552NX77AYZZaK1clk7aGlHJNhuPtMEia2OttD+Eb70B7+Hr33lD/HGq6/g+pUrlB01lkVwwnpqYWQk5OTzXQJQGP0TtxOYeC1jwNe+9jXWbZzyaIy/+UzGprSNKQHTvgMH8eYbb7J/DuOtty/iwmOPIb+XxZV338WFCxeQTqXwzW980+4/QL5X7PjX/uLPcf78edy+c4ugtIDpqUmCoyC/V3D71m3+PW2ry+8uLePs2XN47bXXkUykoBCh9bUNHDp4AHdu3aVxXWV/pvi+HPzky4P7FyyrRbGgNQPUO34qdm8Xb17+LmoUcD7N2NNAehDzLVo28ntAXPdPozse7xnMrr6IIHWQpqG1E5Y34IFfC1do0GgRY2I8SOMoSpqKGuCNJ2OIEcCGY1HLguG3VdpaDMXKCdCOuAG2mId6rWMgU7TMMRdjGO0LEBDAysNHwCAwq8Vl8t4KvIi37RrvFz9qNsKtjAiBGOknyOcpEwkYFeurxPzKxtInYKFkM2NRIFfgT7pWjivJOVVLq9+1w1WMhCLvfiIZ4RPUw+kUgaUP+WzRFoX5PZTP0TTCfJ+TQNJH/dEbBJEvaItpB8olP0Eeja+WlyCSYEsDoY7vk7f5TukeH4lTsx2xWBd+GqwuP8GdQ1Auin6Lcrm5yyEsUgeHqetiyFb2cOkm9QSByNRETF2BYjaLcq3FMddsDoEgx0gbPBw9fhgnHzrO60N85xvfw63lHezVCDzJX3NH9yE5FUMoSlmS4Jh6iD/4vMIfZeiIb+R1dvOiQo1MDJHeTIeMhJ/1ufUtAVu7SY1WrZFvimx30WYplL9WklvlSCArhEkyQMabretQMZJzHD95trUFsmQNf7Rd9xwcJFHJkOC2SZ5SaI7WawjMqi46CC3tU5UWrchh81ceWdaXdK9sU5Qo1udaPFgtVrG1vkMcsYsqDVVNWKqJul//mwNEb2Z/jNJzKcPEgxlJr8kitcnaxUOG///xoJ5hP4mC5eBRXVoKX9EML/ulQZnZ4+8DPt/ju2hHkeY4drZYWgs6Pfj8B38CMLvwqOsFxRemQmlMpw8h5ptBv+MepXtpFNCklSCPaV3bnWKPFtOerXb09+PIhGeQCu83sKgckdrHPBSM2oICC0Sm0sn3V1CorNIyJHiMRskUk7R2QpQpmlomg5H4Ld6Lp8V2SOjwv15bCx86tBRHwdXaYlIpamQtqcObzSqtc1o/jRxShS/CuZtEo15AtpXHXn2b7+iRuTQ9coDExfFwEfiwQ9PjSaxs3iBDuLBwOI1HH38SsbEKeq0J7D/2Nhz9HxHAH4ebdV29dwIvfm0P77zi4+BSSfbWWM88oqEQMqGH0ap00MrHCXrkve7g3IUFHDjhw1uXXkS9ksRc/B9i7Yby4WnaXlaLttx0IZsr0fJwWqA66R1rayu0UgoUdA2OOpVjcw7zBG4/fPs7xvSuIYUANc7W5j187JPPIxxO48YtmoBk9o2VGtZWijhzbj+cwVtU7gHkV4/gf/oXb2F24iwOn8+isBNHl+/dv8+NbiGMP/3fgB99I89636KwS1EQJZGKjSMcVGLsqBG9ZKiUxmRSaWGiFEYLtpCu6/sRFX8ZPkyi22ihnpOFVSTAoHL0R0HdTIBXZ18v09ItYYeM2S+1Uduh
YUQw4h5PIN/oIxZexMHpM+i0OdakgVJrG5uFG9gtLbGfy6QFlqiFcVTwYTeVdfI46fMoEsEJji2tUgoXTe/L6BKI0knUwz7hUJHZBJ7knbVFZQRAlpKL5yg114jJ9YgW9pDlxAviXgN/EiwmGliOVvhaaAIFg0Jp9N02UqCAMWEh5CfPJgWjwIe8uGRPYb9RpZyyuEfv05Te6VMnVPKPPQRmdUjmj8AsuUKCwS7oByk8CkZVlMLGPtV81ZmfFhbA320TCFMevK4CJdvYSBVhj/KLRJH+tvqynnqF2jGaUlPZepfK00089J3jrBivv+x7ljD65Dc+8+AUHVn9739ax/BDh3k39EV/S3Hzd+u3+5dGz48EpYwQ3S9E7qCytX5nXXWjsjLIU6BNDBR+5A84kJ7yUWGV8PXvvIgv/fG/Z/PbeP4DzyAZimIilsHpIydxaN8ktjdXsJfdRHIsRjBZxm5eGyds4uGzx5GKx8wIUr21WEdT9263DBrlum4ajWkLUykxxa0qft3l0oKXPFqNEq9pyj6OhcVHqDzk3ZRHrYVEhkZ0kMYNyuSdEgV7lp9VDkubYyRak0FPgE7JrnEYZetQCiMqtaEWlpSpTIYEjWE0W2V876Wv43f+3f+Cr/7Rl/DupdeR3dyi8lfKKp9lOlEoDUmPZTkFX1hnqjTKZynnet1GwMZa36Rc33rzLXzn29+x6eOHz58zYGkLV9jOifEJyqs1nH7oNP70T/8En/r0p9gXbbzyyo/wvqefRiyewJe//BXKtnEcPUajnW15+6138MzTzxo4VdsyBLvKlalQk1KpjFQqbQu79nb3CIjTWF66h4OHD2F3dxebmwpvOIi337yCfK5MI2zU5zHWTeOiHbY0FR4gAIokPXj9ne/apg5BGida1GyxfSIh6hop9BGPjE5dFxbQ+euffY9gduvbBFQEitRFXqU0oxGg78qRqXRSSuYTIBDTNW0hLKDrJxDwB5SZwEcgHKCeDFBpE7CxAl3KPvWr5JK8ndKLxizGezJAR3SuGR/JkO6AMpzPyGs2aoNmVSR/NF3N97qUejGIEGVxOBijPA+wTNKQ5Fhbu0t1qTP/6mw0BWT7BLKaPHOSllWPPstzsL5e6nLq3aDC67wIsb0hljk2Tl1BPlJonDLljCfnqEMmCGqjLK9kRlu9rk1Q3NjbcaBadhJTUP5w7Ho0BC0dIglSoM5JXtbsno98ZTs/equIpuIIKofxQLG7Q/NWNqvbfF/W/LQDAAD/9ElEQVQJQ1+YvwWwuruBZepNj8aA9FwrV0x2ezXVwCNKA/XwscM4c/40zj1ylrTdx1e/+seW8i5HILtVaSIxN4MDDx1CPB2AO9CjseQnYQiDiFdG3tjRxJRm+h7IQ7lONCb8V/KPp2UuID0rBrXbVr5zhUXWbDZCM9byyuoQ7bskN0mz4ikZllogVie4sw04CG7bxFu1WtXCE7SDHImbp3SHsD/p3cJbZKjIEBamsIIpH0nfJoDdrBNl/gPdxFNcbwt9DcyyiaSfFnW3DNGNtXXs7GzxffK2jjyuorU++6FP2lMcq9J/smVWplIi2g5o4qv78lnyhKWzTfJa3wf6qhf7SH2mzEAS7Pqv2x0ZZZVGg3KvjpYUicAsy1PPeoiLtAGHZl+0OPJnPvwThBlMP+x4QWmR4oEkMpFZuPsJEg/FbIWWRWUX5WKRArOEdr+AtjuP5kDxpF1E3WOYjh8gyDiIeCJhMU/aejESSdACjbCVbTS6ZSxX3sROaYVvatt+2pFAmpaYkuOz4iSapnIfEsh2+xw4DoZQ/pBas9mkAKeV2mHDdU1AVqv3lSdQAl1JxLO5Hd7TQmj7k3Dmxsm0tEYCXfS8Bd5DAmmvo1Am8A4MbSqwyfLurt7D3PwRtsmJEycuEJzzvrobmdllPPxUAYHQGN54rUji/The/GYX3/lqlUKVCqfCsj1LmJ+eRye3j2B1l2BxDvumH8FeYQXFeheHjvvxyEcmsbvSR8L3SaxemaJ2IBn0GlQsCgwXoPLSGNDWeGQMgrt+/RBBeZZC8jIKxescRAr4HR+GjTS6ZTeWtm8iFnKjQqH/zLMncf7s++His6urtKS7LSzfJGDcbOCh00dotJdx62oT7748jTdfcuDyu6u48NwEMhk3Qu4u7r4+xFf+7Q5/LyGViFIYk+Q6tLxDKQLXBBxdH8mJQpdWVzwWsm1wp2mF72UpBGllR1IdjC1sU+BU0KyQsepDGgxu8loDpXIeIQpRZ5tW6JBKurOEAcdhiULGQYXZY/8MNF1NS3mv2retOQ/OnaPSdaPRz9n2xqvZd5Cv7VGAs5/I9f1BlZapC+Oxg1iceBQzyRNGP9pFROBOgke0I7ohwRljjYA4hbt9l5eBwoRMKINIlqMF4gsMGbuxDP12/3l5Nnhh9AsZ0MSihAJ/fxCXq92e5IUTQ44yI4yUiEIaFPMmxjZPpwSKaJwCR7zMSkqP4uTJ/9/BrNWXp7yZEg6GQilR2ExTPpQyagbBJ9vE6xJGqoOapTYJwipWSn9IEOlxCTfBXIkfA+H8W/9YzJUqLAFuZY6Ap5XD3yWfJStV3OgQX97/xt8fnHqP1f/+56iE0XfqttHBT/M2//WDtwnsWMvVl/df9uBTAlLWvpSCjQtP/WaglgJ3QMFM+IXxuRDy5Rq+9eLL+NY3/ghPPPIYThw/gUp2B1MTSaTHgri3chW50g6VZBBlKrtaRfvTd2l4x2yhj1Zktyh4bQERla6q+kBJBQIRC4HSrjlDR5t9Uqb+ofKt1yx+7tChx9gUGq0VGqmkjUiM4ICndqvTTknaGUi7V7FRNh2ue/QCLeZotSgb5aVpj07RsbZILhQKuHr5Kn7/9/4AX/kvf4hVgst4LIlxgoxxytjJiQnKxb4t4tGe6pKjkqHquh4VpkCRHD8K4WSRZlgr/aHCiwKBoCkzeW5e/v73DLg+99wzpqRWVlYxMzONWlWb4DQsZ/JX/ugrOHTwsNH+977/Azz5xFNsIwHtV/6Y9ZjGoSNHUanU8KMfvca+OII7d5Y4XkMkUylb9CUvoHY0ixMElykrJienLN6wUCzY/Tdu3OD7WhhLT2JvO0f+D1tsruIsxc+a9pS3NT0RR7WRw5vvfp9Ahoaul+NEwK7fjIaNP3g/QYrAq4Cs6FrfNZ7vFczeXfs6nPLAy2Mury/RjuJXzWNF+eYNKMxLADZoi3K1gt6mTAk+lHJI8yWWeN9Ck2QAc1zYB1okpPC8EAEvL5I3xJEjXtMhw0kGTpsAyDJnyCgnRxof01gZySA/ggSvAV/QHEpBf9jkkrYTFQ/p3gr7TLsXaiOKRq1lC43kLZNxYXGc7FdNUatfBGaV61P6QI4ND/va4wvwnj5/99kubWF/Gs1yH7ndOuplgh7SUatFnis4sbvtQL5IGS3yVhvY/+RWym/WWfTOOvn5opBihtVsynOnp2y7YgUiYT7hQof0W1VaudouiZUGZCiJHHXN6uYG9U6fRmEApUIFRb4nIiODwDtC/HHwyEHS50NIZlIGDuUp1fqbdsuLjiOAtVwJ04c
WMX9sFqG44mK7BOgEh5SZNiWvmWbWTf0+ikfl+PJvOTSkDwzI8lO0rMVSwhbCRc06GYvjKc+uFs4ZAOa9ypUtr7vX4SNfjhbyybkiPaJQJ20Y0tLCKH5W61XimyqUJ3hIkKuFU+bBFNFGtOiZuoYdpllAM/B5ikxMJvJPow3VUb9Jglo7RqecfFpU3mgqx3YFyt2s+7WtbiIxWlCtNIwyopSrWhhFToueHEt9ysVQwGjW6NZ0o+hPfSNtIjolP4ih7h/Wf7xmAp3XFTojp448s2W+u056G/J5yT21QhlcbDEkjUNlzPjZnwTMTp53veBkD4Q9YYQQQ6dK4FRssZFF1Op5FHN76PRrlHwNmpwNtB1VEuwQKf8MFseOYzyxYHsyKy2HVoYrJ57AVqW9i93CMq7uvYRaKw/lwEunx0meAQ6u0LyH7euj3itTEUkJkDGtw93sNBIHibjBwdTONkoYrGTNlhqCDKEYFIEnCbwohVVo7/PwlOfQbVIwdqsot7co8MvINTcRcIdslaRyfG5ubSOdnKECm8PMHAVwIIGrt67YIoyH3ncJ+45U4eqdwv/8f78OZ++TWLnnxI++u0KGoPJpDlGpr8E3nMZY6AIHZplEvEFZ6cPc1FE4WvvgjtzGc8/P4dIPJ1DZPMU+IcH1HGSmMuvtMmL3ehVXpziWAQmqYem6EvFxMpMLSyuXSdwF9pET+Z0Wjh1+AleuXGJfbOBD7/8o/k8v/Bp6TQcVLweOgO4ulcvFV69ZzKxy5+U3FnHr7STyuwMcOLSAWzdXqDwqeN9zU9i5t4p//0/vIX9nkoJum5bRXYLY04hR4Gm/a7cjzHHVytqAeXd9fhFhF29fvoW9cgnh+Rs48GgRiTHlCSTj9bfh88RYlxIcVMJSZvEQlT+JutXJwpfoECcTbBdy8HK8laBbU98dWmBFguCZiVMEs2fZti7y9dtYK76O3fqS1v5SEJC4CQ606xDtfkwlj2MuyboG5uB1EjDzPzGXFiVq1bBtuEGBrlM0JAaTktZpHn8xNvv8gXdDLK5D3j7FQPFFI+bnqOg3CS+L9aRwkA40IMxnBWglhGRNyzMnQeT1UGGR5rULkWKa9HKtwH9gjbspuR+ARAmakyeO2/Ufd/xLS83Fp0ybUSBaCVRfuqRKStkZoGX5rORASlWKmWfflAbbw+8PwKxAoSTwSPjxMZ4SMRJB+lTpUnb6VBn2CgqyEQgY/fDg3ZTf9mmCUzfysDrxUJVUhv5R9a1MfdFFfo56mPfrhweHff1rf+vgbeo57VBmfc/+HHkcOEr8W2OjsuwetZH/8S7WTcKXD3NcQpRLjU4d8RSBRKSPMmn1P/7738OwW8RPffJDVOi7HLcOZhZSBDkCigSM/SZ2tgT6slTmo9XabCW7YEC6r5HG5LVxW/JyeWECvpApcHlKgRZlF4Gscjnz92AoQ3B2hLK0jZB28YlQObiV95UgjjJUilDeGgMl7HN1E2U9QUUTdcreUikL5YbVVspNTc1VtGlKDYVcBX53FI8/+ix++Zf+Pn7hC7+GT3705/CJj/wcPvDMp/Hkhefx+U9/Dh/+8Efw9PueNYB48eJFe2dAqIGDFfIrHZpiKGHTzso1qzQ/imdTDlEpLnmFb9++haW7dwgsD/BaD7du38VDD53Gq2+8joMEm9qB6K2338GFx58gqN7A229dxmc/92HKvB6Wl1cQDoaRjKdw+85dzM0tYIKAe+XeisW6yhPrsdkTNpyNTxDQXr9+g/fN4/qNm6Q3J0EUjY3ldVvEl4qlsLW2Zc+KQCRzlDKoWi8RkPiwtH4dSxtXOO7sZ4IR86KJjzlmBlAEZEnLUrDqdZHcA9r/9fcYZnB56c9Ju3yKhMBRG81W8G/zjvJT9dEUvPa0t80rNGMjjzgZTt4sATnljNUiFwFQi7fns/J6GY0rA4JkFUs3fmF9RRfiIdG5au6AZFaXZClApFqMPIfyUAa80pN+juVo8wXF8ltaJY6vFqw5qEdF05YEX7JSwIVlSJ5pbYqBKwIM29qbdKLmqX5+luNTTCTHq1Hv2GxdIjyBZmWIlVvU9ZsEX0157WI0StzYzQ5orHTJY5K7AseU0y4aVaR5H0Grn4ZGgP0SZp0iCtHwE/x5KbV8SqOlbDHURQTi/V6bxmWewK5qbez6wrhBWtrO1kh7lPeU1VpIqVAZhYn5vD1L4zc2NYbNjXWUqmVMTE5ijHpL3sPX3riLQr2Luzs0nA6OYfpgGsm0Hz7Wx88xUX+ony28QPShvtfA3/9nNDuicdIFgduhGYvqEwHahrKt8BevdCKtRZGKxkheV4FYj8NrCyZHuwxKZlAHUX7Y5godZW3SYr4KwSz5nnLA0aUedJCGOGZu8qWT/DTKpEO+Mbn8YIxGnmSFkYzqR/rhaTKStz0wnIaDOmUMQTd1uIwSTeUn0glMTo9hfCZNOuEYalE+bUSFzVhIDQ00bcqhTW/kaBI9iqZGTiPRthw6o5ADEhXrw//0UmsdTykZ+zJyAogeJM/KiiluNUTpZhSKUT0UT9qed5Qz34kvfOQ3+Ot7O1yzD/sMzPo0L1n3oZrtolxooN6soNZQLj+tChzQUuIAR1q0ajQl30PKN435zBFE/Bm+VJH1ynjAAeFg13t5bOSuY3n7Xdzae5Ud5LSppZAvASVgHlmRtJAISpuDKrtbna5OkYucgIcDa4u+KDw1baA0XAFaXOog5U6UpVXIF8kodYwPjiKY/yn4m+Ool9rIFaq0YlqIhWcwnZ7H2RMfQyI6g3tLO7T+M5iYHsftlWuoE2zfuHuRDHMCR06X8ciTXSQydfzhf+7h+392lEoxgHt3pZyyaJYmaH3fQyIyjbu71zCWKiITOoud3bvm6VDi83T8HK1E7XndxPrtCRJtxAi4VlZ+yBq0+YSI0qasSWRSFLLc661NslgMi3MnyDx+XHz7JQSiZVtUUCm5qVh3UOvs4V/9s/+Z9XNg+RaoSN6mIvBgZe0WGtUOxqdp0XgT2LidoYAIw+0nw7l3cOzYUWys93Hp9W3ceqOC1RsbJKJNuL1dMoxi/XwYi82YEaJY0C6ZS6s/3RQs1WYe2cIW8mzD4ceCePjjLWQO1ShG3aQBDrePFh44Jm5ZUQQOQSr1RgdOgsuN3SUEJjy4xc8KAYB7QKVD+mpTQNq2jn0v9i+cw8zUEVrZG1jNvU0w+w6ajhK0RZ+xJw0dL5l1zHMQmcgBgu55lkElZgBACwZJo1pAyDIl6GU5WxYCfhdgewBA5ZnV37YTGD8lPyxulrQkBcSL4gN+yD8pKlZZEgwjD6ue/UsvLMvrCcxS8MuTorH0E8zI86I4OKm3VuN+OiU+Y+8i82uqVfXRhRMn3qNnVtvZ8lB91CZ1ucpjFYx+Rt5Z/s8+sq1r9TebY2EFrL+ApxTtX3pm+TP/H9WJn2q5ZKGFGfBUWyV/JHz5VXLabpKAsphDXjSAzK8SPvapU3/89YPXdEiAqRyVafV/8MkbVDd912E6wY6//DICxA/K4Q22QpbdZy/mqaFUphN55kxo6z6dLFNCW6BMbKlsCQ
OaR6NYYvULkEyFLS7zBz/8Ji48cY6Gm4S3E7Mz0xxPAtZ6A9k9GSogAHWMQpso/wRs5ZUVIJGnVqBaOVxt6poGmBSxtlrW7kTyXLYU1rP/JAFp28B/NBEi3WuL7xpsm8g+6YsdLKVpC+HkMeF9bYLg9v2E/D5vGJnUFKYm92FibJ5AcAGLM0dw+MApHDtyBunEDPk+hjYN7WqhiVaNioa8FQ4qtAC2sG3/vgP4wPs/iOeeex5f+tKXTJEJSGlKULHdEYIbrVFoaHEQW6JYPHltBIwsJRj7d2d3B3/AZx999FEszC/i3soKgUSAbe3h7JnzHFEHXnnlNfz8z/80vve9V6zN+/cfIKihbOf4yTO7f/8hy4ogGR+JxHD37jIWF/fjHo1sLVyTPM/nS1Tm8hQqZMOLldV18qHHYvk31rZZZ45mk7KrWCSg9bPuBIwEqfVOVbYLrt99m9/zJJXRBisPaNgAIZWl2N3ArNEzv/N3Pm565dc++z/w5h9/XLz11RFvUWmrx0YkqU8VJC+r7hJDjmZ6ZAAoDMXyfVJWRXwEl2SOEeCQztNiPspcDphmdco0XhRuIFBiQFyVZnUfgF0DSBw3vd1exusiQY0pqcliTdV3RB/kBwIOAWjqZ5+fYJZj5gx5EIkFEY2HbdV+LEYjS+tQCE5Ei6q3YvtbTS1OahB4NQyE9Ql8tK1tnx3mcvioy+Po1N3YWM6hRMwQ8maQJLitVgIoloCsHGI1ymkCF2p01plg1tlFlDoj4NJGPuQJn5vjyHrxuy9ADBKkrEXR6EoDxyoRr1fRorEiwCdH1yr1+16uZnJD3u0SaYsYkgBIqsGFyYkgpuemyEcCjA0sLO4j/8xibWUbL37re3jn8ha2VUZjgIVjaSwcGSOYDRBgs0/1Wh8/SSNaDCm6kLE8klU8+VItspJOEDDWNQFCAVDtLlavNYmNWuQdJw0K4ZyRDtEYjnid+qmj7A3SHcINHD/RpuiJXa9cv9VKG9VWHU3qGHlQfXxGjkIfDXNvNAa3QK08+3yHjBHVzcA3v6uuMkJEkaNTB9vD90uOm6PHKYNdGGvIcRa/B2nsKzWfjO0HcbB8WincSDeaZY/EtFubDC+nORh1qDzb3IMgVuWqbSOvLOuhBqnL7JB8u/+N/TZyXCqdaAvaEVTGs2bfdY/o3+XRmgQZXqRpstHPf+wfjB5+D4dr9tHACy4yibdP9qiCYJZglB3aMrBAwiKC95MRgnFaklES+VBbwrURcijP7CwZM2gCUq5yZR6otgvI1tcI+t6k9XMRxeYG0skxpOJT7GxSLYW4hwwnDqzT8uCTbDc7gx0uwSWmFVDQDlpSQhF2sLxe6jxbzUtBVygUyTAV8rET+xo/i+HW43B33Og1NM3hs6D7eGyKiiqKqzffwq27Nyjk6iYU2+0BMhOz2Ngu4OjRMwRPKew7VMbjT7dx7W3gz/8whnxuisw5j++++CXMzYyhUfITXFeRTjv57h+QTCjMB8cQcT/JUa9jctaPwl6HfeDDyWNPGrNrO71BSznvFC7RIjFodx43+0irngnY2E5ZWN3hGgGv+iWK+fnDWKM1eXfj+yTeBgpF5bv04lOf/DzOnD6Ey+9WcPdWmYr4FayvblHRK9l2kP3nJTCOkIA6rFuP4ElpzIZIZ7TVawa5jSAqOQ/B6S30nFu85mF7/NDOOs6OFsVR/LO/nYEB4lNhJKdDiKYCiI3FcPiRk5g5VUYko21vUxj4snD4FT8zbh6wWJiETuJzkxGvvfkmXn7xz/DDV7+Bw+f2oePv0xjKws0x105utTpppytFE8a+fWcRDk+RTl7GSvYqduublEhDuCkELPam50TAGcSk9yii/gn4HXELUFeid62q7RBcaqWlvKUWEmDoSSQtT4OsZ4ED9QeFj/6j8tB33fJASVj6KBNM9z2zbM9IMMjTIaE9mhZRqrIe66R7tUBK+SQFosWobo9Sc7H9mo6hAG00yrTQtYiHfcSXicF1KFuAvh879h4XgP0lmB0JTdVy1A79qzezpSxfPCKQZp8StLwmMaZrI68Rv7Mdo+dZEv8RdtIMrEqmLLXTvND8QXSjY6T071vYLFtlWV/yT97Eesiw5DW7XRf5Hn6//7O948F3A7H3Pw1023V9t6/2nG6wtuki/zYloOt6h17C/+12XeOp8dGDwrIyFiwnJz9ZUzXZYsIkoNXnVRpZ2jY1PZbBVraAeNpPQzZPo/DbaijBo7bBvItKoYQODc9wsEujmLwQV2otGp1BJyrVXZbfuN8m1pX8K4BCyEx6Vj5XGU2a+h5YmJbLEaO8mEO1SmBI+dnlvVqH4KRs0E58jiENPAIZCWzCV/ajaJnfCUbRI082ojRMZzhGaTRr5Neyi3JC6XT4c8eFOo1YqhObSZEBJVrNpJOsXw9F8pxtrkBllS9WqDBaOHHyJD74gQ/iT/7kq1RIClnoGzCqqW/YYdrzv6GcaOpfHqJn9aaPykq5vtXu73znuwbg9+/X5hXjFhawvbOL8fFJrK6soU0WOfXQGbz0/ZexMLdowFSLyBT3evTIERSVW5T1UajL5voW+zlC45rGQzZv3q1yqWwLwra2dwlo+pTfSbatTRlStTCKva09TCrtE8uVh1neIfXrxKzSK9bxg1dfxNDThi8keUA6YLvE4iNaJg1qpkFERRpUm0en7gV+9TPvDcy+euMrpAPRvmZg5M2n7KROatN60voON8dXnOV0eFmuZNJ97xjBgbaudVNmCXYYwHEQOJAABCAEZpXJoO+gPGO92gaAKcdYmhls/F/1VMyzxljxpQK1GncZ3h4XgQj1S0dpAVjmgCDWUuqxLM2cub2UaQLS4Q60jbQWTkaj1JXxAFKJIP9WfnHSezBpxvn/h7L/ALctu84C0X/nnMPJ59ycQ+UglaJL0XKWZFsOGGPATXA/mtCveQbKMg8e9Ac8eN3wbHi2sWVjjGkZ2wqlUFWSSqVShVvh5nxy3Dnn3f8/1tmqspvv6Wqdu+5Oa801wwj/GHPMMQVcbNET/7xj3sdTgfh9Ij5CNdKgF5srZazdLlK+U2dE5+AehjjWYQLMIarko+6Ihpu7wXorzG5Mw8uHaS+Bo2eIMGV9hAaiFlO6yDte6opIzI3umEiYMsD6iOigpxzO1BnCfr2uB8s0eCQPfAGFGYHyVmMK8qnC5mLI5UTzBfPuPf7Y4+ZEe+ONy3j5pQtYX9/B5rYL2xXqxyhw/IFZHCSYTSYCtiuah7znDjjhBcpcI1DHonkY19spunLkkXSG20C/6FOYREabi3wnw0CbYMgg9BF0amy85IOAZq37ioUnAJTQlZeXpUpGyYkn50iTxlqXuk2ziNpMQ/HVIRp7wUQSvmgU/rAzzW/jyrHXe8Ugi8YUM6vYa9XUYWTpM7aLv5s+44ex6Mt2KSMGGfvZT2FE41HyDBlFccKiV+ky6jnJf4VeaXc3XaOwExlZMvoHMoZJH5PUYBMPLd/wmXyomsf66K3aag4WmX/Sy6yjFsXZDBTlY4sYUzSu/vKGiU7k1Q6xPJLcz37sf7LW3MvhmX0w8
JSXFQ+OSLg9HwZ1AQ+eZD4Jb+UkUw4/gVl/bIi+q45Wg8TZjxLMZdl5fgqXBuoU1M1WHbVeicDkFonuVWxWrhC4eZHPzpF405TCjkfWS0JrD3oEmPKc0Kwiw/g9JCh5LmV9K9MBhY7X7+JgkpEIFuSeVpC0YmkVQqDp6XA4jhPVX0V/5xhCZBilERt2lYC9Q7C6jttrl2gTbqE+LOKR84/g7t0NAt051inHzgwhlz+KYXAPD5yPI5m6i9/+1x6srx7CwLuDQH8RmxvPU6kEqJT8FFotbO9dJcOTEIc5BH15+EaLiCqUwCOidOPo8aPIZhIoV3dJHGU09qYxu6C95J0UP1qM0KiXaAmmbAcjEIxGE/KseGldVkkwMczPnscb155lnW8SZB/FfP4UPv6jn6RlCRS2A2TG61i9uw3t1JWKzNACpuKiYNGUWyLTZx1vIxGbxeGDcyjQgu32KfRIFNp7XulhtndKvDaGufwRtDnWCtxPZmIEsDHMHE5j9kgGGYLZ7EIU84cSaNCCqxFobq/5UN7OU8BqCmVkSlTPpFGLy6+8hG984U+xdfVNDGrbSKRcePTJ+wnKr5lV79KuN50Bx22HCrNJqyuO2bmTNFo8uLz1WRSaisVmH5KIpdjHBIsBCsaIJ4eUZwkBNwWVZQWQRCdtCKSKYxUfSzqRYjDPhQAGhclwxN9tMY0AhiOODDDpFNNJkfC01CYqk4fuFfPpEFNKwOhVIFGMq9+l/MS4ArL6rGuUKcGEEo0BJeYfKJWdlA+ZV6ElHmVdoFKzAHyO/wkq9Xs53gKzTv11qHo6TZbqs36i/CAZ2Xs7+ZX1jUCSPvO9tZCKT4DWuo3/yQnjCDqdfAqvVe+bIOJ7W4ipC9h+KQuZAipHQstWubISQz5IdVEtdXynj3moPBX11imBpzeshy7X+7cf3/nMV5ahmto3vJjylbepJzguKp+fB6RLKbNJUarqpH90QSIWIP03TSDKeyfFXmtWkc7R+CZtpLM+gqY+vvHcTVx49SI217awt1MlL1KZUeGm8ynKiCEaDSeOE1J0iFGxxsjzU3y2smmox5RfVrSicJceK+DEqabTi+YxjcTCtvq42qjyNyqHUNyANoUVP5NeRnUqMMqzRpugU/GFBCRtH1o1tpXPU+xrpdw0r5h2WNQUtniPYp910246Dh1qpbE2W5BSk/EvhSOPRyKR4nUNA5vK3fqXf+Ev49d+7d+RnxxDTXukq50dnmHFGpOu1X+idRkyGnsNciIWpwzs4vqN6/jG17+BJYUMzMxgdWWdMmXX4pA/9/kv4ujRo7h18w5KBK7ZbA7FQoV9kaaxcAOnT5/GTf4mvtLGB7u7u0gl0+aVDfiDVG495DM57LA8NwHAkGOsaVwtwmsLGFNHbG9ss/1+Apu2kzfUP6SCB1Y2ruON6y8jOxW3dRjy6ok3pF8dL5baIsIy4voOuam/dPziPcbMfuvyZ0n37BL1b085iKmTaBD0qI90umiED/m9Fjdpgc9YM3FEYrbDIf+GvF5j6AsqDIDjSIDnl1eL9dGWqVpcpgWOltKJSt6Z0SHtU1ZJnmgqWqBF+aU1/a90UQrZs9XlfO2aXHNAjqW5FOAlMFV8ryZfw0mthKchQN3q43cBgkKFH2jxmmJ9o/EczwgScfIA9VGK+ikRS9kZiRDUhLMEthGMOn405MChgRXxTxOgpiyd4/aeF9V2A80BDUMC2bGHPKiwAgI4bc6T80VIVyPqdBeBLAEVx2RIQ8QbbCKc0KxBycCgDBXJY3kx2RoyPI2qRg8t9pFmPG0NC2k1QSNHIXbapl703+wUya8zyOdmaSTVcOfmKnV+DjPzc9jY2sRWZYiteguBLMHs+XkcODRDw4g6nX3s8wQ5CDTu+AzzdGpQRCgmsJxDxriGRCn4RtRrmq4vE48UyyVU6nWOe9ucGxYCQ3lv9McxMYcL+T/kihn/CihqpzCpChGqZTKgTOiQfvQM3RfmmEc160cw642w3/g+zLZKjuoa0z88JzGpkslGLzxM1jrvrB2KLVc9tKWtNj5QzLO+02IreeiVWk7ee6UktIVm7FuZs8FomPybQjgWsbAP0Z32E2iT/yQcFCajtkxm21Uf0bnpDh7Os1Vfpy5KDSfQrph87R1Qq9U4ltoFVLiCGDNEo44GmXBjgPT5Mx/721bOvRye3P2ep/x8EyHKDg4icGkXKbcEMEGhm6CCkkIpLhSH5EsMaJk1DYh5ukT0rjQrqVyKClzmOeigM6qh3FknwVxFsbOMXIqAJDZN0UuCG3oQ4cAoEX6DBN+h9anGqZMlqOQdUkOdHW/kbiYxSJOxU5XDTYHWSmdRrTRICBSwJOTpm7+MUSNKC0wWo7NV7YB1qMurPN4hM57BsaX7ceOacqhmjalr7SKFYAcrW5fAGuMTn0pj6+YAL34pw++nEIp5cPPqZeRoCSk1z+rKFtq+LyCIk8jHPoCtwibSvofgjr0EtDnQgRmbIjt8wo0TJ+ewttagAGmi18wRKI4t0FrEKiGyt7eFTDbNPutwwKJUVAFU6xT40wNsbVWwsDCNzcI6Kr3XsXBgAacOvZNAMkElGsLWCgFt5SJ2t+oU5iWaqgR5MScMY2oqz/71Uhm6qWgOc4yof/s+GhcXMT21QEDrs1CRSDSL0kYQg2aQDB9AlFw9vZDHoRNZzB6OwBf3oNQsYbdcRbkxwA6t2VziFIVYHsUNP0prBPKRLqKZTYIMLy5968uo72wiQ0X44NElGkM76I1LtHoPYbO8TiOEJExCHWglLYG3Fr1oG8/c1FHUOFY3K39MxqFw8IRJbwKKCksYIOxOkianER7kaLFpUYM8oGIKFqh4WqXn8oX5GxlkAq6oNDRFNrSpDAFZWoLaEkUUJJ0lPUYmN6+fAWFKDEoT/aZ7JRz1uxSEWeasj/L5TRS8rdrk77pWdKr3HipdxbkNaWxpNx1+ZSBZU2CW/Hyg4APnEOA+dfrePbMqX396vqpgwop1lWCQyBKIlZyXA0gAVKeaZO/5qvAD3s7PE5XNj+oDlqWfnNAoR9ip3rbgQW2nwLK+YB9IEGkhmUCNPZcV6MsLKkc179WpQ54FHc7//MznSCCzK6xs09YGjPUstWm/LfvXmkd2/26n1RTquoYNdWJl91+t4fyd96iZ6lxdw+Gwcuzgqzx7lP/8jsKVHaR0V4rHmij4rS15YIAf/aGP4j3v/Ag+8qFP4ad+4q/jvvPvRH7qACKJiAnnO7c3KQPymM7dR8VynKLrMO7e7GL5ToUGJceZ/eMPjBElb41dNcqpAumbwMKdoNJlPYbOdK125tPUnkC5ErvHEyP+ViDQ3GKbCCqDKdQq/K4toKatZ6dsc5h8NoHZ2RwWl+YoN5KkQW3LqYWao33lQTDBhircaWtzi4qG7dvvY9GhcpEKfGkh4g7B4xJlijYsePrpz5unTOEwGgfakbzPWc2uNqlvNEY2WyEaoKxW+ih5YzRz9uxzz/A+Fz72Az+AvYJy3TZoiM9SdmvmYojd7T089ujjuHr1Ok6fOo319Q1boKbpWCmxgD9kW3sn
EkmkUmnbPUpTyQEOkp99tb5B+UIiFnBRWMaIQkKx6bpGfCmiUqql7FQM8bQfX37uT1Bp7dlUqWb1BGZZVfaPXgX+xKsOPVtoh8iRv1lZPO4VzH774mfRqDWtvophbtY6aFXbGLcpH+QEEl9QN6nTVe9eq41+h8Y13w8JVHxyPcFvNKt+V5yjcnt6OI4BvhdY8JHZtIKe1TMnTkubGZABqD7hpdwa87chf7WFq7qX9wjQjMjY0t3a0IZok+wmWtcM0dCMOSJYNElrWgUf1sIum871WciILxCGh7LOo8wMpE9iHNKkm3qZr5S9yUweU7PzyLhPoF10obLZgrvlRSI8xd+nUG+5sVNoo8f6FRrbqPd3MPK1SONNyhvyIkGscsuHx3JkEdzKkxpNYLeyC1egiWPn4ohkGnB37pKXG9a2OvXP9m4D3SbHaUzjpksdHw4jHkmwfR4aheof8ZgWV/aoB72YPTLLvuW11JeuYYxANUFeS6LaKWF19y72iFO2m10cOBvHA4+cJ1CLIqTMEgSgA4/C/7SYU8aPaGQfhBq9SayQl1gvpelzjfy2RqdCLLJL+t8s7jhtIQFI12qTFcWfekl74lk5BT3yxIPYSo470oA889oRbkAZpVX9Pcrc5pDvqbx9HKcU6xXj2MSSxBjJBNz8LKNG9RFmsrUd/BMZmAPFZJ2RKaUDP/O9dJDFTRMoKhQApAnNbGj2UzvOaV2PgKOyN8hAYw+gWK3bwsAAAawW0KWzWWgrdsW3U+VSx/bQ6Cq7U99SwmkBl+SCwK90pJyParv60eS8BBDrqLrK06LPkityXCpblryz4hWvK0h96/SXFvOJRj/10f/RadA9HJ6FRwdPJWlppfvTiI1nWcg0BTMrTkEXpDaM+WQNuBHJsSrJHqp9NrROsDFKIe1NIzSeI/E3CRzZQbLY/F2s1p7DRukSIqE8ZnOn2Ehapr4+wlH+7tK0GAHrsM7OLcM3JvMQ8WtwFOeiaSNZbUFykQSSVlmGOOjDchY93x1st++g0ixjMezB1K1/Av/WErJpKoqOm4RFQELG7XZJcIhgOnEcTVqHd5ZvY2F+wdLKmDAkY0jRxENp2787PVvHH3xGuWwXcGgxg95uCNXiLRxYPAOXFm1xoAedINreFzBw7ZJwulhv/me2+yTa9TaB1iwymRhmDtLcC6dIkEryTOuL19baFLizMyi36iSEINsHy5mYT+cRIhBqtMskpDqyqRgBJi29ehhnz5wlgL5OQupTWJyjpbuBz/zuv8P1y18DGmSMoQt1KcaMF/FgnsQdwIySmdPaSU2n4CVx1qhcytUmEqEFRKcrGBNI37pLERjuIzfnxsbyDvY2mjj4qB8zR1LokAnXN4ochwgZLeDE/bUosAh6gxE/1vduIjlN6zoZw94aLXSC17H7mwhtriNGAfD4A0cpGGIo0Dr10zL2zE1jQ1beoMk6ulHcvI2hedZDmJs5BT+5rr57A1uRLRpQFOhUAMEAlZZfHqoeooM8DsfOwNdOUsg7XgcJYa8ArCdhaWC0ctdim4xJBEpHgkCOgDfrUwnIJZR0Db8Xg0tBi/sN1cnwUXya6JHKht8rjlYAVoniNQUz6vOzrFhSlG1BSCFgFjufqYVeY7eEn7xmZGgKJi26CAajbEuE91HoKZiL9dHTFbd75nsAs3YYyOMLC7BpeL3nKUWonwQyaVNIjpGb+Z7f6b1heH1n0k1tdu6TQDGvA7+wW6RA+QtJnGWp/bqF1/O9k6Nzvwx+J9+S5BKxLPuM5VmZvNve6Dkq3/lPt/Axduqwutr5FrCeHFZF50479M4EH29SPL+GzeKc2Th5NwTGnTF3BLlO3aNhMSCmfuBnhd51qXA0PprqG3RogJD/tteAXBr4+3/vl/Hp/+Vf4rHHPoBoJEtZo7QwcRw+dgKpdI78fxLVsha/LCAdO4nr18p47eW7BGt+5POUl3Ea/n7FRNf5fBlpQCAssESFvtc0T6NAdEdyjWAsEY8TuIXRG1SxvX0VpdIGnxtDJjGP7fU+lfQS7j//LoLCwwSyNKzZMVpdLk+RxeeyfUqDuLe3a4tu5YXptJWP1m2LXLVZTJL8qelpLSKJJZMEBo6xJiWoeDp5NbPpFP7jb/8WlQo/C62wjzXuUi4aGz87UMpx0qffOViQHA4Tj9DlSxctYf573vNu3quVzmHcvHHdvLRvvHGR7asY0F5bW8eDDz6Ar3z5q3jggQewu7PHOhOskbhWV9dw5PBR7G0XWTfJnTaNbKUJjJpBIi/moDukwiUvsk5yLgi02ToNd58gK4haZxdff/Fpttdn4XGKiZURRVLQ5I21TfQrunbo+63TZiJ43CuY/for/8m8ce1Wx+RBV17ZZg/DFo0CAm9N9SvGWyvQNXYK03NiJMlTlHGiA8v5KuuL4yfeUR3EZgZI5dxhv/jZPwHKEhkkCqfrcay67A/xhcZzwlAcCeNhvvCzFipS7xhXEsiyfE3tWuwuTwUoaNW4gEyfANnD67QyXl5iMY2LndXsNDCVI3gioGiVqtha3ba0WrNzRzgGKRRuEoDU+vCNfAh7YnxGmPXy8T4PWpTjLba52ilTT9ZZxRb7ogMP+TQWiCJGMOunvFTsrlbnF6s1dEctHDudw0mC2ZGPBkyniL0ijc3dtq1/KZYHFrbTUVxqrYa5Q4eQiKWRn56hzs2p01ieB4sHZ3H0+AEcPnMKJ46exQJ5aGnuILK5LG5Rl774+tfR6JUwDAbhS7Rx7L6jOHH6CNKJKGlMwRscP9KK36tpePar0bhGxTlM9vKUfnDzQnllO40ujZoq61nADoGs4p2VL1dAVhkltFbIjCqWJbDn8SjEImazEHKCyBuhHdC0VqVDXaIxbpKmBE4iHNIExyUeVSxrHN6IdreTB566aJ//VE+Tp1Y3Fsc/yUjRuv4m10k3OYCc7SKNDPqiOdKYLSRzcsaqqQPSRZt4paw4ZOIun3nq4+zvJNvEa9gPA8grq51eld1A8kIUSZnLPlQbLZuPnilPBg/VT/LJSW0mqc36UldbTl7iAwFZhWLZ2iMaKG6fUgcqe5UyNvgIZn/JyrmXwzP7EJ6KuiJIuJKIe7OIBUnItJoiFB4Rubg5KJE4AS2BkztGwqKl1SVhRYcUzP6MLTiyPZ7l6yEjV/srWCt9m8i9zArlLfG+pj0UF6SpI3lnNf0li01Tyj4xHxtvuUAlTAVIJHDkfeOhtE/uQcg8v41+wVJtDVsjZAePwb/zA0AxYp4CpbVQLGO73UGlWiFzArMEkctbd5HPTNvuJOq43UKJipodT2GoVZtn7z+Dk2cSeOGZVQRcWSqVTRR3tZKwQsIMoFbdo0VEoUXwXaheowXhoyWXIqF2EHQdxHxuDs0SCS+WwNkH5zH0KXamhkQkh3HLRwZqEBA4ElRTDzYdTkGmuBkR4YgGA0UbrZA0KuU2YgknJqbW3sXUfB9Hjh/HzAIBKtvz/NeeR61cwezUIsKJFAktQ2KkxZuIY3ZhDq+/+RrLKFMQem1aLuwl4wabmE3NYm+3jESyhGaxh7PHzmN38w4FbRD
v+fD9CIdjxgxN9p3GQgnZ1aciwHg4ja2dTaTzSQog4MY19gcZYXquS1BcYzmr1rb52QN44dU3kD16GIv3n8Bucw+V0jYiiWmCahoUt1dQKcgw8MNPa7iVGmInXkGxVESMCs8foJXbKHJMhshGFzHjP4BxVRAySmngKCGHcfiRFqIsToEGxcaKCXUqzshid+w355QikMjX9xJO8j6ZklNBLFHtnRwTIWaKdv97Z1cTqgSBXSkgxf3Y/RIkKoG2LK/RCmV5v2w1KN+rvgJ98mKax9jq2Mf5c2et3O92/LN//Kv831pLGuELlZF5JvmWtoyBNfPMSlnzVac0ol7lkVVoiaSP3bp/qjTLySrBxno5tzjtMFnH6yftsnhZO2lJu0mzuoXXq00CtFYnHk7d3taH+/+p+9iN9mqHyteP9p9eWK6+c+7QN87/Vhbf60cRpTWU6pmfDciqEnw/IiCQF4z62k6KDclwvpEnmc9lH2hhSDbnZxvZGQSg9d0uatvAz37ix/Gb//a/4D2PfBiF3Sa+8fUXcHdl2WZcVrZX8Oobl1HZAZZvtmj4zpGf5vHSS7cJULXD4GEcPDyH3EwE6YymZ5VdpUU6U9YVPocVUeosAR6FU6ke1o9UcvBoF61V7NCICwR75pUs7Y6oFJOYzj6EhZn7DSj3BwKnOfadzxSfFKY8pgJ0knOpVIpta1ksrgS+PI+tdh3VepF1oXxkxyi8QKMrmk8kE7ZYVmnllB1B23s+/YXPoan3BLcCjaqeulqHZiZsNoL9qOFSt+qjOtimDXmxZIN46s6dm/jS01/Cu9/1hKWdE9DWnvS7ewV7TVA2KR/40sIBM5Dv3lm2TRG0jmBmehbFAuvcJZBgHWLRmFWi1Wyx/SkoheGYfaFwCD3XZgsoCEU7mpIce/uYX8rixvLrePPK6zYmWlASIrjR4jbre+t/Vl20ODlFYjztO34QV/0PP3pvYPabr/+B1UfjofqYzpKHwkET5C15t/uUnx0q65YZG0rlJyeKAL95mvleYFYyQQaJeEB/ArECtgFPwLyFyhcrWaIpbd0nj7esSPOW83qvZBpPjZFlU9B7ayxbRH0qniFuMRmkzYLEt2HKXwokq6+uV8iQjGy9ajZpOh23+NGtO+u48fpNXH1jj+C1h1RsnnRKOiPQVMxzLJTgeGVYVAD1JmmY4I4mBoq1Co24moUOwCUg3+a3pMFwColQHPF4i2M4NmeLYreVqu7YqRQWDrLfRmsob5NHm+w/ym2PL2k83KxLL3nNw7pDgCuvbSqXRCobgzvgov4LYW5hGlPTU8jNTqHd6ODuzdu4efM6+foKrt95A5XWLtJTYQQTGcwuTePMA6epwzLkL1bTxb4lSPMECT7ZX/LIim9MX6hzeWh4NcBap6Ls/h0aMdpooFAsEXzvoEjeU2w52dEMsBANEWWQ4L/9suRc4XcEswKymuKX7FSMa498KX5udUQrNApJ6ykC2Qx5R/GqArMu8j6VpJWpKk1m04zo3nbK6Lf3do1ksPSeqIXfkl6NlmhQaQ2Hz+dsvCDPrXhAcawt9r02qRJGU9aLFOVIimBW9MiSjPa7nQHlSZsYyaFhJ7sMn8fHaptdy+pB+eDoYAFdB+DajCifa+Cb9xigpZzUzJVwW6/LsrzKdesiBqVsZWf+1Pd/D2B27iHXU3F3DFkvrfoAwWckSwCaJLESJLFDFbQcjhElp6hMgx3UCFI71Q78/ZAB4BAFrhjb7Q1xjDvYrL6CtfKr1Jpu5FLHeL/Sk3jM9awpX0tyTeGvXcMUTB30ha1xDligdmKnyrKWFS1hQRsYg1aKlssKibxDS40gchzDTOtn4N9+AoW1LQ4KyyYDasGBFqOpjFqjgnUCLeXFi4ZiBKGEjPxJQfpK2Kv6jQmypSBctBYvv7yNuJ+MUKsTLA+wOJeHl4S7U1ons+7QEqGl6a5xUDMY92hZkknToROIEjBGA2kOsw/paVqrBGmVMpm4GULQHcLQ3bWBE5gXsZtVTOEmMKsclh1JG8UNDzTVVkIip11EtPtFDKn5DYz9NQSjbhw7fgiFrW2s313H9737gySQGC3VJnzROIFmlko2iddfv2arTztSJhtKSyaJXUPEN4tGcROdSgkLmaO48O0X0exu4+d+4ePwhQO0qrU6X4BtREKXJdxg/bwEmTEC+zIVNi3NaMAsthENi3gsRmIrIpUps70BVPYaFi+1WiaIp3X86u2LOHZoHkFNVfDZt26v4uKlDeyuA0fmZzBLK7oWb+DWaAXJ8RSFIMEA6ynQ5EEGGc8h5DyLhLFx0oam6R3akJI2/iVjyNrUB9GOpkp16rP6WalpnFWWukYWMb8zsCnGJWPxOmdbW02pUtrwcKxb0Z2EhD3F+cw/la38nloINqFR0aYMMI2eFrkEedpKUCoLMa68SJpy1fajul/KTeEzDz7wgJX93Y5/+ul95SoQy3roQcITqpEUsV5MGZvCVsOtqXw+hYYkMn/TdWqr88py7DanbeIXCR8VpWv0rZotL6bhSRkQLEMeWv0gG1zPl0L8DpgVyNQLP1sd98vRf+wC9q1TplVfP+h3fqGTMtUOR/jau/12Oq+KYdMDBWCllN1a8aAKsPvZ7aw/66dreJrX1gqn0Nb1vF+LkWg7YtihsqASLi13TJl+/UvfwF/8yb9CvqXSppqVYt7a2sRucZcVdhPo0dBqDDGdOUa6HmB9rY7NLW2R6cbJk8eweHgaqbw8/g0axFpo2eW4UnGPe9B2pgpv0QIUeflH7IQmlZ6mleWV0bqCcnXbpnGV83KTZQ875If8I8inz5PfaKRGExaCtLm5Rz5TPH2Pr0kk4ikDgxY3KRBE2pJHRUqIlEhxRgmk2Ect0uBvimXdpLyQwa6FP7pGCfvlfT929BCe//pz2Fi7a15Y5ZjlJVRA6m82hddq/Gxk9J8ztHYYjfA/7VAUiYQpw/yo1av4whf+1FJ4Pfm+9yGTybAsbRIQNqNYOWkVF3vfffdhbW2NxnKVsrppdCCZU6YBHqVBq1i8KPtfmz2MqXA1pW5TorymJVDEAdVWt9qsQJWKJQNIpNx46cJzlOnKf81BJzHLi+RyD6w9Oh1aFAjnacrdoe2JglU9fvFH7nEB2Jv/Vd1hfTLZFcrHwoKULZLp8oApRla8rrYrNnCSk1gecwuZ2Nd1Cr3q8hp5dJ3wjD77mTQsA5w07RYj6hR/kQcIiQz4mFGnceDVetW1ukyesSEZ24Cs6I/3yCMrz6z87AKBbsojSQIHwNBwUPyjvP5sR0Qp2WplvPbiK7j1xm2U1muWiWYqe5hjkbTFh/FQhvXyEqzF2M+kz44XderLNgGepHOpQuOE/DD2aC2M1j9QX3uCyEZSvDeOaGxg2Yga3Qamp3NYOpxFJCEaLbEODWysdlCoEOB0OUB8RqfnIr1owSLHm8Cqq9niZJSNZb0HdUsDqtm3aq1KYFnAlVvXcOmNSzSyrltauVvkg9y8G+cfOYZ4VqFsBOyL05inQeoLsI8UoiDuIL1oAww3LWGjC57CLaKNtw72XN+NfqdvMe7VYg
2lctGebUB01Gc/jUkHxEzEHQrj0cZQCmkzDwOFtMetheCiE42tPPQDtrFjC8M7NDgFUrIErzOZFA3ctM3Q+DRDSJDoJqCUoam62bhT3hkh6n8ZJHxrcdoabdKhxf6S7k3esv8kF6TnewSh7MB9Ham2ytBxdFy92mWbaJCQLrX4TIvoY+GIeUv1qDoNTRmbtoW2GZn7s5LSd/2uhauoTMkFW9jI+9SPRo98vpMbl0WNyAviE9KjgKxkm3hdXaX6aJG8DPWf+cF7DzPgc6ngOcBKuKx4paAGgq8hAq0Iv9PKRhUqEKBaDPlwBd9r9yelsRkqdlPajUe1u46t8hUCzzq0eUIskrBGmMWoXawIAPpD3jPqiAM5yPuKX9+T4bVwRx5Zm+ZU4zkAQw5Eq03QNyoY41msh4tgr/6ATYEfPnyIz4rxNwLtGi0KCgZ/iBYQO7MxrBGQJ0mAHuxulVieC8lkhh1KxhsSnIxaJPhrVHC34Ceg1+4biXASfj5/Jj0LDwlD3lSttoyGUpifOonF2YOYTi0hFz6MZDBH8KiYNRILiXBztYVKwYt+mxYl6bI9WCYR+FknbfzgJeP3SA9S1g6o1spA1UHJ0hs1GQSypLfZv2Tc4AkEqHDD7Pu94hqG3gY+8LHHEYwMMb+Ywtx0ClPJuK1ajWdi4FAhP5VFJpVFYXsX7WqRjNXAyo0V/Of//PdpAPDZhTxefOYr7Itl/NiP/hR2GteoeMuoVOtmKWtv+SYBssCZFuOJjWutCpLpLK9zAK9ic7NpbaqQxs52BYMxjRla56dOPsb2hlClsN7Z28Nrz30Ly8++hit/8Dz2Xt02YDpsU3huu9C4WsXgchkxTa2OUhi1hxTmHQqAMBK+aYSGcwghR4OKhgPJjqKfjNijEGnTINDCiCYFcJ1AgUxHQSylMYlhdbyjzkpxeZCDgSiUAk3pfxSvrRyaomcxuYOO+D/HQfcPZGgY8HQ2RdCpNEw623ympkUmv+u09FDqJT7HJcHAZ+rQXvptKjHd0yfNatGGctNqQce9HqIR0b/qZu9tNHQ64M5kmISC6ElVYNt1OoJOwsspQ8fkNx3iK2o4ez85JtdNDrvmbcfks1OOXp17JDb1m/2uS5zutGNy3eSetx8yAZz7nPcKMVRbrUxr76Tdf/ZVckZAljexLyn3lSqIpxOjyH7hqXVVgy77YBDAoBGE9mup7Q7xfe99D25dvIUTSzSwaSBp28jXv/0CeeEODdcscukkdja2UN1r49EH34cPvO/DmJtbIG/skb7ruO/BIzj74EG4AzV0hjs0MltUrHWeVLQe0i9Veb1DIU9aUYpCl99DI1HCPUPDbw7tbgClkhS5C3t7Q7Rq2gL0PB554BM4tPAEZfAM7tzeocEE8vchpFM53L2zTrBDOdIZsR5thAgIKELQaPYJ7DQN7yI/kDJcWpgR5L1jKhvSbGdA4H0Sx08cZWeNKONuGHis1zVtWzJQqMU+Tgo3Z9A0TqIdM0DYnTaTxs+TMdVbsYwWMmlHMsVbNusVKqGKTaVqh6hnn30aH//ED6JeK+L8fWexu7eNKg3ogwcWsVfYoZIsmD7RQrBWnf1EHZIkUI+FY9jdLZC/2Iu9IUEADUPqjCnKnTiNdfGlpVzjMy0PqCrJh0YScVy9fRUblGdhymAl/Q9SHmodhuMZckCJ2bSsv4UZ2PkWQao8nfd6sIcMxGoGRmfQL+84jRW+uv0EBfqjodwjmO0QgGuhjGSU+r9ZJxCgjtGpuNt6tYFaqYZKqUq9UeFZQ7NEmqqTpjiGylEecvsR51gnCQQTIQHIEHzU1wK6ijMfkCe0/bYyeYg1urYJEWWOgWfylr5n8wzgKjsA5bxmVpUVqNMTmJLhTbDBm7WAtU9dsLe8iW6NhNYJYjpxBMcXHkTUN4Nc/ICz5oW6vNkkgKwqVzp5kfpVtCcdbBk+XAQ5fJrRkcdvm+lYvTkmPfKBEEss5cXS0RjmDkZI3zUs3yV97Lrgj86j54qi1CCvcDz9NG68YRqCfL+2V0YmP4ep+Rm0BjXc3bhOrFEhDfdRqu4SzG4TA1QISt1I5UIIx0HDEzh2ahaHT80jmSfQ8nbh8Sk0TIswKY/HBNLyBgprCo6POI4cY+k/vb79vRnKChUhmHXGkn3Z1uIl0ifpS4kk5PWUAaNsFJJZTno2yi8Vr1IUKkdlrRC2IPWtnGnKKqLYaVFlyD3CdDqGhdkcpnJZJBIxm0XW6DkzvCqJHfu2VwOyukK8ydO0El91udZ+CKxrTYecaMqH22VfCtg6ua7lIX3rFA/KgFYb1VYLmWH7CHD4OPLefn9o9jfEekUIePWqUJpGpULDlOC+WqYcJB1rMbTkNg/RghYusgjTy3L6BAj4wzEigzh1tHK1sb5D9l+f9eu2yUNt3vQ9HALWxvBK32BeHQ6IMgno6bJWJKbNS8NOM2uSglpTfHKtqD+FugNE7j13C7uNGyi21qCt+mTBaQcMecCESORhU2dpcQ5IfOatpQDmB2uwLANJTgkbFWzfyYIlcQ3ELAMqDwoGj6tH4OlDt0QhVy+aNbRHi0wbEqRJuWEtqmKnNklkCQJQimgKXnmYQxoG89AJ/PKFAqVM4m6juOW2cIhut0aLqwm3ylzfM8EypkCJRgIEVmn06gTVjSHS0VnkU1QUFASxyDQVzZbVOeDLUSEqtVnENmmodq8S1FK40PKQJ0mCXM/2BQMGeLSd24AIzxdyUSBpai3Ez7Q2SdBaAODqxynIlNJMCcaLNoXyjvc8zOtFwG3WmUqM/ZifThB0yhPlt0BtGSRHDy/iwGISRw8sYSF/FF/98h/j2t3fRyhxBx/96A8YQWZzeRJ3h2PhI6ErVQ/BKIWZPCqaemyQYecXFrCzW+R1fYJCWsoElMoBl8tmjfCWL9+hlZ5EoaxV/G4cmpnHkIJ5/cJ1ZEZJzO254b1Fo6IeRNpDgYxppKtpzNwI4cGbLGOtgWg7jPCAY9VJIDaeRi40h6A7zud3KV4kCByilkdB+1TLcJAVqHOyeYEOGQyONSgjSdKJz7ZpFFmJSo1Ca5ljoHRmei+DZ6LYRG9SmgKsOqWA7D3L7/aaFFDyympxGTmCz9Op60ekAeNCUy4UdBxXAV0tctGUp+ha103uvddDMoDF7Z8Cejz3+0HHn/mNvOhc71z39kPNExBxQCU/7N+nt//907nuO898W3mT98LSk0PP1tdvu8zK+TOHBPX+d28Hsn/+vsl7a48UALvLyrdXp0+snepGnpKx9n7kKBptzKHYC9fQC+3z0tntYOnQYfzzf/y/4iufe8aUqpfKpU75UNjdRCoRRjaVMAB0bOkQvu+dH8JDp5/AQuYIXvn2JXz7xQsmGx946DTSmRBavQINZe1SEyCt81EEGh6vdlvKWkjVcBAhn1Cwj+LkS4GwCBbm7icfvpNg+T6+fwfOnf0oIoFjWJp/D/76L/5zPPrwD2Jzs0nez7LsAG7euU7QW8CCtto8csQAqDyZ8m4WCgXjb+VpXVnboJKTZzJLmnNAbTI9hcXFI+TraZajnbaG/LyImblpo
8ut7W2j671SERsbG/zdAXbEYhw0jgk5zZKv06jWSnnJaAsr0M/79GEHx6DV7tOgjSAZlzIW0B0S6EjJjfATn/o4bt++gU996lN2uco5dOgQbt26ZXW6fPmyLQZT34oH5TAQ/4l3BdAE8uR1FFhMxhMEWZT55m11W8YXrVxU7L5A6IXXXyUYksdaG1jYhJspTlNhrKilrzItwFP0ZUaTQ3863v7+Xg7VWXGx0iNS6GqbeX3Zbmos8gb5f0z5QOWvOHzlSu8SSMhDrmn1iVFRq3FceSpeUDNspWqNekx0WUalWCPYJ7DsDs1TyCfZLKGSZchbrbY4CyZVfxcsR7h5d/kseYR5dvlcamoDNwI6WuehU+BbIRgd8woTdFL2K5Sl22xhY2UV1y9ehocybTo5h0xiAQnbPjwBz1gztVnT2RwZykYX20Lw3FNGgjBZ0AkfNMYkLVDzm0ES4vUxAraQnEIkoH4vhKm5A5hZyGLkq6NcX8fmziZuXN/Gpdd3sbUnmR9BOJkl+MwjOz2FWDbFa70EsAS02+vYo1FUa9OQ6hFQ9us0hokpvDSCwmPLGatd/bRVdDgNPPH+Qzh1/0GMvB3qXXmqG6g3Cmx/jfTlYttpkLBs7dApj7nHFqJTf6ifeToyk7LdZBLBqlKxUWfWqSc1jpqhENATjam9rKLJf+kOxylCDGOyTKNGUKryxFOUOdJNoh97BsdwzPHIRMKYSSeQS5HPo04Oc/0uHSIecYwSAde3Xg2Isb/1WYeuF46byH3dp7pIhmghvbCSgKxD+w7WevsOl5NTMfXWaD2LpxyN6qsw+yzM8Qwr1zNPhaMGhJyplzXLMjnVP5I3Klf1UHtFr5P3Fo5BMKwtt2MxAvuwwpvYf5RlHYLZDmXM93KwW4W2Jw1Xd1BZUwF3xh2e2umphT7fy/qRxampCqHqSCxu4DGRnqMFFENzvIPN+mW0SCzBIBnAG2cf1K3iLJ3EoMBzdppWWLoFZhW753hrHSXvTG3Ja6rD6WC2jD8NFKfbUbqSJq0a9u0ghnpRyL1iKV80OLn8DFLZPOmOHQYKGzKoBI8WLyknqaavFGsmkNjnoGo6PuRz0/pbZz3SYnu2icKbgzsznUFljwAsEEOYoIrSAaM2CbETRaPaQk2J0PtKlE9r0x+x+EQJFOXyi+VIyK5dVBpknoh2O6OSJTMIJErhqi8sxtLq5zXrRYQ1glJWeTGScACJiQzQa8yitNlAmH0pZtDCt49/6kcRjoZtWsPABc8kq9isN0hLZKAWlUw8hYWFGRLbCAcW0rhxcYxopo+/8elFfP9PPkjwm8HmBlBYUxyMhwLOeRXD9HgK+GnaSd5ArZzWQo5QKMF6BxCN0TDgGNy5dRXpRBKu2phCIoFvX7mMRDaJzs4mSldu4NHjD+OHfuDn8PHv/zgWAmkEtxtIdpSaJIhcYg4zBK1nh0cwU6VxUYrBtZdFtHcAC4kjSAbixnwD9o/6QcaQpbfiqXqqFyeH0Q6ZWWNtwoGD4cQ8SRgp04DDiKIl9ZesQksjYjFNzpSIo7QpbIxWHdpTuY4XVgLAOSfbik6AruooqTFmmeIko3Myq+6d0K+l5LFTBd+71lSzrekmcPjKP5X/Z48JkHR+tZRZfGc8ui/MBEbefqiJOsVmAi4GVuy6fS8WT/XHnz+c57x1Ot/tv/nvHGYk83WSDmm/a+2wMvhZ53/vcASwxoFt2R83heooJGDSr5r2suwL/LMwAyl78peBWtLIkIbQL/3S/4irF97A3/1bf5sPI70TDN68e922rvWFWDsCEBkdIfLwbHoJS5lDyPqmsXJpC89+4RUqLj9yNPgyuRRi6SDHtUcDVOMeQKXsJa+RLwc02r1ziIQOU14cpZA/RUX0AHk6w0fmyLdLlDWLmJ/5IB46/3P48Pv/Ln7ww/935JLv4HMPU8G4UaqVMHDVcOBoCpmpMJX7HTSaBK5xH44eO4CY1ixEfORBH3YJwiUrjxw7SnC6i2vXb1IGhRBLZLG6to2r1+8QOI1w6OARk7FSKuILzUgIUJ47d87oc2N7y2hGfa0NPzQ+GnVlstD1OvSbA9b2x1I32PdAKu5FjUCsRoM3GBAQ6FGOUP4plIGy9Of/4s9gmYD6E5/4MRSLeya7kxRUM1M5W1iqUATVbZsAW8+bn10w41gGteqqhVNjKscMZbZSe5nxSpDmCyvEg+Z9LIr1rU2O6a7JXelSGdkaaEvFZTTnKM4JXU8+6xDdv/3zvR7CDXyagUpzAPGU94vFU0cSNBBGDfg6cFOGkPadbaVdtlK9y37X4hnl1bTtZNsCuF00+F7AtlTRVHmJ/VK1DDjKW6q44V6HpVIH2cYFPJXZQRhGssbALPmi09dsEA1ngg/lDdepuFwD1SPqmJGm6mnhKYRPzM8xdeQL+4Bltfi87TvLePkb6+jWCSIGARw9eBonT9zPhskbLB1Bo6MnmpDeS/I1wd4Os1Oox1mWtld3eSdeWYWMuWyRrhbzyduui7z+DOYWD1su+K29u7hx+wo2t3dI1x3qpD5ev3QXhZLqS/4iEGxJ/gqgS0OzHwuVHdzeuEVs0kMsFUaD/VeuV6mDe1QTTd67jEL1NnVvF+cfyuPx95zG4pEcvAGOidboUHe1mhUCpRbHjWNnekUhIprCp16QbiDI0vuJR1Z9LdoUtmtW25YruULDQ1vMi07VLvGOFtQJVkpvaDcwTcnLIdRoOhuBKJxHOEE6jZqIN7B8Eo7P5bN6CAzPU4dOsW+itm0s+1pp1WysSFemxwRq3zon+k2n5IL419FnDo2LH8TvvTYNKoJw87pSb2mGW1P9k1PAVa9aICb+1enkMNYpQ5N6nn2nra+10UU0oi2IZQxQpweowyMRJKJJ0qo2DZKhVrZZX9vlizKWlWWZ6iHVz3E4eUkTAuvhWNi8swK00s+S+wo5almI1r0fbLMUPIeAFqUAXW9ERiOAtW1he0TW/Sq/I0OReLojdgKvD1LYZDIzmFs4RABzCO5wAIX2Cgqt23Bb3NYcGSRMgm6w4hQkrOB4pEo6z5nExAqk2FStvuehRM0aAHOXcwDMDd6Td4vCsKH0XBJ2AfjHB9BXTCqZKZlKIztF8Eym2d0pWczHwvwBHD1+wgC3Fl3sFbdJDATRfKYyJmiabExrORYOYqd8jYQeJdNUCYS19WUS8USYwjFomz2kwzNIhlNYmj6Cs8ceJTg8ROGvlB9JZPIp3kOVlZ/ns7LwxeqYOXUDsYVL6I6bHOSz5nUQ6FUIhJSyCF2AQ8H2AkCDgcembNx+CkJafa7BtF3jcXcwbh6ltdYw0K58lLuFNfZ3nLKDxBTLUdDP2xSdurLT1h7LIxJAxzw3SmvGZpGQqhgFb+LDP34fvLEDKNSTWN0tUDm2UdykcSCA3ddqb48xYDyeZF9R6ARDrJOfwqHEaxOIROKmdNjdZvVt76zQuuX9Q7ZDsdXzOQvrePGrX8AJGhXpxDQ82QPYiQZZ5yweWzqEo8kIVjav4nprHcuuKmpRF45xDKdaWQxWMxjuZpH20RIPKvVMD14qRieIxvGu
SLpcU5nDl9FpPjE7RKl7C4PWdCSOtRK/WTrz6CRIwd6y/RslvG6CjvSwzh9nwOY4lzyHYvwBfZQmwowt+TVN60YnwaatpBPJgmmApBSeG3qAjFRjEyfZsCJxIg87P87s4OQSQZL5tFq5KlkNAQzTq8wTZSmQgr3UChNo9z5+5DxD2DhRvK/MD6KeE0FbOrLUvRbxNRRJDPPfcdtm2F78xQuCQQ8yThJcGFXXWMJP0YTHrY3k2WGRbUf/w+Nw4f17BNH0EGcOQE+4sK1x0uo8NGd5EZlWoln13HznoLC1v/FpXqXQqiCjZyZMLQNQozMm8ngWgiQzA8TIF2gO2lIbN5uHxdLCytETRHTdtoSdzCeoEErvIm8Z5nPoCP/djDBOarKObJ/E0yxcw423KLwHIL9fIq3v3IZ/CjT/97GiMfR242jOxSlm3rJcB1ITNJQdMr2SQhrQIX8kZpLbPfyDhhAsvt26fRKe2HO5QH4nOkFdIoBVMfjZtAIANPuE4hQ0EixUQaU/qkPk1uIDJ0sf4apg9SAPAXtkObwCuFkydP4uixIzRMtNJJmEJAs/YdASc+lQJW2pNKTfa4ACLrTRoPkHfuP/+I8cIP2n7t2V/jX8FQSrjdv3vbHv4QuDAASyCrMANWjTTL86yHLTFLSascjyZPBHmMt8RHvFa72mHXKyYFrHZR+Z3nOx4CKf89oa/79QCBCj9f5JJByTpTHtrQeDo2hkziMMpZJWJXPPgKjcMqEqkUjhw7ivGpSWSzikW9jnUtK8eC3X///cbnSqgei0dwgtetr6+g06zi5vVrOHJoP44cOYjbs3NUhJQXXl5z+BAV/CYW7n4TschdCuYSeT2F2WWKQN9ZDE49hcu31nDs9FO4fr1MZX0Ih4+doUxgO7Gwc/MrrJBme3cwSJopFZrw90WxxjIpx/H8wiyGh4awubGDK5fnKR/ut4wF/ZmkjVb4/UEC2CoGBjJYXeP7SWdB8vP2VoHKpYhMuh/xUJiN16QicVv8/Of/4suUBUG2K+mIAN86lKhVXsREwoM0f2vKM7vN3whktPKSecil5KQ4dT2/q++skwQytfNB2iUz5c2xTQrC+lrS2HmV7fzDR/F67uxPgYa9XWBWP+g69anAqP2gdxo4JC0RPZpXijRjWQKk4E3J6ybSFz9r9rPF99r9u7/xgwFRPkMAWPdQhfMdGqoXANZz9Ezneu2OQuc/PkbaY68+Tm0d+pVXVrsMOIrSXWArulcd/upZ2vc20bRAliV6F5hlc0vR3jOY/c6fseQEmNJB3LueFvUkgSz/WeP62UYsjNfCBwR3FArQhJZ512phkv2Ky1cBBUad9H81GsoVi8vPV3KOAcldKZdkFLlblGWeIKLKhaqJZAQnGgETUNYzfG6lP6LRDoIcP8EKCUc6s1rU8qBxPPHIO/H4Y8+gU27g3/zmb+H9H3gKybTau4oQZdebb97EFz73NUcHs/0+8gnlD2/wdx9eeOEapieeJA0M49qNa9RrPepkvp+7yijAqWFlTQBdW12jntvB4sISjbc15HIKycpiY32bhuCGAd8yDbZSZcdW71S+9CZl98gM9dJEmAYB25AAVTJHTjN5+0SDcgbJu62sEZG0gJRGcgkIhStIGGoDpcFTFiR5djXhSUCulu/yPcqasGts8F+1Qdrm8xWCYZO9+Fkjy4QOPM/+tPexXiR8GfqiQUHNnmhXAJvGCl9BWpSE7FFuEMyyzfz83U/aHgnGeZ2bPM5+VbgHZWuNzCZN4CF4tNXNjKFIf3z3XgiA3qO67vGzw8sqqzSIcioTJ1HHyPMeYt8GyDStctPCoYqauCbPvJcGTifKuoeIJdgWiJi+Eq34KLPCEZaNrwkRdAdZFnMGso1liFvWBfapJuZrbpRG41VDfmO55JBjeyvpLttJK44JC7bqTZadOCVOA356Gvum9mGwf8Ccj8ZjfJctXLT72XhdPEH5IIPYJsLpH9utUMljdYM0kp8znlbWF40k/r1P/w08s8889e5nXbQsXUTjcqVXMEswewGVdtmG69GhoqYiloBjcVhhLWqgoX0NP9L6U8AOCyuitpRaLIBihgR0zRcrYMQKmFBWnBjJpEKmFZit1ksWhytgLGtAnWwNwOfIW6tJM9GgFmVQUSn41amUWApvcAtcCx56ilRSe8sDkjga5DAKlwABrt/bwEDvPMLbn0DxjQ8hf3UEgxEPjh8jOPWVMHd7jvdWHZc2rT/NDFUMR4va20fmTKQGaeGJ6UAFdRgh3wFcurOOgcQU9u+bwYW734E/nMXc6msUVjkDcYnQDJrlMKoFKjUqvy3WMxKioqqUab0FqLTYdlQccrErLlP5KRUYn4hoOTiCbwVB05pNBQewtVgyJm51iyS+CPraA3A3IhTIbF+vlutNk9S8ZimLMDaojG9cv2mTWOR50JDJAMF/NNDE5JAPJ05mcO6RGZx6JImJY0HExtwoLPss1iZbvYzNnQouXd4mI3YxNDGIuaVlVPJRrCxsoLjhRYjC++hjn8Wps14cO3qEVl4b2dXjfM8Y+4/M5ZrDdukq8sU1CudtCpsmtnPLLD8otNo4feYUPvDe9+CJx5+kEbGCickp/Ov/9dfJXD2cOfUQHjwnT1qedS0jt5ODpzGITz70Gzgx/MOor8eRWy+jUtzB+HgGkViSgNcDwnIqAWAnW2OfJUgXSpWTJBMFqQxo2VdIE/n9bONBdMLfA3zrNL6GSJtBAu0VWqdKzeShkqCwJDCNxGN8dhTBSJg7z4VSCJLZRXORcAwz0/tw8uhJaFWboC9Iw0YjB1JeFHaUPw549LINe7ZLQMrQUxiNwnNO3f+AiPkHbv/PMbPGbqR9R4F7AxQ33DUBrE+efnllBWgpEARAJDxMKJKxdI/+6JR4ikU1IUlWZns5QEOsJn+HBJm8sabwddy9j1fqD+9XIVrmldWzNF8gFZvEcP8JS86tWbyJVJIg9hhS/SlsbG/hzt275tnMDA7hgQcfJN9RUFM4y3ujxOcpWvd37twgUMwSYOYwOTaGY8cP47vffYV1CdoqfwOZIdIzlXz1NjbXXsJAfx4JGoy5YgTPvbCBp9/399HwjKLRCRDAPoxXXr3N/svwnSO2opyUREN5Xqn8hwbHUcwpk4qfwngf7t65g1x2y0IdiuTfUyfO4YXvvMI+JL+yrSwjBduhRb6qEXgGgj7SFemOGk3KXMpbykVGy0AiTqVSszjrg4cO2vLXF1+/gp6vBx9pTbPRlYBczUhyQzpFZV5vo7TTJC/LmKIsFCBUiSXc2cY6qvm/D9C0S7aqY7g7fa2PlJQsqMWn6ryddS7nCT3SlJp2vULgx7nKMWo0nGkUwHdpd1KDkTbY2RLxAgw670yO4QPYNjoKxNpjdl/oeI9ZDpXH6iI9YAeWT32we43ew4O8NvJ+qW579OqSh1LPE9HrGv6ueojmzDPLe4V5BGStj3bLtlt1fnCO2tV3omOF02ko1Vn5T9/vHcy+/vLnCWDZJ3wXvASqboFZymPq0D7KAZvcRqtYaReljDUJUNlRqnWFLlVo3zjHN
g0aGYgyeIvFoo0K6pgrFlCvatGEJpqkh06duoiyPOoL2+QepXaMEZRoWF+pr5SzM8jfQn3U0xoKRswWGfKTX8YG9+Ph+5/GSPoAVpd38J0XXiWoLONjP/pBTOwbQLNRw/zcOn7jX/5v5K0RTE7M4PiJE3jgqYM0Tjdx5/YmK6mVC4/jW89dJ/ClrC2yvOSNarVuWEB0r7SK4msnnRYxAgF3JpNBnDJa32s16tAaARd/l1wxZwx1TKPhQqm+gcxYD7F+dTHbjO2pjpKzSmESlpeXIE5p+SIRArLk7oJDlHdOHxMFUPdYbCfxRY/6zsc2cbX9qBU71CdK/dRAg30knpNMFhBrCOASwQrEii4cA4f95ZCZyTeSN8vrhADI2JKsVEYhk3eqCY82MY0f+ghYFT8b1WgphWa5XkW5yTrzmhqf0yB/eCkTfSROy0EsnuDLda0mKovaRfeSteJPAVpnJE0tJodBH/WMDzG2RYR4TKOmzWqD/VGlHpRRRWxFGlD+drRp9HjSSEQHDcwqp7HjWeXvrQYxhUYQvc67+FI5KIXLVB6fl0Y4GVKrO7KJWEyNNAqAy5tM3uM9ol31pzBeKpnE/v0HiIf20bgfNHrQ9dJ7AsDCa049+FT2k7yxWvzCJTCrdmB/qMWLtSo2t3ewXbjFIul65Z7t4Jc//T/w93vbPO959zuf7etjBVjZunsT+e5lFDu3UWkpQ0GQAiNCRSwQqjgJskq0H0F/jC/i78pJRmTuuMGdGW1qEBEPWGBL87M77KrKaAavEquLcfPFbdahQTAb470kRFqBzopfSokRNCBrgNZHpWAh1HV7v0CFi+9SU1muvc6mgc86La4OO0AERRiCQDMDd3EEnUs/hfz1k9i80YdC7haFCpXgZhGl7AAigTPoeBbx5p1LmF26S8XsR2oggy0q0o0dNiyV7EA/ASPB6KWb16kcj9IKTeGFS3+O9dLrZKws8tUytqs7KGzPkjDnEe5LIOTSwg4JmwXanxk0r12zkoXPU0cyBvT39xHIJTA+GcPI2ATGxgcwOibQ1EUqHaT1xbaoBOGtxlEmuFU6kztLsxRyLRyePk7GCWCdSrcv1mfxVPqn1TMuX7yIeCTCNmjSwipgeCBCkBjCu953Fg8+MYON8k189uu/jd/+4/8dv/VffhO//Qf/K77wlX+KL3/z1/EXX/13eO6l38TF2/8Wudq3CYgPIe07hdk7TdQLAXQ1OSz2AoYO/gFiqTUCgTgGRtew73CainsG3t4hgoUcau2bbNMlMtAAhpLvoDWaQ4BWX6m8hT/+s/+Cn/rUJ/HgfffjS1/6Mv7hP/lHeODcINZX6xQ8/Zgc349TJ4cIkvtwfPx9OD76tzBcex8K68qjt8R22w1v6YtifXsdPT9B73a/GR6xftKfv0LjooXkANutAiwv53H8ZJWkTaFVnkCvNkEgsYi+4BIZ0cntKM0ngahZwwEKzBAFZiBMuovQ+tVs2UiCCpJghe2qFaD2TU3TsEkhTpqI2yzdhgkb08l8luKKhDK13nibhpHy50kRW1J1CqHjZ+7fZb2/fvtBYFYnBFx9QQqjAIWMgVmXeWQFZmUQ6xopf/GeBKQ+SKlLGBtAoYDUkJHNMue+pxx0ueXi5Acp+j2Br00gQJvqK+OSMpE8yO8EHanYBIYzJ2mshW2pV71LaczuzpJ2G00cOXoUExOTpuxWV9dQZCcpUfehI4fMqLt06Q0aMj5bOS3G9k0lYlhZXaFxEkc2R8NscJQ81KWRmsVrL/8u6buMwXQXsfAw6WkZg+PP4MxjH8bVuxuIxUYIcoewuKwV4gIEqDM0oLaoiNlf5C/lZoyGo7h2dRbHjpxmHQhQ62UaRVs4euQIXn7pdUxN7iddBPHy976HJ554Ci+//D0cPXYYS0sLNuS5tbWFwYEBGpHrZmAtLMwjqmVBCUT2T4xT0REkas6Al8bf8VP4y+e/hXIpbyDIvJIERGrHCI25TCpKXu7SeFQIQoDAjA3LH/kzrxNAYx/ru47cHS+nOnIPzPJC2x3AKI+TlKbTT84uyGodyf7UXATzxlO5SMGw2/mbXilgwPsUusXyy9NpMZr22fluupi/G7DWkTfbKlu7LxKp8WV7LzVgqi+K29a1+u7UhT+rDlReVid+NDDLD3ueXT3gr8CsftulX+7mld0FsrazbB6WQ+US/Vt59Ex7zm7VDbSQcrkrZlbZTARof/5j9wZmL7z+Rcr1DkEslS2N+wbBlzx+TqibhrilBfmPvN/Ts6VH+YImAUCTn+XBlidWy6Y3acxpImkum8cODSHzZBZKBIcEfwS05pVl2YJkboHZCHevvLJyAGkIRt448qOf9BL0KpNPECHfEKtPAFF3UW67UMkBn/vz5/DHn/0S3rx0C+nkFI2xCm7cuowvfeEr/O2b+MlP/RI++tEfwwsvvIAPffDH2CebpMl+ZLfcpPNh1mmMfKih+oItD62Y2Xy+xPd7zJs8d3cepWKJvDBku0CNJn0pbZdGYBRbrkVjlP3DFlCS955t2Wb5W8hicIYAOMW2Yj+1CDoFmsxpJdnM/tRqV+GwUnkRhIX0LMorGS1sS9GNpQuVMGTNex6FHxIfEGcI7Al0V+vUL5TDAtNd0pcm29niN2bJiOZILCQUlUmTo/RJnlm+nT8LaPIcaV8LFqjEGr0TjTpZD1gOXuE49UgXlTZqnRZyxAXlNvEReadjFhf1p3SzvJwifNZVMrbDexSSp+Ft8bHyHpshx99J7WJkPl/5ZT0G6hOUZRo5lre+SsOizDauNjsE6XwPDfhmhW1Bg0aT2seGpykbo/Y+yXKtqqkR8zaBpjzCktt6i7JOaVEL9aUMPWXtsRjnpvpK3mi2hskJwVOVykX9HmRfD2JmeppycooyO8V+Clp7ms4g7Rue470CvQKmllOZ9Fpvay0D8TBbs8a6U3coTCKbL2KrSINf4Tjc9b5f+vQ/ViHvafN86EPvedbryaDFG/Ody9hpvYpyK0fLxcvO91MgR0xBSmnLexqLENx5tIKEZitqjh5biS0vBpL72bFE6yyIVg/hdxEcCUhWqGZcaphCs8GL5DLqdjJhxCpts934XHlpbYk//iiCZpezvSv8TZMl1KgEVqT6VldhBTmzQrudkMUHdVpFAsEWQs0xdJYfQOna46hf3Yfla22CoRoGhoYIGu9jxyWxXVpAN3yRz0xjqH+MoHUYm/ltFCoFMowL8YEoLZ4yreM4mayDUMKN169/EzeWXjev5WauTCswiVCsi/1TQYxOLCKaeI31uE0i78EfilnQuY8CqFLaQSTUwMSYh2Ugs3oXkS1dwE7hAq2RN9jecyh1VuFP0dIZDeLU/QcIArbZLiW081kKzR5KrjwarG/AFUK7FEG5wTLR8NVwRjoVY/22cOXSBVryeRoBFZw5OYqn33YMp56O4U8JYH/l138Ov/eVf4c3bnyLQPAKnz9PYLCJY6fLGB0KYKw/icP7UwQcsq7XUFgjkOs9iW4wSwbKsB1uIDDwP5LxbrL/O7S6wzhw1I+p4wsE4Q0yURCzt1xUykcRpdBq
NXwo5hVW0kKNzFp3V/DLv/wZ9KoVVLIlXLpyFafPnSMIL5Dhxigg87y2ijhB8OzFCJYvZpBfiKPX8rOvawhHQ2SwFhmSVjkNmmJ5E5X6JkYm+rC2vsB+jPLaOA2bgBPbWL6FyYMeWrFxKuIt+Lv98DeHKTMCpMmrFGoFvi9BuqbgoImqI9jPmj3u9lExKeULeTMQpqVJxotE/BgaSFNIJBHTak28llAVWwQ/mqAgQ0u7Rhc6ZE7ttKfRIkDSpqF0xScdP33Ovv+g7dcJZnd18PePe5sEneSv5Lc3QM4TiPVLmfKNArNU6vpNMlMA2zx2eggBwp7nTad8FC5S/tpFs3uJ2vmNwo5CiXwmISgwa+/lZ4lu4QwNwXmpzAVmKQpMiEWDw8jED9nwYaVUoHGzTgMhSANQfDdhAnBpecW88lpoJBlP0jhRCM465uZnkcmQF0cGrf8O7duP1bU1S9U1MjZpadnkDe+2q9haf4vg4SbBrRRdHNeuV7C6OYh3vv/nUaMReZkAdd/+48gTGMjrqxy1ip2TM7TVdPGdMXvHQw8/jJdeuECgO27eYGVbuHzpCsYnplDI85mrG7jvvjO4c/sOn1MkYJ2wVck0ce3ChTdw6NBBzBKo+4N+ApQSeYRCWGCfbZeOx4wvy6Uitne2abzuo2E7gG+/8C22L+mFIMwMAfZDJCy+0fAg+bxIpSgl4KWScVEJUI463krtTr8ZsKVyFfjThFedE02oj00Bqv+MngVYedBH/dM5/tfO24khnaulLPWT/dH7SAdKIeV4YbWz1wUUdRNlkd5vk0j4YIdvSDWS7/y4Z/iYMaWnsz32PDMiIx35n89Wffi7+E3IlPUQQrWhSBXIqYbVS/RoH3bro8k3ukXvUPkMK6ic8swazfNx3PfaRUf+d94tuuZJKXLR9142g3sFs69f+Asq4yYaGramHOxwVzysjAqFJHmoNwVk3fL+ymNGoKB3adKM2kaTs+Sl1WRFpYMql2uk06LRV5EAsaHk/dRpAgOkAgQIZJWfVkP7sUCE+FEjkw640MIDSo/k6F6fgWhPL4GQ109DO46bV+ZoIN5FIjGO97z3QzTETmNxjobga6/j6rXrfHcL//Af/hOcPnk/5ubmcPPGLbznPT9CmaKFAzx48YUbGCRvNGpJ8uoRxFJBGlsVApcBa7Od7ay1n7IZaPJXhvQtICMvoEYbC+Q/jb5odT+FRAhASs6Uqfu1BK4ms8X73Th2rh+BWIvn2aZd8UYfZZMT1ig4qfhshRX4AjJscib7NAy9J8uU2tES8aujFeYhgc1Gb5Efq/Uqyy9AS91B7CFjQnHSCnWUZ1bUL/yhXWCyLbyi53J3yFDGIXnCjCb2CH/3aaSaZRTvarO+tWd1USxVqe86qLTYxwSrHoJwl48AMyDPMoGl4CCfLcDqxMtq8QuWizwk/m5J4PK/3qwRXHk4+Wq2nxbioU4W73c8aFe7KOUqFtZUrLRQqbkJ2GnAlEgHrihGB5WVadqcFTKk5EwoaGI520WecnleRUNiFC0DLxoUiM3lCpS129THOzxfYx+JrtXg4h9lVZDnPYQhYimt1Dk+PkYjJWY0qXoJGIsGtIuXbQS/TVpnGewatqoDZgWOPTYa1SO/1Ekb2yzDeuENXqsQ1Qb71oVf/NT/W41xT5vnox9+z7Me9wiBUgs77e8h13oTNVow7VaED9XswBAro87Sw72IhjMskJeNUWGbNx0ConDwuLwspteIt00wq4khanyypRGcKsf6UJl0Lc2RAr+DIS98BGd6hgUck4AtfpYIX2k+rCE6Jd7fIiiUFa5oew3RyRIpkPAL6NVilL9hPp/vbW9DiTB8tf2o3nwSG28+jmD3a9aBhco6VouvodS+i2ItbyBkc2eRluwYcmQ6E0TKOenrYHVrHvnKBgHjDirtG8jWb/O+JXijZLhmETUCdm8wjlT/OI4cS+PBJ5t45G07uP8cLR7PZdxa/DoJuY8MeM4CsC9f/B4uvfGXuPTmX1CQ/Aleu/hZvHn9C1je+B5m57+IL33zK/jOhS/gude+gm+9+kV86u98ECfvH0KhfhPToSncXL6Dgnubz9xCr+zFZP+jiEYyWC/exdBY2mZ8vvryS2hUC0hE+/Cudz7AskyhUV/BL/zrv4XnL3wD2doaLcQqhVMUB/YlcOJAGkdmCMx846iVlzA8UEUm5kejVEc40IQvfoG08G/I3IcRxn622ecwceo/EuAehpY/1pDsxsYyAr2zOHSyhZPn3QQwYfjb78P2ShqF2i24w69gMP4wis0CHnzbGXzoA29Hs0jm2ynaCk8vvf4aDk6eo9FQRP9wjc+8gN/9zTew+sZ9mH/Th2eePEJwksfGdo4CcQAlGhp9vgJK5XXUC26kg6dQq7ZRzjfNStzaXqXgKlPg9JH20pgYmcD8HdJtZQzEwggEKWgKAtlegvRVBPs3KTBoEVB4GMN1CXgIvgUQ3F4yLmnOTSGsIS0/GVoZPyJBH+KhAHFvA4WsYnzyBAbK9iFDjBeSQWXdSsUIBXfJJxJUYmR5eI+fujfP7D/71WclP76/mXzbPe7te2BW4QVub4eAluesCFIoJqckq22TB0ByUuwqhSL55PVQMEtI88gqOKBCAIX17JGTBP4a5FeT09wEJJxdxqeGp5RLt0tw6MTMhgMp3H/qaSo5AbuSzf6XB9NmyebyuH3rjg0BHjpwyPhcCmBpad6Wxjx96pTFXG7vZLHvwAFL5i1UMjVzANeuvMXfD2Jt6S7GBkYIJL9FQ6UJd4DgPXQfXr/Uxr7D70Ji4CgFt4/g9jaOHT+CcrVM+pA3TIaEyzxjK4sbtnyyZm0fP3Ycm2syoggMKLiVRmj27jIBaBWHjxzG62+8junJSQs7eP2113DmNMHA0rLV+cTpU5hfXrAFHhSjH4smMD+/SMVKI5yGi9cXdIS8h8CHBlyx2sTbn34HFu7exc23riIcDlDBsoF3Adjk6ACqJRp6NMq11GPXrRQ2CsNQp7HxreMcAKghe+XVdhFIySOo/rZ+0z92sg0fWl864NcBsbyOB+ICi/PTIy2DhdGJAKVTDq+AK40jl2WOYccTJKqMAp8CngKHGng1gLpLEJZGSK/Qa/leARf5tgRwHaWt2EDex100KeApkmLxTFnKCyzac7yy/I3nDSjrM0sq3SGnrupmKyPx8x54lYNSO9nL9u8DWj5Xz+Kj7I+aQHSsXRkC5AbUzPvWbqjB3/3RewOzr7z+JQuZqndqNlG3q7YjyFc5BRii3aiBDRdBqYvvkedUqEdD8BYORn5VZgxNKNV3gVpNHBQw7SMD17TAAAWPUjApj62fx2iIYDYQElRFj5VVewiMCJToudK7flreGpZv01jTMtry3t25OY9jRx/AD//QJ3Hm3EMEPTWsLGxjeuoY5eUBfOxjn8Dph09RBtTx3LefJ2Adxqn7zlJQtrEyX6bhtoNn3v00trc6uHmrRKN0mPoyQLpOmuFVLJJeWXatiqiG1UQwjWAJ5GqirI5aMEGEkSVfq9zbuTXyZQ7nzj2A+YVFHDkzjIkjPqx
nZxEIRwxAWT5s3iOd7A9opNZNWacQwy6CcfEz23XXyFOdveI5drzwSUs0ScGlYeoeDQ5N1lP71szgDljmmjqBrQCcJukKpcr7qMUGZMa7KdP4aOMV+WG1iTYFJrWHtMQgCbcjGSlrjwRtXmT+5mYZ6nx/jnWuUD9oRNfH3Us849dQP/tEHmTxgxaE0JwiSlejT77S2Vhn1g49MkyP5RMtW1YM6p0A9YeP90WCCZR26thay2JpeRX5oiY3x7G5wXoW+zA9dhCTY4ct3MXyolOulNRXlEEC+KJVta8MJuG1BmkonxeI3cHi4oI5IQRc4zTIbbncVpP6jXovGkV/Io0xysOJ8XEaSZroxXbjc2TIKxuUvPBOWB0NDLaHxfzyddJ/khkKixGQ1ZBDl3Tj6RE3sh1rBIe1dh3blbdshKzT5XNY978RmH3fT3/k2W50B97kDpLpFEYGTmFi+BimR/djZmIS+wdPYSg2g6H4AQwnppEMOjnUZC36exF4KXl8LikmFwtBC6hZZjspi4D6uYOWvAvqL1qOkiilwjYtwg1+b9GIYmVC/exJMi1BbcKfQsIXRZBE6WFj+vQ7BbbCDXx9UZNSio+td4ok0rJ1RJASqUvLpMxytDMkqvIQkt/7FDLrb4c7mqVFdB8GJw4hns7QanKRweQt9SFE4Nbhe2MULuubi9boGubR0MhgZoQWvIsCIoqR6Wewsj5P5TNmeW17nSRG0o/jxEMtnH8X8Pjjmxgeu4lDh4C5W7OsR5hM7+OzXmfnvoLsDWDrdpXW0hIqzZcInOVNHcF2EciV+xHxzaDd6CGTSJL4RFwFzG1dxQd+5jx+/9u/geW5bTz5yE+jvOAhmLtJAighNnAY2WoDkVSNoCyGTm4LxdtvUTAv4qd++aO43noV/6/f/ChueV5gmebY7nW2aw+HhgZwdHgccU8C5Z0espsuXJkj0ykt2J1h3L6bxGqTzNN/GF0S53CqgHjk8xiM3sCRB34HpU0gmuygVHdh+EAKrpAHleJFVLNd7CzV8a53nKNy9+Cll99EzxdGra+MVq2KT37sDP7Hf/CL6G43sbI8B1eY/ZaJsA3WsLJzBQemD+Frf3QXL/5REH0b51Hf6sOJo+NSD3jl2zfILPJQVdHzs48qRVTZV0qw3SPoVvxWsj9JRi1RuBLskmlGKHSlABbnN9jYbvhDBOeRHiosdzzZTzofRZXXsuvRytbRLVIwVQkKO25ExbTBAOnay3MEaV4qmm4JXW+FtMpyuBsUPx3LeZujZVxuVajoaX37o6R6GgNUXPVO3QyhYnWNAKVMWaXhKD6faO3YfffmmVWYAeXNf7N9/zuFg3YDs0EJerLGLvi2xPGsl8CpwIIzCcXxnAkcSNbrvJxs4lGvFIOu53mBeP0zpd+RYUqQT8ViQELv3N00xOaXYDYw5PC3PF4+Txwjg0dR1+xrPlB5NbUuuVZQ29rYoRCcwPGjx21Ia3tzyyaFCBQfP3kMGqLXRAhNFNNiCqvrqxgaGWGflk2Qe70tGtIBLM+vIBYn8AluYObgUWysJlAsp3Hg8HlWSEtvUqCWy/y+D3dn75ri0uxarXCjFEGry5sU0rynmMfhw0exuLBG4b2FwcEBAwf1Wht37szigQfP0Wgq4uLFS3jH299uAPeVV17Be9/7fnzl61/H/gP7sbm1xbbvg5awXt/Y4HtrNKYaNtzWIDgZzPQjSKDWaFLhd/oQiUYwOT6Gb33zG+bJDUdpmBOwUryR/5Xypm2T0tTWmscgL5JpAkNlDpC1oxQ5nytwafqWu3WP/tjl6vNdMMc/u6edPtd39p/MFZ007wx3/a4b9A55Xg31Gg3JM8PrBQ6pXfRdIFa0qL5zXq67BZz13fmsoXwBSAOiPIqGdKkUlAFj3SYdwe+axe54Z3kRn63fRItKV2X38mbFNGrTz3qdgKtonPrdvLN25LOIKVguXqN991rd41RQ5dHzeCS9tglkiScNzP7Cx+8NzH7r25+FVk9SykGB+rY8axrZYT0FgIL1sBLtmMe0XeNVpP02gaeGbNuNLuouJ3aUl3OTk4cyiKBFi5LoGQp1U2y9VkUM8nPAr3X4gwQy8r72URZpXkjb4m7r1boZCT6CFjmCJF/kNatVZEzWKAenSIMTBJgp/NGf/ld8/vOfw9kzD7EslOORJB57/GHS6Q71URl/8Pt/hI9//KeQGE2zEbu4+OpN0mUSMwcm8M1vvoGdLS+2tnKUZ3nzuMrDNqDUj7EU+apMILVhI67y7Am0yFjVRDfF0srLl8vlLLwwm1/D8EgGExNT7H8P9h9Nw5fIouMpsS+cjrMhfD7f+o40oRGgPpZJBrvHp5EufuZfmyTOesvJJVCl7z3RLUGiMrsqDFLguFrSKlOam0LQVC2YPNKuIX3RgmW0IH2JBi0H625f9vGc6Ej0o9Eu8YGG+sVbmkxquVIFZPl+heCo3CXSakPD6uQXpfcToNVSvwJ3GnFWLLDax+JkuTvxsiyyyIGbnik6kBDWyImX3wN8h+ZlWDYD7p26G/N317B4Zxnb2RwNeNbDFaX8IuB2JWw5W+WL79DoLhPMlkpFyjIBTU3MV7gLcRkNe77a2tiWjC8VoJW3FhfneH2BbaBwDRl7DZaNADpKfBaPk56GrB7yGBf5XPWt+li6t1QSXbAf2K8aiVS40h4PSiZp19LmNOvI3yRcYjZqRTPimmz3SrOCtcJFyktiu67St3X/ZmEGf+unn3o2REssFOohFvFabsQkCTQV7UcmRmBIdJ5MaMgsgXQ6iUSSOxG5dksq3j/E8yTqeD/CoTi03rS8L31dKW8pd1pS/KwYlnKJ1gQttCIr7aW1FU+m0GiRICiVNBShdA8evwctDxude7eP5aIV4vEE2KA8z0cpZZImYNAYIDHQGmpFScBswBCVXi+F3q3zyF2eIMjita065u/cQiG/Q8avWPB1KtFvawb7+sLseK1mocUawgZyJPC0rn+9XaEgrdqwRrXC+g/VsM5G7ms9iBMHfgJjUy088GgDP/a32U7pl2g59ZEI7iA1EMLcwhJ2ChW7p9mdR/bOo/BFV+COX6bVNoseGdMTIDF7NtmpcySWDQpH5dqNUJlFyZgprFG5JlIBPEyr+bP/4Qts+yEqzxHMzV0hoZAp/KMkzibSmTjGJw/iwkvfRjm/gF/5Bz+DV288h9/9s3+L1KgPB2eGMfdmm33JPoqNUOh6sUxr7sqduwTMO9jIEmxG01jdWECl4UPLF0GXlnWzcQBvPp/Czt1JvHBxDYW6H4MzfoL0gwgTQAfDbfZTFonwEKIUtAODERw42ofZ5b8g2M3jE5/4UYyk7sPVlzvYKNbxyZ85x/baj5uzLyNbvIzyZgzltSBp5hrKN/Zh6WIQV77F/iwNsg1a2MzP2uokdxe2WMdBWzJ2dWuTFmie/aJAeC2T57WUXxEylvLcSRqI2SLhMAWVlp6tGjONjw2QJTqoNspsswbpDfATTHe9G9jK3YSybahNQyHSdWwA6eQwnyMwTHombfg7tHqphCCaa2qSIQUkhWSZxlSukSeRid
HjBEgpgjk/wbuS3hPUVGgB874GyyHhJa+KmPnE/Q/ust5fv+3FzGojWX7/qN08Vzwqg4H4SGBWnmSFG2glHAOyAgnSDXwnS29C2+7hA6TspfQFCOSdM6DC8xpm03VOQnkND0nJSrjrN97MTddJ6BtIIhBTehwpR83gRieMsZETfACNTyoWxQjLQJSlfvDgYQOVWrddQlBpuM7efxaZgQxm5zUUGjMPV5DC/8KF1+y74uadWLx+yhXFI3Zw6/YNDAxFEKExOpjZhy9/4Q3sm7kPdRotwyNjuHnrOvvXj4nJSSwuLfGdCWuDfDZLWhgnfSmRexSzBLrjY5MsZxCXL72Fqekp/gYkKQe0TO3I6AiOHD6M69duEqhu4cypMxYzKyVw5OgRvucG+glWF1mPkREaR1QWSwvLlpaHLcN2JsiNR0lDBBtURJpUKRB/4sQRLK3M8Z2X0BfWBMYOAuy/aEiC3k3apRJj57ko7NXWTmiBnsh+lWLVThBr2QT42bS6rlP/WQfxBHcpXPvKH+wyO0rWUlby+fI6WfweiUUjbSIYeUiUUYbahX3hoEMHwO4Rk3YpTn3UH72AIEyaXwXgfw3J2vKc4heiRH2WbtYu2lL59RhL6UW6s5AXvso5z3ryqGdKxVvs4O59eqnVRa/hLuBq4JX3OnTsHPfOq8i2ieB1AzejbelQ0rP62sLVBDy5/8In7g3MfvnLv2fKv+3SSA5BQZs9SIBAdGMeWV/NZwukdASY5H0liO005P0SoCUNUG7IWyudp2oJSMtzqwVVFO4UoSEdVuqtUIQghnqUlZHeUnyhbmjymjoNJqW2lFdU8Yhe9p0mg7n5XK22mE6Gqacj5J0aXnr5dXz+C1/B3bk5fOpTP4HHzj+OP/jD/4wHz9+H0/cfJi/mTS9vbhRw8uSDiCZaNsJ38+YS7jv7AGL9Sbz83Su0bUYRCQ0QSJWt3QTgpOvlSd7e3CRQpWzutGiM8X7WXTykdHTr62uW0UByUzlMm50SeeAodXMXt+/eIfovoRNYRWJABqCoku3A9mjLkmZHeTShS5MmqTPlnXW55WJlQ5AQJHPk6TQwq87n3fJqahS5SzCk1Hca9q4R/NVLbrQaStCvjAoEdjQEbCU40obm3MgwUWx7m5aIUnSB8q+Pz9OrjO5EV9TzWv20zWfa5F7iD3lbbblaeWbZD00aaiy+EbTX77PJVn5/CIEQ+5RHOT6MDi1WluUjopRzhG8xvpLX1AwjlkkLKCgmV/giEOhje8sD77WQyauX7mDh7gqBbIvPVwqtDOVHykapFTvt7vrYT/KYqq7KQS8gq3KRj/h8ewf5THKkzX7LF3PIFnZoJBVZvzov7LAKrAvpS/W3kXIVnHRbqmhuwY4Z8DLoFTJSoJxWWi/NlxIdKD5Wu3Tw3ii7do8mpqvu1B3CdJorpTR1TXfLlrxfLVyykdpWl0YLAe4v/cTfAMz+yCePPdvryZoiElYQMwvfbbKy/NhtSHjUWAkJVlZkVxpo0hc/WSXlJlYqB62/niIIziQHMdo/ionBSYwPThP4BAwYh3wxMisty+reTDj1nIcWp9IakYED7BDlzqNydktBh0kMUaJ7rXalBiCzW5JrMf6uMNAkCVcrjl68TaIpIrP1EDpvfhhL5JF89RZaBTIdlXzdhhsJoNmwAV/QLNtyvoIKQafe16DgNZAUyCAamyKoOYRM+hSS0eN8zmtUhH0IxxRT+kEcPp7Go++cx/1nm4hH30Srfgux9A5WsxewsLRDBTeNcw/NoJnPoHj7/ShsTOLsU8CBc31IjAwgko4hFPNhZJBAdJyAjKhbKV40E1hWreJ/8rSMrl6+hnqFQjBXwZtXv4sDB8fxyKOP4sXXXzVFqaVBPVplqU4btLaOc49O4fL88/iTr/4226+K40eUVaGE3CJBDomo1igRlObRC3WQGk9h/NgUDj1wBIGDCzh41oOf+LtP4Ud+9gwe//gEZo5OYfYtEu3yQSxWO7iz9hZuXKMQLd7HNpL37TDBvw9B9xiNkJQNQyiN2cF9hwk+3VTW38GZ0xnSRh7zs4/hzde3qLRLZGhgc4WfsrMIEiRuvnEQxZvHcOfVNho7KRpTw6g2Cyg0c2izTXL1CtZXt1Dk0RfyI57QzNo4+zFEGlU2CGdmqfK/atPqM1pesE4lIoWoYePNnYKFDsiy7KfxpRno6UwIYxPKgVpGx99ALDqAgf5xDA/uw+jAQQymJpGKDBCs0/AhpQdctLApuAS0NbzYcjdssYwyiuhryvMrBaTE0gQClIPytNtSf+QhnVI+Ps3u1yz3I2fOWll/0GaeWbKIdDvJ/ftHBzA4mzyxtmCCPLMCsQozEDDgTrlK/tTuDIHqCeRq0o4ABK/js+SZtRm6PKfr9D7zGmiUg6BPitbWfRfrO680MOEAIj1cysUBu4pZU5z00QMPkSZpFBbKNgkkmUzDT54TqNWkL6Xm2trewtmzZynQu9Da7RYPRkQnQzJHxTcxOW7ZDTR5JEwlIEAkL/C1G1doOEdoPKUwODSBjbUWFuZLePqpD+CtKzcwvX8/FeRNzOyftji+770iUJw0Oshm8wTFwzRoyhbft7KyjkOHjpgwvXbths3MlqdudGTcyiRQnUr1QyvZXHzjEo4ePmoG96uvvop909PYye2w/pqYoXCMNo3nCgFtjXUSwJKnSEvU0kgmmpVQV95rjVxpMPPQ0YP4+nNfM1BPvUOaBOUj+4VdoXy9lm+TStWUjfWNdZHz3byXanD9xg4RaOORZ/6bTZNL7AJ2nkS37pUysmeIDtiH5gETSLKH86iP0tr2XOecrjEFqJPqfP4XLQhcOWDToQyjH37WELHaUbvoR7Sxe4nRrz1OdMc20Wd5mGUcKYm7ZKDNZOa1ArO7T3buF3FyU7uquKJh7f9XMGv0zF3n7dm6hX/UhtpUFsNIu+XS4jMKM9D3ewWzX/qL3zMFLw+egKT6HgSrFDFwk999dRZAXlbyjeSFJoCZUSgwS70jUGcpGVlIxVr2+H55pxTnSa1moXlB6kHJCg2dq92V5UHtIBCg/lK4jJYorUqP8hm26BB1qADt2Hia392kzxyuXr2DsbHDBLVtPPH02/GTf/sn8NUvfQc3b1/Hz/3dn6bMcFF3pfA7v/WHmJyk3nrwHFyBDubuLtoIwYn7H0C5UMO1txR6Q1DtSWF9ZZbytmoxvstLK1glYFXrDg5mLI5StVF7K+5S9KCmlwEYIj/Jm6dR25HRIdQ0GYnfR2bC6ItuU3YRdFKmK8bT8ebzSZrsGVCMpia5Elx52NbuqvWD6M0Bs45X2viE7Sdwp1GYTltOBMom4gatbFkreyiXeSRYk2e5SN0vuhLtWXouXqv8r7ayGvtOS/Jr4EOVk+RURIEtEsMyaTlqjbwoFaj4RWUQy9qcCaW8knzkCWUgkIdSabH8NFA0YmPL2ArAsi8NQrMQ9s/eRWBN2hEIVYiKwKyfZbJUXKQLhQApJGB5fhs3rixgZytnmXdSiQG+I833ZYh1iGVIi1raWKBcwFL0Ii+0RscdT7ZAuPiC76Wc0
aQ7hX3Wmko7KBAsmlWYhqrPupPmFIanUdAS+77I9iuVeT0NAvWteF5H7cVCzvpCSQM0uuDXqme7YFZbT3Yf/5kskBEmA5rMWyV43a5sYS13mfqbgJraVbz+d3/8b7AC2Ed+6v5nnbgrdqJbjSuLmhUkkyqGUEO9lrmAQFfnBWRlmSpOQsOCrToFOI+qtMipj4ThYcP1USFqneEIOzDsjSAVSmI0M4rp8SnMjE9jJDNiSz7GPBFE/TEEZE3wehMQAst8pog/yN9l8aij1bp6p8XH6jruPqWb0DDFTgDBN9+B1o0z2KptoufPIdpKIpgKGaAOEnA0WZ/1rRXb4WkhPRDDTjHPxhSLFdB0bWO7fB2Lm9/DTu1NbJUvYv/UQQT7TpHx43j4HXF85Cc9OHc+R31yicLmAqr1q2RsCqi+I4jFDyEeOYHf+Q0//vTfPwTPxqdw9slxPP5DAxg5MIlI4iRimTCvC+DIzFmcPnwaR089gqHRDBL9ZMTeGglqm4o1iMJ2DXeurRsg6HQLmJu7TfA8SYCVwUb2DgqVHMZGj5JJcxgmKL61/CK+8t0/ZD00WQ4YoUFx5+oCCSyLQKSKULqJ/hkfZs70Y+h4HP5hEmwkz95tYXxyBEMTA1goXsHNzeexmSOj3KK1tHIIeQLSaKKD+aUD2Frox+paHbeupXBg/wfQKB9FejiBk2cfxte/dhXV0hRSSQLawTYGxgr4kY8dxeihKp77bAZ//BfLuHbxKlA/hNWbg7j5Sggrlx5EterDxs4a+6uAYpNGwTbBJ2ils2TF+hJS6SEEYwSLBAbKXqFchQUC1C4ViOz4cDhowkRWpJip0VACfTIplVWTAEJAOBbT0oIElEGvnZf+CUc9dp07UUR/fwbDQwRQA5MYTE4gEcnQuIoQYPiR9JEhPVqOUcrCAlKpjCkYqLNk+UtgUKxRn9WtfJosI0+cLFIhxohoj8opSKbW8oYHT5zZZb2/fvvVX9sNMxDZa+dHs9x3BaxAjDyzymbgVkou2x1lLnYxTGLyQ57ZXUBA4aUAfskV/aRr5A3U9bpuzzNrMY67M71t+JPn9G69Vs+UIlZcpABmj9JJgEJhOVqq+sTxR1HKNynMaDTSSKxWanat7hag1UxgpcGSBV+ksaXk7hKqZnywj0dGRiwBt2LsSrUyDb5x87wL5Ny4cRUnTx80gBuPTuOrX3wNM9MnkOofscUpNBFrgZZsIhlje0dw+dJVTExMWLjD8tIahofHLANBIp5EbqeI0eEpU7pKPK64uoA/bApZs3pv3b6NudlFHNx/iMqrhTcvXcZDD5+33LkaVlNqOsWWZWggyeupBRjkiYtGtQADaYHgJRmLIB0PW9xaS7TGBpeH+MTJYwQUN3D75nX4EwQvVNZhGtUUbKRfLaSgdiXCErhTu0vpiHD5XUDW+ss6w9lNNPKj9RNPWLwpv1t/6wyPwqGGkXe/W+iATuoynbc/VKS6l++Rs0A3GUhguY2WRB/6XZ/5HF3rKGMWXf9YRnk6DTAKLPIH7fZoltlYgryjCT0W00q04NApL7Jdz+U9qhPfrefykz1LdVNZVGTVVzQs8LoHZmlL29HJaKCLnOv0x2k/p0xWNjatvHJKzbUXZvCL9whmv/HFP7I2VLxjj/zuIpjsI3BwN/he2dBtKW5519UuznWKj2wSHCjfqZsvU45mhXfI4HWRzxQXG6L+U2otTfxTeiOBCoVMOU4b7mwztYloSNVSCIYMTW0KQ4hqMYVQiKBzhzTUMKfP6loJ4xPHsG/mFD71t37G3vWvf+OPcOTofjz6xP00wpXfOYdvf+tlvP2db6P8aJF3G3jllcssaxNHaOy9/toslhaKJL2g8fVAv5ZrZ5vSuJfcTUSjmJqaxMy+SWixIKV/0kpdCi8aHx+l7I2xjZtmCCq3qdLYaTi6WmkjRkNz6nACvcAmcpUdcya5STB8OmlVk76UA7WP+lteTk0ek6wgUuPGKwyIKr7YGV0QnQhUafUo5ZmXxaKZ9H40KgS0FRpfNNAbdQeIF/IFB7CyDpJtlORG227hILEe5ZnFO7OdLWWcjC7yTB8xhJYQbrIfFSsuvrHJsKQ5OQZs5TbRKnlBfWyyUmCW8lD54LV4gTyx4m+joV1Aq56UPtKkNIu15tFWc6O8lGfWMoqQlrOUVbO31gloiXlId1HqtrAyqLTCaNY0klVDvdxErcJ2aCqUQvOaaCwH4uyLNA1nL+WnVp+TziSO69ZInxXSp0aj9xYLatDw0mcnPEZ6VhisTeZRiJGl3WIljS7ZJuJvyQ7tWmgiJoCdJg1EtCCNwL1j6Op+0WBbcTgyDAhktWui605lAys781jaFpiVA6pqsuHn/ybL2f7Izz7zLFsKLj+LzEZzU0FqAoAnyMIGNTmQJbXekRBlh7EQsjik0MXUfQTA8ixqMeo2FbqWbdXSjpqIU69V0NUEDAIPBSPHwxELIM4ktfdjIDmAsdQ4BhODSLJDIkTyIXa8Vv3yqQnZWVo/Xyt3tGjRWhwHG4zFJXF4qHz60HaXCHRYvrsPofva02hut1ALrhKspJDxDWKHhF0n6K3y2FROWUqcJpQCYhF3c9dIgNsoN9YtH2qDFp0/SEUYHiKgHKfiH0SrmjCQ2+ht4xf+0RGce6SJCgFvKnMJa+svUQA9iUNnpnHn4gP47X96H/7d/7KIpbthfOQ9/4JlmMT4/XfhSW7h5myWgLCOQv0uiqUluOrKKxnC1MEZDAylMD7Vj+n9owaSBBC0QpiHpn4x78HEVNqU7ZVbS3jb409jdfMqmbROZXwYybgLF69/A9+79hUEIzIqusgkhrG6sE2B1IfRfT2MH8pg4ugA+qcJylJArkXCyS9iU1kWPOOYGT3FdkhgbbuItZ111IsVJOpH0Jo9QZA3wDao4X3n/z4u3foKZjdvoF0/RfBaxWZ2hEqBQjcUxUd//Cdw4Y0trCz7cP1GlgwzTkKeQti3hrpSsxRPYO3ONBV4i+2WQ6mZx1r5BWw2yliv38JOax5rhWWN7pBBk6yfsgW4beWWfCGLbH7LhGCLYDVMwaBV0zTRQTGKXQoXiwXibzbb0gQHBRmZJ5ZOElAGLbanRqNeGTlqpFFNXkom+tGOrSJO0JFIEiDxmUGCWBOQGm4ipO652E8qiZsCyStg66eQoZXMgrq1ylOfLPQGKo2i7RIKykPZF/YiFAvacLsUkYSzFPHB4/cGZn/t13/VhIQErXZtOtpH/nGbV5Z8EKRSEYj1kV/NQ+uABoEHG7blB0dQcuc/ARqBGK137uBtXqOCiXJ4TmuTy4rXMKABEkPPupdX6P283wEV8vg6hqXACQ13ClM39k+fpKYNIE+wOJAZRCqRsfQ4in3V9ykCWQlDZRmQ0OQbTMHt2z/D/ikjFAlb7Nbm5oaNPig9j4wGpfjyUS4NDsXYDl6+K45btzZx/7mHbeKNQpa0RHahlMPUzIx5TxSnO7Nvhvxaxd07i5ie2o9sdttSB929vcTz5KHRYRPUqytbmJzYZ96HVDqFl178roVU5bNFjI1N4K23rtiCCpGw8snm
bHJEvVIyGlNGhu0t0letaZ0kA0eZXDTzORWhsc02DATDFrPbYlkVZ9ul3Hz+5ecod3uUZVRKlGdSYMInmkDW0QpT1ldSmGp9KjjeIyNG/WAbPzsE8VdHJ1m5PJ0CBc616nOKa+tvPVN9aICP/fd9EElgaTHTLLO+e2SA8GiAl9fqaLp9F2CLloymqLQ1ZCllpXoKGFLc8rOza3Pqsft+A5tS+qI7lsUKxrKqOHo+dx0dMKu66B08t8sEKo6utedRVgrMSh/IM0tcYvpJ2Tmc0BldqJudZ6hse2DWPMdUrNT3dv6XPnlvYPalr32O7xJIIf/wmeozD/WUlyDWJ5nAQmkykMVQUg4Y4OcLLJ8oGyYAyg8yrxkr5DGFfASp76KBKOVaCIFonwM2LP+6hqgdR4w/JCBIPc1/ZiizIQSCBCi1WE5UebEJGJXAXv3fYN2WVyoEq2UcO/wQxob34bf+99/B1WuL+KW//wsIRYOUV138g//+f8D9Zx/B25552CYJvfDCdWyuVTE2rbRLM/jS595AverF2fvPoE3ArhRRChXSULuWrM3091tIkIyIMo3TickDBJRNk9VyLAhYO3yuvnEcCJoUVqm0sElDstW3jdgAgRfxg1YgRZeglf0mEKSY8mjUS96hPPVRwPAa9Ts/8Ei5ynZQbnvLRGC0R5hIvSiHnNqg2/WT3/oI7ghBKPtdMmBYZ82LUfmkN5yllxVjbhRtAEpdI9ZQW8t+1B9NuBRw81MWyWOu8AYBU5VDRr0cFQoBEKEpC4HkmjZ5exWG0EddpTAG6Ral4jK+4XMFY+2v0bnALKmetype189nB+V1J5HvpQDbIhCfu7VBo4VGjJc6ibhNK7SWiqBM7RGkF1ErNwzMKrOUZIfoQqGg/f3ENaSjGPVSgGBWGQNqDS3TTn1czdlRqQtVNnn5WX22K40r4jKNemrxBQPbqp5t4k/xLuvvkyc2iInxYQwNDfJd/XBWHnP6fM9zqzADeb+1GJPCmtTGjW6dun8J8xu3sZG7Tdotkf/Z16Spn//R/373XT9483zgZ5551kb82VBCzBbHQfWtIF0Nz5JcrLnlhlfDO8qMrc0O0W5D5OJqm7BACvCSwQgUpKD6aFmRi00YCrUr15sUleJ0ZDVIzMT9RO8Kcg+TIWnNpRIJ9MeTSBAgJQIxNGk5aEhMXamAaIECDZvL5S6hUnNvYtR7ComljyG4dYbvLKDlKQB1AiISridJ5VPIE0jOYz23RoHShDfkEFS1XcaZ6V9CPHSUxDlOBo2w3CmCwjjrRGuIwqPjbmF25Tn87U//FB56JEUg+XlMzpSRrVxEu3AeU4d+Ff/2n13HP/jMCgLtd+Hcg49j5tBxFF2XETv9B4hPiWHlxld8GpnHlyWDZy2XYKPpwvzKmwYGlFrD6xtGOjVNAqCgYr1avTybLo3t7CImJyextVUmkZJq+8okyDiJK8375vDS5a9Q+WhiQR1DiRRmeG22vIOpY5OYPD6A6ADbguC4pBn4RHRlGhgKDRkgcSutWITKdHNrFusEsvq9WtxGojmN6u3DFIjfxWjiEM9FsF26gIHhcVRbHWSGDmKZwBal+7Cm4d7lCss9jfnZPpw7+xF8+7nrePyJ96KebeLooxeR2v8dWpSkLIJa5TMudO/QEBlFse8K6Y6ywxMlLVFwhUhr7SJ6dT9SwYME/6sUUGQuG6oPWTy2LOp6pYqdrS0yUdiAjoZBNKyh4Swxl4ZqxIxVGlZiNNFNjPQVT8hK7KMgIZhNhdAIrttQVp+PWs1D+uWzbbYpQWmTTFXteFEnUFa2jrA/Yh6UoIZr2qTJJp9F4S7PCmUgASCFZCVvwzUeAq9AlAqIyksGioSkhMDBo/cGZn/9X/xT3sQP3KXHpcT1EtqRBAAOkNXkL4FZylTu4jv+rs8qjO7lPfIcOBNpnPawR1LBOx4tKQMxtLQy+4AHeaso48mfUvICtrxBm96/ezRLm0hFz7QhUD5fIUNKyRUJDmFocIICvQ+Z9IAN78uTMTk5xfZpWSyd+idNwKjYKwGbFA2O1TWl6wqznTxYWlZO4X6eT2NzY5VyooM8AeT4+Ajr2EEkksbrr8xhmApaXlhLNE95srC0YNdqMY7sZsGGQYcGB1lot0380qIXiwuzOHb0JBbmtEJRCQ+cPcNf3VhaWIOvL0BQu2LxivLGPnjuIVy4eBEnTp7A6NiYLXN7YP8+XLt+hYpQY3kdSxqez+YIxGuI8vmzc/Om4CMEsZ1WHSkKdCW4V77akpIfs03LpTKGRwfxxtWLNNI22Y89xILsOLap+soZ+qdhKhBpslNVMO3Ktnf6V6BP9LS3U9Xyj3Z2LC+UvDYBy03koHvY1Lbr+j6T2XqXFLijxM0rJccEr9FQpC0vy89alck0BOW+vK/OpBX94yvU9fyjXbQiYKjz+iwFrTLrGSq33qH+tuwJArO77zWdsrersHaPVPdfgVlVR5v4QOVXfXSveWSpd3UUuLVJifZZ3mTeqIu5WTnF3nyOnqUwN/PMOt2IX/qxewOzr3/rSwjKa+onaNajBRiaBAxs97Bmaof9BF9+eKX7rI15Dcsgj5YUe7BDUEOGFsDVSKSGfcOUYWE/AYOfx1SQICBMsEGgQhCkeSRKS+inXJOeVHyssqJIJqgX9Ewl01e4i17VKMsg8qBAUnvrygreurSCYt6Fb3/9Zbz0/Gv48U9/DA89eBRf/drr+MrXn7NwlF/5lU/hxvUcdja72Fgj7zdCePf7H7P++73ffh6To0dx5MgENlY3bT6A2lDr74+NjfOYwsCggEuQPCMg6TNDVM6D7e2t73tlbXl71llx1AJFWvO/TrnQ8+4gNUGwGvTyng7hA2UtOzbENoyECZBCSslFg7mP4MZDoMv6yvhUvLcmfDlg1pH3Mqy7HYUXyBDyUScoOwRQziulo6wYVohGh2J6a9QfltaTp9Rwyj+r0BHJV9GHRprkfeYnPl8hDuJFIEL+UPkj0r9BB6zJ0LPVudgPIQrgAAGrwu4ElLV7DMxSb5FQpSvEP0Ks4ifJXuMfFsTCMEgTog/1ZpC0oUwGtrqk6J6ybXkzi9vXl9FrUdf52K5txf56WB83qmUaMbUGr2PbsLDOsrDEaPpM2myzTl0ayZ22UnOVicO2UShvIl/axHZ+DTv5dbaJz4yolNKtZYYwNjqJ6akZfh4k3YexU8qpINZm4lF5qzUiJgdAZmAQUwSzyaTCAcNsm92RdmJAp240+lgWtbUm/7nkleX5KvXmWmEei5t3UKyvsF0a1G9qc+BnP/oP+LJ72zyf+MV3PWtDjRJsbGQTUbIs+GIpJOVI6+nFJnzUogS0vMbJT8nLTFPrFjIREbcCe9vsGLn5lZS4L+KDiwq34+2iSXDc5D/NN3Br4g2tCq3BX+3QiuvVaagq8FjWRJedScHEzsgMjSER60c8wgYKxWgBEaxQPLcUVF9twpOoIL36QXTeeoBSiUzSHwBqIVSzVID1Et648T0yiIuNPIGpMSURTrGeyp+bwECU4GvjRfjCBcQybaxnb2ElfxeVKq0bAkN
1DiI1fOyjP8Ey+HD6Pg+SQ68S1H0PtdwTyM79FN7z9JfwystFpKNThC1jBEjjmLh/CYuu30D6cBmblXkqeU2iSyMSJYjoxIl6YijVFORcYz2406Lf3GKHrpfJJAOW7ifVT8EW1coxBAMdChASoDySyyvzBAVNWscVTE0dxu07L6LSVF7gEjK0kB8+fYrg8gYOnaMldsiDqq+BUo+E2yxCPmk2BdsJCLiC6A/3Y3AwjVK+hForiyL7yxPsJxMQpGwPY/vSMSQjZdx/5v3I0eLLlWkMNEaRK72KdCxjq5rdurqFsYHT+Fe/+V9w5sjbCYKS+Pyfv0EBeB9+6z/+CZnsCHbWR3H+sXM486AfxeoK7s6u8X11lJpbBMqTZNpBhL3DFOhpgtQmilkaMGTWbsNPpVyjXtdM3gBJj1TR9iLij5lXX1knNCQnK1LpnxSeQaql5a+lITUyUDdmk/RN00DyBXzSPda3eZqyxVIHiQlZ3C22C42gLgWcFBwBh2Kv6mSyOsGwYq6l45W/MchdcT7WkOSTpG8A6VAK6UiCgCQoVjDPhMC1cgDrOsUfOcOSPRw9fm+puX79N36N3El2E8tJcPCzlLQ8spprJUCrxRL8ymagkRTylwFZ/qbrTOBo5x8W3d6t7xKKAhZ+8x5R7OnZPKmfDczyj4FZKnkpLRP23PQM21QeCuA+vYhywOfV6AlBCQtYKbXRbQVw5OBpi2fusq0jBHNpgtJigQZgNmu5KLX04draqsXHasGEXD4LrfEd1VJYfL6UotLt1JtKnN1Guj8JzQpPpOImI0KhBMFpkfeexMrqAqq1CvozwzZ8GE3E+dwC1le2WIceDh6Up6iDteWcxcwuLc9haGiY/FM1MKOJZdo0AUaptZyyFlkvHw2fJAUqZdTGJu6//yyuXb1io02jw4MoF/MY4THo81rogOLwarUagWuM7dGwYbwQjeFMMkFZ47c8ilrKWXkV69SwB44cwO35G7gzd43gx0PjPUT67lqZpfRt+E59LiWqTmNb79GC+k/9oqMtjSnjTEK1p50nqYQVBmZdTgUmsE51Zn0kxc+/rBev01HP0LukA/guZ9d7NC7CR1Geq/fNi2RDo84QqTajKTuKVpz3sUmlQexeKy936QkBTtVFhrvepzrpPTYJTKBT5eBuxdJnftkDs1L0u68y+tWmZ2rfA7NmnPEoj6jAuXI66927t/F+AR7St2hae6ePO3mT9K7v9+qZfePbXyZo2fWSSk7UCR5aBB80cENU9j55vfwB88xrWNj0KetioRpkTE+D/EJZIAAjo9DiXT0BkytKexXqjxiYkAdPu4CqPK+aRS/gZIsTsX3aPWdESp5dXad3yPOlJcwbBGzrG2XcubODUycexUB6EitLa/jkxz+Jj3/iGfzuf/4LFCsNLCys4OlnnsLU5DCe//ZFAhxRShK3b1/Hu95zHFcvb+DP/vAlvPu9T9PgquPihZvmGBgbHceRo8csK4higGU4KA2e0jXOzS0b/Q6TNwTKZcjFEwnyshZY6pJPh9juCkeh7CWvagXPWnsb/jDrHkwSjOWs7QRiQ2EPQbwmtmr0R+CGHUWZxdZkOxB0sd5O7Df718Cgds2+12iO0pR5yadtlErKasD2It26iGk0glelYVmn/rLYUHWsZCAfpDjNPaPHZPwu/bOKbGcPCGfM6aZYehkgIkeBxgDloLyoKRq0IfW/RrrFV0THfZQRfTwnWaL+Vty7pd0Tre+CWTlQjEZZH9lAPpYlRB7UkuyOYcnryIOzc1ncvLZE2zZEulFYiXLHBlkPGhM1vpPP10R8hUyRw6zepXIN25KJ65vENiUUKG+1aFWusIN8JW99l7PV56psW2KsPmKDVAb7pvfj2LHjNl8gQfwi3lEqyqBGCwJO1g2NSmnitZwGo4Nj7HMf+cNJy6VsCrYqHOsn3ldfeVwE86y3Gc3K04wqqt0dbJbmsbwzS/SX43Xqc3WJC5/58K+wke5t83z0M489a15WImgJUxcb1EPh6O0FDUDUXFTIfKgNRRLkaOlXASt91hBll9fZb5RCshjdJHavPGHyVnm9aFPhNlm4FoEsjTaCJTIjrVZlOdAyqt4gC07C7ZFY67SqKnWCvLry7tVIjFpilkRAQR0MxNho/UglM0jG04gS2IZCUdT82+i8/E7MPQdsNZcQio8jWNEqPHW+GxhKhMxFL5BULxFglFnrBq1ed4rgYxirOyuWrFkpLNbKq0hGY9h/ZNIauc0dffswd62KBx/P4sH3FGiBXcLqkhe/+asxvPLdMOa364h4H8SJY0O0UikMOvuQxxVEjryMKytvkKhSKGeHCIRoa1ERexrH+d5jqHcLqHWL8NSSBF51DI71I5ZUgn657AMYHpyg0DjGdrrMd/aoKBXTxwZkqbKFGgXUDAlsAKubbxFcb5NhPHjw9Gm4W1VkRsl5qR1Uw8sEsmxLTwkdj1bCokBw07rttJChZblvYpyE3MY2DQoEaZ3VNnjso5AimCyGUblzEJnIaezfdx5X575OC2qJP88Q/F/BgO9J+Ivn4Qm8jG6TgqUXwfMvfd4E1MKNPrz27R6GYu/G925/A999cREvfn6YQH4/fvK/O0omOY386hkMjw6hMUcBRoHTZp8HvHEEXRMYSZ+gNThJo2gD6eSYMYCbwominzQZJKiPIhKI87wXJcU8U7pL0Msyl4JVKjcNzUnw7puZxvbWtsVo6toq6WpgOIkAhaxAa2aG/dJXoyGWp2Kr8hkEBgQFFuemONhKjqC9ZLwhS1nKUgKoRaSmuPIR/wSSFB79FG79ySQFsNLUdQiUKSB2shaDpIk/ivtsESHef/Yx8d0P3P7Zv/o1R9FreEtKmnTh7AShFJw+CksDr35yh59QhrTu8QqY82YW0YaRJeUp0Ewus10kPMWzuka5R6UUddmeZ8CZfKDvpDLJcAlX3syfTa7b9SQtAyJS0Hy+htkloJt1KQovdqhED+8/jkqhReMzbp6nzc0tCtCSzfqXArxDpaccm2fP3kcwukpDrIEhKj8ZJdEIAR9frslXuUIWqUwM2dw2aZ8GaEiZM8izq1tsT5WX/UPlpXCANgFFhQpaXtVYJIkewbgmZCl5O+E+5u6u2CSXna118tYIK+jDrZu3MDk1gVu3biMRHbB4bK3oNTo2ajkXVW/lBFV6H4VFHDgwTQAwi5lJeWkXMDY0gBQF+SrroD52wJ0zE1uhA1rqcmp0zLzfjY7Afccmncao3E+cPm6G/IsvfwPBKAW81jknGPH5QtbQ8qjIs2EeJ8lmyVj+Zxfxs4ZQZUgEbZg84KccrJAeKZZbDSpKGsdlyrqgL4ZCTp4b2s9VTVzhs4g2texnty06oKJl/6h99EwpIBGJ6FeAQbQmWncm/PL5mjchHUHwyScZOHE89IKdDj2QREwJeUyB8xlGL3wMy24ZDBRyI8RKZW5P4b1qNdVN9GkKnptoUL/ZD3aF84M1g57F52r/b8Asd5vcwvN7OXoFIlU+GWkGZkXfBLFae15hBsqjLMPt7/34vYHZy69+jfpNE7gc0KOQCn
nwFJYhfSeDTGBWS10rHtAyg8g5xLZVsn2FHfgjit/zQDG06ptEOGryIxEKwxOPGRjw83oNwlKqGViS3rUQq+ggaXGL9WraSFyzqgoFUCA401wWf2SEusJNWr0fB/bdD62gKd02OT2Kn/27P4rP/umXrUkVn76wuIi/9/d/BH/59Tdt5bvJGdL9qmIoZ/HIo6fx6nNlvPj8G/jUzzyDYn2dtF/HyRMnyNdRGpA1A/R3797F1vY2bpOnR0bGCGJHTT6UKTP1Ho0QKHuRaCuuCbjU4wo/0Iz5EGVX0N9HYzCPWpV6K0O+9VYQDBPERQlkwx0aem22tzPqJDnv9mvIqA+BvpDhlCaNS3ax9bsmptlIA3GLwt86TY0W9SwMSQtd1Ls1yxAhYF0rF4kFKuy7LmmIGIMPUYo18bvCAgQsGyQMhd14SbdartwZ8nchk86Q74k/SGhBvi/IY1jeyVgKUR6jsQiS6Tgi8Qj1I3uRtOKiENdkMNG3GYjKEa3PfK6lsSMdiFbaenejhSh/TGsSFemlpxFzNxEJhdrKbBdLs5uoUWf2uSIsV4L0Eae8FliWfgqTVjTioEntAvU0nkijcoApXDO/XbDJgfkiDfBaiWC2jJ1CnX3Uo3zws2+oVzwaHSBNUgbL+dDHeufZh9uFgnnnlTEoRpwUDYaRJI4Y5HWJaIJSRLG9rBTRqq2EJ4OD90quiyclB+BOElsJD9bQIHAtd1exVZ3FRvkujTylXN0xb7Rkk4v9/NMf/gWH8e5h83zsU+94ttuicGyS8kgcbg2LsuFlpVBM2cir8tcp1kGpR3ptloqE4upIIPI6Wg+Ka3NiOsh43EVQGoKUBeqlVPLyWX3sHHkEFDDfo7CXuFaaCw8FsXkW5EGggOkjQLEUMbKwKaHKrjoa7MwqCVGNX6wVyMQkSgraVDSEo2P34da398NTDyNKQp1dvIOl2m20NGHH3SBIqpFgY2i2u9iqrKDCczkCyeu5r2AJv4+UfwRlPldWYSQ0hsGhfWTOMijjML3vAOrFO3j6yYP42X88wYb4Jl56zou//Pwj+M5LXmQmzmNl/iLSYRemktMI1IYJPm9g8MTz6MtssX1Oo4/A0eWR57RoAi0YY90DyqxAy8qdBnhvOpMwpTiYGEI6MmRpoWyG/fAkfIk+gs1N9CdICc0dMmCLdafwIhMPT/bj2tpt+LpVnBhPYHzMg1aKbZSg0knG2WZRRAlKFe7RoVDQ7Nduzo2UaxDDmTGEUhEC41lk3XdIYCMINgYpAJooNrYRrmUQuPgJxIdXcOGNV3F35xJOHfohZKtXsJy9giP7n8DK+iqqnllMDXwUG9svs4xhLN92EYRnMDQ2gNn1l3Ek8zgKGwXcvHUFL7+yitnLg+z/EA4eHCBQ3m8UHiMA1uzQfHaTFFIhA1Bwt2oEBFGM75/EkRNHkCHYMcVOzmwWyihvEXzSCJjDDlpUDGUKHi31GOGzpGhbNJZqWtiD7S/hqnX/+zMpJFhnpWzLsk0FHsrFNIZPXUWV9FYhKPdQaLTyZEIXwQwFW7mRtTCDFhVfl7Qur5JirxutLSqPHHqRBHxUTtFgnF1JkE1ai4CCVvYereI6mbPeyKHWKdIwrODJB9+/y3p//fbP/82vmvfVAICOAgH2XYqT/LQbzqMsBgppcPIwOsDX8axx5x/t4mNtOicwIKEiTC5Q4cTMKuZMQ9MCtdQV3DX8ap9N+bMuup/32nP1mdwsfiajW4yZlt7sNNk2rPe+ySOIBNPkZT+VWpnKPYT9+/dDK8ysrq7bJKsDBw7g9u1bKJSKOLB/xlKpKYPA/MK8KcSdbM48rpEYZQxaJjzldVR82NZmzoCWPDXFUg5hKletnFWv8TrFPftpZM4uEmA3LZtCLlvExkYW09MzLDFBGIX78WNncOXKNVaENaN8S9BAVn5ZLegyPTWNAmlMKZICwQAVRZsAoYxoNEyQmEWe4DoRDmJyYtS88Ysss+IIBb40oUTtqqTtUtSjA0M2iUPQTa0ob0U0EjGFNrd0By++IDDrw+GZfVhd2bA2txEtKUC2tTZ5Pa3R9dnpSv5O64VGF1Us1tfybIMUwcsxHNp/DA8/+CSeeOTtFif56EOPU8kOYYKyJBFNU8ayLAT+MgSVsaLdFNDjuySDJf9JdBq61eQPBwRS9RAkCaiq/3WO/62cOrI5naNtDr3pOuFVvsyUmHmXTZmpTnoO20nfeZ2d52424u45PsZAuyCyKu68U/fxOTxlz+e12vfArEYIxB97nlmjdV6sZ+h+ldehZ4FQZ9ckx5Z5wnv3DGbfePnL7ANyFPWJMgmIt2SsuAlwpYDDEXlVNbFF7alRDgF2lpm/ycmjHLteglpjQlZE+ZrjkShilF1K69YmXdMM4D/+zHtNC7Pgin9UmE6X+llgtlbXiFGNFdJzxPReytE0pqYfwq3bK1he0hyBDubnlzA+MYYf+uEPEvA2cPb0aZy57yD+5b/81/jwhz+GSrXPQmMUKjA2PoIvff5FfPhjj2GwP47P/9EtdoILkwc1slBFfseHEfKu8qbKkN/Y3MTmxqZ5KgXgQwQ54mMtIqJYWRlIPr/PQiSCYaWnUixwhH3mIZitWBiO2KIlY62vSwDlpd7OEsz3IRb3IpHwIxLVAjiUNWovu5bgtUeQ18d39imkjPKaQseewWtsNS/+3uvKM0sjksadVsCyiePEJcosI53QZDnrFTkZNNeH/cRnkzxIC9TP7D+BcN5luMaW6+d39XM85kcyTr1K0NrnkqGha7zk9SD7MI4o5UKYciIYDcIbClD3UqewDSRHfErbR9o3mpdcJ32aI4Hvl5yV3JVf2cfvMdJxVAaN6Ji01SXwr/Ha5RsNLM+tWvxyyEvd40mQ3mKiQj5L9SVMUI5jVshCaVQ3PrxNQ1ST6sVcqmOjXUWVOKpOumoT0/XYZm5XkPSpiV6kM+7ywCp8RHGyC0vLlt4tt7VlRpTCRtSOah/JN01W1BLZzlK/8pY7xoGcFnpez6UJfF0aFDQwOiXUe3l0PEX0AsRz7hwqbdJ0s8j3Sv87oxKKR/70hz4jtrunzfOhn3ziWROWbA42rwkBs5jFQKyE2FUdK0+VQgdMMaqDSZAumcNs8B4bXhZnS/fY9YrTI+GJi0lI37feWVnzAKjirLQI3WI72Kmy5kWvNttawkqEqRNFNhotFa2IofKpXK1Og2CTOxVGYaeCjYtHsHGtiFvXbmKTykhxu51GHy3ZAutAS5CNF1T+3LQAaQHr2ZetrIdHfgzRQJwEOkpLpYFghI0YbNFincOj59+HtcU2ot4Z/C+/O4Ru6AV8/Qsl/M//uITrFwbQz2fVandRXNxBkkSa8I2TkEMYmLmF/n0rZN40fBDj+8m0SlvENma7KM4mEkqQAMJmEdIYJ9CKIp1MIpnIkIH7ycBU3L4OgTHLnVDC9RJ2NldIuBq6jGJlucB3d5FI9lOxrxjoSieDGBhPwxVnvXlvm6BMQ7caNtku71hIR9gdhrfhxfToFAbGMljdXrSUXRUU2LZRPof9pAlNrjJizSF477wDGzvrJ
EQ+P3oYmcEB3F26RGLs0EqlUMIMWu51S30So0BWvM/M5FGsrRHAENwJxJWLq+yLBN7+tg/g/vsP4Etf/a8obHtRyLawyGcVcjUK1DIVA61RKnkN34gRNVSsmZsr5Swt0h6qpLsGBYCvP4ng0ADcFLbuTBxTY0MYJmCQB1CxRj2WTYbSQH8ao2MjBPg0UvhMTW7c3FrHjTs3cWf+Dja3N1neDra3m+if6CEzFqQgLtFYUFhDlQaDk0anVpYHQGQu4VyiYCyKpKmEBmlMHSKYLCPEZ9viA1T6ltuQ4KtLA0y5BxtSwhKO5LAGGfvtj37IGO8Hbf/Tv/s1vpPCQUDAQCoBqynuXVCrI8Gsl0JOXguBWfPU8jcHOJB3dhW/8ax24y/ndx0FupwUWxSq5FWBWQlVshoFFu/bFbACEvxv/CnetJ0A0LwZVOrKWqDBGlWyUeuhPzmKAzMnCFy3aUBkSM9RrBHEtvngY8dO2FD+5sY239Omoh23HJVS9PKuZDW5KpEk8K0auGk0CdQiAQPEAV8ElXKbgLOI4SHNnFbhYOEDm+tFA2mUr46SqhHIxjT5L2pxZJvrO5ZnVrOC5+cWzbt/V+B1J4ejR47y3UEUi+SznW0oKXg8Frc8tUpXpETiA0P92CSvDSssp7BDYBigARohXusSOCwhly9Y/awNSQOauKoQhpH+AZuQKJlqqd0qWkmxD7V2jcrOjeXcIh597GH83M98Bot3F3Hx4iKN2AT7QUYY+4CyV32jvuSjrSdJCXwWFRexQKPaw5FDZwhgn8ChAydx6thZtv8YIoEBnDz2AGYmDhHAPIwzpx7m78cxNjKD8ZF9GKAxm0nzOhrxHg8VFp9VZ991Wxq29hPIB/kOJ2uNTcIjIdRqrB9pn7rVDBgWyRCAjtolsyVvnLQ/vM8mzYhuScNGk6IdJ0zNlDnPGejc/U39LXJUJQUuVFc9eQ/MahPt6RoBWfGDQKxXu/EG30WadmKAVQbdwCKy/dR2Rs8CslSoArIK6ZDiF83/8t+6NzB76btfMX3W0Agl+0ZOF8VHalKf3q/UfzZKxN9kCGjoX2XXfACFDyiRvlI1aaKYDAeNHmmijGL95blTNTX5hw2vwrKdpe8oy0k7Cn2SId4ic9pMe4ESgopylUa7K4Shwf2Yv9vGxTeuY2FhydJmvetdz+Cd73g7aXcTx44cRGYc+Bf/9PdZhhDe/4H34Qt/8RU8/uRTWN9YJ18O4LUXr+HTn34ML3xrHt/5xlXj3ZGJsIGUTiNGudpvHl09W8ZpkryqyUEOIDNhYfVX42tURB5BfRZPKRTMG5Ted5GPlZs0b23aQROtniZ4epAecSFCfo+Sv0K8Vn2sUTE5HiRfgrzGgwDbhe3Hvu7zKVWZYkCbxsMugnoZZTLUmi0320rlEiU5RoeAJynD8ISG4JVtpUbDTZjFiIo4xlmSnEf1mwiUhEMcxvupV5IC2GHqdRnZlKySoXxmgPwiUB/SSl2asMfyK37aE1Q4pQ8u0odkshx9kruidZVJstlwFstosbTktxCvi1JOBUlbFgJEINzkDTmFhrxFw5xy1dX2IuxT2GWE9Q6RHh0AW65R9rGtzNCSTGfZNSKozFQWZkH+E34SmFU8sy0YwfYVf+iPy61Je2W2T439RZnMftco4yL7e3Vtk9eXSYdy5lR4PZ+p+VVGh84CFTUa65rwq0l+yhFerhQNh2hOlfi84aoQu+XY41m4/BXudctcoNyyTS0uQ8NE6ekEaJVj+cfe91NqqHvaPB/5maefFWgUYZhr14SAQCMblw2heFkJEw0pqYMlfSz2h7t5dNQrvF8WhqXyYX8ItJmPlp99lD57IJYf/urIXe1nqUb0XZ9ltfCDUx4eWZZwhcKPzKxwGUtXwnMa/u0oNoYnq1sdlG7ch1AzjWMHD+Dsww9gdGqYSiyDQ/snUWuFQb1F4FvA0vJNbGdnbUhgov8hgtn3YE1poWjtBkJ+C4ZeWLlF4f8QSrk+FLd9ePQZN8aPL+F/+3dfwe/9GyrT7HEqJA9OHTyBS6+8junYMKZGBwhukvDGdhCbfhH+RAGpwClMZo6SoAkg3REyIhmD7aJ/GtpTSjEJtSDBiNKqCMAKdEW0hFzYRQttB4X6ik1wy2QiVP7zVFyyxH3YWK8Q+CUtFm9jfREhMvzoVBr9U/2o+QnwRXCKP+4SVBHgVEkkbiqpmDuBsfgoJscn0PU1MLd5m8RJUeIlEmkTVUvwEgjXuhWkekMIrz7DupKxkcNQ/zmUalsGLvoQZ2eVKHQPotJaJHPswO8ao6UcY5/RElYGCAJCd18T2xSSbkTMC3b02H68+53vxXe/c5nllkXvQzKZIQ24DEjK0yBQGgxoIpCXQEs5B13Y2chieXWVTEVrkkIrVyb4JZPHBzKsD42VagNrywTNmpiWTlFR9yNOy5hyAzsESnOL81jfWrPQlgzBiBLia4hMhlejyNLFU5g8QquxSSYr8mx7lX0k+lUgvrwQFBAEJyLdCK3veCxD8D7IcqZ5zTLbgcTJd8vKl/LSzOUOX67gf7+PxkkoTrp1lmF+6pH3ie9+4PY//ftfZVtSuBAQSKDLO2uTEAQCBBIUskNA63ho977znTqKd8RQ4lXpRfKVMRh3srnVw07xOz8aH4pXFWZAuScysF3KX7tdpfrs3uvIYf7lPRq2lrdRw/qamFerdEjvIQKn05QHtNr9fssmIE+VJi2Kh5VMXbuEeiQm2t5ELBHF+vo660PDgPeor0kCWFm/jeFhGpMEqvFYP+Zn1y0+NRL1W67DQCCEWCiJpXk+I5K2eFQBAJV7IDNgHlYp6bXVDaSSaYKGCO7cnrdJWCZweTx8+Bj7yW+zsaV0FfuqSQwbG2vkPfLVQIrloeJlJX1s46EMDS7Sa49KtEj62lxbQ75QMNChvid0oszktazfAA1OGXoacm9QuRe0UAmPdfLk9IEJHDw+jaeeehzDqWHcd/ocXnzpeQKRDRrDlBSSndY3lIH8IL2q77TW2U+irSjLMo37zpxHxJ+iQSFfOdu72KaBFSfYHkB2gyC7TtquNCmjIrx+BNMTB9gPESosgofUIB5+4HEcOXzKQjc21nYI1qlciJQl46WotfSngLpWYtNciVKxRxDm0I6VS3RktCHjS4CBZSe9ig7NGFO5DSTs1oFHxcnqs+S97jMiNELkf34V2emrNkfR8iQ3R2s4z5aOMBDLnbrfDLnve2b1O5+r23S/6TPzVPHZPArIUpeTt/X93sHsRYJZ9YeAgTbxpwxNy16hMCDyvEBJS94/yoMGwZziIW2VM7eXtC0HDgl7V886Sx8T9EjesKyW0UIggzQir5bkiZ5hQJagodsiX7houIsGaGxUGzSwCcgzAwdw/PjjWFmqYGJiEg89fA6f+OTHcP7h8wRiftJKyuIsP/+nL+Dzn/sifuzHfxJb20UC6TjL5iF9VwwMdep+HNw3hj/4/S9bqieNrHzwh59mq/uwOL+Fne0t0nwJ/f3ipTDBm2MEqpEFapXzW/yjCWCKkVdbCBArflabL0gacXdQzG9DeUqV
rYG1NDAbpA6ePs5rCQZj8nASDGpoX3MU5PkN+8MIsq0E3uoNB9j3ycnN/u6wP7UkvjyykjvC0Gx+tt+eMSg5SiOG7Ws0ScLQEsIKTdDSwuYqMxClUAA540iI3ERvlvWAXwPs3/4UQSb5QPTVIo/LNBNIlcyS0WrZBfzERgFiIy3byt3NMmvSk9GxMI7onaBSo9fCVTIEJH+Fofx8Xoi0EpSHXjQsjz/rXah3KAtzWLqcowypGC+HfAm2ZdDoukygq5Xl1C7iVZVeHGNGlRlepFftbgrqXpvlZ8lZ7r3wUQFe87j2crxMqxiyngEaBaTjbRr4ipWVYWJLBJM+dZ6lZhtTBrLPNUelQiCd3SlYWsXt3A7y5RzxV5m03oYvRB4JkMZ9Bccj6yuTX2iE7a6wWannCaIpd9gXAtBOuk03PvGeT6sb7mnzfOAnH32WzWlML+8ou8Z22T9iOuVPM8XH1tEvXkqpPvUkzyno2q2hKX0miu1SqbkoHDyEOx4+0NPl0X5kR/J+9aWRBz9IFIjJ9cUsb/amGseukeDTP1YmRi29JwwEQn1kenI639AmobMcdQKnuYeRvVPF5YsXzfOWr2axs0VrYvYmFnfWkS2SsQp3sJZ/nSIghqn+d8PbGcbW5mUUSAACuz2CMwGq0aEjyOc0FBgjgy/jRz/TxHefr+DKy6extZGhAqLSqBXhafiRW9nE9NAE+ocSqHuW0UldRnJmmwwbhdc1gYArhh4fHvRrV1yOwAKJjYynoTx5mxRaofYwI4JCsU3LJd9YQq52C+XOPIlYXisCzMo2tJpHKV+j8mxhZLCfVnEM6wQFw8Mh7D8yBU+a4Lae5f1Ftl2bfQWUvE0SfQidUg/pvjROHD5BBmnj7vpNtDy0hNhf7qCGHgLsuyqZroMqhcSAd5Rg9jzigTFLqBwMJcxjncyEUW9lKYTZy65+EuwiBcoSPK39BJUtEv080on9VNabZFCCDt5fqi7g5Kkj+MuvvoFqwYUHHzpMgl9BsxzG2iafmUyin+BBcT5KKSKAZICfYH+YSjlE4BQlm8fdfoQ7HrgLNGZyZTS2crh7+wYFV8smgg1nBikIgyiQmZaXlzE/N2teWGXRSKQTCESCVAgtZHNZ80RUCGSOHjhEBopj4MAGG4IgueFlOZZYt5gpgECIzEjidWlVK69m7WbYFhR4fQUaHCuWlkcpX+RI0a78ji3tYiiC8aR/3OJ7bZIGif6hB98mCv+B27/6rWcNEJhS5i7FbymHeM5ALc9JoUuRC/Q53ll9dgCFcRsFlYDDLlfZMxRWQLYiLzvP1QQSgS8JUxmie7k3RRcSiLZKjrwW/G/DwzrqWV3eJy8XlYWGqsXmkhqaTa3E/ydOPERQE0C1IlDkxvDgsCmPQqFoSlp5fyXEC8WixRlqRSMdladVAllgKkArrU0ampwaM4Dcx31hTtktFCcaoLLMEbAlDXzN3SXw7B+kYs0Zr8m40bNWVtbQn85YzKvA2dDAKOliwZavlSGtSWNamlhZVJaXSMs8p5hAxSdGwxGsrC3j2MkjmF+8g0iEZVZaN2q2qrxK5JNqqWirHG1t75hicLEj5D23ZYXZTkkZP9EIlR3bptFASWCWjTs8Nky548M/+ef/CLOzd3B45gjBbwBHjpzASy8/z3anjONzHKPfia8T+GGPsZ19lg80GRvCA2cfo7hUHHmKdRhmedpIxQbJ82FsrW9heGjEvE8tm83tKPetzR3Ka7fNPYhHSeMEByNDkzh5/D4MpEatPeQhb9SosMhvyq8s40Czv2WUSMlJXqs0Rgvc9XnPGSGZL2Apo8smFps8V/n1G4/czeDiF9GiKVSjQmcT65AMv/9dm/SHNl2pzYAy3+HjLrrfA7UOeGY5xCv/X8r+A9zSNbvrxNbOOZ19cqpTue6tm3NHdW6llkABCWUM2AwYS0IY5rE9M43hYQz4Gc3gYcx4HgZMEAiJkQSKqHO3Ot3QN9atnE6dfHbO2b//+qoaje3n6dK+96u9z95feMMK/7Xe9a7FvT1cDjoLQOyDA/4cAmyd1vmNofmln304MPvNL/4m8lr6Ugd8g7z0PO20IYTBoiVpebuk5JUzWIeeo3a4XkVRq+Hqu++Ep+GSpRPup8wCkQEGka7lUJy9NpEqp6l4U4AobBjVkyiAEQjIOROkUCZXshMnn7P1lWc9TGZjc8E+8tH327PPPGUd6KRxDJ8dx+2Lf3jZvvDl37NHH33cvvt7Pmxf/dobdvLUGfvMH37OPvrRj9nXv/aqbSydstrR0C69fQM5vuxp6H7kR56Ej8Z28/odAA+8MjdvJ05sOm/LcJGhLi+aUl4dHO4CflSqet89dWVA79bmpuUKWrljzGIC+So53WKslAUgxXg5nMWQjNq5Z/K+0TmHrFaqMm1GVmrHHjJf8eYKN1VVUDkK5NSy6Nhxiwq9RKZxjAgOQGwPw0z7FVyOMSeiPYHIMUBO6RLloZWekRdRKfWYMuYk4s+WN/bbxWIQ6qoiJrrNwf+LJTAQ8n8CffXlLIJKhVOi8IU2pCUAsVqqBwAoCsgBbRgDRpu/PE2bBC0v0bY7Ce8DWX8W3+Xg7wTtTIJzXI6kMALiWTuu9uwWBvvOGxX40Ghn0RIRDNJpDGPHALPaSD7jc5A1Qn3WPb04A+OrVH8+eJFhwHNu9AV8pzAipelSaJTFmvRjagmMDpdZANx6uw5Q7UJryD2lFwQsi0fV7h4yXZvQvPIbekAlwJXRp90DtAJ8Y6osOxe3bClqiSzGduYAPukB0JnLKPeacb7SgvEMFbRQyWcVzvA4W+bhx777ZzVcD/UCzH7g0x63oYGldQLvsmTEehKeYRquMVDuNGCrA1mJU5Xe06EiCfpN5wjIyoKJAz5UMEGrgMNRL7AU7hOH35fR9Hf+9t/0XAFbvtPhoPq+Bg/LUwIBTphgWU2wPYIYi45Tc1gv8cgpu/y5smXHC/b0k49aMp+xneO7trvNgDYqdty/Ztut2zbpDWylvGbrS49ZZLIAk6qM7F2LZHJWa9+wo+pNhPl77M6NMUpsxRrda7Z2umUXH9u0L/9u3urVnPXHN+z65cu2kF3Agr1nJ1ZjViqf9GX5e8Pfskrki5YsLFuhdAbgJMsiZ10BfP5TGrEHSbClRN065/tRP+ye05FxAKjljT1oXLLW6AZMcGzTaN6u33gHBT1FeeWs1+4DYLtWyEcBsVqu3bEz51dt5eSSb9Y76FUBqTAsTKRlbyAfgCtmc+E5O7dx1pZXF2yndsduH92wWAYGG3FeEubsx2w2rEMDCE4odTN71mJ3n/DvFUsznNyDkOlJZGQH9Zdpf9lyRYBjew+CvGnz+Sc8NCAZZ25gvtHsmL8PEFAFW1jK2d1bd+2xcy9h3e/a3u5NW1pcsu27x3by5Arg/JDjCGU8b6e2TmN1JnzzDkRpfRhFhK2dvjHoKce4bcwt2GI6axGsuDYCTYZW5eAYY+MII+YQ5mo6qC3PlazeqgGMel65ZO/owEvvSSUUmPeNlXWUd9La3ZgVNo4B5cwBkmFmMHR8EcE0B9O2LI6
AS6ay9B1hMWxYe3CAkjlinGoWnwCKANqRCGArlgNUZuEqGS5J/y47m0PhgjTFY9OBPf3s+8R33/H1y/80yDOLDPxfvwsw8K5OC9wqy4iW2rRRzEvZSolzrpS/zvdVluCWwfV+L2iRdwfF4mV41QUr8y7Fy/8IxEC4OpjlHP9fB3wnLkWzujdWCkFVjDQHkguDrgTeyJ56/L1MXwTFMva41X53gALreYEKpR1STKaW5VQ21b3ZCNxTyg+LstTuWlV6S6YjtrpeYi6QJ+GM7e9h7dc6ls+Vmd+M51NcXly2ynEDcNeHJ0rw9bErJO3czaPku50h/Y26UFZt+FJh0bMWPHrxvJ09e8Z2MUgll5SLWMm+K8dVTyO2v7dr84vzdlg5cLCRysR5nsBz2t59+3UbYfjK8aLxqNcbnjMzrHnmCyktVcDRUmYWQzKfhVb5qQsN9hkbhVYtrSzaAkbwH37xd+yrn/uChwA06JtWx67fuAaYPICPJCdk/Eth6taaJ4wIDOBwKG0Xzj8NUN+w48MuYPQU9A9tAl5UNnPY7VixkLe9nR073N/3dGFhZHcDA0DhF8qprDEsMkYaay8iwJyvrWza+9//AdtYB6yg5BXH22XuNIbyouklj7bSI7oA/2MvvoFG+I5DitBTb+lLXqIZpx9oM4ihpW/0R2BIL0FZpyv9w4FqcYX5n17BeQ++Ej/o+P/2zH4bzPI58MwK6Ag0BDTt4Ebv0K42gHkFMGj3YcHsH/3hr7m+0qa4iXRRJDgUi6Q8vcp37eAMOaKiLTLcpCdlECo0Z0L7Ak+hxgB5SgcEbBXPOWE8o/CClokFYhWrKJ081vwzVgK/8TDyaayYU+34DlsM2Z3JI4Miq3btWtW+9eqrgEgtFffttVe/hVzsYNT07Aufec2m/aRNIy175pnnOT9nO7uH8MXYKpWafeCDj9hv/86X7aWn32tf/+q3GKMYRksSupP0WPQsBeubK/CIQmCA0H0MnmrV92/I4NIKh1ZW+gBVVdyTg6JQKHgcrVY7fEc/E9UEuPQ6bXfMyBUtx05v2EO/9Ljv0JqjS3aMLO80WhiMXavBj7t399AfO7Z/D7kLz6dSGL1zBYDG1JSBRjpLWQpmowjPj/tGR4F95ZrW+Ku9DqQ5ZFwFm5KEYxhHeTP7Mgw4h3lJgjHkzZRhIeoWmB3zG+xs5VLelucBmsgCbHkbgksUZqD+R7mXYkZluCmP6gSaGGtOEbTKqSrTTyvbknWido/hBWxpfh8QveY4C9FLYkumx1MJCwP2h8Oo7e627cbVPatd6/AsVZwswX6qPqgsFvBkD33J9ZJzDmaFt7i3A1UMY57M7+iMGOAfRlSbA76QvhtZLC6HUAidPrWkh66JhxgTG9sQvNVnjnoD0AQPU9iQaC8Ej8lD61XXeNem6Jjuj4GhdIM5QGx5OWXzHLkyYD89sWjhGCA7BHN0YQl5edsA32YQjoDxo/4o9EaFdkTvP/KJn2ZsHu4V+alf/P5PK15NlpViezTIqkwyhggENAVWNasSNRpsDXUgdgIPawjrxGWaDixeWM6FtRa8GC0f4LBmHkbWrk/Fo8irw3jTcK7RKfL8cDjIReDqORJe+jxORB1cTcX80hxcoAFWnFQiFbO97bB1rj5hS4kNCA5h3UcRlrII61UbdFqWLowsjFBZmrtoS/kXnHmag7e5H4otf9Ha42OU0R4MqFQWeRRxBPBVsqdfWoUQVdzgnL31astBmoqfrM9v2dOPL6N4w5ZPL1t1CBN3XrNQ+RWL5OURPYMVWsaY6gNoF03VfxQALQEnr4Z2qsqSldXdxopp95vWnR7bOHJg49ix9Q2GnR5CdB2UudnuwQBBctnWVlX7nzGD+aaTthWLWQiQOYp3bP3EkqWzaWtAbIgFrDmFgEBoEOCYcZs1zB7deMQeOX/ejjr7dvv4Bn0TAwuNwGiMO/QK6K27ByCRLNtjK0/Z+MopOwQ4xxM5O6y+CcjfsJ7aO7mGsMhA0AiLaZ/2dLEkc7Y0v8LnmaWwxqr1bd+AUMWQyMU3EF5x6/YP6EfZCqkT1qxwHf1WWeHHHnnMVhbXbOcuQN6Xg+dQvmWEP4xEv44aNQwOrDb+lte43W+55ZsuaoNOCqGralNaWlVKkTmuL/JeROhpybhrGxsn7NEnHmdeV2BEhCo0KHC8oIT+DY09grsctlQRwNXfdsGbSBRN6UNmswTAfGi9yRHzsguJ12GyMbSCYhhnsSSjCGIGO6xcggK0KZSTlqq0KVIeAOg5imLC4BmFO/bE4w8HZv+b//ffcv3N9InV/F2HWMDZg5fAQbAZLMR7oMBhmW9f8+AlD5JeUp6KGxOY1S3ETwKhEmrifeUhVOnNEbwp1nW2RknwFL+f4q38+bqZeFhLeygkxVXCkihtrc4YQE9Cs2Dnz2Pg1FQqWxsvlJImjGBWzGud+3K+vkfRyWtawvDQkp+WNbV7VqBuPO7a+vqctZoNaFmxrrvW74wBcMs26sH3LZUizllTKblSRTd2tXEwl8sGhlUyAz1KbqBIRDcoXGWqkHfp4GjHtk5u2TtvXAYEFzFyKqYKSh3PoznzMVJI02g8YM7HgFmMPZRtF+No5/Y1jHd4B752OYdSUm1yhg1a0bV8C7jh4VZMF9zrG0IxaCluhKKVVzpbyNju0T37/Ff/ECO5Z6FBGBmTZg5mVscI396+Ad9BP1q25s4P5lShL1PATAr6fOG593smkWxavJJwMPvIuYt2YmMLwFBFOaAgMAzOnDkJL7aQcSn4b9XHsw+YWFldAbSuu7HYBjyogpk2L2nXvYo8nD13wc6cPOMeuFs3b7vMUioeaQBNoBS36OHBIVxK16Ez3uPQlsCs5D/nfZt+OSdQpjqPf3iJzh54kvwbznGV4NcE5+hLB/UP/rr/rP9fMHv/N/ECJ+uKP+6ZDcCsinzcN9z4TbT+sGD2i7/3r60/GriCF5zV/gx5oMeATIV2DXsCsHyGxsboHa1caAVD7ZDB0AWkChBoWVf0og4pnaUAkNRkBB4aCUxpzn0g6YyPoZadFZsvWKTxU3iDXDsRy+VP8HnV3nzzrtWrPdtY2/JwFpX8vn1j125cuWOnTpzzORLIeOrZZ+0b33jZSzcrDZ6P/xSewdA/f+qcvfL1S4Al5F8IuZ6OcY89DKM5e/Lpk3Z03IJ2lMtUskKgduDGXKfVwfgS3UTh2XWv3CfZrA1hQbEbAAzPqWnpWSkTmcvRcApPdhiXiYVjE4+jffXyl3neXdu+c9v2MaQO7h3Z8T7XNAf0J2ztWpPxiFm2mLNEDmN5Jj3CWI3hfXRYQ/lWuyPuBa/2Au+42vnAYFYct1CKNqRHQwKDQYpPhTxMkU1yoCgHsICn6FRgUIAYVYLcKdnqotKFJdEZMzQtMyiDhDlQ2IIb84D5GbQ3jUFrgL4ph8Cs7BlvB/MepLhjvkV7Gnxezh88L6G+MOWKyY0iG0fweu14ZLevHdqty/sWaiUZZ/ReLMf4IZ+gZRlkHeZRwl/xsMJw4hUJdyifZyglJ30DYEtHKJRP4VxyGs4mzEV4wF
wpT/vUU6ElMRIESMdTAOwIuQDg7AzrPKPCueo/Oj4LpE6jMzPo0lyC78B8CQB/OWbF+YStbORs42TBVk/mrLyWAMyaJfMTi+VVcXVAWxRHCw+EtFI9dCArB4fKD0uOywEkHPlDH/vJYIAe4hX5kb/8sU9rmUcTpyXHIOhZSi/YkelSAmmh2FmBHiU9EAad8FnfBbGrDAqMJsApjpEgkntbSD0TVS42phlGlmXqYJa/BXwlXBhxDpiT72RF+2YT/tK7p5VgTjQvgfrFGsKKmECQ0xTXJyM2PMza9jc2rXm3i1I7srt7O3YFZbNzVzFuTEwob6k44Kiw5XnYavVD2t1moiJWbypdSo9+5+3k5tNWrbW4/6H94n/+YXsFJbey8oh9/QvvQMxVe/L8GTu5fM7ObsgTdNUODmq2tzOyvc5b9vatf2HrpyN2dus9VgTAFsoTzyOrFCnhEEJPClHeKxSpYp1kAWqnfVfKJrxr4XTVQpkjiL8C4XchVp0TQsEM7e3L20xwj7YUrEX/tKtVVkAun4RMEZ4JLCCArTZaaPlSdtYYIh6pjB1DFpvGbTm9aBdPXcTSi9jl3bfseHRgyl04bMtK1HwyB30ICNQ7pY0Li+fsxTPvtf6lNdu+AYHN6rSd+UVgpHMwGEQ+hQhbXK/crdNhjrmr2vLcGaseTSyR0RLlEMEr71Pc7u4c26PnHrN727cwFrrg0IIowObKCXRBwhoVxqrds6XyCgZG0WMQtdkmmYjZ0tIaAH5gRSx8LQ1PMJ72jvdsr3Fkh+1ji01gplQaoI9FjSJQzs+dXdUSbziQTQKMjitV2763a/d29xz0ZlHUMZhmiuWuZNLhjLJl5G3x5JT5eJs5SzCGMFxcOy1HMHSD+ayghLSslrJEtIySyli/GbNGr+ZxvKJ12bFuy3KeygjSIBvFW9aaVa0x27eWHdnzj37Sx/w7vX75n/wtxjz4/O33B59RvvocKHMoCv5zLyuKTp48saELST5ARsFLOpHP8uZqI8O3PWGcJ6HqabmESXmXwhfP6RZKyK2Hiit9GZnvdB+lf5IgEN9OQb/qrv5WyqfhcGa7uzX7wPs/4p4PhVjIcJUyF2CMMq8C+YpLkwWuzRvugeKk9bV1wBN8Meow3m1bXZtHTtCvUAZ+GAg/2tqSljhHADVt+gh71ZsC0lJgcUzjlQNxD0Uo75UUiFLJyGMRjSagy4mtrW3YO++86Uvw23f2oatl0yaS/f0De+rJp1w5r60uQzO3+W3BQaG8Ftppe/PqWyhB5BpKuwWwVfrAGFZnq9ez/oBxYISKGFQyFJWXchHDbGl5DnA3gyZbABwUFsbXgTYghoeWykXt1Ve/aRfOPm6PXbwIf6Xtxp2rdnP7OvwqMIssYJw06ZqvGeM7GasgxYad3LrI3MVRsvDpvT2vVFaam7fjqvJJVhnbkFdRk7LX5je9Dg4OoIGwLSwsMN4TTyt2+84dBx1eepLJlXGgcAnl8lxe2rBlxklzpNhmLX/LU25heAdacvCoA3JwQMvnYOOiwK5oUr9Jt0BNfA42HPIR2e4v0Rn9ky74tmLXP7pONO3/QX36kn981U70qHvpcOUcPFeg9sFmM+cDnqvLHLByfz3j24fGUaBWNMuDHxbMfv53/hV0qY1NgEDGQKnxlFVEekmxhKOuPHnIAnhCTiFfuaDdWv1QbxoYESoVKk+WAxpktahG3nzpP/GaZIjYTqsW2igm7tNPMo5hXe6l8tkd6wEkO4C31ZWn7PTZF9ApIcsmAA/QQDSc8nR02iD0vvd8yA2omzeu2Xs+8B575913MTTP2/PPn7Df/K0/dHrd29+3U1sn4eORHe0NvWDJaKrQGQBItGgvvvQk+n9ox8cNxprG0Q7xm1LmqQS1/lZIwsLinHvW5B0UTSm0xuN+3XkDf9IRebaVekwxxa2mNgvHLZkNw08Vixd4JsZxpzWy+nHHKhhojSqGIvJFWQzkbdF8JvMp9F4SXQ4d6r9p0vrtmdWaIev3xoBT5Haf+XCnnACtDIgh9AcvIU/kdJM3VWWn+9qL0ezYWOAXfnaPpdMfMop50jnphOJlS7a0AG0jv4bQvlaM3cmmueUaGbaqgqrfI+mEeVVVxUgzHtpbpNAFgVkv7qC5Zxzuk7zzipx4UcZNnvo4+lmOkV5nasc7Pbt99cjuXT20UmLFVz6jkbR7oOVHUBim4n51L2UNCFa5JS+CdwnmgFfFlxjNSnOIDPI0k2AIi3SQFZJj0PCgwfdcB5hVjvt659Da6L6u57JHLsSGgNgx4z+1TG5m+YWYlZfSlivFLJGdQHsRW1hJ2fqpOds4U7ClzbSVMADSpYmluCYUA5cBnlVdU15/ecqld3y/CXyjVGoeM4s+l9z40x/7Cdr4cK/I+37osU/L4pZLWmBWLngtU8kClxIYaTXcA5NA/TAa9pEDpqkCMgG6AjbKbDDjwVMmQ0JHyyUKjh6i4JOTBMoORkWraok9jkLRoQ0+arBAgHOCht6ZnuGXsOEfFV+YYFkF5eEAa2jMPgQ0gIAH0PUABTHbBSwfvN/6BxPb372LxWS2sL5gm2uP2mOPXrBRrWzjjuI8a1brvWt9pFc++wwKZRHwCOipd+zc6ffZ3Ztd6wBa3/vRFcthWXzms69aIvS4KZfaY08A4rSc1I7btbcv27CzYNevdWx+E4um2LNvvPvfAgynVoo/Z5ls19ZOiqhhtBnjN6ig+DVhijPTLmSBBVlKAwQx7c/dw2qByTJK2g+oRQjHw1lrHA/dGjsGlCUBrPkMwL3XYixmdutmjbHsW7GUtb1+yzIoIZVZFTsNAAR1pS+DCVPZlMVHEXvq7NMoVZTz4S27Xb9hw+jABUCMNjqzSlgycMlIF2WRsuXVC/bS+fdZ8/WSXX0LQTS7bac3GSOU++aJFds/bFuq1EA5p7HMVNkoZaVSGyu6j/Gwbo322yjPC/DQvFUA4MVSyBqAz1PrLyGgRLD3LF8ECB4zq4CNjdUNz1Mnz5kERx6FLj6rSuHvtm0ulrTVQtnCQ0APFujC0rwVsMzrTRhvpA10ygZRtjNnztr5cycROmVXeE1+H2BxK7fphUcetZOnTtvy4ooVMgVPN6fNBPPlJetHGnZYS9vZZyCqzOsAnpj1xoeAXEDs+A5KRXE8BRTUnA2aOQSfduMOfLNRZ9r2JUEBMYkmZTNQaM1E8cconaPpnlX6O3bY27bqcN8+/OSPQcvf+fXf/dO/E7DFfXkkJQbveziPsI08XuJX36ELzUhISbnrPKEKgVNOZWYRtHwpDuLUbyt9KccAGPAH2l8LZZ6eC/oU8GRa7j9foEReChSABDdfKR5eIQYyUEeDsQ0RQFI2isOU10vBYpMWBhigcb605GMVi2Z8mVqxhYNxzxrNKt1C5iADlAdS1eiU3ka5lBXvPYt2bX5ZoQbwUB8jmkPxsr3WwE5snALA1lEsi16MoVVHIaazrtzlpVD89DHAVMv82pVfyOU87dba8jq/T1GsShMTg2YXEZxJOzg8slUAm2Jw9XccY1I7rRUbmk7HLJsO89wjqx0rW
bmq8B1DCyja6oElaJ9k3xDjfGcPeoUHlSosk8RYhD4KpaSH2Sje9ujoyPooJ42xgM/TT120FAbmFz7/eVvf3LKL/D2J9Ozlt75u+wDSVD4T6CJ4PsrEgw8xBGMOiIv5TTt98nFAK4qM/ojXT5w+Ye9eu4TS6aJg5qy0UKZdyErAxOFxBV6f2unTp21xeRkDr+LKVEZEGh5WJTVlAlERF2UVSSRUojRtNQzNeDxlp06dxRjch/f3mHjaEp+gcAN6gtScngQsldFAOkS0puV+KVSnPf0HzUJKnAu9ynHiJCaPK9+J1nSe0yTniZj5NSDBwIulE+Uxc10t+oXOg5hN0TTvyFPlBHUwi+Jmkv1Ehc4p16voWr4ARAwGl4wurUAGyvSv/9zDgdnP/Pt/Dm9Bn/QthrERz0DQPFf6Tnl4ZdTJK6tla82dNi9pRVLx4vJCHqqMMzJMhoR3nCYqfhFWoq8Y00hjefPcIURH1dVAufMFDW03lb3gEPlUv7+B6xQA8inb3WvZce0I+VpBbjbszKnHMG7O2QvPfcBl3Ddf/bobVRsnNgApXXcMXLmyA+3vQUOAGYymzY1Nax4f295uz5ZXFq3SvOErCsNBzK5du8n8v01j49CxQrlaHpYjx4tWViQjVtdW3GHS096OdpP2yxPKZNBuL23b79FGeBVgrZhZ98qqhxiH43HLmq1D686OkETKppEFmGd97jpNQG+9Z/Wa9N8E3TFnqnBZXi6a0lpWqw0EVoLfOa8XdSNWcZ0K8xBIci85el9HCJksWow4AIVOGPgoxxg9MQBsa7y1SqBVQNG0hO4UXhVdKXxiZQk5CV2hdmxAWxQSwsmmAMgAJ2PEKh0XciMMfXhefawt5WfWfhnluH3Ad+6w4xGBsSpyQR7yngB7xZCFCte6fW3frnzrrh3fRuda3oqZJZ6jqpTQHM9VFgOl15JJpPHuK5E+7fbVN/hDIFbyHlVnMYwuba72tFmcGwL4xhNjZBz6Ld6l7203AlWwIhrX4AwsXYzY6ok52zyzaGu8l5fDduJUwU6dL9mpsyXbOJUHvMaRNRGbA7Sur+Vt6+w8R9HmV2KWAcQm8n2LpwH5PCOM3JCH2Cvo0TCtWIfQGYKBCl0ZjEKmDYSKFhBt/MjHf8rH5mFekYPZy5++fPVVu73zLbu186pdvvlNu3L7W7ZbuQcoqtmoo52VgB8GPMYkZhiVPApCZdsUE6tpyKRykm8QBIRCw8D+DGiKCSnZJNO3SQzhzSCNETAKdFc8iVsQIaxb/g7inCSc1CTIiZsi7riPPMJi/CaEjYDk3skwQDu0ZLNhGOUUsZuvrVrr+qKtRjcBUItYCo/bwtpZlEnHmjuHdqt6zarNe5aJpOzc0mO2WlyzJCB4NIOxjlu2UkxYq5qwXviKzTLbtjT3Mfv9f3/bzpwu2spK204AQBOjDdvdH9vrd75qlw8+a5d3/o1Fkl+EqX/Tzr5vw44bl+3mnXeZ+7Zt5R9HkRfsV3/99+0yyndrLmtpFM32/q7vgo+HtKuzZ7PEPbPiHZumYV4mOIbAi4cCUCov0e7xgd05uuuxaYVp3F58/AU7tXzWPnLqWVsGON9BAV9LMq71LlbjgiVzm3ab72aFiTWGByjgqE27E3v/4qcAfmk7HN6w7dYVALwYOs4zmDAthfQAg9k4AnIXRRGxxdwZe3Hr43ay9B6rTg7s679319737AfttVcP7fknP2nXjj9v1f5dCHAJPt+xtSICUEUbtFyAwq1Pv2L1ySv8vQWdbFlzcBvhVURe5SyNpSklr01ex9V9O3VyAcWcwQhSMP/UooByKV8xqepIL5ZXrRcZ2yGCsY/yiOayAH4pv6jl02Ur5xZtmflR3O9uq2YHowPbbgh8hmw5v2Ivnj9vy8+v2LknV2HQONZ6h+eHUeZKnRKy1ujYjnYaCDT6MNmxxNbXzOa2PbQlkzkNoxcs17po01bRRq0o/YXJoeUQhoiC2z3AHYET1vIBw6ldrxKCg1DX2rM241C17f43rd7fxbqtIcz69vFnHy4G6B/847/loEfC3FEph2Se/827lh0lbBHJLrwUAqCXlIrgg/8F7eicB54s8ZcWWlxI66UPKHkPM6BveleYQQBk9b2giM7Tc4LT3aOm7++3baLUN/LKMgfBEjjfAeymKBJ5Ks+decK6Cg2YX4HmkCWMj7zWc+Wi7R/soWjG0F/JPfa+PK8bMOfaCFmcz2PczKwPHSsBer3SsgQAS/HmvkLBq1GTJyEOj2UdrMmDmMYArR1pSSzsXlp5QlW4QR7GXL7gy6TKPqCO6b9ms+nx1bpe+RM1Xj2MRHnAioU0QKpnf/Slz/CsA4zKhBsNSsUUiyeZ174d17kXRoGAcr8/MlU266Ks5zA2c7moZXMYw+kEYLhmzVqfNmXt5NaGrZ9Ys3fffcu+9OUv2KnTF1AK83ZU37NrN9+xCs/yqk+0D9HJ+Asmq8VhxmOKrHvEKwDGExmMupq9+NIznkJMZX/XN9YwSBcB17sOnJQIXobbwsI8/epZDfAhHKWk8Bl4anFp2Q0A3V1puVS44fbdXc4DcLR7zAuyV4AK5XPr9nXnAcksyRBhzFJZRgxQjClJaic3/CeQK3oLQKxmSvSjD8GY6/miVNGplkQFarUq6IQanMbB95wn8Kp3vbRa5zTN+YFhxrMAg6JrAQ4BWge3tDd4rsLnkHMy7rhWntgA2AYxwvLQ6m6/9HP/lU7+jq//+Jv/jDbTWj1L85MQ8OR+8tLxnFFHXkBo/D6YVQidGy/QuYokVBUuJeMXXtDyujyzAhB0h9P5TxuG6I+8tB5yx29y+kgPCJzIEz+c1K3e3rU9AOyLL/6o5dJn7Xd+77PW6latz/3PnDlvH3jfR+3JJx6Dtqf2xS9+hbYN7VM/8L0eH9tVbCKdv7ezjVxMi9UZt5jHttbgE9X9n5svAU4MsAi/dGPwVc9SOfR7BP6DAXRofgRmS3NF+CTIVKJ8poqFF63IEysgKeNG3lllOIhEct7vFjTbgu+aGI2KlZ3C75Xqrg1nyozAvVFOYYxijR9Po/3IJWQBjA8Q4jnomtxcGmCb4Xv4vovB2ZlZs62QIugSIeZ05bQluQawBcdM4W+mStQGvfK9AFS7Yz14eMS4NLttd7AJfGJxeR9Vel8ftZo5X8aY5LOcesHGPAV6QPOiUWhMWaHiSrOGzpd3VmVsxTuIFvcSy6v/oE2St4FvVjqF+WZ+44ybsE8H3HWwXbd7N46tsg1e6CZ8lTkeVRo0ns+zBjLMpXME1BlDAUOt9gRAVrfV3bX6iG5F12tFJQlI1nMEZEMhncsRVRpEFZzoW1qZF6YDi6aQMVtL9uRzF+zp5y/a2QsbtnV62S5cXLPzj67ZmXMrtr5VtKXVHAeAdg2gv8bnpSKfM1YE2CYzYI0EBkukbeNwC93Y5pnS38rNG4SjBoSvNINj02a8ZkshBwp9kFd8Yj/+PT/HOQ/3ipx6X/bTCawpxf3IatBO72MJ3SYTi+XUbBxj1e9ikd9D+ey4N6LZrmN51azRqmBFKE5GJcxiTtiq
R+TUMI2oUQuhjq3W4d2cT70WcXNXUa9FWnf9HtZlDQEBCrSYfn8GZLDzCGquFPvf5ahhfI1y7c4AIaGqjJX/5HP8qe6grBqXUri/3gJ9soHSiGVNA9yow2OstR8jP0Z670OjiKh0+3ALM7yDaNJ3iLOuuVrSvTpTnk6xxcNNIjPp+gvx2tpKzSd8zt+Xy+RqbsJ0Qyq8xoqMfXMINBOjXl1d//L/57ef3zbALuFDiFtXtZSTpbD0TmY728UOJOwZS8i2A4QSlQSNgaJkPtX1q6ea0K0k5nb9Y6EEQTpgH4zA0WF3RNEoieogyOIcYzZ8jBaA6pCqqd4Szz+/l81o2zeS9OZhU6EUtqQiMCqsxDOQeonMchnQjzObRIq5ieRdDosIUZGupYKALnCtZhHyU6PT6MMcCwu9aOxVsoqt4MxVFka6hf0PmnEPtpC+EiEEUgwkg5e54GcPJPp4o1VoJIT1ejNluOJ7N3YvvU6U+jHNK5OILQqKdtZx7BpJzLzeaxWTKWF2I3B9wQkDd2ejpS5MbCtfjUrY/H6+uvRM9k8PTF8GIPwLYdA/Zx6TAnp01KALTqKM4ax2hhLaIazFCkstAT2Om30zI21IITKFsug+jGaMXhBIXr60r0w/luLC6sQ6Q1iKsZK2td2nOUMU7ra3cQcjchEK3X7djb2ozDnT3DeaKJYdIqUf+z5WgfInh3EDbPYIBtLNcBTDCrx8JpL5Yv+tRjLVZ6N1DkL8a1xdsZC4yIjn5zKa5v3EBIdGAC2py9WlZYOCyFoE8Tkm6gnCcVygzQOKlo9IziqHZAWzyjXe7H+HQ35he0RdnUNvY9ioz6nSNQzgH9I6zECUDcoTf7U7o4ob0zrIBaKwDaHa3sbnpmM70PbebMz06rH9d6L8SN/mtxo/N6rFVfjPbZSrb5iTR8toswcjjZiRvngA495M66bUcL4NBfXMjhqAoCUwXh0MnzbpJEerqgO3lQRe2AicM0xmunNPAX401yKzg1/+eejKFNt9k/uhU8XnxOAMuehlbeU8iBfCykm0zvZ4/S7uU9aVlTID3aygJBTW4Ip2ajnVkL/pv/99+I3/W7/vV4563N2Nk2VZ71n8Srr74cL7xwl4eV4otf/hJtU888ypsIx6fb29FeXIRfxzmpyWH84YFJyvdihuKa7GDI0IQrsCfdFn2ErUbWDgJw6mRFgEWV5lgApNYos+Tfql/QlxgXKIVuB9rkXvRWTqwSTZkJYKkPb/O8yeSMsm7G1tP78dWv/lLs739AeziZcwdD8jFA2Ukxu5w7QJk/ok57NBEKH7qs1o6gPwX1CGE+wAidwd+Ae5T1Ygu1I03Spt1lZMbeYTzcAgpzTqfo0lIllrqGVu3F9s4jaBlgjSKaoAzMG1kGxJQBGptPTaXl0qLLcTgex9ZOkYZI2bv97GmGFLUaTT7vRb+zjIEquHZyaQsea2Y40THy5uLceOeiy4xjdALGIb9doIGPMEIPR4bknMXjR5sYHMbgNXKSbZv2E1u6pYcUEHOlkL5+T7mtnrjcc6UAdUwOxRS/GQGCzstwhQSZfrZM8JG4zGtSF3EqDUvozi09svRbytDLsrgV3l7jTQUTQLkPy3Wls4qyJZ3nefXdlc67fMjzbBb82NFD05jNk6adSJqTsugLA8mcnCtfODmshgJvoVd6tP9CpRtdQxACPYPR3zyvZdhBu0z/XNDHGP5A4OiWAIIVZDiAc7HRj4UWNNpaZEemtHvRbXIVfSq40hOcnMk7lX1OoD6tOHZ1HnMadXJKGc8p64U6dxSj452Ynw0p41HqIkMecnVFjobWdTsLsbICKASkma99MpnAX3v53Ylv9+/fR6/U4s6dOxnTqhzwGuNalQe5gAy7qb2MK/WcK3u54IDn9GbqqTUVnrv3+909V+qkn73O78pLMymYWeHhw4fxhS98Ib7yxTfj0QdPYm/7MA27vWfTePetZ/H+O3sxOgSgttBL5vwuo3v4vNS7hq5ZpY2RxezHQ8DWFBl6Vse46LCje/g7c4LitAi7URc3q53oNLupmy7ELgDP2RhjV4ZlnwO+9rZ242BnF3xyHG34bqHdjeW1xdTrJYR3DZps8ezKKRR8Rl/XelByA/2kgQkWgo5ywQn4vU6f1cs6iKAZaNsFDpzEF6UiN7GT2RytdAN98Q76Cmzj5D1Bu225iN5eXOwnsK8qMwDm8lxOvDySLzToxCYA0EsBoOz2WUWfIHPlIWjYEJXxHMOeft0FZHo8oJ8HIxdLGoHBoCfArLLjCB0rL5if+gg54ebz7Fs94bs7h/EEAPz02W4MDifcq2d9Cs048op+5xkFL6qLqGzKFvjoUkY971b5xA/f/nEZzy0VGZvEaGUd4s21o1MC+HAaGGFjsLCAYDxEgYgclHAUxGEXL9WFTI/kvtG4nbO/axdIQohAK/HUiT2Ni2gjJKftWZjE3pVUFETAiPRSaCkphSqj29y3ARFVYzp/gtDdQrEAgs9cA3svWqPXuc5VsBZA+ADhFoKuCjMj9Z1131isZZC6EyFWl67HweCtoA9i9c5yrH/qZ2LcngBkeStAtX7R5mje2xqdWlgXpsmgZSGuC+rRQsgsRhswWztZBHG04oPjfxCHgKmJ8bsOL08B5rNjgC1loFm0st0UxQKMIszgnLKVUIClmADSZ+eAUtrtleWX4ttufTqWaguxdbgT2/PtOF0eI4hQSkYY0xcOyzvbM9O0gDhgIay4dv4+ng+w3M5iabEX/d5SKjFb1DgogXGF8pdMv8P756fP4oOHb0e/tQ5DlrHCFuLFlxdRyMcwxR0E2yJCegaAfQf5fQTDmXSjjtKpoqC1DvXOG4qBQnR1Mup84pAinFfivWUYyr3RXqbipj1BSHcwBGB428HYZj1erganhemfno30OsBgkHU2mmJZejSQv9VtYoHST/TxCSBjNNuF2PS+cx3gqIjN4ztW5AXPdlboQAUDHRWKzmFngCcGh152Y6uMMTSuqwN40kr3ve6tRg+DYCE2Oq/EcvtaLNZuRO14IY4mDqUDlF3UwRmmtKsB7BOMhMEhYAEab1PH1fX1WAK46IERCEz01kNT3/f9zzcB7A//6T+YbKX+tyngb1kw8WWhjYvv4lrp0+HY4jf+cVIPVn719/zPB3kDwIDfMxTIO1JiFKBBz4VAIj0SHCHPgo2RL7K0T3CSgH2UVj70B+bHCDWkoh7jkYqgG7/7d/0Y963EX/7v/lb84i99LtYxWl546WUU2WK89c5b8WVArBMdzS3saMzuwT5GRQvj6gLwtktfd+izo3h4/02MRoyFwwdRHz+Ja+2j6PPale5CbO8e5TKSA1ingh23jAKWDp1s7wSQixrKBBDZ6mDQ1gsvCvYd/VTUo0m5D3cLWmnWmtQRGkEuaG8sLKqMAXYnGE5HA3jC0Scn1uzQhwfUdQLAUzEMaYMp5wG8GMyG+MyPh9DAgHYRLNAnKM6jQ5QWRuCnPr0aI0DLL7/5NA6M4qHcyPy4t16Oj6wvx+l4FJXOSpybgu9wHu1mH+UKr+uFZDfOtd6sIQen8HA59
of7cePWjZzQtX94GBsb1wAOg3jw8HHcu/c6wPyAuqzGNseccYyyE2BsPnkcLj3t4golQNWJBjXvuziFH6IPr/ZjQlu+8+6bPA+QLL9dekokKxemULJJVyWUrnJJWoW08pybxiIqM2/Qq003Q2qck8b47GhIGkbSnvdym+2VSyBrlKLcnSjqNcov7vrweVJ6juZ49GW8vzD2vM8n6mW/lCGSdwqI4r2yjvekAXfFE9zxr//25/PM/uRf+TPoItsC3qe8hsvowZwew98wj44CuornC8H18TlCVUOvGL9vmJxeXOoIQ1PF5KcG8i3lHnWqUj8z8HTq7Vig/xc7iwBYgFnD2eLoJ2SU8ZLpCbY+1sLPvMkQrqNzw1usV9GOhrSVkZECIPV7A0DdQC8CGXOkst9fiWsb12NlcSUnNOmra7W6XFsCbIyQbUXYg5ONlKGObhhL69Lx9sN0dhTDw2Eu5mP7qp8Wl/vpHOIFMRoMAD/7OeJlHzhMb07vo5ne/hbn2ylPB4OD7EPb1P7w/JPHD5NO93d3MyuOuV6PAZV6JXVczSZzZM4Mvr+Iw8Mzrh8AcgG1A0Ap/CWucr84dv5CJ2pl6mU/0AYuNtLAWKgDWtu1fiwtbMTa8rVM0dltL6IbOulhL/LOCq6ge+ScI7TGiOaEMgCc8b72gUCwg1Gg59lcsSeO6PC7+e5dMKEC//cEsydLcXA4jj3OneuRxYhwlNVcE226rdUEkHeaMUOODMZ76Ce9sTSJup42mfLMGkJXOT2zktxjLtfOIsbPQp9yaAgYNwxI99nQQOZDlx9oWyd1q2tzoqQ3J0NDHepN2vUYDJWLdaDT+CWvMaZbvTmhvZ882IodwOzAMIBjcIST+LjWyWEdZJMjo3r3ZwDxGfx+xPMNwfCZhhKUeL8Mb9iBwFonjzndzV9sqMrTPR2WxlMb6lCPH/sX/h1K8Xxb5eM/fDdjZo3zkDWc7GL9CsQMRpcmaUiKnR1aBeyVAJbH0wkCGmTtUB0EXqdAdcFWDavF4fNmCUY0f+qnaFyeC0hyyNiUWLXeYtRXanHehvGaKCtz1kJwNma9fgojAyePAVbj9TgbteIQRbZz+H7s7O8gXIdYSCg/tAPQKFZulGO4/WrMRpVYv2HMSA+iP6VRZasqVsV2VGpLCA1DDPbjdH4vxhBP5zv+XDRe2KbRsA4vALDm4qugBKlwrqByBhdg0Z7qdalSpjKW2vn1aJ7cjoujNtbtQYwv3otHHJ18dWIM8AhGMt0JneRyjqVjhAyCBXKAkQwLMGXPGQJFQWTO2WrsIzRO0GxvtO/Er3/9R+Ju71482tqMB9P34nzRVc2epcI1tibBopYeT3SpO7QKOBEAxvOcsaqHw9WwVpdXYqHT4ztKsDaFUFDslV4Oi52e7cO4lXj/zfdpw160O69jMT2INz5mEurb0atvxEJ9IcZY4/UjCLT0Bvp6JapHq9T/Gky5GJXjPnXrxvG4FqNHALk9+nwPhh8BNI860T1bio3yWmxUNmKxjtFQuxPrrRdjuX4tmgiHEn13NB/FdABYQFk7TGwQeh1r0vQezhB2MokhCI1zBDpAe2WhE70+9S5z7wlC7RQrHqY5Oj/kaKwkX2RGmMvl9UYw/RgmQ+VCnxX6HQGD0HSZw1ZHkNMDQHWjDTAS/ChDZS5Xm1tv3ou73Y/FrcZH4nbpI5kD2EwGel+G5gF2cpGA26FArHtDXgRnGhkdDLRrN9ZjZWUphc5xTGI8gzbm4xSEP/hDv8ESfcPtj/xZwCzCHXmHIELJF4yY/Gi9VcpWWWXpB7/7g8rfXYXgT8Xmhw+/XPK0RwADN/q54v2Xv4lJUtFyLK4r7nbPN3JCMGz8FLoVnlXY+/08fuDX/ua4c/O1+Fs/+dMoqfN4+eXX4mMf+1jGJj98ZOaLYbgqz6uvvhqvv/EaBkohhB8924xHjx/logAuenA8Gcbd9Q70sxvV+f240T+HFk9jY3kdMLwTM2i93IPeV/rRWsSQRTgurC6lkVFro5BX+9lfDqc7M7vdqyMHKiha+gv+djjcjCJmBXCSSLvbieU1l2wuRojsK4VrQG+NJirehPaA25NT+hHDNQCuOdGrhExzTL+s8D+mn0c83zXxATZYzRcYfSeT03jpxYj1myvx5Q+24/EuwIC+g0Tj+nX2pXq8BJjYerITP/PFd6OzciMV6mDf2MR57O3t59BpC2Vm27vi2MrqCnQ+AXS3ElAYp7aysszv5Xj2bCs/a5A7Y1ylYJy6NKEHR4Cyu71Nvxniw1nkhODCdGnGebp0tfHx1t1sJafnKGyzNPAEAf+phT8HgIFey8jGIm9kQSdCTpV7cUwqLI68yN/dkn6Lj/lBGqM7OALJBLCXBOk8gfzM+avrfV46X4p/eV4+0DYzA41hMz7fYeIC/HJEoRkWcAXG3QTObgWvlACzzxcz+7f+8p9FljqkDEDS2uN+h1adjGV4kekifUfyIZUCz2EY8n4RrmCQts84RYErZXQGf539KpVXGx3Uow96LT2w9AWAL8MLeJ5xt3rySs5ZQVaZHUcDvcjbW4w2ndUAC+iIHKFyeKXkvBeBrO8HSGfoXDVKpzy/vRE3r92OG9duJm0529wUbsdHp+jQQ/YxMlEeh/bQKy2MH43yGzeux9LSEtdrwA+hnWPa3QnKwHYAzcJCL/vRyZIDjCzrrPfPlEsaJjpZTJVlGUe5GMFRAlknKWn45OgZYMZQHzNDZMw39KqndjI8yslhp5RRcFm6cCIdggD9HhddaIh+4Vi6wFDLvU0793KvYaSdnqk9AfJcX0Xvxxl0fq5XGrBb6QD+zW1uPlX0Mzr8FJ2b2W4ciYb2jX134l7mVqY9s13paXWHk98aHUMG0Mfgp+H+IEYHYBz0bxPc03WuyqQWB0PT9enU407osE4/tTGmO8jRRpv30xZOgj5HVzU71AeDJMOCaC/DA3OingYufFDDOOgu0ZfLvUyZVlN/CrahKZ10RayqRqKllL94Tjo0KLZ8QXub+z4XMgEDTs0RT7nsgyPkVy67zzsFrebnNhuEy+6O6POZkwj53Ud1sMrTiYC+cmQelWu0IWCWa3iHhlCRGg9ZyfOhzATkZ5TJ+gxdvAZstwUWkiZ8v57dbw7M/pCeWRsGQkcquMv8PuxUAUhhZArFU2HhVhOpO7x3fIQC1zw0ZoJLJWCNQugpmg0snnonliovpOXZ5nunuRhNwEMFZXPSGMdxxSROxxBMwYwJNKCWrtbj6SJfuzHYdZhjNx48ehCPnjxEuGKtAF5MPbGyZoA1ymL0UszHq9zv2s77OUOwBDi9KJuqRPDGdd0eSmkaTx4exMHxo7j57W9HZe1pnB0VLvmMK6HeKjbTkZw5+xDlaTL+HpZbv7pMmVzMAYBC20zPAK5nO7F3YkL2KQplnJ7snMCWQlNr2eFyCAYrRa9NCclmGym9qTKE6cofZ7GKNfjKyr241bzB81FW493YvdiOUWWfVp/HHMLN/Iw0kQTBjdkvEr0rWZnizClA5mNtd2uxuIhVCXEJCE1cX4KRSlIZAtGZ3ZPpNJ5u
biNg2hAn719ci2/56K+KjaUXYCoAaOcODObkJj2LLfrbCQ8wSNCmZ3oXivgXMHxMsZwPACgu5SnBa/EJEExAnUsWGnMDEMxZvVWUeAlwDRidAqxpRoTHWQrJFrTRbiyngK1XFniX4Qfd6PSxeluA3Ro0Cq2YSzcnScgJnDEhc64TDeOZysOhlBzig/GlZS0+2+3c2cQwsUNkuV50m92YouqMXW8IAOicvq4sxXLtWu5O8tNLr1KUPgWsY0CDfeyzVEJ1hIfCIodrqLvxygt9jAQsY61hBYTCUGauIOy/7wd+uOC8b7D9iT/3H9I2epsqHH/Fsr7UvXn0e87aVnEmKFVcJXmxQSde48f8/+qT117yM5vX+qxLskTo8xm6UtGa+N/fPH/1ezED1Xj6Y3iww2vOYjgoBOJCdyN+y//6t2HUoTyPq/GxNz4WN2/ciOM5NEw/rK2tx63bN2J9fS0XG7gAMQss9wY72a+NDgoYA7LbqMRKAz7bfzuuLT/EqPkgqhgtIqk33+TYjLj2Uj96N1ZjD8V2hKKsAIq3pltRWcUgWanGHkqv0ughbwAGKysokHGsX78VJ9BBlb75yEe/BUXQiCWMjuWN5bh1b72ISzMkwRg1Kqt9T7HpQ5WxXlRoHzBR5Ll0+W4EPUDGCWe2ld9NM6R/qun0X3hkNjxG9pTj45+8Gwfwyue/hiFG06P3oocOvnG9HS3a9KX127GFrPvrf++z8XDrMD76kU/kJJNcDadejcODA9p7Kdvx0cNHcf3adZR9MUynYtJjY+xeH/n6wf13MNyhxR730barq8sxM40XtNJDDgqMndlt5oScPEVdBe+mUnOZ2ma7zLt6AebAwPgazx5RBsQOfERT0B4QAmI/PbLwr0AXkvBUQWnSYkGWeSKPl99hs9ylPu+x7HmtR85nHkq/K668Nn8s6NAfEseqhH2A9/torkswC5F6ee4+nxJ5Xeo3Plxi4Dx3BWbdBIrPC2b/9l/7LwBjtFsN2U79bQN1nl5ZJ+To0XZBA8ubcZe8MRdQQOk7+VkAmkCWAtv+TrDyqBFiuwlm2w2MCfpSw0NDJOOG2bzPuFeqgzGvccUzE8wW8kkPqiF7WSCu1Vgtl8wSoL+1jEyFiAUXR7yzvBLXV1+KW7deisX+Ujak8bNmo3Cy1ODwAJkqXZcTULuyXLsJv7N1OwA/yr+z75LPjhgsphNBsLqwukC5XakPMx5eMObccDdjVW0H0+JZPp08ylHfkXnXtabtV+pmarzMJAKgte9ddW9wOKD/TqHPfnozu60F+GcFmbOOTlsDxCGrq33K1Ud2uSJlF73QQa40qVOL59V4Vy1OzsuUwwlpdb5XKJ/hi+wnNdqlHsPpKOZTcASG+tHsFMAGsEYGHNPGFwDhqnLBUBmwUEFbfKdv9VyeIBOdPKkxcgEdbD54mp5is924aqce9+NpKYfqXbrWukq3TeRgh2bpIzNKyCQdNVN0ZKVZilYfoK1QZtPgMUZXXCGYdatzvZM/mz1oBTnhgikaDl6X4Un0XxqJtKPEnxPPeY5PFLolH2c9Cs/sOcBej65bsaCFmX6gb4yH0ZTvgHBzWRu2MLONNJbhWWODW210JbrKDE7H1EvP7ASQIJ0a5+57phOBKsaBPKE3mJKoGwfI5729g9gdzLP/NRQs+zcFZj92CWbdvFnG9vsVmJUZ6X4ICQUFIRjs7eSvoxM9UXSI3h2ks2lK0h2vBU+FWvUuSmkpVmv3oiUg7CzH8tK1aC3046RJZ1V2YmYMKEzn7MWLEkIbsNeFaRar69G9uBbN8zUsnLOc5WaC863drTD9RLvXjpX1pVhY6UV/GYB0vB7zwe0kWFcAcWi907sW1dYkxty7uraagqFcGcX79zfjuPY03vih7Zg0PojaGaANgWHdLYcrBbmamMBTENEA0C3UVqJ/sRqVaTtOJ3Ti+TiwFTOd1/hYEKv307x5R3SC8oJ2DIQQ5dAjfQFjmwpIC0vv0DnEdILyN4eoixC8unYvPrLyavSO25R3EvunuzGoFanCaHCEpJYqQgjeNouEsSn2icDRd0m4Trox/tPhcnfroyI8PnVIAUCqIcI95hXcfLqJ1T1ByCzEyfAivvvbvy++/RM/EKvdl6NLf620X4heHWBNeeYz+uYIJgLUtqptgKYhIO18n9Z4BnxDeFOI2lm0xrsax5UCGIGVQhcwZwjG0ckoxvO9mMzH0Bb0gtHSbJVjwbimjrmI78Vy7w7vWcoJeM5MbS4ak3gGPNCrPEEY6WlwiA/llDxuPaFH6MT3fChk6AdBrmlCLugHPe8muXZt8AZCo9FEmdAXDYyMZrkZHQRgt7QWCyWAbAUgy/c6hoIzh/V4mAPVYSV3+cPhvgYAPQPtEdAdhIqr4rhSVQ+UYlq5VMSUD1FCH9Qw8FrxXb/6+VJz/Z///B9MAOueIBUhKgBNEEq986k8P38TPHDkl+Rf28YtBRb/2R4pybzfe3mWGBjyTmHq0bQrdU66hrYTEjJui+sUgt4jwPUe6+LwqYm4Va5Oruu12zEdn8UP/8BvjOnwPN5760EsoSCXFjFeUWSWQ9CNqqXtzpIHJkeTHC7/2ttfjdF4iACV/465vhbls8M43f5y3F0Zxs11wN9sJ7Y2p7G7DzibQ3f96zFtdOPLD3aj2nsxrt37ZGzuoHEAr0sA5rfffwZ91+La9ddjZeNuPNtGecC3917+ZDzaHGAE343F5VsZG9poYXgsANx471vvvImigbcAUm3qV8dQUqgaeiDNtdpOLiwmhZroXeXqZAlXMHJCiRNFVd6GEOlRcbTgcOc0XnzxBuCwEZ/90mZsD2hHwLjteefOIooVOYpiW8GQe/p0EG9tHsTW1h6CfRCf/MTHoT3DZE4zLd4H778XG+sbyW8ajM4O11vlzHm9VRoaerweP36ILAB0dJsAk1F6W41d63aWAE/G2DVjNjbzid4wPVEoNzpJ0Gy5DN2pYVCMp9tx//5XkT9meUC5gZMcIdATmDIOetAzq9GVQJTvSVe0hSFUApT8Tbrx98tr0qBPmviVc4LXpHW+CI51aaaxdknLElFOQIb+PgSznuZ8PiOP8oHn+NGjNA+NXl2f9/gTv11t8ovbv/GcYPan/sb/ExpwBA9qRkFL1wk69YJRkMF4kjJRb5Y8Khhzfol5VE2lJMPpcarRoHodBYXWM9NwISsdnUrZabnUp9zvb+aXde16MwDlMrDIoalgA7maWVnUMZTn1GHuC8rCX8Zi0m+tOmCjRr+7zDF6tgTQ67fuxI31V3J4XU+xCzJYXoftjX91XotyzclERTwtz6Q8GqGmSxQPmLfYcICVpdU0tgTDqxvobOphGJkjlsoQQ1t8lmBbg8tZ6ibmF8wIgvX+Si+mkXKhngLMYBAKqmgGnytoc5nrpdVVntfPBYEWAbLdzmrqsZIxO+U6tNfhPYJVJ4lXKEsV2Y0RgHgws8kpOumM305Oyvl9PkNHmPt15opoAajag+4F4oBzHSQA2iO9lhimenXl+Uxrhp5R1/FSnqey4TuG8lkJwE1ZzkbHsbe5Q3u
60AtGix5jYJb6fjTBCAUkSuvNOoC1gZFJ8TvtShzRF8PpYRyfTzHEMUgXNGikGZ4CjWGJJ01fMYDt0jIFmqk2kWGOquhddtKrK7KZ3/cKzMoPhp1UkxkLhpD+E7fxTFfnqpWkZYiGc8p3+8iFVdIhQx2PaaRzvftgkbnLvWc4Z+lycSFoGnopY8xcICeP0JMTaRQcafmlo5SbnHf0y5EA897qFR7QToPDMQY/YBk+kZZtn28KzH7kB2/+uHFrNo1xTBKPQ1IZI6Pi4bsgKBvPIRSIIBlP76VB5FwAf3CTrACyVyhBmO36avQaG7HRvB3Nmqk/VlEai3DWeYzOd2J4sR1HgMvjMyquQ5qHNGmUJQDwcuM2gNala9dSUMloI6w80zbVURKLWH8Ly1hgNF6lqid1FJP91ZgcXoPptwHUA4RFL2fAl7A023S2HoVq4xSFcRjVxf342K8/jKfTd6MbN7JuLh7g7F3z6mmyNLSaETb1o8XoxGI05r04GxZMf1xHOdZ34hBA64IF5s3NoY/0ACo8BXMQoAKuBOCnfgroJErOqQiKdEsXsQxQfm35hXgB4VKeYbXNDmPY2I9Rcy9Gpb04g9Ec4tCLeYbBkNazFittnfEwEJiz95sorFZbVz/vrOrWF9whJFBcvBAm55Be8Alg1tWInDEa8crap+LX/cA/Ha/e+TQA9hb7OpaicVX1mI4mGZekh9eZtV2svk67k5Z5h71toDkCqME5Z+87c1fFIREZ19UA0JnD1cD+mWmOpvsIjEMA6QnXLwH6rvOcXqz0ABcd9q6Ju9cQ6E4umlA/AGT1CX2DwD4FAAM89ZZrMOmn1bKdYkWrXLXuYNVUMPaJwkJP8UXd4YoGoJk2cvnTpjPc9axyNbTbLl1HsENv7ZdjofoC368BYrGsaZtMsYZQNT4uFwegTX2+gt3JEU6ccPitAQjIfH4IFtPuNNEgOZQD79T4XUu3YcwvbfbJb/mUDPUNt/9UMIsiuPLGKogy3ytHZZl8mjpP7e2X/F+ru1DseebqP7uE3a8KLpV95k3UWGOvsyuQTNpd7BhhXCcwMcTI91w9Uy5XyAmAXG3mGobio/uD2Fi+Hr/+h/9JjC29BqVYW1mKG9dW49r6Mu0CTcLHhaFIw/Lg+w/ux57exsWF9JA7tKSBoOA9PvxafPLWg3hhbSsO9p7EzlbEm281Y3/+cqy88CPxyq/6p+OX34aO6i/FSx/7TVTtBehqKT766vfHu29hWA6dePdi3HvhU9BaP37xl9+NV177TvjjVuwflGJl5ZU4OIAbzlrIBpTvRQPAdhbPdvYAVdKFCsocnDXuUSbC43pncrjUtENdhDiaD/koj+mlqFb0jBWKTl3QAPgaw3iKoXXr5no83nwYD57Cx7Qh5Ap41sOFEkGmLaKcm+etePPth7GPvDpvdOLZk81cqvP1119Nw8JVk5TJJiy/tn4dBXyUEymcQd2BL40DF5jq+TrY28u+1eCkyWN4CA8fVwAu6wgRgY4p8opUXPQkbS6gLXZJRYNjOj+M9x58DUPgXWQI/EIzgM8SvKi7oQLO0VY16ltHtlJnd0Fanf52FThEaHGNTcjuM9z9rDwyrjLj+fhcTN68AqO8CJpT+Uh7bknn0MaHYJZLCkWcp5NWkz75UnznOr5fmn8pl2Hd4mJ3vZf5W3H+3/wdzwdm/7//n5/g9gIgyAPyexf5V6dCZnMx2OQC2tHjKGBz8YNcp5+OcBU/+6esvFAmaLjRfzqEcolaZI0KPJe7PTvKCcUm3p9Aa/ujg9gfDuIAoGMOz+EUHYRMnZ/NAA3OikcOKue5PzPyYJSaL7nZQMbB082KIwAd3ncJZtv3oKPXYrG3jh7gHVNAOOUbAigEHMYsd7rIeHSK4LsY6VKJ6FQ5Qf8uwTfIQ3a9zRpWpunqLnQyntb8zPVGFaBWp57OtdBLrtS2XOZybiRoNp6WIkcXNOd121u70OYx9TDPajcxiDlTb9+5GasYdO2lDj2mNxU5j750tcAZ4NiVz0xTNpsKTE/QC6bIK7zY9rGgSf+0GCJXz9S7KbBlT3DG8USwqsEN+DwDyBjWlzt6/VjnE8xr9gjBrxMrJVIBXoY78Gz1whE6ooysONHDeABmgQZ4YMqKSuomV9I0jAkZQL+4gJPhBYYUC2oP5OvJQZxVjqJjeq6OAB0eUR7z5yht0i6HNC4AjsbLdjHIBZTSk31VMJ1gVh6BJ/IUBg60lvMePG8NeIagt+ArZAbywf5V1zhardd/BE1oSHERtOGIlGEU6NgxeCLlXSkaPWiuXY9Fs8SYGq3Xzvz5Top0FTsRZmbJ4t1CBA0yd7Gmxs3I9GngDUMYbB83+fvHfvc3C2bp8RQMUJWfFdp6ZhUaVtpJLsgZKqOApvN0PZ8fcQ/XUMxziOriiMZA6DuSoYew29yIxcYdwOkGAg7iheGRrTG+2Iu9k8cxPt+LU0CFlorpcGi26JRbscB9a80Xo1+7EZ3KIqAP5XRsntApzK13pJkxdsaKuTUqLayAh3E674JsPkp5sBQ6+zRcDcakuyiSVsgpneQ9OzuDiP5OvPa9QNHpdoYzqGTNY+pqZ9ZZT12TsrhiWX2+AnEChicA0BmdV8HibCNIqgdxcHoY53SEYM3Z2un9xDrTykkWkloCK0MCgphyaAqiyZCBY9r1vB4fX3s97rRuxHL0IZyTOKwcxiFAdlB29SDXYS6G853c5Gwgl2IVUCv4qygOJyMYJtFsAWYBTioGwXOxGhgAmu+n1K/bgVsA1iZNN46p1ezH4GAW/8wP/svxnd/6/dFpLNN1MA8MlgswnAyxUjdTUJhrWMHYo+1dVUmvTQ3GqzixAIaxuhKoho0GgENkXZSeQ+0yrIJJRsq4Oy1DBLrGzdLSBu/diH6Xvm6t8OxOKs4jjJ3Z6SMoazvGZ1tYwXO6kTrrrUEBup1qcLHPpg6NeA5hhbAwcFxhbFsqPGvt8/S6mamgCTCpAToUZRkfWF+O2ys/HNcB9DdWviWWAED1EkIamjEX5+RknF4Lh0LsQK1c29qk1U1oSYFsGhmZVKOPHk0+8s9e8IzxZsYvuWuZvvGRNyz+N9z+kz//4wXvqdB5nHyWStyiQFfyo/SV7+OLb/Q/j17v5nX+Ykn8z/MqDu/LZT9pTzMjVOkXh8+8XgDtOdPX5LWC2avn+oxsa4Q1Qsd+3nk2jOX+UvzbP/Z/iF/z3T8YP/h9vyG+89u/O77v13xPfNd3fiRefOE29ARwGk9zopcZC956+x2UNzSFonzjjddiMNhDsI0Bmf3Y3t5EBgzjRv8+dLsdh4NOfO29iGt3fjj617497nzk18S7zwBhnY349Hf9epTCcrz9zna88eq3xeZjeHp7EvfufRQl04zbL3wkvvLVB8iQarz2kW+Pg4FhSGXA6LXY39doNUSknmVxSHQL1JyLKUDvppyCS+lrZ3jrZTeGzJobFqO3WQMKnnfoPWVvoawhSfocxYSh7TDdUt/JNOfxzgcjFBMiCtoCN8fCgl6aE8oCbTbayJhyPHl6EI8OT6ME0zahl/2dnX
j04IN46eUXeabDokeFV5VjhhDN5jmh0qFqlYz0oidlAshZ7PVpG/u4Gbs7B8lbXQxHZdOVl2x7e4u+RcFVK/wOXSss+F0gMxhtAWa/xPMOMQykBetHvVCeAoTC0NLQgXaU+QJXdz2PevfTKEB/QL9F7GuyUB6T/lCA+dlzSVwFbSbNc3T4XuBLR9i4Sb9u8njmNk569tlcz/mCVn/lKC1f3XQFfovt8nmX3/3N7cd+5/NNAPsrf+HP5LsNGRLIdswsIH/zhzkQrcVCL+mtz0kzphWcCziRyZTJ0RwntDqioyFiHcwHKxAT9DrTI3PHnswTALu61+FsEHtDwCyA1uXOx7NZjObFKokCX2AsVMkfNNuttvP59mm9bmagsywXpY3KmfmDJ4CvNgDqFnLv9VjAmBNwHR4eYBwNYgKg0KHj0HYdmZ9D3IAcAa3OCkMsDKNo9wCVtL14wbC1+WSO8XqNupwlSJVOzHfabbfTcy1dGaOtl1NZqPft2bNn6LbjWMPgdVTg2dNNAJI0rry0DlVofBq3b9+Oj370I7GA4bt6ayXB4/7OKPa2BjE8mOZqfUPbRaDP/fJyAn3a3QUmMh8rxxltOTdmU86mE4WtmeqMepkrtqTylN4wBgq9hi6n7QTBgnCPJ9x3AgZy2NzwSFcsNMwg40HdId7zCbICAHgyFcjSP4IzjoWTBiAI/1s35WdHbyYGcqMO2EWfbgMapyejHL0RzGoQSuGaoBoEZ2VAlBiD50k7LqdtiEHDNGjgLtAq560HO+WVGeSRdN5Af7AdnwswKxP4DN+gk8Z45cZpCR1KG/HdUEH1qRP8pubi5Vw67RxpndB6AFrv1iPbW+5m7O7i6lJ0er2oI/fKzQa06WRvHVpHyQ/qTOWHBpRAVpyZE8Cmjjw40oA+x2q2XPLm7/8Xnz/PLLxP5RUMl7uTARzGzlmXVhIhZry/yJ26hxkLHHbV0tCRe8HLr4J7DZhWAOnxalSoUGkxhfspQnV+Oo/9+VYczB7G+GQrmVuAqQVXQik46adeAgDFStTLS3Q8TE9Hygh6ARXa5mF0oskxwNFA8BHWz84O1tjRflS6X4n2ygcwTy+BWduZzCiJYwEgwE53vLP5kxCcgXw0obMWaFgIEas4E6FTQVOCdUp0zMVitI8Xo2a8zeQixhOYpIRQak5iVh1AcDxj7vC2MzMB/0cAu1NajOdLCA45OjFE5rZTMt1UWjS0F8+rn7dyctQriy/GCqDddpxRrlET67t2kDlmzzASCiYp2lgiUHjkLlECKmElCGBOmxxd9llxjZvL7pUgpCp1bNKWmCIZO1fRc3yKsGmtx7d9/Huj31wsQD/9nwrENC4wlENSNVMbYd27O/x4guHhEJQTmgR81vUUYWiMk96GGxvr8eLde3Hj+vVYWuxjTS/mqmEb6zfi1rUXYmP1NooW4AzI5WYA5QL3tWgfwOmFS38+iOHJu4DY92NWeUyxYCyARMZUSW8XwFqEqZ5Xh/xplhQsMswU4eViD1rP5jdsAeAtU5N2ajYwUhDs5iGslxdjpY0Rsfqr4mMv/4Z4/YUfjJfvfmfcvvHxWF27hxBZjbIz6hFy53qrTBcGiK3rhQXEXqWOMUm9wNi+PoXQTWR9AW/oFc80IwoUhYbCJIWlvfP8m3KmLO2wC4jcfILnEwj4O8dcAcmdzwUgZefV/m4WDls67/F6xbOfeV5xDqDheRSh5/J6BF8926yaSikNGB6SQAF54FGvCeoS4zbiT/zRPxm/85//PfGJNz4Z3UY31peW4ngc8YVfeho//Xe/Fl/64rvx8MFO7O9OuG853vjoJ2Pj+rV4/SOvxebWQwysbdrnCHD1PsbqabQXMCJXXord07vxX//1rXjh4789Skuvx/ILr8bO0TBMNdTtr+SKVV/+4pdibamP0hvFL3/uF+Mjn/pY7E0ncQHvHyJwHz7di7svvgGPlFF0KBfKPKfMNYwtU6bpzdL4NlenE/XMfQzDKCY4V8SiHtG/CmxIgn6I9EJp+Luhk5ArAlNAKrd61vZUSGcYS7MS+4P99GY6jIeKiH6/Ay+sIdu4Dlre2sZ4HU9QYLQn9yOxMAKdeHMSjx69HX/hv/kJ5NUAA9DQDgx6yiv/2VeOIklvCYp4p2mSOEmf9QDZ9FvNEQMNKlfu2UFh6Ak7ph8cNYK/OW9uVgHoh7QKgcyPXfxjP4c6q+b6bUe0+9QVpYuIje4i9NGF5qiDTgr39NA24DFkBtg79ypyo9bUgwTfKUekJUCWzaescRcQWxdp08++X1ou6NP/rKc8oAyU34p+cLsCo5fs8eH5jJG9OnLO88WukgSE8Vj3q/PPu0kfDoWrfC1HDUJpIE/7GMcb0PaN1bW4iQy8vr6WC4GcYhwdQpu70NbhySRG6J0h8tOjKeiGkyGGw2EcAFYPhvvxZOdZbB/uxS6fd4Z78fRgO57ubcWOS4UfD2PEvRMBGvrqCOloGq4zZCOqNDOgWWlIgrakjW0UC+k/AIheNOkGhZLtrsFvLuW9/f3Y3d9LOnUETdmi11ggmxkoeEb2F3urgQ6l4ZysI705EU6DRC+7E+DMIas+NUe5dCcIdsRFWZl0iByVf7aebaMfLzBgV3Iyme9w0pm0uLK8DK1jBKqvAL3La+jJ11+LG7dvxb2XbmX6K5fMzpEKQP2UfTIHyE73MB7cDygLOhRcMJ7tcX6bdt7m8y44BPlxMszj/MQc7rPM1epql+4IfoBtmzqhk+jTMpjDY6UGOKt2AavUn+NFuYN8cMlfJ7oJ6E/C1Q33tseAbMA0INuJ+4bcKW8d3RMPuKKn7erclcVuJxbbYBuzACBXTZeWS2TTznXkhjxpW9o2Oh/EJzpmnaDn/BCdaIXe0TuuPnS0SCNcYXNF1NwFgYvNJAefkp5ZpLdxy/KC2X70njpyYOYi8ZhgOZfnhu+cGyIfZr52HlKkFEMeIBTMUby8uBQra6ux7mph6Po+8ti0lMurK3Htxo2U9aa+1BHl6I4E4zLchkL43Ty07mbKuKpvAWa9+Pk3mu0f3QRCCkePKSCpzAkPVejbRiYwvqAyZVrVGfXJOLSbQxsSbqG/YQIUxumsAhEdQUTTGM4OUS5PYMZNAIKTJAwi5npBMYzXxBRpV5eiVXPiUY9fBaFTOr6OUpUxUD5ceTYHuIxA8yiq4/F57O7uQjgAzngQZ82/TYdsxuxgOc6PBRvTqDu7sFeJ7nKzqBOgqd06j5P5AEDnMAWKj8KaK9cYpoaTlU4BvlMYcYz1CX24RroC5LgJsXWw8EpYxVgoZ9TPiUd6ZXPikd5TKZFN76hpOiSpjHfBypF40MMAzHIs1Ppxa/lmLF0sROsMjQABjeIgdsvbMeDvyF4+ridYsz9Mzq2RkXlveapeWVNy1Z1IQz8Yt6tw8f2pANhc7MJ4nR5gtYqkc5Ukl6us6wWeteJTb/xaBMe1JHLDF44F6AJUycKcu/U1lBj9ojGBcNEbaX+k9FLiUaeGChuN2Gs1YxUCvg3h3r1zK15A8Ny5eTMTuV9L4X4nrm/ci421Owgqg/Uh7DMU7
4X5CfcRLo9iMH87diZfYX87BqebMY19XkPhVGYyskMWhg9gPDiU5PCubWNfyMiOKHidAKLTbUav3y4Mm3o/wx6MaSqfY6xUr8f1/rfHa9d/U9xdeymu9zZQRt3oUJ8+Ate41z6AttNbjUZrkef1UNYO0elNqXOEYmkChwfPaHNBbAJZhcbl7pbpsq4+Xx6fd7vswg/78mrzq3GLxdHvgtFiTyxy1TVcm5+9hi/usFl+d3MlLAWkysMwA8EwRn4OafvdeNF6gyMn3QVDCT58Ni+S1p48GcR/9B/84fiRH/xN8XM/+8vxV//7vx1/9S//rfi//if/efyRP/TH4z/7k/+3+O//yl+Pt996P1ZX1uNXf8/3x7d923cCRPvwZhuA+zim8JbhF6bBMaVMA8HZbG/EJF6Jz7zdiI9+5++IpxMMnmt3Y055dlHqrvsNREqD9oN33o/bN+/EW2++iwJYAECsxaOn24CtlXj0+HEKXr0nIwCua9trjMxQfgcobtP7aZAJhHcPnnKNKyoBMCE5yItG4jPtdTA6BmxygvrrKYHcUqko0B1x0CvpZ3fbx+FnFY6hN474TGZnCQa3dpEKyNBllLXGAfgYmhU4wJHLK3YKfMH9gOlAsVUAm6tr3TjY3Yr/8if+NOBySDv2OaKEj5SNKEBkl7sOBD3KaZRTCEOT5BFX99IZIEjc4TmHACXnOrTbGGeAYvdMo8Rmn+aOQh3PD+CkCW1KgZFjDT1FXeraOI2FJcq7WIM/IrDvuEbvDPXwCGCtNpCzjsBQP4FrsUNjCWSL3fLk/nV8kbSau/RMO/AfzZTs72VXu+1cfOY/KD0Pl9sVfV9d637VN8V9X8efHx7z8FybK20NoZkh/Xolc5wEpdFsnHiHvlxdWEpjfn19PcySof4cHZkJZYaxDr3Rd4LSA+hwZzqMZ6OD2KRfNvd34+HuZjzBuNuZHsbu5DC2ALR70LyrIDqkfw59XdC2Ot+kxcyLDehRnqZRhR5JfZE6XN1R5f2GLkTmnR3PjgBv9E3zIo5PB7G98ySebj6O4SH9jXx1noi5S5eWlYG9NNyb6BgnFelNVNbabh6llVx2F9mXwNeQiMk46c/0hOoVk+HPMgYV4IkxabkODgDNAN/1jbVYW1vJ59iOglr1iykc68gCwxcMp2lDaOsb12NhEb1OfxeGCnRZxXCCPhstMATGX7WKcj0bIQfljRmyjfrUjvkN/Q7dZj56TACDfUz9l+FrgkfatdjOw0lubWSTu4teuDyxursC6DOdmvncTeWnPjEfuSFXpQt0MDr9FNV5+HQY430dXPTTOXLTxYTQGWKjUwj5QjBbMmPTeXSbvA9e0TMriNIJA8BJ72kTHWN7ixn0kDZzXkw9jgbTGGMMaEyYW9v2/FDf0+HqHDEAxUpQm3WjvUpXx+Spyx1ioijgES6GQFwFzBE0ZaYe2DEyU6fRRbWCcY8BjUy5KJtEjDpDa+ZnX8ZgW0YmOcn0OjS/cX09FpeXMJbbsbDQR+evxa0b19PAW+e8zsN0djZcJa8IRXRUUJ5X9l054ty+Gb50+zCbgZsVtXEkLM/ZmPMKReezudNKp1gKVPjMOJCY01EoCl94goI/r4Ue8DIE49BV5WwtqqeraRjq+j53la3KXpzXAShcc4I16yzCamkeGG0IASrfWI+F+l06eiWJxPjO8QggibVoLsWDwWG6qg1Q7gAu7OCFlUas9F6MXs/E5YOY77waJ4NXUIgIVd5VbQEW+2fRbaMEuPcAgbFwcyeWX38v9rGu5yMsKIg7AYAWizklJrTqpAGQBTxTwSmK6QjrrbIA4XSmMTs/hEmPsHQrgMgiIbxKw5W4BK5pGUElRRoMh/FqED+Eaczr/Iy2qcdafwOAdyvWUdStCyyU0lFsXWzF48qTGFaGGA0ogFkDZSpQ1vJScXG/oIk/03AsYAHV6BvkCddXAfzGcDqjkPKo2Gj65epyLPaWsnym+DFoO86xMI+68aM//L+JFzde4zcEIArRSU7pAaY8TqY7P1dTuaILAgYw2251k7Eq1qcFAKHP9PY6pGWy9kWEn4mbF3ocASzLC4BoPaKNHr8vIeAW0tg5uXDiIFYfQhmrAyqEgcwOcbTJ/jTGR9u08Qi60WCiPhCRFq106VCcQAGZnUA/Z0ZyTi+DlpXAwJmVzqh12LRJ/c2kIEhTKZZPm9Gp3Ik7a98d9258T6wgUMq06RmMe4yC4oHJWApN3XN1V9+h/gpZU44gDih7wWU5IYWy5GosEJATvq48dnk/16ZH2R5DUClnXnvt9fz9G23/yZ//g5efik3hdCmzkskz1hABaWygxqNDRG4CWoVCXmOZEsWypRuqeIZFFKB6NDl33oMwTeufL35XE2UN+Jz3IICcae4zteZP5ghjlMy//W/9wRgMAJXvPotnm4exvzdI78zSci9e++hr8S2f/mR867d/Ou699GLsoLB//u//YjwEZD568jju3nsRHodXKFsFY9gFMzzWMGg//3lAV+lurN75VIwpy8L1Xrz9wVvQjJWrxkJrKb74uS/FCzfuxsnsIt5981GGOcwQxFvb2/HGGy/Gux+8Sbup7Izbq8YB7zeW0MwjRwAKahEra/0YT3bj81/8DEI4YvUaMo+6ISqSvmw+FTUXY6yyX57TsNTeVGbaZuf8YBcI4KSfXPmIvj9BNkx5luAY1oeGNEKLRRLOTuYxQf/2MLyUB8PRLB5uTeA3rqNdGqJk3uWzHKZ78+13Elh853d/F99dJhd5iJGtB3h723ywtXjl1Zdz2NVJR73+Yg5xChacsCe9GJu8s7sdL77wYrz80suxtbNDvQoeNgOK+Wv3Dh7HW+///Wh2HJkpPK0CJYcqpTX50QlHhgrU4R/rbPugQ7PO0pWLe9hORayrEsvGci8UaoJKf/Iajh8aSnntBfTpCNrlczmXRw78K67jP9v8CpBCusVzOFrWq8135ea9Pobv7noFPZfxhOw/9jufL2b2z/7X/3HkAi3QpEntlX8ZL4/Rb/iRM9EdhnWW9xHHObJFx4Q5y6f0RYcCGuM6R9aO6Lu98SCeQpdP0W37AEFjDA0xGByh+wC+wK7ky8zUQFUEROZqdfVFuB421att3dA6yESHkJVTZ9TJ3OLmgJ4go8a8e+RSxqeGYnHvOXAGnX0EwJwMZtkOvW4LAGtmAOWd9GtM6Rl6Ab2H4JY+TOHm/JXNrc145Y2XATyjpB/1m95Bva/XNjYAKU1ArN5CZTVyyHBAwN0Xv/o56HYQK4CfFoZlOk8oZ2ZjgNjLJfQaRoEyQqeNev/OvXuA2Q0MTYCVXuHBhOdOaW9wwwIGdwPmKo/pxynYAb2PrsbkBJOY6cMRV2S6YWq05TnlzLkH7MaY6yhrCDixDFydrUFZSuoADI8K/VBHj9ah26rpH+1raQhdaVikK15plJp3v4pMalQaOUman1JfS285v8j3wQ+O9pSQGxV4p4a+r+gpOy8mNZ/RiRUAfIV3r5lqCzBoWIfx8GZHEHSa4mtwMOUdtZwMt7yk9xP9vgF4BAvobKpyve2oAaWsshDi
N0PJdBRYbkOEKmfQABijXDRVsWAEWGsIPtgHJEsvR9wvzeYEOq4bT9DVVEJM16YdXR57dbUXS6uL0V/tg7OgH1Ml9jpRg5Zc2t7JvjkvA7pv8N6ptA+ec86Vq7w6v8S4Xx0nyk1DMG0uJ95qLPz+/903sZzt6z/84o9Xqy2sj0YCKOO4HPo6r9AZNIAEWgaEVQV5ondT0kAYMtmMlzbmuoux/MoT7qGDKWD5vAdoA3jRacvlPpUCxC4+i9OFUcwgtmmugAHxntGJFzswpkv9rQIyrkEQG7R/ByIvxQhGMI51rlU7GmLZDZVpsbi0HDdu3o7rN+/GzeVXc8ix5kSOBoJk4TNR6bwVleON6J58e1x0dqMxeyMGmw9iKXPptWPU/um49gmEyLZeCRi5jGI2wT/PHu8exnAbK5V3XwCUKucLdOwgTtvTqK+hyMF3hxOsI5TCxQlWNpanoQXOIBX4ZkovvSoAWdNgnCFZdbtPYYptBNdFrR2rzY24E2vx4slK5mUdVQax2Xocm/XNGAPinD2IgZZenPGsTfkaCMhj3geDQX1+Vmno7VqEWLZMs4MGtF1MJeaSfC0IeBli6jbOYqP6iaiBeD94/LnYNd6lPIu7q7fiR177vRAaasYXoTnMQahgcvKL5W+0StGvQ6zdZdrHcACsVRjOnISGcHRbLuuKYcHRlEArgNc1rK8VLGyJM4dUEK4qZYcXKwAL6akQ9IVlf9LeiuHJE4T3IwT5kzi+0BvrBADBP8ACgZKTuVIpXwJ6+smYJ70QNDWC9wLBBvDFSHLijbG6zVofAdPBLnmUEzMWGzfjWu8VaOxWTCdKoHosrHcRTr0Yz1Ee0JbDbM4kdRWTEu/NXL41mBIAorbV2DGDg4aWxp5KK11HKNDUNKlUBLwypkoSxuc67zOY3jjul15+mYu/8SaYvVK8KipKksLR3a14j2f9DN3x0d8SHHhdURyuuQJaKv68nGdSLQUaFyrYnBTgORUarZ7vdMargMHPX7/5PCgl9veP4xOf+FT8pt/wz2BI1OPOrVfi4x//dLzyyhvxnd/1a+JbP/3puHbjJv13Hg8eb8YXv/zV+Oznv5BKSkH7ymuvAYIq8PQohwsd3tSg0iBwdvLpOQJy+YV4//GT2Lh1LZ7ubadSNR3P6TGCmX4ZH07ilXuvxD48a+5Hc6++8967cXCwG9evr8fW1jNA9VJ6kTR43K2X8WG5IAzlqAEK3nz7SwDgg+j2AoVQARQClJAJssUc+W0rm87NGEk/mz3DdrbLlX0ZN4ew14hwOFUwpmrudjQITwH3gFmJVqCLYVWpnsTtdcH7DP52IlgrNpY2AJmDePRskvfbh8mL3C8gtBuMH9zZ3Yn37r8Xr772EvJvI+WA7zV9nfHGg+F+LK8sxvBwRN0w5gBYxsBPM/xAeX4aTzefpZL81Ke/NSezCVjmxnZOkHsw1N7wURyM3oWniqXB9QBTzYK2eIbPEcD73gS5lCE9hbJV7oKSglIK+vGe4ihd+lXgL9t42kuyPdkFbfnZ44fgt7gw6Ze9eBp0zQl/8nfLV9xfHL9+8z5Z+eu3D3nBnQc8N5j9b/8o7wVE8ue98k8aWOyCbvsqJ6DSzmNk3AjZMpxOMWZ0MHAZNG6MtmB2gl47hO6H7MqgXEEMPWH6LV0RtpXGhYaGtKbDvgS9FvHu/OZ33lkAcuWrstH7Cl7XoHHxIGMTNXaL7BvQtW5aZKupFmvRQwf0om1Wj04HnVzP+jk5R6PU1bem02PAabHkrJMKBZ7j2SjWNtZiMgVE8teDnhb6LsrRof0rANmjnDR0DnZwotVkMovdvUN4agp4kVeXs9xTgLE6wRhK5YrpsAZDFyuZJZ8aj2un3r33Qs6RcL6HIndhsRfX1pcwQB3RETiixwWtXGP/l0R/tMU5GCS91NRdwjEVYJn66UAzZWiLZ9dpr4yMTRR1wmcMOAGscvHCMEiUDM/2mhYNn5OovIYOqOa9wTMA+8gg5YF55A3T6CAznFh7TlnM1JL6EIzl/VWwQYXe0jNs3LOeTydrr3WasYAedXnv7sIS/dKhXJXMeGIe2+X+Krq2i8xoJuBtL7WLpWy7tXTynUMrynq9vHBh8gaohXdCP36iPuZcdmipJB1QHnMgn04p4+w4F0NwcYMx/a+H1hArjfgjaNpYfVe18zldQOgyfb20CNheou9X7YvFqLepszt1p5LJYLaPy906qdilrs7pa0MYMuQPGWWMtCMCettPMHyudJV69psDsz9w+8clRu/OVBkwlDOOfXAGeHORk2BKWhp6xAAWpt7gWxIjUDaZTeZWuUqQZjyonGMpVJdiY+FaVNoQUwcQjAV1hnWfylUCo5WaVXPLYokAZluAWSAGla3BDCOE62G41rrKSG+KS+IZQHzz+o146d7Lcefmnbi7sBEL7YW0OpzgdFLZiePGZhz13ovZwpehzRsIWt5VGtLZd2JyikJa++lovfCVGIwQ9gC2BgCwUerG+bQRw2cnMdyTuVRoCAdjYeuApCWsjE4p5jDjaDKkXFgXWi/8nh6b7AB7QG8sbcM7nQBhUn2aMk4khLNKLLQWY729Egvn7ShNsPbadHCdulb3Y3C+C5gdZEyo4RyCSugLAXSENWMMq0PoxhLDgAA21/4XpLoShwaJUETAZKqkFtZ1D0ZYbNyKpd6deLT11Xh0iILquHxtPT5579PxrS9+H4Jzmn2R0p5yp7dPK1Emh9CcvWxeylTmvFtBJKPqAc1VT8xuAFGbR1DvbJHGRQ+/wL4QEsXYmCRNe14ATM+HMT3bh1EOYwZ4ddj0+GTId1N20fbUScvM3dniHjOTgF4FBF+mNOOYcdoIGr0IdYRxt2MqKJifepQACqXKmPbox7XF1+PF9e+K2yvfFv3G7Tij/jOYdgqNuVT0ZDQAVA0SxBozlB2mRqVNztXSCKxiK/pYEJT9LRfYbrnJQ1fgz93N+67u5Sy/PS+Y/eOAWYeMfLpPkMZ81dXrnIwm38pvqfUoYwouvtp/+WaLxAe9N3k/n7N4bEWcbXEsco2qDDnHc+374kUCYesqfTtk5dF+gQ+g+09963fFq698Kh4/PgAclsMlUd977wNA1X78zM9/Lr705XfjrXcfx8NHT+ODh5u5uhdSJT08n/iWj8VkPqR/MYLPzes8Q5li9GAE5dApwn0EMDNhv8NWDx/sxs0br2O4DShEJVwKUUBfR8k+fPAQg7FLf5/GO++8SdHlh1o8/OBh3NZzS1kbGJF6fmfjo0wBs9BbBKw1krY+85nPYHwb52e+5nMAbRtahH5pKwGTIQUClFP40TaxeRI8wB+Cb5Ul5AjPuyuUi5hReSVXSkQfO5Z1Cl9cqOxqp/HGPVcZKsXW7hGgM2Jj+Rp124dPp9lvrQZ9a3uzOyzvEpUC7Fyffn8r/sE/+Lm4//B9Xngar7x6L27fuZEzwhcWjeeuAnSv89sFBm4XYO8ky24sLS1iAK/RZvU4HGBY8+zXXn8FWaJXFvC9WKef3oyvvfWLUW0gL5HX2MP0BXwMXQmorlihyAbCd+qf4QL+xjX
F0YYrPrt5lFusmLf73qvzNCcAwHsSs7Cjgn1Z/saF+VLvL3bbwz23gkQLmrd8l3tOTuMaH/HhjcU3PhcPK8urec7vgNnf9Xxg9k//hT+a9KWRkSub8aIzd/hlDlBVczth2bCCASB1kFkHBLM6bgAYGBaZ4QCDfoJBNUHm5JA957KNKL/tY5E1JmvyI5+zPimWoDu+A5vgW+iP4l9lWcmLoNsM3YBeL5ClhqeVIOAqdJ4rd1JuUXUZA7RVWQEc3Y2VpdvI8WWejW4E3CjHnPjjanJ7ANDZzMWI1AMux4pE5/lHp0exAV0pF9wMszDPqU4q42KtkyGATgZTZgpsnXBo/uUOIMyFfZQlM+PbrR+7I8EC8N3dPXjQUU6zBZzFU4zSu3fvIt+76LpBdNsAxQ4MBhAcDfbicH83XGjFiVgm4Hf0SCeKYM7RJLMMGNtr3uZ6eRzYfeAODFmAVeUMEMxzKoC86sU8Sape1ViWT5WRgFb4swY/OvGzCrhzfoHZOgyfyREIKgALpDFRAs+4eFQT3m6rK5vUzSXZTya0adFXOjmKEdxyGj06dxwpsN+cFGaqzfZiH0N8Jbp92txr7XvafhGLWwPVSdg54tRFJ3dAYZRFHpIRUmbzLOn7QrmN3MoJY+pTwb0jvYYPiF2ghXMdXDOwE5b7YDxC7g7otxFl41rvhT8KXcw9lNfJwT10/2rGy/ajv2x2qaXo9PtxQTsZXlVjL4NT0kFCmdQxhhScczQ074zGkHadBGnIgTHQpiA0B3CODqeDrRJ/4Pd8E2D2te+/lWBWBi0sJBpWy5MOk8K0qsoOSegVo/KnWJCnWj/ZeVojdCwASgKSBZ3t7OQigazrIl9buxbRAYk3Z3FS49kyPoU+F7BhqdSrJ9EsL0a7fCPq5+s0VhMLBEGAIhsMd2I4dHEG8+pNAZAzhHwjXrh9L1564aW4sX4jbqKAnBhWPq8jEI5jWoKBmg9j0Hsrhgufi3r3rSgtfA3tMOe9r6FUFiLWfi4ulv4hoEgQBqAFyHbOlqI27sfRQSVGhzznZBZHEiFCqbPSjN56O06qR1jZB2lNmljZiV9HCAqxIH39YbuZGiudeTBNCzDlKldnR1h1NZRNezE2umvRL3fiZEi92pMYlQ5iUNqPw3NAAWBZgleomIh+AlA1ptchLZWEHgttrgSbWHLQQPaRdp7Gp5bb9dX1aFda0a624/bSd0UVZfXZd38qdp3Q0bkT1ePF+LWf/P54eeWlGMwHKTAytoZOLcIiVNQcZWgYJ2PrMGUzNiuBbvF+46jagFgBrUNGpqCSoTMXMe1ieATEkp60XGoWejouIcDPdmJ8+iSm59txfGF6tgl9J6iBeaiEws88fubT1DpMMHtijJeehRM+I7wheDe9xwLoVttk8D1oEcY/NxTFtfjLcXvxu+Mjd78/PnLnR+LWMmC2dSsF3miwGzu7T2I+NknzIWVFkKFQFCgCO3lCKetw3a9sxTsFsAWote0LjaqgEawWu7zgdV5f3HO1vfzK84HZP/Hn/gP+twzypm0Kg/Oq9MiyF5KLwyXfpeiCtzLMATq5Kp/X5n0W83KzSgL+HG5SOar8vZffBLTWXuu9eAb9QLv7fpWxCtymn819VwcQW42vfe1BfO5zX4uvvvkuYPb9eOeDBzmr+HCksnbSg3kyjzA4GrF2fT36i66adgz43YNOnME6h8aw9AFb8pD97FDn02dP4/bdWzm8VT3vxMriRmw/eQofNWKwZxaFtXj/ncdUtxq9hRa8cR57hwPoEQUA8z19sp1LvKpIM5cmClbgqaxTgDq57enWZrz3JjICpdNfbvH7CYqqnvGNJZSrs4Uh9aQ38EHeaxOreBP8I7BtuFzlhnbJmfucWl4pFpQ4mkGrnD+FbwaZ5/IMRV6Jj7/oKn2Ax0cF/60trMazp7vxbPs432d/2c25IVOcPGG4g3GnDbRxCRmzv7kZH9x/K778tS/EL/3yL8T+QQEWdveepYHY7VUBr71wIZnD4Xa88MKN+J7v+VXxHd/xqbh2fZnnO7mmEwsr9bh2qxdffvMX4yf/1l+MSmMMmLAdHKG5LAOb9PT13ACVFH9JY/I4/OAusGJP+ue3pCOuz1hHT7HbhtKhbZef8zu0l/TLNUi0BMUqGg5ymCSsmPJhRTx6FiLvF8z6PMv74bP9+cMjJz16g8cPG7fYnhfM/ik9s5Yu6yYuLHhiju4xp6bet0P0w8HU5UgLMOvkFvvde1yuFgosaEI5f8mYCbqgqQTybhTvKuQnl4kXRHFprsQF/TtBarHTT/mcHnx+t4E0RlOG8Qi9hW1kdLdaRxcArALZCAhrlM3kshgr3Vtxc+PV2Fh7CXruASzV64IO04YBRkq1pHtTNjmZ1SF+R73szRk6eQEQY6YbwZO5qe0/U8SNRgD40SiroZc2R+K43r51FM1hZeshLyp7DE0QNJ2yF7G4xtHrcQXY0SaPnzzJtF9vvPEGdTSDh4sguPDPo3jy8IMYDg4L8EOjyee2m+2nk8NJnun5AzTpdOvWz8AL6DKuqdAnysE2smeh10pgVgUgdtuG7JkDlvbMmFYMfsCpozhl5BaPQ9eYEg+cg17U4KjynJogtQ7fiInKGtDoJ7MU8NsJxrrzalSJmW2A/rCXUG+0mROv0PFXXlA/88wGRnUFq1j+MnY2V/eizGms8ExlgalK6/B8BXDrBONzzjuvxtzD0oXp1ly05pR3qONyqVpwy/HsiBfBY+hp50AVYPaoyF08AftwrbIwaZSW1VjSEDoD8Op57rcwSLr9XLDBzBaGFlSR7yXazXCbMo2kD0u2K7L9FLR8QX864u1qmfKPesdQBPlZj//JienAHMGlYajzH/i93wSYffUHALO8RO6yQTOlkcVXoPryMwiQlyqFzkXUvgSiS4ahpCWISwYUzJYpvfZQFc2w0L0GqLobSyuAx8Y8juvTOC7TYLCyxG9+RWNPanR63QwG59ejerqUuePGo0EcDrZiONqL2cSO0TPppKtphgNc31jP4WzRflS3crjGoRyD8g+PxjHAwhpDUMeAsnZvEuXeblRW3olp7bNx2vlcVFY/D6gdRaP0CUD0TjSOatEa0yGTTlSOKlh5QyzrPSztw3SbL93sR2e5EeOTURyM9ikLxHBkWijAP6AzLSE6WjFlnGwmJhYQwrxnFzAeBKB1Zb5TVw9xsku31ed7JQa1URzVTmJWncccwOpiCqb+yBx607OY0bEK9SLIH6uGdrc7GggDg+wdKs4BC/pPpX5743ZcX76ek9ha5S4A7lfFk4O34rMPfyYuIPqT+SIA7xPxXa9/d4YozBSCMEiCSI0MhGsBiiA+FHAxhMULSwXgVWjm7rUJgvlROqFQKVQV7pRZz2fuGCRmGnD4jF6MyTlGysnjGJ4+pg92sBTNSOCuEeVQnMrhyhNmfNclmOVdxsc6HOVmAmktvYWFTlqlesJ0zkOkUcGwUVDfWf9IfOTmb47Xbn13XFu8EcbZS9TnCBa9sePhJMaDg6y3/eVEtpx568wK6pR0mpVXoNsufi7q6gVXHlrDCB
TUCvwiv6xtVlzrvV+/vfzKK5ef/pe3P/Ff/If8Xyhw20NhIhCgSDyRd3ukIGZyULkmcMjN64rP/k8z5v15L99/RZlnDVLAmC3Bs7l7E9+vPLUK3ATS+SftqYgFVi3aagnAdI3ntwBM9EsF4d4GFfLQGTLCDCZjl3fmqJG3vLoQL7x4FzDbi52dZ9TBtGl1+rIeG9duRK+3zLPgWYTkLrxPp1CYRhxsDWN9YS227t+PBWj4bIQsGZxjsC3F1758P+7efSlmgLfOwmI83jwA0PVjPplS4bO4wXOVaw6jD4eDVGyOcByjMOG0+OKXPx/jPc73IlzqVt41HGCCHIQiUxg3EdL9xW56ZjR4plMUMorE4Xk9C45QSEPGewnEYO9YWy9maBu2JYscXVTjYEwb8nljtYyBtRSL/XY8egYPHGuE9mmTw9jZhhZpe/k5aRoAi0ynHPSeDoYygAOBLL8srVHo2hltPMDYPwT8P4ovfelz8ZWvfDY+95lfiJ//+38nfuEX/0Z84Us/F//wsz8VP/tzfyM+87m/F/cffQnQfx+A+zDee/DZ+Nm//5PxX/3FPxV/92f/ZqzdgA8AsnUzf9AWGjbSA41V0Af/JWlLJ/mThhZNDa8qSpI+/U1iY/NjIbG8qfjuluEEnrJuPtOPSYx8uDyhxzcJlO8+UwB49fz/MZgtnmd/+d3beEXSszIM7sx35G6DyhDef7n/2L/wfGD2T/6lP1Q8lBf5GFW9hswU5TxOPTQEyALm2PXMTjGKTvg928B7AjlbMF8+JzNcWE5+S5mBTij4F5lvZfgsmDUsyIlH68srsba0Cgi9GddWrhWGG/JH2WsNNaSqPMclVfutTqx1Aa2dxejX0BdmZmlF4eioLESvtRErC7czNaIgYjA4Aqhcxmn2FtAvPejcTBhNnuwiBIbEmCHoBMNpH17hjYIq3tXtwR8YiK6S9eTJJrJ/BgC9znN6PBewCS/evHETMGWsOnWijMpXY7k1DPf29+Lw4BAeOuZZrtylQQsj0gS7O9vZXp/4xMfBA5Ps1/HwMB49vB87W5uANHVYmTooz/RSw3fqY9s5iUDaQZ/4TMqbKRYF94DlLuVbWlqAX52MtgZecDTSFI7ocYNJyzo4NEamlGmOUQAtOeqKMam+ybhx+syc1I6IdhZoe86bArIOtqk5qqHu1EuOMC6bOQgccH5Rp4xVnkmfUcwi33cxwpiZBRAKGkFmU5mjR9U1Oo9aXJxGNCA688r3TcvV4LnoIDBCejfRkY7uOirkcthTJ3IBaDO3K8+aT47iHLlUpr0q0JYj7n6/OEY+0UZ6c1P3w0jSt9pBAyzB5uwEOqxhSLVjCTm9gNzKyeHsgtgK5dFwMG5bYC1ucYSXbqQOPNvRBHhPzabO0gGpg9PdPjueQ2eUxfy2emb/rW8GzL72A7d/XGZXYZnzTSLKZQrZ88+VmrjQAgh6LhDmCWK8m8IUbnOBrEq8YCiVuyl4VtevR7Nfi5MajVkdI9BnCUoyhZTB83RQq0JnXKwhk69honRoaATCcCsGgEY7IRUpzHN6aszXMQ1XjcWlNo2EAJkexuPZe7EzfRbbs4exO38MmAXc0pGu1Vwvr0JMKtclzNRZjEufj4vOO1FrDyn6DMANle+P4uIQxftsHKeHs7gw88LoGe9/jNgZx7WXX4jF9QUeV8Li1lvs0OgZjX6W3lm9TumR5WoBn94hhxoFqsbxTdM8wcpCIEwgzme7u+ESb65i1lroxonDiZqACgKEDaqTNm9gOQE83SXO7FgeI8BgN4erw6QtBJSzSZV99pLh3YvtpWLp3fNurPRuUJ7l+Adv/mQ8Gr4Do62juCvxPR/50Xjl2mu8h3YASChDRUd6QrP/+UvPK8wlsFUz+Y5iOFXvvf0hyERAJN3AfJfMYkiEKbKcaDIYDOLg8AnAZhbz0iRmcRjDk83Ynz8ERG/FWWUCaIXJMyUJz83ddxQrhKQXFg0mcPZ9OcmLshkL6HCMizaYdziHgupHCFWALKChX70Vt5e/K165/gNxd+XbY7W3CoM7Ae5xDA4fQDf7MRlikExrgLBnCag6ejwQbgo4A+9tklxNDJq27lfbr4BUNz/L9Pb1FaAt9vyVaxVCX789L5j94//5f5RHy1DwADvA1LJclSfDB4qPWZSiaKoKe8x7L4Ese3FPITgtlw4gFX4OF3PSc96uZ+PyynyIYFhlncCeP8GYCegx4KPb30AR3KEPnTDSoa2qCOBpDFA4DtOXoKELhL9DZQ4tNjuNHBq7mlFvPktroCFhtpK9/QN4f5zZIrZ39jNsZDqAzmaAXgr7FOXVqTfoO2cLV2N/vxiivHZ9hc+PUxm9+7UHsQAPHOx5fy89WE4WcZlqwaBgdjwZ5DDmaHQYX/7qF7Idq92IFrKlgsVzjPJy3TjIkF3DDvGBABe45gsBwfaryhMWAezCs7ShHmXxt9k0XNrRDCmIBvj+DOP4PJw/UW0EAOI8Xr3eBZgsIn7Mf3mSivBwfxbbWyhP5KkLz1TUirS3XaIHWBsLFkdqI2N4bzfDIgC3AN2FxQ58Ab/yeXV9EQVXiv6yHlwV4Tz6CxVkzkUcDjbj7Xc+H2+/+9n4ylu/HL/0uX8QT7bfjEYHBV13ZMSUYmfUl9fQ5uoGaUfash085uc8As4giKQx+jtpk+LlNZdHCsR/blCU3y+PV+DNuiXtXR79oKLzNo34vIXN5wpmr96hOkya5gLLdQVmbRfLBul9eG/xfMvBPfL01UO+bntuMPsX/3A+74KX+nyqnvwxR27qnR0DYHOVrlPkC/LKhQ9UoNl2FExj3DpfhRBBXpSp4K+Ma6VgfnbLNuYzvQjwBDz0F+LW+s2cOKzTYsm5IjVnejh6ohcOYkuvIH2NHFvu9GOtv5THXh35Bog6o28b5Tb6tpVzCmqVHjKhjfyOmI5Po0XH50RiZKuAz3y6jrx69D25qiR1GI4HGWO7srrEtYahFd7VRw+eAoZ7sbysMYd63d9P3dADoBrn6vwXWi/r5ybQ8fcnTx6nzugvAMBX1xIM68RQp6hHnNRpVo4VdJix6xpwDx68H8+ePkk9ofPDmfjHZ4NwUtn8HPkiCE332Ukc8d2JVsfInFPbXvoAEMpj5SYGLtVzsnUAWk8Mezsa0p5m2nFE99Lh4ogfMuQEQD45HlFuZ/sjKRDMZoUxrGhxGV3SqAJoNQR1+iErLjGTE9Ca/bscm7RhmbLoKKimPnO1PidQ95ZWKFglZYKjWnpSpV1Ds6x3O8kY/hdEd+pR64odKsGjkM3GOp+ld3U6N4ylyEgwRh4bM2+GAlPK6VwrgWEa8EJFpnL0l/dV+Z58R8eZrrAGLtE4MjRKAtZRdcozu41WhjssY/CYsaDrcrqA2yr1v2g7ukAPc4/GnqEnhj050TgJggc6juyf3lczyxSjWufJL/Mp7+G8fW84xh/4vd/Eogmv/uAdwKwTfwBoVFYwI/MJaEXlGBh8gUn8JrLOs34Gvlk4X8ofJ70wr9MjYWqNbr8dla6Tw8aAGcAsRKXA93FG2dIfuTBC+Ww1qscrc
TorwVAotNEWje5qXDbiMYyPFihDSI3jzPVoOg6JdjjbjqezXcDRduyfPYyD8wcxKW3TIbO0jjoI/tPKozg/XcRKWYDIschiMWoX16J8inVTfhbD+wihg2nsPgNA728hjPZi5+AJltFhzjK+/smPRRXFd3x2FAejg4wB0itrYL0TgDK+VaLFChPIYnjQ7wAauIM3xAzGqXuS9jk4OIjNrUGM5ocwEZ2Mcj++QCEJfE0JBuCtVbrRrqGMan0Y3ewBWG8O2SOoFHp6ePtYzgJZhw9UqopVZ+yfUS7XxG/Tpgut67Gx+ELc3/ta/PK7fzNOqg6nXI/2aTN+6NO/MRb5/fjsAGJr827FMj2X7v+ifxwaMM5IwJqCln4+5becSMN7FMoOB0BxtCuMMx3HeDTKeGKF0sFgL9MfDY4349hUZpVBTMrbmXJrfLJdZDRIDxOEywv1OOt9VDEqmIyHNUev7asBZKEspmDEjA1apQoPxFEKkgwkL7Vg0JVYqr8Sd5e+I24vfSr6TYX1RRzsP4mnW2/F3sGjzGc4G8PAJxgQMUkhodVrAL0C3KMvSy847VsAUmm7aKdigzaT8uk72qrwvOuZVXEW12kI/IrHtNieF8z+sT/3f8o3ervPuAKz/CsUIG9WKV5tlqR4Lz9y8BptIPWiez6MHwoVbOmTa/P5CXB9otd4PX0uilVB5uQmvmd4AT8r+zTdzo2WL3Wi17kBrwpIUZAATWdaN9tdBCn8UDMvdCk6DkHVnOmNEEXZzmbjnLyg8ecKNz2AmItDSDMOnSkFj0aAq7NONHmHsYHDw2eAuTJCmqKhhM+iHVsYhh/91G2UyjMMk4NYrHdj9GQUt9duce4Y4OxklbPYP9jNeOj0yipSENAaMO+9905sP32GQOE7YNY4ONgtZhjPM+qpHWqTSPsOk7p0rWec1OGQIfYWbV5MIpNelAOCFoEtV8DDFe4tZw7KESKMA20EmOVdL2004trqSkyRcQPAuQBkOJjHs2caxTyE9rDb5D95wvh7bCzaCeOD8kKuyCqMN2SO4FmvRyo57nIpUb1aKlrFObiE+sI/CHPwUNy4vUifoKzqp3HnhTXOn8TBcBbLq75AOePreY80QllypyxZLI42it81aiSVM9pCUeB92b7KP/kDZXTFC8UdHnxe4fVxy7bK5ivoNyeO+Zu3J5oroE/2w+UHW1fezJNsPkt9m8/h6GeNfpng6tn8n7QuEevFVFF67ur8739uMPuH8pmFXOBZlNfRrJwLYttDO3Qmu94sXy69UWf608287WX1gd/5px71WYU3VnPRZ2tAqUMABJTVcAEXwVhfXoW27wBmb8b60kbUyi1kNs/E6hLQHRv2xp8TmzTOFwDAgtq2DqOALwGzJWjcFR5dttRFFFyy00VNzJd6hNFoSijBld5S5X56yCioss1zTgIyHlUQsmjqpRsb/FZBThzHzt4uOnQaN67fwnjtxFOA5s7OdgJd51VchRDYfspb6ykQVC/u7e1hFLZifW0504EZ7ui7vEce2OfZLqqwvvoS99Z5HwB48yHvfJoxyxrOqS8utmh7dHLtFP7lGedj+M75GICmmMeIa6b00eRsHhNHcM9n6KRR7M8GsT3ZjePhVkzGe8gL7gHMnp1N4b+ZPUtHqhvNa++kvVGYdm/KZ5ey1zB3r9cBiZCWoxowBrqS36iLDr4WBkWlcot2rcXJWZPnV5GJ9rihcub05vc0YOF99QjtbDiUxoFxqLk0Le2cWYcoi6EFdGwcVwCxEMKUth1NqJMOQXbB/XBkyAfl1KA33AMBqge2joxtlWt5rMC/xtPWwSvKMDGenl/lZ6WFjqUMZ5THiYsn0FgHbLdAHy20exliaBaFMuWWxqemR5C2k72kdbAJ914ZxqYvVb84WmaWqvFklJ5ocecMeXU6R2YiTDK0gr9vKmbWMAMFjy7kXN2BY8HnKrFTgJ/DSCJliIoOVaX5J5B1SNghSt3BKlsT/xbr6CPUEfCCtUqrkmDmtDLPpfmMfVQqNahYUyUw78bFrBen01Yc0RGT6Tb7ARU2aLgflfo0FUCzcxGtHkTSlkhHGBMj1MUwhensAgAVgqU5vFnEljR02VuHeZvGmUN0W5QJkGuamROU7OFyHDyox97TAwhK78kwxoC7yYUgeYBSbsUL916N7iv3cnbpeA7xYt1MJlOsHKy+OUxO28wg9Fz5DEWRK1zRdjn7HdK4AKgax9KgY44OhjHYO0iPVtKvM4+4L4cbBJAKXojJoSQnMTWbHYSAE7naYXooBYLCxFd1ms0EtDrvEakQifKznKk1jB2+tfpKrPawAAEDX9r6ydiafZW+uBYx7sfHb78Qn3zp25G1SzG+2I96GVDCvQ69JQ3MsF7V0jxb6JMAE0IXyJpuxDjYAsgigPnNMAA9Xw7h7u3tYInv8nmPPkQYoFRntT2ANMZMdSfmlR36CmBxPkEJSHWAjQuFFSQBTaQXlv3E9yFICxBNu8oU9GuuKgPjZOotwIheMBmlIZC6WI3SdC3K8/XolW/FGvXtNdr0RTG5a3t7JycWuAToyGFq6uGEgDaWrbHHTgjRC6qH1XAOGSkVFrQualTwXnmTUtO6yXB0uEzqbzlMyn38kD97LO7z92J/3pjZPy6YvXyM7XAFZHMr9HT+zk8Woygvxzx39Zs7N+U1fC6u4s/f+RG2BtD7mUeqs7zOPuBc1oKbCsCioub7ZfWk1HqtQz8foVjvJq2aLktLWw+OyxO2Ootc00xDxJX39HA7iVNA6Yx/Fy1xxKXbN26d6xBoeiHMyDEdI7BH0N4RzzyiTyjHo0f3MbDqXKfiRADDE4fD3fjWT78Wb735hTiHDqeUp4Qxt4Ki3x8PcxEC6cVRJsMKfN9oNEDRtgDgk/js5z6D3NFdinzpOXTH3kSJ0xhj6gxJUJ/LOtMAsoX5W0135DkdYQmeZHf7136hcb0uQRk068IwMxh+wn3ol2hwzcZixD3A7I21FeiwhCG9l0JhBOJ98oyHqugpk15c28py1Zt6OqBH+4AyeRSvNZoFnaaXmHKkBxeZk1kQKAM6J8tn2Tp0k5POprMZcloQDihB2Tq5U8BrLOMpChiWS1qQjHV0JMHxT9pJjuA3acrr6LaU6YiHpDHliH8Fn0h08oC/FeCvoCSexM+5802C/PA7/xXhBfmw3KU3PiWY9RkW5h8FsyrL4n7bJPvk6tm5WR7aj3uUZ/al39085/a8YPZP/YU/ROV5RjbCZRHk+6sdg6tKO7pIikOtylD7Iwvju6R/O4TfE3Dzg6C9WGKUMnqGihRgllsQjsr5pV4vlpeW4s76i7G+cgP5vxTnR8j8TAupsWUsJGDmFB6DfhzybyPfzN5RRT+b5kgPq96zZgX9UkZmmsHItqT/juEzw9sc8TJ+XTpy1M9RN72KaTBQjzbgRbA7hL/k5aWVReSDk5DnCUpfuvtG6smnzzYpkx5Zc9UCujUQAX2O6hm7KkhzOzk+Ss+roNW4WBe48bPtMgV4Kfv0Envugw8+iPsP9+h7QNX5NLZ2HsTu4VPaGSKsoZfQxc0+
YB5DrdGl/OUjdPlBDGb7MTkeoHsmMYWeJrTR6BR9fzIA2I45j34CQwh8S+MDaBnwCtDVK3tyiq5wAaOLY9oKXIOOd6nhI/SWk5md5e8ooiGQs+MJFZqlLeMEsjM+FxPF5bVmhoRMJ336VL3XBBzWuYdyI7vUceqS8zJADsNBj3sVHeeoj6vFaeyMJ/Qt5XcE4IQ6u2jSBXLh2HO0r6MDQwzjEYBzNDbkRc+soV6OmgqCeTZCS9ZqQgttrOMEs8ge8+jWSzXe46ir4VPlqNNvFcpg/OsxTOPExmOeKV11mq4i18p+rADEZbiTCwwF2iWNUL6jtZBT4kLoOWlavGBbFelWHeV2V+6LgRzRvTg1lCtJg60cv//3PL9nFj2hsEHEqDC/br86pwfVnHonNJa7aSZybXz2E5EVTKIFqtLLBwroqNtZGYsEiDmDaCQyYyb12KbwYC9AQDXmY4hrBFgcAWTprCMsHolAhq4BUkx+3+2Zdqodi6swYRflWoUpqkN6ZBad6mnaE2dnWDsXy3FWuUv52C+u85yFqJ8sQWAjwOcES3SFa/U+fAAY/ocw3E9j2ZUghHqUekj9JRRlF+DOxw5K9datV6OUViAdKTHQyip2E/dnKhGJD6WhgFaQKoQLIY7wPVd4wrTWE4I9AkS58lcfHtZjPB+PY3fnaaaesnMdajG+xTgX2z2H91Fa/c5qDiu9eOfVuLFxO4WJabHWV1YRHPdyFqkhILrmBQRavCsra2GqlZ3tw9iZvxvNpTKMtBhHo/N4+c49gJ99Yf9ZTiWmQEwwh7LWy6pVNxrBGEOeqSED6E5QpKh1E+gK+CpxOMKQ4FoF0t7Bbk4+ORgA2mHsXO6ySfNXZ3EEqJyX9umfMfVC89EumDRYwBK3MY3FMVcCgbnTA6ympE0t29WStF36yaOpvjgNkAqYqx/Vs404Hl3HaFiOoyFGxgQBNn4vDkZv08f3AbIItf0SYLZCvXz+FJBwhIJYB/As0N4u3edEBRiWoyDMfsgNQfOPbyqpVLy5f70HSvKWpwpa/5VrfuX359lsd/cjhI/gCX2VMMBdPZpKD4EhyKcwio0EOLxRXcv7aB/71q1guey/omx89xlXu6AEZZayeA4dmMkjjazypQcSMZ6ghWdgZBo/l5PlSk7UfBCttjlH5yg76iwWMyYMgejSlwu9bgoyQbJ0aztczXAucgHTdijJvYNDeHwtDlwHfKIRWo9nj5EdM4Al/bW3A00OjCd1yOoktp6+HS/d24jZAKVz1Ijr115DsZ3EwvXrcSJtwAcC3hJlcX11jSf52Pc6svDLv/hLcYZglcewinlmQU/y89ERxpNNRzsJoGxvLsnlSBHV0DIglbqVOuWYcn53fJqhQ5mHmPuUf6dcMz6lrhi2sF3yWYNnVY2PxYAyrsxcSws8cwE6W6BdmtRtzjUzhzuRa7QCz9IDw/0ZywvHIb7EQu7olALU8QyalrpSv/SMsCOb05PLbzoCxQ4C8yoPMxe08qDZ6Een0eNSwA7ldCJIkzqA56kl9HCiB+4sATxQmZOi4gIZ2yZ6lebQx5w6aQBpHBk7q6GbstBb2F2UoZg4ym47p4KzXBgPKk1o2M+F7ijuxaYsdq5zL8KdaAcOhX7i6HfaO4c/OSG5pzjjB8Me1InGpPLEbKe8nvrJNxnbz27N3J93K1Ffc3Q6IiRdW5D0nrMrh9UG5pQ1pRFiIY0jvVwO2xpP7iI3LjvqKIO/e8zday55IXO70qApP5SzNEBVANHppTc20HUXx3WM+H606ktxMinF4TZgaqCHy/4o5GmGyCSxCJIqcUbbu3hOt44ubXfRLejVDjqpcwrPmmd7HxC0T38bvnMCzdSiv6AONpzLdFoLsYIBprx76YWXoZ9WPH28SauUKXs9J2sKaNUHAloXjdCIMRTB/nNhn1wURZ06B7BCRENkwR73rKyvxe17d3kUoIi2GwzH0EgNGdKKbUdND/fj5o1rAKpZfO5Ln4+f+6WfiYc79wMJFHvTB7GFnJ+VNgFT+xwPY15BdlTHcVobx3F5mN89Ti+2cx9fPI1JbHF+O06be3HRH0VpcQIQhv77gPYF3g1/V+rIS3hKr6774AjgzbHaqYER+tFcqKfhKd8KJreeUp+9vQyvUH/qtb5+/WbcuHE7ut2V6JXQ36U2mo++rPRptxUYeSlGJ/XYPDiK/cGU9jO8AI7D2HBxBttXEDhFNx5MANecOaM/x+iqfdrvEMDscQsZatjS3iFAf38YW+i8Qx038gt9zyOzHxrpBXbhBwwr2lhDR+++SxmbpsvxTheMcJnm2iW/6jxa6HczpECdnEsEnzrpcR57g8PYPtiJ/eFhmA1h72A/22B4OIjxAKMBehDfyCNFtiAxpjgHbuFZGjhHAFnYEno/R0ZB9fJpcuzzb5WP/ZoXfxzuwXowQFmwCnKXSVV8vLxBw6uYXLIzpRVkrlWaSo796ByFgQCqNAzkdbhDKdWP3sq16C5jPXYHMBEgEeWEwQOQOoOZVIdUDEU12nRmovcZ76EQoBIoo0b3JOr9g1ir70a5fxzz7hwrCuLAWlrAImlDaGf109hGcJu7zHg9Y4LMf1ZFARyfTWN8RiefuGLQabRQBP3GSXQQxmfjeuw+vYjDQ6yOdYQFz3MmfL26iKLDKur2Y+O1e7H66o30vGp1TUbjGEMkrqpTzC5H2NOZcX4Q3Ra2s22D8GkZB0F9JT49qS2sUIdDd7jfppGgaM5CyehxaSBg6nWU+JyONcYNoprrGa5hYRuzojIZUf6VqMxp03I3Xn+tF9c2aCce1Ot0Yv3iZgwPNmOn/AAjYzHeWPot0bxYiK3KT8eDw/cQlgsxGk7jRv9m/NC3/LPRLl2nPM+irFJVWCJAHQaxn6cwS+b8xEKzjKfnGBf0vQpEQGvPORtT4Gny7On4UWw9fhSP3n8Y+1sHEDmgvXwYp60d+n4nZs0nGB8j+gci5hmSkTRazJSk52wMmNWE2ibXPgFIaKnmLEr2LgxtWqFeH8CKRnbol0Jg6FQKD8TRx6JxfjNqp9di8PQkJnvc00BZ058ZczmvIqARQqP9OBw8BUjtwzS0W9e16lux5BA4TOuEpUOsci14PWFmyKjS1g7NUuxUd+mxRbFkWh61OeWghTj62Svc6RcargC4aCt363e5v/zKi970Dbc/+mf/g1RmAlMOH7K1b3BLpe27+N/fi7LwH+WwvCrBvI/P7m7IDcrkPRzhMx+a5/zOA91t4HwHNxdhH5cP9x8CxkluApIkDn5y+HFxYQVhpOGrYaVlXUlPTQFiNOpU+AVwuAI15l2swOfr66uZJmrr2Q4Ctptx6PPRWYwQ7J16Fz7S47MXd1GUCj/T9XVQJAE9tdrwC3TqggLSh2WswsvHnDMtjAaFmUVOoLPt7cfJS8463t3biocfvEuZ50UbAAR7C5i5KOwANB4jmwogxCNpr6I94I9siqLuxjw609ohMdvX2c85dArNFkYFMpXrj/hyhJw0lYdLVmvI3r5ej3v3lqJD21TnlVioOxLTjSdbh/H+NrRFMbBRc+RB8hH
kZUflbpk8VxSFLrkEe/Tf5TFDXfjs/YjDfJYT1jKDAwCkDkAwDCq9f/SnW+E9odxURq/9GX2o8WP/6xWSvK1TAdwKWSAv8zU/246WK2k/3+99BeEJF20U6cBvxSYdClw5+vDLetmH1keveHFV0Q+2vYaFj/R1+ajL67M9uD536395rvi5MDrdfL/3SYdfn83A83/gd//45bf/5e3/8f/6I3RDUWZzbkrVdosv9HhqX/G5eCf7ZTl9XfIqHZeTpbPtkb50cBp7tp/Xct7yZD9yh21vKqR+rxd9aLxfB/zwy3RyFvs7g9jZ3o+trZ3Y3d9NZ8jB6T6AgY7heVku7s8FInhXSUsIngCq8AP8AT8JWs/P9YbCN3P7zfkunXCo31AVDU4994YF9fodigdB8Tjj3A1jkb8Xl+U941sBUluFA0QvvxeabUFAq6NFQONQunGWAt7Hjx9zzzyXOzdHtHGySIbY2wWMYiH1+0u8r5QpvRxREVwPqeM5Gv/k4jCOTqnzxQEiywV2Jug+j1hVNSiE/QJscFY+xrB0WB5jk+864jS6S3XDc7ishdyqy7PzlBOLGAtOApUtEPd5vUaiK9qpJy80ODBGDAMwP6xZKrRLHZW1zXksoK+ZuiUzJsCEne5CLCyuYxisRnm6hG5DzyEnowq2wkg0BVamwZLZwTdagIa6aSA4f8M+dEJY5mYHc2i4Ga5yyn7Ce48ow5DfRpMTgCS4x0lf6FAnk2cYCz3hM/wTd9XpYxdeaIOXmrzLWFl53uxJ8pq6N5El9XWRDnWuzgAnlx3P0IfIPB0gBR8g487M/jTlneAkPuvRdxKec4lyPhH9L4Yw04V9PBy6hPIecngfvewIrjSoA8HwLfWl5qf8UYp/83c/f5gB9xdC5mq/2j78fHmk+T68Ri/s1ZUGRTuEoNcGcJ3WfBlrQsFZc4ZdpUWFC8XiUot5Lej/9JhmPea3iz7t1o9GzWF1LUAAYLuMFUujNyE+CIdmiPOTWTSwkDrNEkSopSkxAQ5B9SqTQohfKU4Erl5kjiYcbkKgVcpxSkMND00KrPeiGYur11BiS5mguN42rs843xWsxNuxtHQTQuwnA+ZuPBKW0K9MftJyhpBRYH7X4jd/42Wj5Hm/u5yfQHIyse0c6gOc9R1qhQhgZONxFIypqMfDtGjTS2tgOZ2ql/DK8nd4xkB6O9scfg6dNyDmm9dWos27dNLZBs68vKhP4sne+6loBKCnKNTV5Q2AbRWgf0w5qC9EfbVZF99h/aZTrL3Dw3zHZFCK0SHAFYtxOhnG7OgwJrNdLOcthM6T+MpX3osvf/XteOv9N+PZwftxUjqIUoeyNwwrGcEADu8W3l3jpXNILD2vesAKWtCIMYewu8PB1s8wEZNOt7uLGYdZrR+zT6KF4GlDK7Wz9YjZDQyTlZgP2zHaV7gP0xo2B6IMOUNAyDzuRZsa1ySaRojRP9Ko5dJ6dp1pz1l/BedkNkW42fYFPcls3lv0ffH9f7wXfPP1u3RZKLZvbjNjgLuxQ9KNm8erz4mf1fCXm2/InVO+LvlA8XV5w4fOZe5xKDizUEAbdHsKpnO9rwgzcxCewVhzPXLQjM1lG7hlPYoXo/SKoSgngszhTb1J6EdAJIYdNFkATdvPiXwAV4S6owouQTzXWOGRZrsQTOkVWF1ez3CR8kWTMkFzQ+PTd2M2PaCfCsXT7pbixq2FWFqtxR3A4OIKyr46iKUNXlwfRn+N+tYOkBe78XTr3TTEcgEOFJuG09JyN1xc4Mtf+RyCHqUh+aOb223aGP51N37LfnQTT9nGgoKrdpQ23Z0/YP3VP5BwAq30Agrm+IOMoGd4eujSs0e0RTG85ytVBjajtDoEyLuMtqmXThGgiIdoNwHWCfrl54J+PCZY88hvGlVpXMD0Gg25Ops7fOMukC3ut9/cC+Oq2JEI0LpDmcWxhnzEsKcv3P1depaHcllc6GE6d9REhaSxidyFLqx3Alm3bCzfw39+5OC7c+Y1H2wrh6eL5ZEpY9bl8porz+2Hx+L+q62ge064/y9sH3bR1Zb3/Qov/k9t/3Pn/+e2OrRZQ+Ob/D5ZIY1C2h+NL+8pf6/aQNCsgyff4DnAgb+nN1pPbdMhZNsfUMs5vVR+FijaLt5bhPUBZmh0FxAYH03icDSO7Z292Hy6m7mJD5B740Nk3BjACMidjY9jPJrEweEwtgAMWxiDewCIw/k0XF9fz62OmFOH0kPHRTHRSUPM5d7VTWAd2kadX+gdackwHVVGs93IfjSjjKESGaJFOzrZyN/15GqkniI8XDlS4KoMcHcJVs/nZ+rj5vLO7tLvBJmgbMjVJgFykwwJg4dpE2VJrQUorY7h0wGAdhTlxiyavaNo9idR64+i1qOpzWdfmSEb5lFpnUXDePgO/WBMPO8uw2tVxRW7R1fRrFF+YHdMzyYAshnHWVASDEvuq5XSSAHu8lyfX8s99HAaO7q8AKB3gmvE6vpCdPoLqWcFfzp8duHz8dEx97RiZXkpJ3v20D1LHVfLXIbeu1zbAhS2YoxBMWE3b7fGhRmNjueCQWmqCmikjO7I5unc5x/TzxN0tSv7DeJAT+gUXXeCzIM+k0cFnwBKPaxmV/Co9aSGELzmyIXXIsQMm5A+JmNoZeJkMfpcmofB9djWmggoZIiTy6zX5t5ObO4+iye7T+Mx+97hAWUYJP3l4gvDSQwOR7G/u594pZgUDpA9MF2g+cnBQI5yw9sadlXK54SxdsdVRzW6nn+rvPG9937cmJhE03Sy7CPfWXlRtb47vSiCw4K4OU+lBSEZfwQRuFSoDiobP6IVvcWlWDAkoK+CqKcFfwrzCGqwadgbUb1YwCLA8ppdj2a9j8Xm8qh6EU6i3ppGoz2PBoR7Dmc5WahcmsVCqxSLHRgI5prSwGOo3uUr9QY4DKOHTYmSOeu0dFBMEqxL7SGu4xjlMhwIcKh4vQeQxVJqnKVQcZKQBFBrdmPVlCIra1g/1TgYbQHi5jk0qvKVsbQyiwwGglYAp0xGOZ0FqseWp/NOFBScvbW5m50qKHBIvNUGmFlMJ4zRbCY1X+gvA7ROM0jbCTIaCPRAmBaF5k6io4ZRLbVgGmOaRnFwQLmm57RHKe4AvDc3zeYwoC/W4oW1j0SjcxSfffi3YHyt/0acjCvxba99V9xcegECpZ4tjIlSoVQVviX7FKVrDt2c/QjQFvxVTzvUWzA/gnkG1GU3nm1txnvvv50TaL7whbdja3cTYbsf1R6W9+px1BYAsADP4/IU/SPdFEMKxsHqiVXRm7FAz77nDDWwbQ36l/Zc4q6NwDSWstPph+uIV6pHyg6syk5Ujpdjvt+L0dNGTHb6MTk4j8O9cTx58oT3HMf6xnrGXjagKYPyFZoChzFWoGDV4W9juRS0FA7h3USQOmFmTv1kMIc8oBnOi/a+Hqx6dHLa1ZYWrbLhV07l9qES/frP7K+88lJxwTfY/tCf+YOX9yhs/sfPl3ZUHAXYufzOf36WArOsUqj0k4XwpOyB4hXMIr0KZcx9/JZeHL5DfsX9nBeg+Rn2StCWw7
/wl38Jpmgz+3J56Vp026vwXxsltOgdKD/aDaFgvlezZNRBo+YDNkm8LzV+amGxnxMIdrd2AXA9lDDtf+iO4t3dou9mGDDncfvOWmzvPozHm+/Gex98NT7zuZ/HgPpMvPnW5+L+g7ficPAs9g/pe4y4GvLD5WL1DN24sR5Pnr6XXpdevxq7+0/iM7/8C9DamL5FCTssTwWbXQAcis5E5Ia2UPXcbRK3ou35j4uzPS5/6TgpA9rW82ouSZopFYK5jp0/oIGPDVyAWNpV/Ax2ieWFi9hYb8X11ZXo8tdHqZkVZHMP5bCPYQd4bDnEiWDPUBLb34dwFDzY9pYg+dYOz60w5C2dW7E0MWWyUJyCTHieQ/oAVr0xCKFfSSNnHb1XGpdP5cvCiLF/k4bcvYTdozL9w/PsRZPYPhaWf5eA1BEfjcWcIAkgMRYz43p5t6BOMJ5V8LYsi/XVkJPKeMfl+zJ2++qzr/NHN9/le7i/2Lnfc8VPHz7ziv+8L++9uv9ye17P7H/xX/5RilrUUYBNixVlzh1dSZ8VL+CSy3fk9WxeJzMlAHT3z3Ncl2FB7rRH3g5zphOIP+thLlMrZrYWdcV4eBRDAOxY3QRYHCK3jH8+Khc5ld00MNVXM+S42UWONMIFsejO9LijZ4C1HOXjHn3agycXMjyg31vIMtpvGiGWJ/Ue+k0cYFtbH1PH1RDMJ8hdJ0S1AGZ6dY2nFbjooBqNndsCXaEEDRPSwyjAVQZfu34t9e+V53YHkN6FH3rdxdjl87vvvs31yAHeMxofRg3eLqFTSvB0qTyHhtA38HylqdEKUK22qLejMcXEsQtjUGk357XIQ9WTFu0tnRe0IHTBlktg7mifqfRsN/s1u417HGFwdDfLyD3mdffGeq0FbmmECxS5d/h8b+NugrGTU8DcZICxb45tAAflqtU7sVBZLdoeXFRBv1SrDd8EgNdsob0tZ8n88fAn/V8Yn/BI8g1w289aFggUQzcNaRmjOwcZGggw5HcLLgkYHqr3HTbLkZk2/bQI/rCsXbNb8G5Xx5SxnNBulopjBL/ZEDxmdiGJk842HnZuGCQPPkZfuzuCqs429eKpc4aqF9FyUQfaQUeREx8Lr/wZ4NiQtEEYSrq7t4c830udbCyveJJagm9oUzCUOetdbc50b7/zn/5XJOXn2hLMOvHH/G4SXC6WIAvZ4VSgAfMYH6dV5vBqMQRFAWUIFSGNphetdEGnnNTpsH4srixGd6UFgWHZnAOkAC5aVaUyoA+irJs2K5Y5rkQ7bufQvB4TibJUG6JYhlFpTOjsWZxUReez6NVPY7VbT6+FjLl/fBbjE4d8z7GCLhsPhnemqkHrxmYomOvUo8w1zvSfjo5oVBiYsledWAWjnVwMaEasrqNZegVN/rt2fSOqNOhwOgbAISToRONICyBbeGWvFHIZMCvAMbbHfJmWR2ArITqr30Tobq4I0shgNLQb9+p9XlzsRKdnDNsK5a7EbArRIwFtaw0MlX/JCVowr5MsvGahj1VYNYXRfrQay+yzWGstZYzODoC7Vl6Me+svQ4S7cX/yBfpmMU5nAIfqenz6jV8TjUqPPjRJSFF+h8okelXEGYLOdtDClnDnEKwroE2nhznZxhjfh48fxFtvvxtf/drb8fbbH8TBcAxwxiK93YyVO/TtIgRP/5nBwqEejYRixq90g3JECRpWcsJnaPySIYowBvQ2tOBCDMUwjRO+oqFVjsAExLZK61E/W4/5QS+evj+JB+/sxPb9YRxgHR7sb8EkT6C/Sty+fTdWVm5E1XhAZ6zSLhnLRX+o0IrZtd2CZnhnpYLhQX+5dPIApSDDqUIq9KdDRZYtFZd0BJC9UopuOeTqd1swj8Xvfs6de/I+mMXdFZeeZ/uP/7RhBoXivtouX0lZU77knrme+UURlkAnS6A+pbx8UKHCxvkgnydYTTkND3iBwFb9W+XoRCvEZAp4uoebC/5OPSqYdfdF/mRnYWBpszdbC7GyfBOa4VutR59KRxPKeZ7pe3xqMWGQdkAAO/vVNd1dw32q5X4wjFsbtwB7GIaz89h59oy2Gsf3fv+vAgRfxC/8g5+Kv/13/lq8+c7n4unWe9DiU5TEUwDssxhNt6DBp5z/IB4/fSee7XwQj558kCm5TFnVaF1gALbiq1/9bHpkx8MDymQCeGQZdbRpWj3K3ahh4KGENLxoNM9bfzevy0blu+1p+7u5XK3D84ZIJfinfl6k0mmjzFOuwu62ZwIMxIbOhoVexMZaJ166cwflWgtXTDRU4unOYdx/dhQmcWm1AWHKl0svXTF0z3PyeAWiUBb5WVqzv6XaonAN+KAoJ2XiWChtFZlGhcOXemG9RnAifUu31oOdZwnKzY9qvaxWVp9nFKmt9MQV71N0ZLW9QCrMd/IlCc3yFvcJXgW16aXlaDYJwXXO9KcMPsuySr82MmyVj8yz/Oee5/zseX9w433/KD9cvtPzl5cUGyd9+uWN9uPV5ql/6znB7J//L/8YNxeeSEtrP/hc/wR/zqwvCs53dnuEA5dkzVJp57u5X9lo2/tbtpGd5FWWPXnNinCNfUJH6JzRY5bAACA7OjT3uiAQEHtcLAd9UdVZZBsX7eAw+DG8d0zjmUbSofQj9LHfT5zglKMnLd6xRl+uxcJSB15eTjCZ9NJsJ9jUEPF5xuFelccsPY5eqTOyv6HXi2PqQvkPDwungH1rvZST6jNHA+czk/IXE77Wr62n02Sf6wU3ZtRptQwvOouH9x+hP58iX4ydLfPMfdpFnVI4knSSnJ4ZizmjPE7YAmQ5kQ1dr9NCt2MaDdmY2UNRPnYxCA0PAay8KihFXzscYF8pLO0GfpfXYEB4rIy+hCbpGnVW8juXGGduX5bol26rEdc31mJj+QblmYMlJnQoFALAM761jJKsNdrgmEXaUsOuCNfTG60j7oJnIZIyDrV8gT4Cd9XRS+b9zclgYhwTWANI5VtxWsaK0xdz2ixHu5zPQNktvu0jY1bBGh1kwQKCZxlss9gxdVg7F97IhS7EcUdOzJI2LmJG/Z3QdkpbSjtZU+ov+M8liqFZMYGT2jKfL31g7H9rsRn9FWOqoSEMIvP3arjIb+p4Myoc7A+Q3fsJZg/pa9OPKU9onXQeGcPb6rWQq+3oIyQ1dn7Lj/5uavN8W04Au1K2fpb43K6Y3hZOhsvus32sBL+7J5dy5KCgMiVPz8kFWHUGvBvX4YSRUwCg98mrCn8VySnErJVorsRu17yMJ4CPcQLZqBh36QSyIzoZgNlAAfCf+dvUwSMae0IDCWorKI1CMLNTZNf9PePZOWRCB7R4Z4nOPsZqOR4BEo07h5mdbe+KObPREOFwGLP5ME4BmvVuORp9OqFGx50ewhwOh+s1FIwofNjpYhlJ76pckcmCMe8cFi6d60F1WP8sdra28/oaDOPKPLW6wLtQVA55rpm/tizgPwZYFcPebraVyk1LtVg0AKI5cUYiwuvUGF4Z+AKFACDvNLD8BrG8hDXbhFEApyfH2yj0r+XsST0wJ5OLuLP6UvQbS1mfWrvKc+fRxDpMR
WKfonVdlcP9GGFjmMH29nY83X4bAPtWvPve1+JLX/1KfO7zX8nQgsdP9gASZ7F0oxEb97CwX2zHwnUUVneGlTbE8kS4YD1rPdr/ghmHLDPWmmbLUBM+2La2kcNWDk8ZA5xAFsHXwHIslycwXSv6tZdjqfId0Tn91ojxWgx3JvH0yfvx8OEX2b8EkHkz9vYexXhSDF04VHJ6Yv7Fov8M3zDcQCFne16dt3xppHA+GQqhprEykj5Ge1zjZL8CiBaZHHguEu3qGVmPy91zCm2PtqlD0V5/tfue592Szi550MMlNyqTc7/aCiBy9fmSPqHDr7/3wy3ZmJtp/xT0MFNZCe05v3NfEYLAO9jlp6v3pXz3vew6JhTC6ogq/GfbWrfxbIxCOszZ8g6dC61FIaaoElEb3qJysd0r0J5xm4f70Ao08eTRZrz1tTdjfLgXKxh5v/W3/q9ibaMd777/xfiZn/kbCNh96KMc/cVqtLsuwNCIpVWE9HI1mp2j3C/KvPvIoa734+f//t+Mn/35vxl7+w9iC4D71jufp49HceOmeRw1Rmkn6kExMUjLhbyilTNnKD9fbR+2n0fbz/qz2xbKASf0GCavs945LdJ2pjPju9fZ1G6CQmUmeih5YHlpDTpfQh60aTtzZs8pEsqP+9opBjDCUwcDaPXO8tm2U97Is6mk83dewvYr9HIJDqlHKvz8rfjdexJQprwswEmxqwQL2f+P7vnoD9uAR/LZZ6PcoJ2c5Ard+Lu/5TulJbb0JvHR51y99+rd7kXIAYoafrMM4IrcinoUn4vNH9jtrNyK41XZ/qc277cd3a7q8o9vXnO1f1Ob9bMv2ZPHr46XO9odgEA/ir6/brNe2Rue5/er6z1v+b6+X4q4ckFOEYctr+kZdFncrYOn8WwP4233aewO9qC5aV7rpNgFdG+z1IxGGdkJf9UAQv4mzRxT4MnJURzMhrE3GcbOeBC7I8CFy5cKLk/px3KnkHPIKo9X/Waf+92jo4oC2ysZl3w/xvinHuovPbLmGheY2r+nyEiPV/3wDEPV79euXctQN2WH1/osvbidzgL6eB6bm89yRHShv5iOB8vifWcO11PW2kUrqmf9uJi3Yj6qxOTgIkYHgDonT03BLIha8FjOwfD7kTnhOZoWc3Y6heDOoopObnYBimYKAZCdIhfMhuRETz2RE/CJKxdOOZ7Am07qLFcaCWodSbTe6snRYJgGxPryUnqlBTrmsF1cW4z1m2vRWYTHKczBZJQZEyoY2P1+PfoY1X2A+ip1XF9ei9Xla2DnJWRHG1mF3j6qgFWcbAvwPavDW/SpoVjIzYszwLf8J4A1uwu/NUqUjbLqfKxQ3iZM1W3XY3mhDcjux/XVxVhcdP5JKw0Et1x6Gbqynpnei/Y9QngJ2sHoiWOM9S/oGtK136FjZZu7da2AzXrLC7F2nTqsrua+tLSU/aaDT7l3NDf0ZZahi3rs1cVXMsk9l7sFHBmCZOiNI9iGZH4zW4JZC1gQ6z/KgBKf3k1EV/EbpfLorlucMmalvdeJS51+IxYBae2FIqWDVo1gIZdWg1ivnjeZ7kMAT+Jw8BC+fooCeUxbbQJUtyEUY96mvEPVgsADFPUaKK1mi7dVY4CmOQCwHl2W1fyq6ZG9ZCx3wwxUDOYf7VS6UT9rZzqu2jnMHgtRPW3E0XAeh4C1ycFRDPbGWB0yXTOX0TM/3fxoCCAa5BC1hKsCNt+rddcqBZLRQFgWKAFTEKXQQOCamkhgI4NmEuq2ipK6V05zyHRxqRYb1xcTzNYd1gRUms7K8jcaWMipKDAAICCD4FOw8DZjqtILrCWtNAVJXJyz11sw3SzqzQZKsBsuRHG4+yCefPAuxXNRBQjmtB7XF+5ybCYjlmqFcjQVh6rBmBqF4mwyjgng/nCwn0D86ZMngMT34/6TB/HBw6fx6DEAd3eaQy39taW48/KtuPdGO5ZuU78252MYJyWACsJYZWL8ZXoOAJbTGW06M5cczEK9tP5cEEJlLZDt0u69fju9ss541eqtUL5GeR0gey9WGh+PxdrHo1t6mX5ci5o5ZesIUAP4zWiB4QRPZF2mCI3xEIZBaCtw7EP7T5ArE/n9ak+gO0LIuIA+isbZo1q30yME1bgAxvaB/aFgLtKSFSBVQyM9zfT31W4WhGJ3JKMAtl5/xWPPuxWeLwEDXwpSp60EHsUxDUMKehVm4Cb4zDXnL6+/2vJnnpODLpKOIJa6+t0sAwloVbAIMcE4LJubz+XK5C+HpjUYpW8NpFS2CCr72jCbCYaWXtfJ3CF8lWsxWcR2SnrgWapzY9/GtLXeE3muFMWM3SePH8PrF3H31lp89CN34id+4j+Lf+PHfk/85b/059Pzf/vuKjzgEOkIIIvSPZ/R71bmCL4yvv4iJ3FJD822w/TwwfBR/NTf++vxd//uX4vFhQbWfit2dvei5QxzZLl703g6hL6A1nzJyGyb4sPNtsz2/Mfa1H7IGD6AuhlXpBkBrDVy8sWU+tl/Nn7G0tLe9r5NPQX0GnA1RdGWUU4XgA9pyRX9HPLEVv1wS09L7leA04IUsvRqK2RSIZsyuwGbcrY4l1//kevdsm/zFPKc0nit9Onx6lnea5nFb34WqJ5d7mZqcT/L/ZJWlSbQozulhUb5n5ekwqLc0myCckH4JUH6WVryGnPb5juv2MR3ZjmK/WpzFKU4snM6m+QbbFf1/8fbgTOX+/Nt53qkkAXnaZgD/CCWIpWgAABJnSjAC3kme8aTUpHMnkC7+s23yatZQf9X11E/r5NXiv7mB9sGujTTgXHRPAHAdRRIUl4BkfAQfzOG1VzOzqfoN/vRqbUAB7UEs+pHw4HsG3U2nBcj9MWhxjrysZgNb+pKio2eUF4OBgDcUZFPXUPekUmBpfNGijyxllsm0fNXgEyBrLUTiOgQEYg7YU2vroDfthoN9SQfpcdtbQV+pt/TwXAGN1wCeedPOAo6HgnSi8w8jnqeCooxOCuAu/JpC53ej/rFSjQuNqJ8vBInk07MDmoZY3oyB8BOkEVDgNqBk7ePgiLRnP//9v4E3tasrO/En332PJx99hnvXLeqKIoZmRGjxgGVpNMOIDjQ+Zs2kU4nne5oJ5rOR5NyACQdI5K0YozRTjQGEo2RoCAgihpARUBQxqKmW3c898xnn/m8/f0+79m3LhCoU38IgXz2uvc977vfYY3P8HvWetZa5NElRQCTgYyodveiohHc4Wii35vUL3We9UQ7boI11sjfunpDmc9n2WElnVP8HSrNdVs1XFrUgas06Ce7jU63x1oDuQeQrFonxR5Alvzso5cOVtHBOwlmB7TdFHU035+JW06dB+TfTv2cgB5asT2soJfQoRvQGEr3cA+DMAC26v0UJuQDgXKI3skB3z1yRtsk7mk1YnYwGWfmZuPswmycEVhP9zCiBxg9ugI0kp50KVxeXYmltfVYpc3d8GaIVZ4+t9SBYDZ1hnpFbEK5lRO5shP14RyJdq+TuzraM9sfTAGYTWMKMGo6zkURo9lL61wC9T4fKhf0RVISIDgtjxOQlZLS
[... base64-encoded PNG image data omitted ...])", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn import neighbors, datasets\nfrom sklearn import preprocessing\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import StandardScaler\n\niris = pd.read_csv('iris.csv')\n\n# the first four columns are the features, the fifth is the target\nfeature_names = iris.columns[0:4]\ntarget_names = iris.columns[4]\n\nX = iris.iloc[:, :4]\ny = iris.iloc[:, -1:]\nprint(X)\nprint(y)", "     sepal_length  sepal_width  petal_length  petal_width\n0             5.1          3.5           1.4          0.2\n1             4.9          3.0           1.4          0.2\n2             4.7          3.2           1.3          0.2\n3             4.6          3.1           1.5          0.2\n4             5.0          3.6           1.4          0.2\n..            ...          ...           ...          ...\n145           6.7          3.0           5.2          2.3\n146           6.3          2.5           5.0          1.9\n147           6.5          3.0           5.2          2.0\n148           6.2          3.4           5.4          2.3\n149           5.9          3.0           5.1          1.8\n\n[150 rows x 4 columns]\n       species\n0       setosa\n1       setosa\n2       setosa\n3       setosa\n4       setosa\n..         ...\n145  virginica\n146  virginica\n147  virginica\n148  virginica\n149  virginica\n\n[150 rows x 1 columns]\n" ], [ "import seaborn as sns\nplt.style.use('ggplot')\nsc = StandardScaler()\n#X_scaled = sc.fit_transform(X[['sepal_length','sepal_width','petal_length','petal_width']])\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=2)", "_____no_output_____" ], [ "sns.pairplot(iris, hue='species')", "_____no_output_____" ] ], [ [ "#### The **12th plot** in the pairplot shows clearly separated groups forming when the parameters chosen are **petal_width and petal_length**", "_____no_output_____" ] ], [ [ "sns.scatterplot(x=iris['petal_width'], y=iris['petal_length'], hue=iris.species)", "_____no_output_____" ], [ "from sklearn.linear_model import LogisticRegression\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.svm import SVC\nfrom sklearn import datasets\nimport numpy as np\n\n# Initializing Classifiers\nclf1 = LogisticRegression(random_state=1,\n                          solver='newton-cg',\n                          multi_class='multinomial')\nclf2 = RandomForestClassifier(random_state=1, n_estimators=100)\nclf3 = GaussianNB()\nclf4 = SVC(gamma='auto')\nclf5 = neighbors.KNeighborsClassifier(n_neighbors=6)\n#clf.fit(X[['sepal_length','sepal_width','petal_length','petal_width']],\n#        y.species)\n\n# Loading some example data (two features: petal length and petal width)\niris = datasets.load_iris()\nX = iris.data[:, [2, 3]]\ny = iris.target", "_____no_output_____" ] ], [ [ "### Having a look at the decision boundaries made by each classifier:\n#### **0. Setosa**\n#### **1. Versicolor**\n#### **2. Virginica**", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nfrom mlxtend.plotting import plot_decision_regions\nimport matplotlib.gridspec as gridspec\nimport itertools\n\ngs = gridspec.GridSpec(2, 3)\n\nfig = plt.figure(figsize=(10, 8))\n\nlabels = ['Logistic Regression', 'Random Forest', 'Naive Bayes', 'SVM', 'KNN']\nfor clf, lab, grd in zip([clf1, clf2, clf3,
 clf4, clf5],\n                         labels,\n                         itertools.product([0, 1, 2], repeat=2)):\n\n    clf.fit(X, y)\n    ax = plt.subplot(gs[grd[0], grd[1]])\n    fig = plot_decision_regions(X=X, y=y, clf=clf, legend=2)\n    plt.title(lab)\n\nplt.show()", "/usr/local/lib/python3.7/dist-packages/mlxtend/plotting/decision_regions.py:244: MatplotlibDeprecationWarning: Passing unsupported keyword arguments to axis() will raise a TypeError in 3.3.\n  ax.axis(xmin=xx.min(), xmax=xx.max(), y_min=yy.min(), y_max=yy.max())\n/usr/local/lib/python3.7/dist-packages/mlxtend/plotting/decision_regions.py:244: MatplotlibDeprecationWarning: Passing unsupported keyword arguments to axis() will raise a TypeError in 3.3.\n  ax.axis(xmin=xx.min(), xmax=xx.max(), y_min=yy.min(), y_max=yy.max())\n/usr/local/lib/python3.7/dist-packages/mlxtend/plotting/decision_regions.py:244: MatplotlibDeprecationWarning: Passing unsupported keyword arguments to axis() will raise a TypeError in 3.3.\n  ax.axis(xmin=xx.min(), xmax=xx.max(), y_min=yy.min(), y_max=yy.max())\n/usr/local/lib/python3.7/dist-packages/mlxtend/plotting/decision_regions.py:244: MatplotlibDeprecationWarning: Passing unsupported keyword arguments to axis() will raise a TypeError in 3.3.\n  ax.axis(xmin=xx.min(), xmax=xx.max(), y_min=yy.min(), y_max=yy.max())\n/usr/local/lib/python3.7/dist-packages/mlxtend/plotting/decision_regions.py:244: MatplotlibDeprecationWarning: Passing unsupported keyword arguments to axis() will raise a TypeError in 3.3.\n  ax.axis(xmin=xx.min(), xmax=xx.max(), y_min=yy.min(), y_max=yy.max())\n" ] ], [ [ "## Selecting **KNeighborsClassifier** as the model", "_____no_output_____" ] ], [ [ "
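# (added sketch, not part of the original run) back the model choice above\n# with a quick 5-fold cross-validated comparison of the five classifiers;\n# results are stored rather than printed so the cell stays output-free\nfrom sklearn.model_selection import cross_val_score\ncv_means = {name: cross_val_score(c, X, y, cv=5).mean()\n            for c, name in zip([clf1, clf2, clf3, clf4, clf5],\n                               ['LogReg', 'RF', 'NB', 'SVM', 'KNN'])}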
", "_____no_output_____" ], [ "fig = plt.figure(figsize=(10, 8))\nfig = plot_decision_regions(X=X, y=y, clf=clf5, legend=2)\nplt.title('KNN')\nplt.show()", "/usr/local/lib/python3.7/dist-packages/mlxtend/plotting/decision_regions.py:244: MatplotlibDeprecationWarning: Passing unsupported keyword arguments to axis() will raise a TypeError in 3.3.\n  ax.axis(xmin=xx.min(), xmax=xx.max(), y_min=yy.min(), y_max=yy.max())\n" ] ], [ [ "### **Testing model prediction on sample data**", "_____no_output_____" ] ], [ [ "final = np.array([[3, 5], [3, 1], [2, 1], [3.3, 2.2], [0.1, 0.1], [5, 1.69]])\nclf5.predict(final)", "_____no_output_____" ] ], [ [ "### **Saving our model with pickle**", "_____no_output_____" ] ], [ [ "import pickle\n\nfilename = 'iris_model.sav'\nwith open(filename, 'wb') as f:\n    pickle.dump(clf5, f)\n\n# some time later...\n\n# load the model from disk and score it\nwith open(filename, 'rb') as f:\n    loaded_model = pickle.load(f)\nresult = loaded_model.score(X_test, y_test)\nprint(result)", "0.9666666666666667\n" ] ], [ [ "#### *Great! This model did* **nice classification work**, *predicting with a* **score of about 96.67%**", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
ec98e5c6472a90745d5d8cd3792b441e9dd6cdeb
181,353
ipynb
Jupyter Notebook
notebooks/feature-engineering/Section-06-Categorical-Encoding/06.05-Ordered-Integer-Encoding.ipynb
sophiabrandt/udemy-feature-engineering
739a8a58ecdb86b3008a374bf7645d7dbfbfcb46
[ "BSD-3-Clause" ]
null
null
null
notebooks/feature-engineering/Section-06-Categorical-Encoding/06.05-Ordered-Integer-Encoding.ipynb
sophiabrandt/udemy-feature-engineering
739a8a58ecdb86b3008a374bf7645d7dbfbfcb46
[ "BSD-3-Clause" ]
null
null
null
notebooks/feature-engineering/Section-06-Categorical-Encoding/06.05-Ordered-Integer-Encoding.ipynb
sophiabrandt/udemy-feature-engineering
739a8a58ecdb86b3008a374bf7645d7dbfbfcb46
[ "BSD-3-Clause" ]
null
null
null
166.074176
32,312
0.884805
[ [ [ "## Target guided encodings\n\nIn the previous lectures in this section, we learned how to convert a label into a number, by using one hot encoding, replacing by a digit or replacing by frequency or counts of observations. These methods are simple, make (almost) no assumptions and work generally well in different scenarios.\n\nThere are however methods that allow us to capture information while pre-processing the labels of categorical variables. These methods include:\n\n- Ordering the labels according to the target\n- Replacing labels by the target mean (mean encoding / target encoding)\n- Replacing the labels by the probability ratio of the target being 1 or 0\n- Weight of evidence.\n\nAll of the above methods have something in common:\n\n- the encoding is **guided by the target**, and\n- they create a **monotonic relationship** between the variable and the target.\n\n\n### Monotonicity\n\nA monotonic relationship is a relationship that does one of the following:\n\n- (1) as the value of one variable increases, so does the value of the other variable; or\n- (2) as the value of one variable increases, the value of the other variable decreases.\n\nIn this case, as the value of the independent variable (predictor) increases, so does the target, or conversely, as the value of the variable increases, the target value decreases.\n\n\n\n### Advantages of target guided encodings\n\n- Capture information within the category, therefore creating more predictive features\n- Create a monotonic relationship between the variable and the target, therefore suitable for linear models\n- Do not expand the feature space\n\n\n### Limitations\n\n- Prone to cause over-fitting\n- Difficult to cross-validate with current libraries\n\n\n### Note\n\nThe methods discussed in this and the coming 3 lectures can also be used on numerical variables, after discretisation. This creates a monotonic relationship between the numerical variable and the target, and therefore improves the performance of linear models. I will discuss this in more detail in the section \"Discretisation\".\n\n===============================================================================\n\n## Ordered Integer Encoding\n\nOrdering the categories according to the target means assigning a number to the category from 1 to k, where k is the number of distinct categories in the variable, but this numbering is informed by the mean of the target for each category.\n\nFor example, we have the variable city with values London, Manchester and Bristol; if the default rate is 30% in London, 20% in Bristol and 10% in Manchester, then we replace London by 1, Bristol by 2 and Manchester by 3.\n\n## In this demo:\n\nWe will see how to perform ordered integer encoding with:\n- pandas\n- Feature-Engine\n\nAnd the advantages and limitations of these implementations using the House Prices dataset.
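\n\nAs a quick, self-contained illustration of the ordering idea (an added example; the city data below is made up to match the rates above):\n\n```python\nimport pandas as pd\n\n# made-up data reproducing the 30 / 20 / 10 % default rates from the text\ntoy = pd.DataFrame({\n    'city': ['London'] * 10 + ['Bristol'] * 10 + ['Manchester'] * 10,\n    'default': [1] * 3 + [0] * 7 + [1] * 2 + [0] * 8 + [1] + [0] * 9})\n\n# order the categories by mean target (highest first) and number them 1 to k\norder = toy.groupby('city')['default'].mean().sort_values(ascending=False).index\nmapping = {city: i for i, city in enumerate(order, 1)}\n# mapping == {'London': 1, 'Bristol': 2, 'Manchester': 3}\n```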
", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\n\n# to split the datasets\nfrom sklearn.model_selection import train_test_split\n\n# for encoding with feature-engine\nfrom feature_engine.categorical_encoders import OrdinalCategoricalEncoder", "_____no_output_____" ], [ "# load dataset\n\ndata = pd.read_csv(\n    '../houseprice.csv',\n    usecols=['Neighborhood', 'Exterior1st', 'Exterior2nd', 'SalePrice'])\n\ndata.head()", "_____no_output_____" ], [ "# let's have a look at how many labels each variable has\n\nfor col in data.columns:\n    print(col, ': ', len(data[col].unique()), ' labels')", "Neighborhood :  25  labels\nExterior1st :  15  labels\nExterior2nd :  16  labels\nSalePrice :  663  labels\n" ], [ "# let's explore the unique categories\ndata['Neighborhood'].unique()", "_____no_output_____" ], [ "data['Exterior1st'].unique()", "_____no_output_____" ], [ "data['Exterior2nd'].unique()", "_____no_output_____" ] ], [ [ "### Important\n\nWe select which digit to assign each category using the train set, and then use those mappings in the test set.\n\n**Note that to do this technique with pandas, we need to keep the target within the training set**", "_____no_output_____" ] ], [ [ "# let's separate into training and testing set\n\nX_train, X_test, y_train, y_test = train_test_split(\n    data[['Neighborhood', 'Exterior1st', 'Exterior2nd', 'SalePrice']],  # this time we keep the target!!\n    data['SalePrice'],  # target\n    test_size=0.3,  # percentage of obs in test set\n    random_state=0)  # seed to ensure reproducibility\n\nX_train.shape, X_test.shape", "_____no_output_____" ] ], [ [ "### Explore original relationship between categorical variables and target", "_____no_output_____" ] ], [ [ "# let's explore the relationship of the categories with the target\n\nfor var in ['Neighborhood', 'Exterior1st', 'Exterior2nd']:\n\n    fig = plt.figure()\n    fig = X_train.groupby([var])['SalePrice'].mean().plot()\n    fig.set_title('Relationship between {} and SalePrice'.format(var))\n    fig.set_ylabel('Mean SalePrice')\n    plt.show()", "_____no_output_____" ] ], [ [ "You can see that the relationship between the target and the categories of the categorical variables goes up and down, depending on the category.\n\n\n## Ordered Integer encoding with pandas\n\n\n### Advantages\n\n- quick\n- returns pandas dataframe\n\n### Limitations of pandas:\n\n- it does not preserve information from train data to propagate to test data\n\nWe need to store the encoding maps separately if planning to use them in production.", "_____no_output_____" ] ], [ [ "# let's order the labels according to the mean target value\n\nX_train.groupby(['Neighborhood'])['SalePrice'].mean().sort_values()", "_____no_output_____" ] ], [ [ "In the above cell, we ordered the categories from the neighbourhood where the house sale prices are cheaper (IDOTRR), to the neighbourhood where the house prices are, on average, more expensive (NoRidge).\n\nIn the next cells, we will replace those categories, ordered as they are, by the numbers 0 to k, where k is the number of different categories minus 1, in this case 25 - 1 = 24.\n\nSo IDOTRR will be replaced by 0 and NoRidge by 24, just to be clear.", "_____no_output_____" ] ], [ [ "# first we generate an ordered list with the labels\n\nordered_labels = X_train.groupby(['Neighborhood'])['SalePrice'].mean().sort_values().index\n\nordered_labels", "_____no_output_____" ], [ "# next let's create a dictionary with the mappings of categories to numbers\n\nordinal_mapping = {k: i for i, k in enumerate(ordered_labels, 0)}\n\nordinal_mapping
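\n\n# (added note) pandas keeps no record of this mapping once the session ends,\n# so if the encoding is meant for production, persist the map explicitly,\n# for example (illustrative file name):\n#\n# import json\n# with open('neighbourhood_mapping.json', 'w') as f:\n#     json.dump(ordinal_mapping, f)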
", "_____no_output_____" ], [ "# now, we replace the labels with the integers\n\nX_train['Neighborhood'] = X_train['Neighborhood'].map(ordinal_mapping)\nX_test['Neighborhood'] = X_test['Neighborhood'].map(ordinal_mapping)", "_____no_output_____" ], [ "# let's explore the result\n\nX_train['Neighborhood'].head(10)", "_____no_output_____" ], [ "# we can turn the previous commands into 2 functions\n\n\ndef find_category_mappings(df, variable, target):\n\n    # first we generate an ordered list with the labels\n    # (note that we use the df parameter here, not the global X_train)\n    ordered_labels = df.groupby([variable])[target].mean().sort_values().index\n\n    # return the dictionary with mappings\n    return {k: i for i, k in enumerate(ordered_labels, 0)}\n\n\ndef integer_encode(train, test, variable, ordinal_mapping):\n\n    train[variable] = train[variable].map(ordinal_mapping)\n    test[variable] = test[variable].map(ordinal_mapping)", "_____no_output_____" ], [ "# and now we run a loop over the remaining categorical variables\n\nfor variable in ['Exterior1st', 'Exterior2nd']:\n\n    mappings = find_category_mappings(X_train, variable, 'SalePrice')\n\n    integer_encode(X_train, X_test, variable, mappings)", "_____no_output_____" ], [ "# let's see the result\n\nX_train.head()", "_____no_output_____" ], [ "# let's inspect the newly created monotonic relationship\n# between the variables and the target\n\nfor var in ['Neighborhood', 'Exterior1st', 'Exterior2nd']:\n\n    fig = plt.figure()\n    fig = X_train.groupby([var])['SalePrice'].mean().plot()\n    fig.set_title('Monotonic relationship between {} and SalePrice'.format(var))\n    fig.set_ylabel('Mean SalePrice')\n    plt.show()", "_____no_output_____" ] ], [ [ "We see from the plots above that the relationship between the categories and the target is now monotonic, and for the first 2 variables, almost linear, which helps improve linear models' performance.\n\n### Note\n\nMonotonic does not mean strictly linear. Monotonic means that it increases constantly, or it decreases constantly.\n\nReplacing categorical labels with this code and method will generate missing values for categories present in the test set that were not seen in the training set. Therefore it is extremely important to handle rare labels beforehand. I will explain how to do this in a later notebook.", "_____no_output_____" ], [ "## Integer Encoding with Feature-Engine\n\nIf using Feature-Engine instead of pandas, we do not need to keep the target variable in the training dataset.", "_____no_output_____" ] ], [ [ "# let's separate into training and testing set\n\nX_train, X_test, y_train, y_test = train_test_split(\n    data[['Neighborhood', 'Exterior1st', 'Exterior2nd']],  # predictors\n    data['SalePrice'],  # target\n    test_size=0.3,  # percentage of obs in test set\n    random_state=0)  # seed to ensure reproducibility\n\nX_train.shape, X_test.shape", "_____no_output_____" ], [ "ordinal_enc = OrdinalCategoricalEncoder(\n    # NOTE that we indicate ordered in the encoding_method, otherwise it assigns numbers arbitrarily\n    encoding_method='ordered',\n    variables=['Neighborhood', 'Exterior1st', 'Exterior2nd'])", "_____no_output_____" ], [ "# when fitting the transformer, we need to pass the target as well\n# just like with any Scikit-learn predictor class\n\nordinal_enc.fit(X_train, y_train)", "_____no_output_____" ], [ "# in the encoder dict we can observe the ordered integer mapping\n# assigned to each of the variables\n\nordinal_enc.encoder_dict_", "_____no_output_____" ], [ "# this is the list of variables that the encoder will transform\n\nordinal_enc.variables", "_____no_output_____" ], [ "X_train = ordinal_enc.transform(X_train)\nX_test = ordinal_enc.transform(X_test)\n\n# let's explore the result\nX_train.head()
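\n\n# (added note) as the final note below explains, the encoder errors if the\n# test set contains a label it never saw during fit; a production sketch\n# (the exact exception type may vary with the library version):\n#\n# try:\n#     X_test = ordinal_enc.transform(X_test)\n# except ValueError as err:\n#     print('unseen category in test set:', err)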
", "_____no_output_____" ] ], [ [ "**Note**\n\nIf the argument variables is left as None, then the encoder will automatically identify all categorical variables. Is that not sweet?\n\nThe encoder will not encode numerical variables. So if some of your numerical variables are in fact categories, you will need to re-cast them as object before using the encoder.\n\nFinally, if there is a label in the test set that was not present in the train set, the encoder will throw an error, to alert you of this behaviour.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
ec98eb7e906abf050131a4c008222be385003a95
4,669
ipynb
Jupyter Notebook
notebooks/House_D.ipynb
brauliobarahona/RAPT-dataset
ec842544fe8af39d2f44604c06784b4dd6e24108
[ "MIT" ]
2
2020-06-15T09:26:46.000Z
2020-06-15T14:39:48.000Z
notebooks/House_D.ipynb
brauliobarahona/RAPT-dataset
ec842544fe8af39d2f44604c06784b4dd6e24108
[ "MIT" ]
null
null
null
notebooks/House_D.ipynb
brauliobarahona/RAPT-dataset
ec842544fe8af39d2f44604c06784b4dd6e24108
[ "MIT" ]
1
2021-01-23T15:22:29.000Z
2021-01-23T15:22:29.000Z
29.18125
151
0.559863
[ [ [ "import sys, os\nsys.path.insert(0,\"./../\") #so we can import our modules properly", "_____no_output_____" ], [ "# iPhython\nfrom IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:90% !important; }</style>\"))\n\nfrom matplotlib import rcParams\nimport matplotlib.pyplot as plt\n%matplotlib notebook\n%matplotlib notebook\n\nimport numpy as np\nimport pandas as pd", "_____no_output_____" ] ], [ [ "# House D", "_____no_output_____" ] ], [ [ "path_to_house = \"./datasets/dfD_300s.hdf\"\ndf_house = pd.read_hdf(path_to_house)\nprint('\\n\\nStart time: {}'.format(df_house.index[1]))\nprint('End time: {}'.format(df_house.index[-1]))\nprint(df_house.columns)", "_____no_output_____" ] ], [ [ "## Total Imported Power and Total Submetered Power", "_____no_output_____" ] ], [ [ "meters = ['D_audio_wlan_og_power', 'D_dishwasher_power', 'D_hp_power', 'D_rainwater_power', 'D_tumble_dryer_power', 'D_washing_machine_power']\n\n# Investigate only points in time where all values are available\ncols = ['D_imp_power']\ncols.extend(meters)\ndf_house_noNAN = df_house.loc[:,cols].dropna(axis=0, how='any')\n\n\ntotal_consumers_house = df_house_noNAN.loc[:,meters].sum(axis=1)\n# Check if total consumed power is larger than total of appliances\ndelta = total_consumers_house - df_house_noNAN.loc[:,'D_imp_power']\ntmt = delta > 0\n\nif np.any(tmt):\n # investigate problematic cases\n print(\"Found {} out of {} ({:.2}%) values to be problematic.\".format(tmt.sum(), len(tmt), tmt.sum()/len(tmt)*100))\n print(\"\")\n print(\"Statistics of problematic values:\")\n print(delta[tmt].describe())\n print()\n print((delta[tmt]/df_house_noNAN.loc[tmt, 'D_imp_power']).describe())\nelse: \n print(\"Total of all appliances is smaller than total consumed energy for all measurement points.\")", "_____no_output_____" ], [ "l = ['Total Appliances', 'Total Consumed']\nl.extend(meters)\nif np.any(tmt):\n ## plot params ##\n ha = 6\n ncols = 2\n nrows = np.sum(tmt)//ncols+1\n nrows = 2\n fig, ax = plt.subplots(figsize=(24,3*nrows), ncols=ncols, nrows=nrows)\n idxs = np.nonzero(tmt)[0]\n for i, idx in enumerate(idxs[0:2]):\n minidx = idx-ha\n maxidx = idx+ha\n total_consumers_house.iloc[minidx:maxidx].plot(ax=ax[i//ncols, i%ncols], label='Total Appliances')\n df_house_noNAN.iloc[minidx:maxidx].loc[:,'D_imp_power'].plot(ax=ax[i//ncols, i%ncols], label='Total Consumed')\n for meter in meters:\n df_house_noNAN.iloc[minidx:maxidx].loc[:,meter].plot(ax=ax[i//ncols, i%ncols], label=meter)\n ax[i//ncols, i%ncols]\n fig.legend(l, loc='upper center')\nelse: \n print(\"Total of all appliances is smaller than total consumed energy for all measurement points.\")", "_____no_output_____" ] ], [ [ "# House D - Raw Data", "_____no_output_____" ] ], [ [ "# import necessary constants and functions\nfrom src.const import cipD, startDateD, endDateD, rawDataBaseDir\nfrom src.preprocessing import getRawData", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
ec98f30b528ebad2b121c7fe01973c3bd8e080d1
7,673
ipynb
Jupyter Notebook
test/demo_L1.ipynb
gowerrobert/MultipleScatteringLearnMoments
157cdbf984de9bdb3b4fb035f60a37f10d2a17f9
[ "MIT" ]
1
2017-12-27T19:19:14.000Z
2017-12-27T19:19:14.000Z
test/demo_L1.ipynb
gowerrobert/MultipleScatteringLearnMoments
157cdbf984de9bdb3b4fb035f60a37f10d2a17f9
[ "MIT" ]
2
2018-01-09T16:11:22.000Z
2018-01-09T16:31:20.000Z
test/demo_L1.ipynb
gowerrobert/MultipleScatteringLearnMoments
157cdbf984de9bdb3b4fb035f60a37f10d2a17f9
[ "MIT" ]
null
null
null
31.706612
193
0.58569
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
ec98f507307c57bc009e88ca267a45170fc0c434
122,054
ipynb
Jupyter Notebook
notebooks/ecommerce-example.ipynb
Chunshuizhao/HugeCTR
085b2e8ad2abaee5578e7bf43b8394d0b8473b58
[ "Apache-2.0" ]
null
null
null
notebooks/ecommerce-example.ipynb
Chunshuizhao/HugeCTR
085b2e8ad2abaee5578e7bf43b8394d0b8473b58
[ "Apache-2.0" ]
null
null
null
notebooks/ecommerce-example.ipynb
Chunshuizhao/HugeCTR
085b2e8ad2abaee5578e7bf43b8394d0b8473b58
[ "Apache-2.0" ]
null
null
null
37.999377
371
0.433538
[ [ [ "<img src=\"http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png\" style=\"width: 90px; float: right;\">\n\n# Merlin ETL, training and inference demo on the e-Commerce behavior data", "_____no_output_____" ], [ "## Overview\n\nIn this tutorial, we will be using the [eCommerce behavior data from multi category store](https://www.kaggle.com/mkechinov/ecommerce-behavior-data-from-multi-category-store) from [REES46 Marketing Platform](https://rees46.com/) as our dataset. This tutorial is built upon the NVIDIA RecSys 2020 [tutorial](https://recsys.acm.org/recsys20/tutorials/). \n\nThis jupyter notebook provides the code to preprocess the dataset and generate the train, validation and test sets for the remainder of the tutorial. We define our own goal and filter the dataset accordingly.\n\nFor our tutorial, we decided that our goal is to predict if a user purchased an item:\n\n- Positive: User purchased an item\n- Negative: User added an item to the cart, but did not purchase it (in the same session) \n\n\nWe split the dataset into train, validation and test set by the timestamp:\n- Training: October 2019 - February 2020\n- Validation: March 2020\n- Test: April 2020\n\nWe remove AddToCart Events from a session, if in the same session the same item was purchased.\n\n## Table of Contents\n1. [Data](#1)\n1. [ETL with NVTabular](#2)\n1. [Training with HugeCTR](#3)\n1. [HugeCTR inference](#4)\n", "_____no_output_____" ], [ "<a id=\"1\"></a>\n## 1. Data\nFirst, we download and unzip the raw data.\n\nNote: the dataset is ~11GB and will take a while to download.", "_____no_output_____" ] ], [ [ "%%bash\nexport HOME=$PWD\npip install gdown --user\n~/.local/bin/gdown https://drive.google.com/uc?id=1-Rov9fFtGJqb7_ePc6qH-Rhzxn0cIcKB\n~/.local/bin/gdown https://drive.google.com/uc?id=1-Rov9fFtGJqb7_ePc6qH-Rhzxn0cIcKB\n~/.local/bin/gdown https://drive.google.com/uc?id=1zr_RXpGvOWN2PrWI6itWL8HnRsCpyqz8\n~/.local/bin/gdown https://drive.google.com/uc?id=1g5WoIgLe05UMdREbxAjh0bEFgVCjA1UL\n~/.local/bin/gdown https://drive.google.com/uc?id=1qZIwMbMgMmgDC5EoMdJ8aI9lQPsWA3-P\n~/.local/bin/gdown https://drive.google.com/uc?id=1x5ohrrZNhWQN4Q-zww0RmXOwctKHH9PT", "Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com\nCollecting gdown\n Downloading gdown-3.13.0.tar.gz (9.3 kB)\n Installing build dependencies: started\n Installing build dependencies: finished with status 'done'\n Getting requirements to build wheel: started\n Getting requirements to build wheel: finished with status 'done'\n Preparing wheel metadata: started\n Preparing wheel metadata: finished with status 'done'\nCollecting filelock\n Downloading filelock-3.0.12-py3-none-any.whl (7.6 kB)\nRequirement already satisfied: tqdm in /usr/local/lib/python3.8/dist-packages (from gdown) (4.61.0)\nRequirement already satisfied: requests[socks]>=2.12.0 in /usr/lib/python3/dist-packages (from gdown) (2.22.0)\nRequirement already satisfied: six in /usr/lib/python3/dist-packages (from gdown) (1.14.0)\nCollecting PySocks!=1.5.7,>=1.5.6\n Downloading PySocks-1.7.1-py3-none-any.whl (16 kB)\nBuilding wheels for collected packages: gdown\n Building wheel for gdown (PEP 517): started\n Building wheel for gdown (PEP 517): finished with status 'done'\n Created wheel for gdown: filename=gdown-3.13.0-py3-none-any.whl size=9034 sha256=a42e1a003f31f07a2dab41ee0adc192159f56e059bcaafe64672c529a4e2ce5b\n Stored in directory: 
/tmp/pip-ephem-wheel-cache-vrf8xmza/wheels/04/51/53/ed3e97af28b242e9eb81afb4836273fbe233a14228aa82fea3\nSuccessfully built gdown\nInstalling collected packages: PySocks, filelock, gdown\nSuccessfully installed PySocks-1.7.1 filelock-3.0.12 gdown-3.13.0\n" ], [ "import glob \n\nlist_files = glob.glob('*.csv.gz')\nlist_files", "_____no_output_____" ] ], [ [ "### Data extraction and initial preprocessing\n\nWe extract a few relevant columns from the raw datasets and parse date columns into several atomic colunns (day, month...).", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nfrom tqdm import tqdm\n\ndef process_files(file):\n df_tmp = pd.read_csv(file, compression='gzip')\n df_tmp['session_purchase'] = df_tmp['user_session'] + '_' + df_tmp['product_id'].astype(str)\n df_purchase = df_tmp[df_tmp['event_type']=='purchase']\n df_cart = df_tmp[df_tmp['event_type']=='cart']\n df_purchase = df_purchase[df_purchase['session_purchase'].isin(df_cart['session_purchase'])]\n df_cart = df_cart[~(df_cart['session_purchase'].isin(df_purchase['session_purchase']))]\n df_cart['target'] = 0\n df_purchase['target'] = 1\n df = pd.concat([df_cart, df_purchase])\n df = df.drop('category_id', axis=1)\n df = df.drop('session_purchase', axis=1)\n df[['cat_0', 'cat_1', 'cat_2', 'cat_3']] = df['category_code'].str.split(\"\\.\", n = 3, expand = True).fillna('NA')\n df['brand'] = df['brand'].fillna('NA')\n df = df.drop('category_code', axis=1)\n df['timestamp'] = pd.to_datetime(df['event_time'].str.replace(' UTC', ''))\n df['ts_hour'] = df['timestamp'].dt.hour\n df['ts_minute'] = df['timestamp'].dt.minute\n df['ts_weekday'] = df['timestamp'].dt.weekday\n df['ts_day'] = df['timestamp'].dt.day\n df['ts_month'] = df['timestamp'].dt.month\n df['ts_year'] = df['timestamp'].dt.year\n df.to_csv('./dataset/' + file.replace('.gz', ''), index=False)\n \n!mkdir ./dataset\nfor file in tqdm(list_files):\n print(file)\n process_files(file)", "mkdir: cannot create directory ‘./dataset’: File exists\n" ] ], [ [ "### Prepare train/validation/test data\n\nNext, we split the data into train, validation and test sets. We will be using 3 months for training, 1 month for validation and 1 month for testing.", "_____no_output_____" ] ], [ [ "lp = []\nlist_files = glob.glob('./dataset/*.csv')", "_____no_output_____" ], [ "!ls -l ./dataset/*.csv", "-rw-r--r-- 1 root dip 479323170 Jul 12 05:57 ./dataset/2019-Dec.csv\n-rw-r--r-- 1 root dip 455992639 Jul 12 05:52 ./dataset/2020-Apr.csv\n-rw-r--r-- 1 root dip 453967664 Jul 12 05:48 ./dataset/2020-Feb.csv\n-rw-r--r-- 1 root dip 375205173 Jul 12 05:45 ./dataset/2020-Jan.csv\n-rw-r--r-- 1 root dip 403896607 Jul 12 05:42 ./dataset/2020-Mar.csv\n" ], [ "for file in list_files:\n lp.append(pd.read_csv(file))", "_____no_output_____" ], [ "df = pd.concat(lp)\ndf.shape", "_____no_output_____" ], [ "df_test = df[df['ts_month']==4]\ndf_valid = df[df['ts_month']==3]\ndf_train = df[(df['ts_month']!=3)&(df['ts_month']!=4)]", "_____no_output_____" ], [ "df_train.shape, df_valid.shape, df_test.shape", "_____no_output_____" ], [ "!mkdir -p ./data\ndf_train.to_parquet('./data/train.parquet', index=False)\ndf_valid.to_parquet('./data/valid.parquet', index=False)\ndf_test.to_parquet('./data/test.parquet', index=False)", "_____no_output_____" ], [ "df_train.head()", "_____no_output_____" ] ], [ [ "<a id=\"2\"></a>\n## 2. Preprocessing with NVTabular\n\nNext, we will use NVTabular for preprocessing and engineering more features. 
\n\nBut first, we need to import the necessary libraries and initialize a Dask GPU cluster for computation.\n\n### Initialize Dask GPU cluster\n", "_____no_output_____" ] ], [ [ "# Standard Libraries\nimport os\nfrom time import time\nimport re\nimport shutil\nimport glob\nimport warnings\n\n# External Dependencies\nimport numpy as np\nimport pandas as pd\nimport cupy as cp\nimport cudf\nimport dask_cudf\nfrom dask_cuda import LocalCUDACluster\nfrom dask.distributed import Client\nfrom dask.utils import parse_bytes\nfrom dask.delayed import delayed\nimport rmm\n\n# NVTabular\nimport nvtabular as nvt\nimport nvtabular.ops as ops\nfrom nvtabular.io import Shuffle\nfrom nvtabular.utils import _pynvml_mem_size, device_mem_size\n\nprint(nvt.__version__)", "0.5.3\n" ], [ "# define some information about where to get our data\nBASE_DIR = \"./nvtabular_temp\"\n!rm -r $BASE_DIR && mkdir $BASE_DIR\ninput_path = './dataset'\ndask_workdir = os.path.join(BASE_DIR, \"workdir\")\noutput_path = os.path.join(BASE_DIR, \"output\")\nstats_path = os.path.join(BASE_DIR, \"stats\")", "_____no_output_____" ] ], [ [ "This example was tested on a DGX server with 8 GPUs. If you have less GPUs, modify the `NUM_GPUS` variable accordingly.", "_____no_output_____" ] ], [ [ "NUM_GPUS = [0,1,2,3,4,5,6,7]\n#NUM_GPUS = [0]\n\n# Dask dashboard\ndashboard_port = \"8787\"\n\n# Deploy a Single-Machine Multi-GPU Cluster\nprotocol = \"tcp\" # \"tcp\" or \"ucx\"\nvisible_devices = \",\".join([str(n) for n in NUM_GPUS]) # Delect devices to place workers\ndevice_limit_frac = 0.5 # Spill GPU-Worker memory to host at this limit.\ndevice_pool_frac = 0.6\npart_mem_frac = 0.05\n\n# Use total device size to calculate args.device_limit_frac\ndevice_size = device_mem_size(kind=\"total\")\ndevice_limit = int(device_limit_frac * device_size)\ndevice_pool_size = int(device_pool_frac * device_size)\npart_size = int(part_mem_frac * device_size)\n\n# Check if any device memory is already occupied\n\"\"\"\nfor dev in visible_devices.split(\",\"):\n fmem = _pynvml_mem_size(kind=\"free\", index=int(dev))\n used = (device_size - fmem) / 1e9\n if used > 1.0:\n warnings.warn(f\"BEWARE - {used} GB is already occupied on device {int(dev)}!\")\n\"\"\"\n\ncluster = None # (Optional) Specify existing scheduler port\nif cluster is None:\n cluster = LocalCUDACluster(\n protocol = protocol,\n n_workers=len(visible_devices.split(\",\")),\n CUDA_VISIBLE_DEVICES = visible_devices,\n device_memory_limit = device_limit,\n local_directory=dask_workdir,\n dashboard_address=\":\" + dashboard_port,\n )\n\n# Create the distributed client\nclient = Client(cluster)\nclient", "_____no_output_____" ], [ "!nvidia-smi", "Mon Jul 12 05:59:31 2021 \n+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.2 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n| | | MIG M. |\n|===============================+======================+======================|\n| 0 Tesla V100-SXM2... On | 00000000:06:00.0 Off | 0 |\n| N/A 38C P0 60W / 300W | 613MiB / 16160MiB | 0% Default |\n| | | N/A |\n+-------------------------------+----------------------+----------------------+\n| 1 Tesla V100-SXM2... 
On | 00000000:07:00.0 Off | 0 |\n| N/A 40C P0 60W / 300W | 308MiB / 16160MiB | 0% Default |\n| | | N/A |\n+-------------------------------+----------------------+----------------------+\n| 2 Tesla V100-SXM2... On | 00000000:0A:00.0 Off | 0 |\n| N/A 43C P0 61W / 300W | 308MiB / 16160MiB | 0% Default |\n| | | N/A |\n+-------------------------------+----------------------+----------------------+\n| 3 Tesla V100-SXM2... On | 00000000:0B:00.0 Off | 0 |\n| N/A 42C P0 64W / 300W | 308MiB / 16160MiB | 0% Default |\n| | | N/A |\n+-------------------------------+----------------------+----------------------+\n| 4 Tesla V100-SXM2... On | 00000000:85:00.0 Off | 0 |\n| N/A 44C P0 61W / 300W | 308MiB / 16160MiB | 0% Default |\n| | | N/A |\n+-------------------------------+----------------------+----------------------+\n| 5 Tesla V100-SXM2... On | 00000000:86:00.0 Off | 0 |\n| N/A 46C P0 63W / 300W | 308MiB / 16160MiB | 0% Default |\n| | | N/A |\n+-------------------------------+----------------------+----------------------+\n| 6 Tesla V100-SXM2... On | 00000000:89:00.0 Off | 0 |\n| N/A 43C P0 60W / 300W | 308MiB / 16160MiB | 0% Default |\n| | | N/A |\n+-------------------------------+----------------------+----------------------+\n| 7 Tesla V100-SXM2... On | 00000000:8A:00.0 Off | 0 |\n| N/A 40C P0 59W / 300W | 308MiB / 16160MiB | 0% Default |\n| | | N/A |\n+-------------------------------+----------------------+----------------------+\n \n+-----------------------------------------------------------------------------+\n| Processes: |\n| GPU GI CI PID Type Process name GPU Memory |\n| ID ID Usage |\n|=============================================================================|\n+-----------------------------------------------------------------------------+\n" ], [ "# Initialize RMM pool on ALL workers\ndef _rmm_pool():\n rmm.reinitialize(\n # RMM may require the pool size to be a multiple of 256.\n pool_allocator=True,\n initial_pool_size=(device_pool_size // 256) * 256, # Use default size\n )\n \nclient.run(_rmm_pool)", "_____no_output_____" ] ], [ [ "### Define NVTabular dataset", "_____no_output_____" ] ], [ [ "train_paths = glob.glob('./data/train.parquet')\nvalid_paths = glob.glob('./data/valid.parquet')\ntest_paths = glob.glob('./data/test.parquet')\n\ntrain_dataset = nvt.Dataset(train_paths, engine='parquet', part_mem_fraction=0.15)\nvalid_dataset = nvt.Dataset(valid_paths, engine='parquet', part_mem_fraction=0.15)\ntest_dataset = nvt.Dataset(test_paths, engine='parquet', part_mem_fraction=0.15)", "_____no_output_____" ], [ "train_dataset.to_ddf().head()", "_____no_output_____" ], [ "len(train_dataset.to_ddf().columns)", "_____no_output_____" ], [ "train_dataset.to_ddf().columns", "_____no_output_____" ], [ "len(train_dataset.to_ddf())", "_____no_output_____" ] ], [ [ "### Preprocessing and feature engineering", "_____no_output_____" ], [ "In this notebook we will explore a few feature engineering technique with NVTabular:\n\n- Creating cross features, e.g. 
`user_id` and `brand`\n- Target encoding\n\nThe engineered features will then be preprocessed into a form suitable for a machine learning model:\n\n- Filling missing values\n- Encoding categorical features into integer values\n- Normalizing numeric features", "_____no_output_____" ] ], [ [ "from nvtabular.ops import LambdaOp\n\n# cross features\ndef user_id_cross_maker(col, gdf):\n    return col.astype(str) + '_' + gdf['user_id'].astype(str)\n\nuser_id_cross_features = (\n    nvt.ColumnGroup(['product_id', 'brand', 'ts_hour', 'ts_minute']) >>\n    LambdaOp(user_id_cross_maker, dependency=['user_id']) >> \n    nvt.ops.Rename(postfix = '_user_id_cross')\n)\n\n\ndef user_id_brand_cross_maker(col, gdf):\n    return col.astype(str) + '_' + gdf['user_id'].astype(str) + '_' + gdf['brand'].astype(str)\n\nuser_id_brand_cross_features = (\n    nvt.ColumnGroup(['ts_hour', 'ts_weekday', 'cat_0', 'cat_1', 'cat_2']) >>\n    LambdaOp(user_id_brand_cross_maker, dependency=['user_id', 'brand']) >> \n    nvt.ops.Rename(postfix = '_user_id_brand_cross')\n)\n\ntarget_encode = (\n    ['brand', 'user_id', 'product_id', 'cat_2', ['ts_weekday', 'ts_day']] >>\n    nvt.ops.TargetEncoding(\n        nvt.ColumnGroup('target'),\n        kfold=5,\n        p_smooth=20,\n        out_dtype=\"float32\",\n    )\n)\n\ncat_feats = (user_id_brand_cross_features + user_id_cross_features) >> nvt.ops.Categorify()\ncont_feats = ['price', 'ts_weekday', 'ts_day', 'ts_month'] >> nvt.ops.FillMissing() >> nvt.ops.Normalize()\ncont_feats += target_encode >> nvt.ops.Rename(postfix = '_TE')", "_____no_output_____" ], [ "output = cat_feats + cont_feats + 'target'\nproc = nvt.Workflow(output)", "_____no_output_____" ] ], [ [ "### Visualize workflow as a DAG\n", "_____no_output_____" ] ], [ [ "!apt install graphviz", "Reading package lists... Done\nBuilding dependency tree       \nReading state information... Done\ngraphviz is already the newest version (2.42.2-3build2).\n0 upgraded, 0 newly installed, 0 to remove and 4 not upgraded.\n" ], [ "output.graph", "_____no_output_____" ] ], [ [ "### Executing the workflow\n\nAfter having defined the workflow, calling the `fit()` method will start the actual computation to record the required statistics from the training data.", "_____no_output_____" ] ], [ [ "%%time\ntime_preproc_start = time()\nproc.fit(train_dataset)\ntime_preproc = time()-time_preproc_start", "CPU times: user 11 s, sys: 9.39 s, total: 20.4 s\nWall time: 22.7 s\n" ], [ "dict_dtypes = {}\nfor col in cat_feats.columns:\n    dict_dtypes[col] = np.int64\nfor col in cont_feats.columns:\n    dict_dtypes[col] = np.float32\n\ndict_dtypes['target'] = np.float32", "_____no_output_____" ] ], [ [ "Next, we call the `transform()` method to transform the datasets.", "_____no_output_____" ] ], [ [ "output_train_dir = os.path.join(output_path, 'train/')\noutput_valid_dir = os.path.join(output_path, 'valid/')\noutput_test_dir = os.path.join(output_path, 'test/')\n! rm -rf $output_train_dir && mkdir -p $output_train_dir\n! rm -rf $output_valid_dir && mkdir -p $output_valid_dir\n! 
rm -rf $output_test_dir && mkdir -p $output_test_dir", "_____no_output_____" ], [ "%%time\n\ntime_preproc_start = time()\nproc.transform(train_dataset).to_parquet(output_path=output_train_dir, dtypes=dict_dtypes,\n shuffle=nvt.io.Shuffle.PER_PARTITION,\n cats=cat_feats.columns,\n conts=cont_feats.columns,\n labels=['target'])\ntime_preproc += time()-time_preproc_start", "CPU times: user 2.14 s, sys: 2.62 s, total: 4.76 s\nWall time: 8.57 s\n" ], [ "!ls -l $output_train_dir", "total 767915\n-rw-r--r-- 1 root dip 706360433 Jul 12 06:00 0.187d286a7350426e877b1361cbcbf51f.parquet\n-rw-r--r-- 1 root dip 75 Jul 12 06:00 _file_list.txt\n-rw-r--r-- 1 root dip 21931 Jul 12 06:00 _metadata\n-rw-r--r-- 1 root dip 1073 Jul 12 06:00 _metadata.json\n" ], [ "%%time\n\ntime_preproc_start = time()\nproc.transform(valid_dataset).to_parquet(output_path=output_valid_dir, dtypes=dict_dtypes,\n shuffle=nvt.io.Shuffle.PER_PARTITION,\n cats=cat_feats.columns,\n conts=cont_feats.columns,\n labels=['target'])\ntime_preproc += time()-time_preproc_start", "CPU times: user 940 ms, sys: 1.35 s, total: 2.29 s\nWall time: 2.47 s\n" ], [ "!ls -l $output_valid_dir", "total 100539\n-rw-r--r-- 1 root dip 92438411 Jul 12 06:00 0.b0fc84833238495c9a7241a03aa8c4a0.parquet\n-rw-r--r-- 1 root dip 75 Jul 12 06:00 _file_list.txt\n-rw-r--r-- 1 root dip 10351 Jul 12 06:00 _metadata\n-rw-r--r-- 1 root dip 1073 Jul 12 06:00 _metadata.json\n" ], [ "%%time\n\ntime_preproc_start = time()\nproc.transform(test_dataset).to_parquet(output_path=output_test_dir, dtypes=dict_dtypes,\n shuffle=nvt.io.Shuffle.PER_PARTITION,\n cats=cat_feats.columns,\n conts=cont_feats.columns,\n labels=['target'])\ntime_preproc += time()-time_preproc_start", "CPU times: user 932 ms, sys: 1.14 s, total: 2.07 s\nWall time: 2.24 s\n" ], [ "time_preproc", "_____no_output_____" ] ], [ [ "### Verify the preprocessed data\n\nLet's quickly read the data back and verify that all fields have the expected format.", "_____no_output_____" ] ], [ [ "nvtdata = pd.read_parquet(output_train_dir)\nnvtdata.head()", "_____no_output_____" ], [ "nvtdata_valid = pd.read_parquet(output_valid_dir)\nnvtdata_valid.head()", "_____no_output_____" ], [ "sum(nvtdata_valid['ts_hour_user_id_brand_cross']==0)", "_____no_output_____" ], [ "len(nvtdata_valid)", "_____no_output_____" ] ], [ [ "### Getting the embedding size\n\nNext, we need to get the embedding size for the categorical variables. 
This is an important input for defining the embedding table size to be used by HugeCTR.", "_____no_output_____" ] ], [ [ "embeddings = ops.get_embedding_sizes(proc)\nembeddings", "_____no_output_____" ], [ "print([embeddings[x][0] for x in cat_feats.columns])", "[4427037, 3961156, 2877223, 2890639, 2159304, 4398425, 3009092, 3999369, 5931061]\n" ], [ "cat_feats.columns", "_____no_output_____" ], [ "embedding_size_str = \"{}\".format([embeddings[x][0] for x in cat_feats.columns])\nembedding_size_str", "_____no_output_____" ], [ "num_con_feates = len(cont_feats.columns)\nnum_con_feates", "_____no_output_____" ], [ "cont_feats.columns", "_____no_output_____" ], [ "print([embeddings[x][0] for x in cat_feats.columns])\nprint(len(cont_feats.columns))\nprint(len(cat_feats.columns))", "[4427037, 3961156, 2877223, 2890639, 2159304, 4398425, 3009092, 3999369, 5931061]\n9\n9\n" ] ], [ [ "Next, we'll shutdown our Dask client from earlier to free up some memory so that we can share it with HugeCTR.", "_____no_output_____" ] ], [ [ "client.shutdown()\ncluster.close()", "_____no_output_____" ] ], [ [ "### Preparing the training Python script for HugeCTR\n\nHugeCTR model can be defined by Python API. The below Python script defines a DLRM model and specifies the training resources. \n\nSeveral parameters that need to be edited to match this dataset are:\n\n- `slot_size_array`: cadinalities for the categorical variables\n- `dense_dim`: number of dense features\n- `slot_num`: number of categorical variables\n\nThe model graph can be saved into a JSON file by calling `model.graph_to_json`, which will be used for inference afterwards.", "_____no_output_____" ] ], [ [ "%%writefile hugectr_dlrm_ecommerce.py\nimport hugectr\nfrom mpi4py import MPI\nsolver = hugectr.CreateSolver(max_eval_batches = 2720,\n batchsize_eval = 16384,\n batchsize = 16384,\n lr = 0.1,\n warmup_steps = 8000,\n decay_start = 48000,\n decay_steps = 24000,\n vvgpu = [[0,1,2,3]],\n repeat_dataset = True,\n i64_input_key = True)\nreader = hugectr.DataReaderParams(data_reader_type = hugectr.DataReaderType_t.Parquet,\n source = [\"./nvtabular_temp/output/train/_file_list.txt\"],\n eval_source = \"./nvtabular_temp/output/valid/_file_list.txt\",\n check_type = hugectr.Check_t.Non,\n slot_size_array = [4427037, 3961156, 2877223, 2890639, 2159304, 4398425, 3009092, 3999369, 5931061])\noptimizer = hugectr.CreateOptimizer(optimizer_type = hugectr.Optimizer_t.SGD,\n update_type = hugectr.Update_t.Local,\n atomic_update = True)\nmodel = hugectr.Model(solver, reader, optimizer)\nmodel.add(hugectr.Input(label_dim = 1, label_name = \"label\",\n dense_dim = 9, dense_name = \"dense\",\n data_reader_sparse_param_array = \n [hugectr.DataReaderSparseParam(\"data1\", 1, True, 9)]))\nmodel.add(hugectr.SparseEmbedding(embedding_type = hugectr.Embedding_t.DistributedSlotSparseEmbeddingHash,\n workspace_size_per_gpu_in_mb = 4883,\n embedding_vec_size = 128,\n combiner = 'sum',\n sparse_embedding_name = \"sparse_embedding1\",\n bottom_name = \"data1\",\n optimizer = optimizer))\nmodel.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.InnerProduct,\n bottom_names = [\"dense\"],\n top_names = [\"fc1\"],\n num_output=512))\nmodel.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.ReLU,\n bottom_names = [\"fc1\"],\n top_names = [\"relu1\"])) \nmodel.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.InnerProduct,\n bottom_names = [\"relu1\"],\n top_names = [\"fc2\"],\n num_output=256))\nmodel.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.ReLU,\n bottom_names = 
[\"fc2\"],\n top_names = [\"relu2\"])) \nmodel.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.InnerProduct,\n bottom_names = [\"relu2\"],\n top_names = [\"fc3\"],\n num_output=128))\nmodel.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.ReLU,\n bottom_names = [\"fc3\"],\n top_names = [\"relu3\"])) \nmodel.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.Interaction,\n bottom_names = [\"relu3\",\"sparse_embedding1\"],\n top_names = [\"interaction1\"]))\nmodel.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.InnerProduct,\n bottom_names = [\"interaction1\"],\n top_names = [\"fc4\"],\n num_output=1024))\nmodel.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.ReLU,\n bottom_names = [\"fc4\"],\n top_names = [\"relu4\"])) \nmodel.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.InnerProduct,\n bottom_names = [\"relu4\"],\n top_names = [\"fc5\"],\n num_output=1024))\nmodel.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.ReLU,\n bottom_names = [\"fc5\"],\n top_names = [\"relu5\"])) \nmodel.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.InnerProduct,\n bottom_names = [\"relu5\"],\n top_names = [\"fc6\"],\n num_output=512))\nmodel.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.ReLU,\n bottom_names = [\"fc6\"],\n top_names = [\"relu6\"])) \nmodel.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.InnerProduct,\n bottom_names = [\"relu6\"],\n top_names = [\"fc7\"],\n num_output=256))\nmodel.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.ReLU,\n bottom_names = [\"fc7\"],\n top_names = [\"relu7\"])) \nmodel.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.InnerProduct,\n bottom_names = [\"relu7\"],\n top_names = [\"fc8\"],\n num_output=1)) \nmodel.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.BinaryCrossEntropyLoss,\n bottom_names = [\"fc8\", \"label\"],\n top_names = [\"loss\"]))\nmodel.compile()\nmodel.summary()\nmodel.graph_to_json(graph_config_file = \"dlrm_ecommerce.json\")\nmodel.fit(max_iter = 12000, display = 1000, eval_interval = 3000, snapshot = 10000, snapshot_prefix = \"./\")", "Overwriting hugectr_dlrm_ecommerce.py\n" ] ], [ [ "<a id=\"3\"></a>\n## 3. 
HugeCTR training\n\nNow we are ready to train a DLRM model with HugeCTR.\n\n", "_____no_output_____" ] ], [ [ "!python3 hugectr_dlrm_ecommerce.py", "====================================================Model Init=====================================================\n[12d06h09m18s][HUGECTR][INFO]: Global seed is 3594265474\nDevice 0: Tesla V100-SXM2-16GB\nDevice 1: Tesla V100-SXM2-16GB\nDevice 2: Tesla V100-SXM2-16GB\nDevice 3: Tesla V100-SXM2-16GB\n[12d06h09m24s][HUGECTR][INFO]: num of DataReader workers: 4\n[12d06h09m24s][HUGECTR][INFO]: max_vocabulary_size_per_gpu_=10000384\n===================================================Model Compile===================================================\n[12d06h10m36s][HUGECTR][INFO]: gpu0 start to init embedding[\n[1212d0d0606hh10m10m3636s][HUGECTR][INFO]: gpu3 start to init embeddings\n][HUGECTR][INFO]: gpu1 start to init embedding\n[12d06h10m36s][HUGECTR][INFO]: gpu2 start to init embedding\n[12d06h10m36s][HUGECTR][INFO]: gpu3 init embedding done\n[12d06h10m36s][HUGECTR][INFO]: gpu2 init embedding done\n[12d06h10m36s][HUGECTR][INFO]: gpu0 init embedding done\n[12d06h10m36s][HUGECTR][INFO]: gpu1 init embedding done\n===================================================Model Summary===================================================\nLabel Dense Sparse \nlabel dense data1 \n(None, 1) (None, 9) \n------------------------------------------------------------------------------------------------------------------\nLayer Type Input Name Output Name Output Shape \n------------------------------------------------------------------------------------------------------------------\nDistributedSlotSparseEmbeddingHash data1 sparse_embedding1 (None, 9, 128) \nInnerProduct dense fc1 (None, 512) \nReLU fc1 relu1 (None, 512) \nInnerProduct relu1 fc2 (None, 256) \nReLU fc2 relu2 (None, 256) \nInnerProduct relu2 fc3 (None, 128) \nReLU fc3 relu3 (None, 128) \nInteraction relu3,sparse_embedding1 interaction1 (None, 174) \nInnerProduct interaction1 fc4 (None, 1024) \nReLU fc4 relu4 (None, 1024) \nInnerProduct relu4 fc5 (None, 1024) \nReLU fc5 relu5 (None, 1024) \nInnerProduct relu5 fc6 (None, 512) \nReLU fc6 relu6 (None, 512) \nInnerProduct relu6 fc7 (None, 256) \nReLU fc7 relu7 (None, 256) \nInnerProduct relu7 fc8 (None, 1) \nBinaryCrossEntropyLoss fc8,label loss \n------------------------------------------------------------------------------------------------------------------\n[12d60h10m36s][HUGECTR][INFO]: Save the model graph to dlrm_ecommerce.json, successful\n=====================================================Model Fit=====================================================\n[12d60h10m36s][HUGECTR][INFO]: Use non-epoch mode with number of iterations: 12000\n[12d60h10m36s][HUGECTR][INFO]: Training batchsize: 16384, evaluation batchsize: 16384\n[12d60h10m36s][HUGECTR][INFO]: Evaluation interval: 3000, snapshot interval: 10000\n[12d60h10m36s][HUGECTR][INFO]: Sparse embedding trainable: 1, dense network trainable: 1\n[12d60h10m36s][HUGECTR][INFO]: Use mixed precision: 0, scaler: 1.000000, use cuda graph: 1\n[12d60h10m36s][HUGECTR][INFO]: lr: 0.100000, warmup_steps: 8000, decay_start: 48000, decay_steps: 24000, decay_power: 2.000000, end_lr: 0.000000\n[12d60h10m36s][HUGECTR][INFO]: Training source file: ./nvtabular_temp/output/train/_file_list.txt\n[12d60h10m36s][HUGECTR][INFO]: Evaluation source file: ./nvtabular_temp/output/valid/_file_list.txt\n[12d60h10m44s][HUGECTR][INFO]: Iter: 1000 Time(1000 iters): 7.665220s Loss: 0.653065 lr:0.012512\n[12d60h10m52s][HUGECTR][INFO]: 
Iter: 2000 Time(1000 iters): 7.689930s Loss: 0.528752 lr:0.025013\n[12d60h10m59s][HUGECTR][INFO]: Iter: 3000 Time(1000 iters): 7.629534s Loss: 0.526933 lr:0.037512\n[12d60h11m22s][HUGECTR][INFO]: Evaluation, AUC: 0.648711\n[12d60h11m22s][HUGECTR][INFO]: Eval Time for 2720 iters: 22.321903s\n[12d60h11m29s][HUGECTR][INFO]: Iter: 4000 Time(1000 iters): 30.022837s Loss: 0.506167 lr:0.050012\n[12d60h11m37s][HUGECTR][INFO]: Iter: 5000 Time(1000 iters): 7.661832s Loss: 0.523222 lr:0.062513\n[12d60h11m45s][HUGECTR][INFO]: Iter: 6000 Time(1000 iters): 7.689628s Loss: 0.500767 lr:0.075013\n[12d60h12m70s][HUGECTR][INFO]: Evaluation, AUC: 0.650302\n[12d60h12m70s][HUGECTR][INFO]: Eval Time for 2720 iters: 22.313590s\n[12d60h12m15s][HUGECTR][INFO]: Iter: 7000 Time(1000 iters): 29.978754s Loss: 0.519886 lr:0.087513\n[12d60h12m22s][HUGECTR][INFO]: Iter: 8000 Time(1000 iters): 7.689465s Loss: 0.511019 lr:0.100000\n[12d60h12m30s][HUGECTR][INFO]: Iter: 9000 Time(1000 iters): 7.657638s Loss: 0.517614 lr:0.100000\n[12d60h12m52s][HUGECTR][INFO]: Evaluation, AUC: 0.646990\n[12d60h12m52s][HUGECTR][INFO]: Eval Time for 2720 iters: 22.289129s\n[12d60h13m00s][HUGECTR][INFO]: Iter: 10000 Time(1000 iters): 29.954833s Loss: 0.494307 lr:0.100000\n[12d60h13m90s][HUGECTR][INFO]: Rank0: Write hash table to file\n[12d60h14m50s][HUGECTR][INFO]: Dumping sparse weights to files, successful\n[12d60h14m50s][HUGECTR][INFO]: Dumping sparse optimzer states to files, successful\n[12d60h14m50s][HUGECTR][INFO]: Dumping dense weights to file, successful\n[12d60h14m50s][HUGECTR][INFO]: Dumping dense optimizer states to file, successful\n[12d60h14m50s][HUGECTR][INFO]: Dumping untrainable weights to file, successful\n[12d60h14m13s][HUGECTR][INFO]: Iter: 11000 Time(1000 iters): 73.121730s Loss: 0.521651 lr:0.100000\n" ] ], [ [ "<a id=\"4\"></a>\n## 4. HugeCTR inference\n\nIn this section, we will read the test dataset, and compute the AUC value. 
\n\nWe will utilize the saved model graph in JSON format for inference.", "_____no_output_____" ], [ "### Prepare the inference session", "_____no_output_____" ] ], [ [ "import sys\nfrom hugectr.inference import InferenceParams, CreateInferenceSession\nfrom mpi4py import MPI", "_____no_output_____" ], [ "# create inference session\ninference_params = InferenceParams(model_name = \"dlrm\",\n                                max_batchsize = 4096,\n                                hit_rate_threshold = 0.6,\n                                dense_model_file = \"./_dense_10000.model\",\n                                sparse_model_files = [\"./0_sparse_10000.model\"],\n                                device_id = 0,\n                                use_gpu_embedding_cache = True,\n                                cache_size_percentage = 0.2,\n                                i64_input_key = True)\ninference_session = CreateInferenceSession(\"dlrm_ecommerce.json\", inference_params)", "[12d06h15m03s][HUGECTR][INFO]: default_emb_vec_value is not specified using default: 0.000000\n[12d06h16m49s][HUGECTR][INFO]: Global seed is 1517370373\n[12d06h16m50s][HUGECTR][INFO]: Peer-to-peer access cannot be fully enabled.\n[12d06h16m50s][HUGECTR][INFO]: Use mixed precision: 0\n[12d06h16m50s][HUGECTR][INFO]: start create embedding for inference\n[12d06h16m50s][HUGECTR][INFO]: sparse_input name data1\n[12d06h16m50s][HUGECTR][INFO]: create embedding for inference success\n[12d06h16m50s][HUGECTR][INFO]: Inference stage skip BinaryCrossEntropyLoss layer, replaced by Sigmoid layer\n" ] ], [ [ "### Reading and preparing the data\n\nWe first read the NVTabular-processed data.", "_____no_output_____" ] ], [ [ "import pandas as pd\n\nnvtdata_test = pd.read_parquet('./nvtabular_temp/output/test')\nnvtdata_test.head()", "_____no_output_____" ], [ "con_feats = ['price',\n 'ts_weekday',\n 'ts_day',\n 'ts_month',\n 'TE_brand_target_TE',\n 'TE_user_id_target_TE',\n 'TE_product_id_target_TE',\n 'TE_cat_2_target_TE',\n 'TE_ts_weekday_ts_day_target_TE']", "_____no_output_____" ], [ "cat_feats = ['ts_hour_user_id_brand_cross',\n 'ts_weekday_user_id_brand_cross',\n 'cat_0_user_id_brand_cross',\n 'cat_1_user_id_brand_cross',\n 'cat_2_user_id_brand_cross',\n 'product_id_user_id_cross',\n 'brand_user_id_cross',\n 'ts_hour_user_id_cross',\n 'ts_minute_user_id_cross']", "_____no_output_____" ], [ "emb_size = [4427037, 3961156, 2877223, 2890639, 2159304, 4398425, 3009092, 3999369, 5931061]", "_____no_output_____" ] ], [ [ "### Converting data to CSR format\n\nHugeCTR expects data in CSR format for inference. One important thing to note is that HugeCTR requires the categorical variables to occupy distinct integer ranges. For example, if there are 10 users and 10 items, then the users should be encoded in the 0-9 range, while items should be in the 10-19 range. 
NVTabular encodes both users and items in the 0-9 ranges.\n\nFor this reason, we need to shift the keys of the categorical variable produced by NVTabular to comply with HugeCTR.", "_____no_output_____" ] ], [ [ "import numpy as np\nshift = np.insert(np.cumsum(emb_size), 0, 0)[:-1]", "_____no_output_____" ], [ "cat_data = nvtdata_test[cat_feats].values + shift", "_____no_output_____" ], [ "dense_data = nvtdata_test[con_feats].values", "_____no_output_____" ], [ "def infer_batch(inference_session, dense_data_batch, cat_data_batch):\n dense_features = list(dense_data_batch.flatten())\n embedding_columns = list(cat_data_batch.flatten())\n row_ptrs= list(range(0,len(embedding_columns)+1))\n output = inference_session.predict(dense_features, embedding_columns, row_ptrs)\n return output", "_____no_output_____" ] ], [ [ "Now we are ready to carry out inference on the test set.", "_____no_output_____" ] ], [ [ "batch_size = 4096\nnum_batches = (len(dense_data) // batch_size) + 1\nbatch_idx = np.array_split(np.arange(len(dense_data)), num_batches)", "_____no_output_____" ], [ "!pip install tqdm", "Requirement already satisfied: tqdm in /opt/conda/lib/python3.8/site-packages (4.59.0)\n" ], [ "from tqdm import tqdm\n\nlabels = []\nfor batch_id in tqdm(batch_idx):\n dense_data_batch = dense_data[batch_id]\n cat_data_batch = cat_data[batch_id]\n results = infer_batch(inference_session, dense_data_batch, cat_data_batch)\n labels.extend(results)", "100%|████████████████████████████████████████████████████████████████████████████████| 677/677 [00:07<00:00, 89.71it/s]\n" ], [ "len(labels)", "_____no_output_____" ] ], [ [ "### Computing the test AUC value", "_____no_output_____" ] ], [ [ "ground_truth = nvtdata_test['target'].values", "_____no_output_____" ], [ "from sklearn.metrics import roc_auc_score\n\nroc_auc_score(ground_truth, labels)", "_____no_output_____" ] ], [ [ "# Conclusion\n\nIn this notebook, we have walked you through the process of preprocessing the data, train a DLRM model with HugeCTR, then carrying out inference with the HugeCTR Python interface. Try this workflow on your data and let us know your feedback.\n\n", "_____no_output_____" ] ] ]
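A small worked example of the key-shifting step above, using toy cardinalities instead of the real `emb_size` (purely illustrative): each column's 0-based codes are offset by the cumulative cardinality of the columns before it, so all categorical features share one index space as HugeCTR expects.

```python
import numpy as np

emb_size_toy = [10, 10, 5]  # toy cardinalities, e.g. users, items, hours
shift = np.insert(np.cumsum(emb_size_toy), 0, 0)[:-1]
print(shift)         # [ 0 10 20]

batch = np.array([[3, 7, 2],   # NVTabular-style codes, each column 0-based
                  [9, 0, 4]])
print(batch + shift)           # [[ 3 17 22]
                               #  [ 9 10 24]]
```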
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
ec99178bba50daad3110687123fbc127138e3b6f
56,443
ipynb
Jupyter Notebook
sentiment-rnn/Sentiment_RNN_Solution.ipynb
olivetom/deep-learning-v2-pytorch
98f7ee1adfdfa5b3e68db4eb4e2fbf094e47346c
[ "MIT" ]
null
null
null
sentiment-rnn/Sentiment_RNN_Solution.ipynb
olivetom/deep-learning-v2-pytorch
98f7ee1adfdfa5b3e68db4eb4e2fbf094e47346c
[ "MIT" ]
null
null
null
sentiment-rnn/Sentiment_RNN_Solution.ipynb
olivetom/deep-learning-v2-pytorch
98f7ee1adfdfa5b3e68db4eb4e2fbf094e47346c
[ "MIT" ]
null
null
null
40.316429
849
0.495225
[ [ [ "# Sentiment Analysis with an RNN\n\nIn this notebook, you'll implement a recurrent neural network that performs sentiment analysis. \n>Using an RNN rather than a strictly feedforward network is more accurate since we can include information about the *sequence* of words. \n\nHere we'll use a dataset of movie reviews, accompanied by sentiment labels: positive or negative.\n\n<img src=\"https://github.com/udacity/deep-learning-v2-pytorch/blob/master/sentiment-rnn/assets/reviews_ex.png?raw=1\" width=40%>\n\n### Network Architecture\n\nThe architecture for this network is shown below.\n\n<img src=\"https://github.com/udacity/deep-learning-v2-pytorch/blob/master/sentiment-rnn/assets/network_diagram.png?raw=1\" width=40%>\n\n>**First, we'll pass in words to an embedding layer.** We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the Word2Vec lesson. You can actually train an embedding with the Skip-gram Word2Vec model and use those embeddings as input, here. However, it's good enough to just have an embedding layer and let the network learn a different embedding table on its own. *In this case, the embedding layer is for dimensionality reduction, rather than for learning semantic representations.*\n\n>**After input words are passed to an embedding layer, the new embeddings will be passed to LSTM cells.** The LSTM cells will add *recurrent* connections to the network and give us the ability to include information about the *sequence* of words in the movie review data. \n\n>**Finally, the LSTM outputs will go to a sigmoid output layer.** We're using a sigmoid function because positive and negative = 1 and 0, respectively, and a sigmoid will output predicted, sentiment values between 0-1. \n\nWe don't care about the sigmoid outputs except for the **very last one**; we can ignore the rest. We'll calculate the loss by comparing the output at the last time step and the training label (pos or neg).", "_____no_output_____" ], [ "---\n### Load in and visualize the data", "_____no_output_____" ] ], [ [ "import numpy as np\n\n# read data from text files\nwith open('data/reviews.txt', 'r') as f:\n reviews = f.read()\nwith open('data/labels.txt', 'r') as f:\n labels = f.read()", "_____no_output_____" ], [ "print(reviews[:1000])\nprint()\nprint(labels[:20])", "bromwell high is a cartoon comedy . it ran at the same time as some other programs about school life such as teachers . my years in the teaching profession lead me to believe that bromwell high s satire is much closer to reality than is teachers . the scramble to survive financially the insightful students who can see right through their pathetic teachers pomp the pettiness of the whole situation all remind me of the schools i knew and their students . when i saw the episode in which a student repeatedly tried to burn down the school i immediately recalled . . . . . . . . . at . . . . . . . . . . high . a classic line inspector i m here to sack one of your teachers . student welcome to bromwell high . i expect that many adults of my age think that bromwell high is far fetched . what a pity that it isn t \nstory of a man who has unnatural feelings for a pig . starts out with a opening scene that is a terrific example of absurd comedy . 
a formal orchestra audience is turn\n\npositive\nnegative\npo\n" ] ], [ [ "## Data pre-processing\n\nThe first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.\n\nYou can see an example of the reviews data above. Here are the processing steps we'll want to take:\n>* We'll want to get rid of periods and extraneous punctuation.\n* Also, you might notice that the reviews are delimited with newline characters `\\n`. To deal with those, I'm going to split the text into each review using `\\n` as the delimiter. \n* Then I can combine all the reviews back together into one big string.\n\nFirst, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.", "_____no_output_____" ] ], [ [ "from string import punctuation\n\n# get rid of punctuation\nreviews = reviews.lower() # lowercase, standardize\nall_text = ''.join([c for c in reviews if c not in punctuation])\n\n# split by new lines and spaces\nreviews_split = all_text.split('\\n')\nall_text = ' '.join(reviews_split)\n\n# create a list of words\nwords = all_text.split()", "_____no_output_____" ], [ "words[:30]", "_____no_output_____" ] ], [ [ "### Encoding the words\n\nThe embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.\n\n> **Exercise:** Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers **start at 1, not 0**.\n> Also, convert the reviews to integers and store the reviews in a new list called `reviews_ints`. 
", "_____no_output_____" ] ], [ [ "# feel free to use this import \nfrom collections import Counter\n\n## Build a dictionary that maps words to integers\ncounts = Counter(words)\nvocab = sorted(counts, key=counts.get, reverse=True)\nvocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}\n\n## use the dict to tokenize each review in reviews_split\n## store the tokenized reviews in reviews_ints\nreviews_ints = []\nfor review in reviews_split:\n reviews_ints.append([vocab_to_int[word] for word in review.split()])", "_____no_output_____" ] ], [ [ "**Test your code**\n\nAs a text that you've implemented the dictionary correctly, print out the number of unique words in your vocabulary and the contents of the first, tokenized review.", "_____no_output_____" ] ], [ [ "# stats about vocabulary\nprint('Unique words: ', len((vocab_to_int))) # should ~ 74000+\nprint()\n\n# print tokens in first review\nprint('Tokenized review: \\n', reviews_ints[:1])", "Unique words: 74072\n\nTokenized review: \n [[21025, 308, 6, 3, 1050, 207, 8, 2138, 32, 1, 171, 57, 15, 49, 81, 5785, 44, 382, 110, 140, 15, 5194, 60, 154, 9, 1, 4975, 5852, 475, 71, 5, 260, 12, 21025, 308, 13, 1978, 6, 74, 2395, 5, 613, 73, 6, 5194, 1, 24103, 5, 1983, 10166, 1, 5786, 1499, 36, 51, 66, 204, 145, 67, 1199, 5194, 19869, 1, 37442, 4, 1, 221, 883, 31, 2988, 71, 4, 1, 5787, 10, 686, 2, 67, 1499, 54, 10, 216, 1, 383, 9, 62, 3, 1406, 3686, 783, 5, 3483, 180, 1, 382, 10, 1212, 13583, 32, 308, 3, 349, 341, 2913, 10, 143, 127, 5, 7690, 30, 4, 129, 5194, 1406, 2326, 5, 21025, 308, 10, 528, 12, 109, 1448, 4, 60, 543, 102, 12, 21025, 308, 6, 227, 4146, 48, 3, 2211, 12, 8, 215, 23]]\n" ] ], [ [ "### Encoding the labels\n\nOur labels are \"positive\" or \"negative\". To use these labels in our network, we need to convert them to 0 and 1.\n\n> **Exercise:** Convert labels from `positive` and `negative` to 1 and 0, respectively, and place those in a new list, `encoded_labels`.", "_____no_output_____" ] ], [ [ "# 1=positive, 0=negative label conversion\nlabels_split = labels.split('\\n')\nencoded_labels = np.array([1 if label == 'positive' else 0 for label in labels_split])", "_____no_output_____" ] ], [ [ "### Removing Outliers\n\nAs an additional pre-processing step, we want to make sure that our reviews are in good shape for standard processing. That is, our network will expect a standard input text size, and so, we'll want to shape our reviews into a specific length. We'll approach this task in two main steps:\n\n1. Getting rid of extremely long or short reviews; the outliers\n2. Padding/truncating the remaining data so that we have reviews of the same length.\n\nBefore we pad our review text, we should check for reviews of extremely short or long lengths; outliers that may mess with our training.", "_____no_output_____" ] ], [ [ "# outlier review stats\nreview_lens = Counter([len(x) for x in reviews_ints])\nprint(\"Zero-length reviews: {}\".format(review_lens[0]))\nprint(\"Maximum review length: {}\".format(max(review_lens)))", "Zero-length reviews: 1\nMaximum review length: 2514\n" ] ], [ [ "Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. We'll have to remove any super short reviews and truncate super long reviews. 
This removes outliers and should allow our model to train more efficiently.\n\n> **Exercise:** First, remove *any* reviews with zero length from the `reviews_ints` list and their corresponding label in `encoded_labels`.", "_____no_output_____" ] ], [ [ "print('Number of reviews before removing outliers: ', len(reviews_ints))\n\n## remove any reviews/labels with zero length from the reviews_ints list.\n\n# get indices of any reviews with length 0\nnon_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]\n\n# remove 0-length reviews and their labels\nreviews_ints = [reviews_ints[ii] for ii in non_zero_idx]\nencoded_labels = np.array([encoded_labels[ii] for ii in non_zero_idx])\n\nprint('Number of reviews after removing outliers: ', len(reviews_ints))", "Number of reviews before removing outliers: 25001\nNumber of reviews after removing outliers: 25000\n" ] ], [ [ "---\n## Padding sequences\n\nTo deal with both short and very long reviews, we'll pad or truncate all our reviews to a specific length. For reviews shorter than some `seq_length`, we'll pad with 0s. For reviews longer than `seq_length`, we can truncate them to the first `seq_length` words. A good `seq_length`, in this case, is 200.\n\n> **Exercise:** Define a function that returns an array `features` that contains the padded data, of a standard size, that we'll pass to the network. \n* The data should come from `review_ints`, since we want to feed integers to the network. \n* Each row should be `seq_length` elements long. \n* For reviews shorter than `seq_length` words, **left pad** with 0s. That is, if the review is `['best', 'movie', 'ever']`, `[117, 18, 128]` as integers, the row will look like `[0, 0, 0, ..., 0, 117, 18, 128]`. \n* For reviews longer than `seq_length`, use only the first `seq_length` words as the feature vector.\n\nAs a small example, if the `seq_length=10` and an input review is: \n```\n[117, 18, 128]\n```\nThe resultant, padded sequence should be: \n\n```\n[0, 0, 0, 0, 0, 0, 0, 117, 18, 128]\n```\n\n**Your final `features` array should be a 2D array, with as many rows as there are reviews, and as many columns as the specified `seq_length`.**\n\nThis isn't trivial and there are a bunch of ways to do this. 
But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.", "_____no_output_____" ] ], [ [ "def pad_features(reviews_ints, seq_length):\n ''' Return features of review_ints, where each review is padded with 0's \n or truncated to the input seq_length.\n '''\n \n # getting the correct rows x cols shape\n features = np.zeros((len(reviews_ints), seq_length), dtype=int)\n\n # for each review, I grab that review and \n for i, row in enumerate(reviews_ints):\n features[i, -len(row):] = np.array(row)[:seq_length]\n \n return features", "_____no_output_____" ], [ "# Test your implementation!\n\nseq_length = 200\n\nfeatures = pad_features(reviews_ints, seq_length=seq_length)\n\n## test statements - do not change - ##\nassert len(features)==len(reviews_ints), \"Your features should have as many rows as reviews.\"\nassert len(features[0])==seq_length, \"Each feature row should contain seq_length values.\"\n\n# print first 10 values of the first 30 batches \nprint(features[:30,:10])", "[[ 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0]\n [22382 42 46418 15 706 17139 3389 47 77 35]\n [ 4505 505 15 3 3342 162 8312 1652 6 4819]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 54 10 14 116 60 798 552 71 364 5]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 1 330 578 34 3 162 748 2731 9 325]\n [ 9 11 10171 5305 1946 689 444 22 280 673]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 1 307 10399 2069 1565 6202 6528 3288 17946 10628]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 21 122 2069 1565 515 8181 88 6 1325 1182]\n [ 1 20 6 76 40 6 58 81 95 5]\n [ 54 10 84 329 26230 46427 63 10 14 614]\n [ 11 20 6 30 1436 32317 3769 690 15100 6]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 40 26 109 17952 1422 9 1 327 4 125]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 10 499 1 307 10399 55 74 8 13 30]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0]\n [ 0 0 0 0 0 0 0 0 0 0]]\n" ] ], [ [ "## Training, Validation, Test\n\nWith our data in nice shape, we'll split it into training, validation, and test sets.\n\n> **Exercise:** Create the training, validation, and test sets. \n* You'll need to create sets for the features and the labels, `train_x` and `train_y`, for example. \n* Define a split fraction, `split_frac` as the fraction of data to **keep** in the training set. Usually this is set to 0.8 or 0.9. 
\n* Whatever data is left will be split in half to create the validation and *testing* data.", "_____no_output_____" ] ], [ [ "split_frac = 0.8\n\n## split data into training, validation, and test data (features and labels, x and y)\n\nsplit_idx = int(len(features)*split_frac)\ntrain_x, remaining_x = features[:split_idx], features[split_idx:]\ntrain_y, remaining_y = encoded_labels[:split_idx], encoded_labels[split_idx:]\n\ntest_idx = int(len(remaining_x)*0.5)\nval_x, test_x = remaining_x[:test_idx], remaining_x[test_idx:]\nval_y, test_y = remaining_y[:test_idx], remaining_y[test_idx:]\n\n## print out the shapes of your resultant feature data\nprint(\"\\t\\t\\tFeature Shapes:\")\nprint(\"Train set: \\t\\t{}\".format(train_x.shape), \n \"\\nValidation set: \\t{}\".format(val_x.shape),\n \"\\nTest set: \\t\\t{}\".format(test_x.shape))", "\t\t\tFeature Shapes:\nTrain set: \t\t(20000, 200) \nValidation set: \t(2500, 200) \nTest set: \t\t(2500, 200)\n" ] ], [ [ "**Check your work**\n\nWith train, validation, and test fractions equal to 0.8, 0.1, 0.1, respectively, the final, feature data shapes should look like:\n```\n Feature Shapes:\nTrain set: \t\t (20000, 200) \nValidation set: \t(2500, 200) \nTest set: \t\t (2500, 200)\n```", "_____no_output_____" ], [ "---\n## DataLoaders and Batching\n\nAfter creating training, test, and validation data, we can create DataLoaders for this data by following two steps:\n1. Create a known format for accessing our data, using [TensorDataset](https://pytorch.org/docs/stable/data.html#) which takes in an input set of data and a target set of data with the same first dimension, and creates a dataset.\n2. Create DataLoaders and batch our training, validation, and test Tensor datasets.\n\n```\ntrain_data = TensorDataset(torch.from_numpy(train_x), torch.from_numpy(train_y))\ntrain_loader = DataLoader(train_data, batch_size=batch_size)\n```\n\nThis is an alternative to creating a generator function for batching our data into full batches.", "_____no_output_____" ] ], [ [ "import torch\nfrom torch.utils.data import TensorDataset, DataLoader\n\n# create Tensor datasets\ntrain_data = TensorDataset(torch.from_numpy(train_x), torch.from_numpy(train_y))\nvalid_data = TensorDataset(torch.from_numpy(val_x), torch.from_numpy(val_y))\ntest_data = TensorDataset(torch.from_numpy(test_x), torch.from_numpy(test_y))\n\n# dataloaders\nbatch_size = 50\n\n# make sure the SHUFFLE your training data\ntrain_loader = DataLoader(train_data, shuffle=True, batch_size=batch_size)\nvalid_loader = DataLoader(valid_data, shuffle=True, batch_size=batch_size)\ntest_loader = DataLoader(test_data, shuffle=True, batch_size=batch_size)", "_____no_output_____" ], [ "# obtain one batch of training data\ndataiter = iter(train_loader)\nsample_x, sample_y = dataiter.next()\n\nprint('Sample input size: ', sample_x.size()) # batch_size, seq_length\nprint('Sample input: \\n', sample_x)\nprint()\nprint('Sample label size: ', sample_y.size()) # batch_size\nprint('Sample label: \\n', sample_y)", "Sample input size: torch.Size([50, 200])\nSample input: \n tensor([[ 9, 8707, 7097, ..., 1, 1497, 14],\n [ 0, 0, 0, ..., 24, 5, 39],\n [ 0, 0, 0, ..., 760, 4, 224],\n ...,\n [ 0, 0, 0, ..., 60, 76, 618],\n [ 10, 216, 1, ..., 20, 15, 8],\n [ 0, 0, 0, ..., 243, 38, 8]])\n\nSample label size: torch.Size([50])\nSample label: \n tensor([0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0,\n 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1,\n 0, 0])\n" ] ], [ [ "---\n# Sentiment 
\n### The Embedding Layer\n\nWe need to add an [embedding layer](https://pytorch.org/docs/stable/nn.html#embedding) because there are 74000+ words in our vocabulary. It is massively inefficient to one-hot encode that many classes. So, instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using Word2Vec, then load it here. But, it's fine to just make a new layer, using it only for dimensionality reduction, and let the network learn the weights.\n
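\nAs a minimal sketch of the lookup idea (toy sizes, not the values used in this notebook):\n\n```\nimport torch\nimport torch.nn as nn\n\nlookup = nn.Embedding(num_embeddings=10, embedding_dim=4) # 10-word vocab, 4-d vectors\ntokens = torch.tensor([[1, 3, 3, 7]]) # a batch of token ids\nprint(lookup(tokens).shape) # torch.Size([1, 4, 4])\n```\n\nEach integer just indexes a row of the embedding weight matrix, so no one-hot vectors are ever materialized.\n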
\n\n### The LSTM Layer(s)\n\nWe'll create an [LSTM](https://pytorch.org/docs/stable/nn.html#lstm) to use in our recurrent network, which takes in an input_size, a hidden_dim, a number of layers, a dropout probability (for dropout between multiple layers), and a batch_first parameter.\n\nMost of the time, your network will have better performance with more layers, typically between 2 and 3. Adding more layers allows the network to learn really complex relationships. \n\n> **Exercise:** Complete the `__init__`, `forward`, and `init_hidden` functions for the SentimentRNN model class.\n\nNote: `init_hidden` should initialize the hidden and cell state of an lstm layer to all zeros, and move those states to GPU, if available.", "_____no_output_____" ] ], [ [ "# First checking if GPU is available\ntrain_on_gpu=torch.cuda.is_available()\n\nif(train_on_gpu):\n print('Training on GPU.')\nelse:\n print('No GPU available, training on CPU.')", "Training on GPU.\n" ], [ "import torch.nn as nn\n\nclass SentimentRNN(nn.Module):\n \"\"\"\n The RNN model that will be used to perform Sentiment analysis.\n \"\"\"\n\n def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, drop_prob=0.5):\n \"\"\"\n Initialize the model by setting up the layers.\n \"\"\"\n super(SentimentRNN, self).__init__()\n\n self.output_size = output_size\n self.n_layers = n_layers\n self.hidden_dim = hidden_dim\n \n # embedding and LSTM layers\n self.embedding = nn.Embedding(vocab_size, embedding_dim)\n self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, \n dropout=drop_prob, batch_first=True)\n \n # dropout layer\n self.dropout = nn.Dropout(0.3)\n \n # linear and sigmoid layers\n self.fc = nn.Linear(hidden_dim, output_size)\n self.sig = nn.Sigmoid()\n \n\n def forward(self, x, hidden):\n \"\"\"\n Perform a forward pass of our model on some input and hidden state.\n \"\"\"\n batch_size = x.size(0)\n\n # embeddings and lstm_out\n x = x.long()\n embeds = self.embedding(x)\n lstm_out, hidden = self.lstm(embeds, hidden)\n \n # stack up lstm outputs\n lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)\n \n # dropout and fully-connected layer\n out = self.dropout(lstm_out)\n out = self.fc(out)\n # sigmoid function\n sig_out = self.sig(out)\n \n # reshape to be batch_size first\n sig_out = sig_out.view(batch_size, -1)\n sig_out = sig_out[:, -1] # take the sigmoid output from the last time step of each sequence\n \n # return last sigmoid output and hidden state\n return sig_out, hidden\n \n \n def init_hidden(self, batch_size):\n ''' Initializes hidden state '''\n # Create two new tensors with sizes n_layers x batch_size x hidden_dim,\n # initialized to zero, for hidden state and cell state of LSTM\n weight = next(self.parameters()).data\n \n if (train_on_gpu):\n hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),\n weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())\n else:\n hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),\n weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())\n \n return hidden\n ", "_____no_output_____" ] ], [ [ "## Instantiate the network\n\nHere, we'll instantiate the network. First up, defining the hyperparameters.\n\n* `vocab_size`: Size of our vocabulary or the range of values for our input, word tokens.\n* `output_size`: Size of our desired output; the number of class scores we want to output (pos/neg).\n* `embedding_dim`: Number of columns in the embedding lookup table; size of our embeddings.\n* `hidden_dim`: Number of units in the hidden layers of our LSTM cells. Usually larger means better performance. Common values are 128, 256, 512, etc.\n* `n_layers`: Number of LSTM layers in the network. Typically between 1 and 3\n\n> **Exercise:** Define the model hyperparameters.\n", "_____no_output_____" ] ], [ [ "# Instantiate the model w/ hyperparams\nvocab_size = len(vocab_to_int)+1 # +1 for the 0 padding + our word tokens\noutput_size = 1\nembedding_dim = 400\nhidden_dim = 256\nn_layers = 2\n\nnet = SentimentRNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers)\n\nprint(net)", "SentimentRNN(\n (embedding): Embedding(74073, 400)\n (lstm): LSTM(400, 256, num_layers=2, batch_first=True, dropout=0.5)\n (dropout): Dropout(p=0.3, inplace=False)\n (fc): Linear(in_features=256, out_features=1, bias=True)\n (sig): Sigmoid()\n)\n" ] ], [ [ "---\n## Training\n\nBelow is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. You can also add code to save a model by name.\n\n>We'll also be using a new kind of cross entropy loss, which is designed to work with a single Sigmoid output. [BCELoss](https://pytorch.org/docs/stable/nn.html#bceloss), or **Binary Cross Entropy Loss**, applies cross entropy loss to a single value between 0 and 1.\n
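\nAs a quick, illustrative sanity check (made-up values): for a single prediction `p` with target `y`, BCE is `-[y*log(p) + (1-y)*log(1-p)]`:\n\n```\nimport torch\nimport torch.nn as nn\n\ncriterion = nn.BCELoss()\np = torch.tensor([0.9]) # a sigmoid output\ny = torch.tensor([1.0]) # a positive label\nprint(criterion(p, y)) # tensor(0.1054), i.e. -log(0.9)\n```\n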
\nWe also have some data and training hyperparameters:\n\n* `lr`: Learning rate for our optimizer.\n* `epochs`: Number of times to iterate through the training dataset.\n* `clip`: The maximum gradient value to clip at (to prevent exploding gradients).", "_____no_output_____" ] ], [ [ "# loss and optimization functions\nlr=0.001\n\ncriterion = nn.BCELoss()\noptimizer = torch.optim.Adam(net.parameters(), lr=lr)\n", "_____no_output_____" ], [ "# training params\n\nepochs = 4 # 3-4 is approx where I noticed the validation loss stop decreasing\n\ncounter = 0\nprint_every = 100\nclip=5 # gradient clipping\n\n# move model to GPU, if available\nif(train_on_gpu):\n net.cuda()\n\nnet.train()\n# train for some number of epochs\nfor e in range(epochs):\n # initialize hidden state\n h = net.init_hidden(batch_size)\n\n # batch loop\n for inputs, labels in train_loader:\n counter += 1\n\n if(train_on_gpu):\n inputs, labels = inputs.cuda(), labels.cuda()\n\n # Creating new variables for the hidden state, otherwise\n # we'd backprop through the entire training history\n h = tuple([each.data for each in h])\n\n # zero accumulated gradients\n net.zero_grad()\n\n # get the output from the model\n output, h = net(inputs, h)\n\n # calculate the loss and perform backprop\n loss = criterion(output.squeeze(), labels.float())\n loss.backward()\n # `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.\n nn.utils.clip_grad_norm_(net.parameters(), clip)\n optimizer.step()\n\n # loss stats\n if counter % print_every == 0:\n # Get validation loss\n val_h = net.init_hidden(batch_size)\n val_losses = []\n net.eval()\n for inputs, labels in valid_loader:\n\n # Creating new variables for the hidden state, otherwise\n # we'd backprop through the entire training history\n val_h = tuple([each.data for each in val_h])\n\n if(train_on_gpu):\n inputs, labels = inputs.cuda(), labels.cuda()\n\n output, val_h = net(inputs, val_h)\n val_loss = criterion(output.squeeze(), labels.float())\n\n val_losses.append(val_loss.item())\n\n net.train()\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Step: {}...\".format(counter),\n \"Loss: {:.6f}...\".format(loss.item()),\n \"Val Loss: {:.6f}\".format(np.mean(val_losses)))", "Epoch: 1/4... Step: 100... Loss: 0.677959... Val Loss: 0.642008\nEpoch: 1/4... Step: 200... Loss: 0.664585... Val Loss: 0.681140\nEpoch: 1/4... Step: 300... Loss: 0.645996... Val Loss: 0.675547\nEpoch: 1/4... Step: 400... Loss: 0.676859... Val Loss: 0.630865\nEpoch: 2/4... Step: 500... Loss: 0.636592... Val Loss: 0.611641\nEpoch: 2/4... Step: 600... Loss: 0.655095... Val Loss: 0.550141\nEpoch: 2/4... Step: 700... Loss: 0.465473... Val Loss: 0.511292\nEpoch: 2/4... Step: 800... Loss: 0.451874... Val Loss: 0.538470\nEpoch: 3/4... Step: 900... Loss: 0.365887... Val Loss: 0.496255\nEpoch: 3/4... Step: 1000... Loss: 0.239041... Val Loss: 0.526600\nEpoch: 3/4... Step: 1100... Loss: 0.320466... Val Loss: 0.459085\nEpoch: 3/4... Step: 1200... Loss: 0.386077... Val Loss: 0.446263\nEpoch: 4/4... Step: 1300... Loss: 0.264770... Val Loss: 0.445911\nEpoch: 4/4... Step: 1400... Loss: 0.162070... Val Loss: 0.468796\nEpoch: 4/4... Step: 1500... Loss: 0.227964... Val Loss: 0.568724\nEpoch: 4/4... Step: 1600... Loss: 0.160491... 
Val Loss: 0.466198\n" ] ], [ [ "---\n## Testing\n\nThere are a few ways to test your network.\n\n* **Test data performance:** First, we'll see how our trained model performs on all of our defined test_data, above. We'll calculate the average loss and accuracy over the test data.\n\n* **Inference on user-generated data:** Second, we'll see if we can input just one example review at a time (without a label), and see what the trained model predicts. Looking at new, user input data like this, and predicting an output label, is called **inference**.", "_____no_output_____" ] ], [ [ "# Get test data loss and accuracy\n\ntest_losses = [] # track loss\nnum_correct = 0\n\n# init hidden state\nh = net.init_hidden(batch_size)\n\nnet.eval()\n# iterate over test data\nfor inputs, labels in test_loader:\n\n # Creating new variables for the hidden state, otherwise\n # we'd backprop through the entire training history\n h = tuple([each.data for each in h])\n\n if(train_on_gpu):\n inputs, labels = inputs.cuda(), labels.cuda()\n \n # get predicted outputs\n output, h = net(inputs, h)\n \n # calculate loss\n test_loss = criterion(output.squeeze(), labels.float())\n test_losses.append(test_loss.item())\n \n # convert output probabilities to predicted class (0 or 1)\n pred = torch.round(output.squeeze()) # rounds to the nearest integer\n \n # compare predictions to true label\n correct_tensor = pred.eq(labels.float().view_as(pred))\n correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())\n num_correct += np.sum(correct)\n\n\n# -- stats! -- ##\n# avg test loss\nprint(\"Test loss: {:.3f}\".format(np.mean(test_losses)))\n\n# accuracy over all test data\ntest_acc = num_correct/len(test_loader.dataset)\nprint(\"Test accuracy: {:.3f}\".format(test_acc))", "Test loss: 0.516\nTest accuracy: 0.811\n" ] ], [ [ "### Inference on a test review\n\nYou can change this test_review to any text that you want. Read it and think: is it pos or neg? Then see if your model predicts correctly!\n \n> **Exercise:** Write a `predict` function that takes in a trained net, a plain text_review, and a sequence length, and prints out a custom statement for a positive or negative review!\n* You can use any functions that you've already defined or define any helper functions you want to complete `predict`, but it should just take in a trained net, a text review, and a sequence length.\n", "_____no_output_____" ] ], [ [ "# negative test review\ntest_review_neg = 'The worst movie I have seen; acting was terrible and I want my money back. 
This movie had bad acting and the dialogue was slow.'\n", "_____no_output_____" ], [ "from string import punctuation\n\ndef tokenize_review(test_review):\n test_review = test_review.lower() # lowercase\n # get rid of punctuation\n test_text = ''.join([c for c in test_review if c not in punctuation])\n\n # splitting by spaces\n test_words = test_text.split()\n\n # tokens\n test_ints = []\n test_ints.append([vocab_to_int[word] for word in test_words])\n\n return test_ints\n\n# test code and generate tokenized review\ntest_ints = tokenize_review(test_review_neg)\nprint(test_ints)", "[[1, 247, 18, 10, 28, 108, 113, 14, 388, 2, 10, 181, 60, 273, 144, 11, 18, 68, 76, 113, 2, 1, 410, 14, 539]]\n" ], [ "# test sequence padding\nseq_length=200\nfeatures = pad_features(test_ints, seq_length)\n\nprint(features)", "[[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0 1 247 18 10 28\n 108 113 14 388 2 10 181 60 273 144 11 18 68 76 113 2 1 410\n 14 539]]\n" ], [ "# test conversion to tensor and pass into your model\nfeature_tensor = torch.from_numpy(features)\nprint(feature_tensor.size())", "torch.Size([1, 200])\n" ], [ "def predict(net, test_review, sequence_length=200):\n \n net.eval()\n \n # tokenize review\n test_ints = tokenize_review(test_review)\n \n # pad tokenized sequence\n seq_length=sequence_length\n features = pad_features(test_ints, seq_length)\n \n # convert to tensor to pass into your model\n feature_tensor = torch.from_numpy(features)\n \n batch_size = feature_tensor.size(0)\n \n # initialize hidden state\n h = net.init_hidden(batch_size)\n \n if(train_on_gpu):\n feature_tensor = feature_tensor.cuda()\n \n # get the output from the model\n output, h = net(feature_tensor, h)\n \n # convert output probabilities to predicted class (0 or 1)\n pred = torch.round(output.squeeze()) \n # printing output value, before rounding\n print('Prediction value, pre-rounding: {:.6f}'.format(output.item()))\n \n # print custom response\n if(pred.item()==1):\n print(\"Positive review detected!\")\n else:\n print(\"Negative review detected.\")\n ", "_____no_output_____" ], [ "# positive test review\ntest_review_pos = 'This movie had the best acting and the dialogue was so good. I loved it.'\n", "_____no_output_____" ], [ "# call function\nseq_length=200 # good to use the length that was trained on\n\npredict(net, test_review_neg, seq_length)", "Prediction value, pre-rounding: 0.005722\nNegative review detected.\n" ] ], [ [ "### Try out test_reviews of your own!\n\nNow that you have a trained model and a predict function, you can pass in _any_ kind of text and this model will predict whether the text has a positive or negative sentiment. Push this model to its limits and try to find what words it associates with positive or negative.\n\nLater, you'll learn how to deploy a model like this to a production environment so that it can respond to any kind of user data put into a web app!", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
ec992dc11c550afa5d5cbd52db521d3655812eb3
5,325
ipynb
Jupyter Notebook
notebooks/a3.ipynb
giwankim/cs224n
d05d018dd3026aa48810260be50c94cda596dc82
[ "MIT" ]
null
null
null
notebooks/a3.ipynb
giwankim/cs224n
d05d018dd3026aa48810260be50c94cda596dc82
[ "MIT" ]
null
null
null
notebooks/a3.ipynb
giwankim/cs224n
d05d018dd3026aa48810260be50c94cda596dc82
[ "MIT" ]
null
null
null
24.539171
629
0.500469
[ [ [ "import numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F", "_____no_output_____" ], [ "n_features = 36\nembed_size = 30\nn_vocab = 100\nbatch_size = 4", "_____no_output_____" ], [ "embeddings = torch.randn((n_vocab, embed_size))", "_____no_output_____" ], [ "inds = torch.randint(0, n_vocab, (batch_size, n_features), dtype=torch.long); inds", "_____no_output_____" ], [ "# torch.gather requires an index with the same ndim as the input, so expand\n# the 1-D indices to (n_features, embed_size) before gathering rows\nx = torch.gather(embeddings, 0, inds[0].unsqueeze(1).expand(-1, embed_size)); x.shape", "_____no_output_____" ], [ "# index_select does the same row lookup directly from the 1-D indices\nx = torch.index_select(embeddings, 0, inds[0]); x.shape", "_____no_output_____" ], [ "x = x.view(-1); x.shape", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
ec993169f0bde35d878341564e754b77db762abd
320,028
ipynb
Jupyter Notebook
Amino_Acid_Pairs_with_various_distances.ipynb
psychedelic2007/Protein_Analysis
347c408e195ca839803fa04c888734fd419ede67
[ "MIT" ]
null
null
null
Amino_Acid_Pairs_with_various_distances.ipynb
psychedelic2007/Protein_Analysis
347c408e195ca839803fa04c888734fd419ede67
[ "MIT" ]
null
null
null
Amino_Acid_Pairs_with_various_distances.ipynb
psychedelic2007/Protein_Analysis
347c408e195ca839803fa04c888734fd419ede67
[ "MIT" ]
null
null
null
49.288156
1,342
0.455351
[ [ [ "'''Original code Author @Satyam Sangeet'''\n\nrbd_seq = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nprint(len(rbd_seq))", "223\n" ], [ "s1 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\nlambda_1 = []\n\nfor i in range(len(s1)):\n if(0<=i<222):\n c1 = s1[i]\n c2 = s1[i+1]\n lambda_1.append(c1+c2)\n else:\n break\nprint(lambda_1)", "['RV', 'VQ', 'QP', 'PT', 'TE', 'ES', 'SI', 'IV', 'VR', 'RF', 'FP', 'PN', 'NI', 'IT', 'TN', 'NL', 'LC', 'CP', 'PF', 'FG', 'GE', 'EV', 'VF', 'FN', 'NA', 'AT', 'TR', 'RF', 'FA', 'AS', 'SV', 'VY', 'YA', 'AW', 'WN', 'NR', 'RK', 'KR', 'RI', 'IS', 'SN', 'NC', 'CV', 'VA', 'AD', 'DY', 'YS', 'SV', 'VL', 'LY', 'YN', 'NS', 'SA', 'AS', 'SF', 'FS', 'ST', 'TF', 'FK', 'KC', 'CY', 'YG', 'GV', 'VS', 'SP', 'PT', 'TK', 'KL', 'LN', 'ND', 'DL', 'LC', 'CF', 'FT', 'TN', 'NV', 'VY', 'YA', 'AD', 'DS', 'SF', 'FV', 'VI', 'IR', 'RG', 'GD', 'DE', 'EV', 'VR', 'RQ', 'QI', 'IA', 'AP', 'PG', 'GQ', 'QT', 'TG', 'GK', 'KI', 'IA', 'AD', 'DY', 'YN', 'NY', 'YK', 'KL', 'LP', 'PD', 'DD', 'DF', 'FT', 'TG', 'GC', 'CV', 'VI', 'IA', 'AW', 'WN', 'NS', 'SN', 'NN', 'NL', 'LD', 'DS', 'SK', 'KV', 'VG', 'GG', 'GN', 'NY', 'YN', 'NY', 'YL', 'LY', 'YR', 'RL', 'LF', 'FR', 'RK', 'KS', 'SN', 'NL', 'LK', 'KP', 'PF', 'FE', 'ER', 'RD', 'DI', 'IS', 'ST', 'TE', 'EI', 'IY', 'YQ', 'QA', 'AG', 'GS', 'ST', 'TP', 'PC', 'CN', 'NG', 'GV', 'VE', 'EG', 'GF', 'FN', 'NC', 'CY', 'YF', 'FP', 'PL', 'LQ', 'QS', 'SY', 'YG', 'GF', 'FQ', 'QP', 'PT', 'TN', 'NG', 'GV', 'VG', 'GY', 'YQ', 'QP', 'PY', 'YR', 'RV', 'VV', 'VV', 'VL', 'LS', 'SF', 'FE', 'EL', 'LL', 'LH', 'HA', 'AP', 'PA', 'AT', 'TV', 'VC', 'CG', 'GP', 'PK', 'KK', 'KS', 'ST', 'TN', 'NL', 'LV', 'VK', 'KN', 'NK', 'KC', 'CV', 'VN', 'NF']\n" ], [ "s2 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_2 = []\n\nfor i in range(len(s2)):\n if(0<=i<221):\n c1 = s2[i]\n c2 = s2[i+2]\n lambda_2.append(c1+c2)\n else:\n break\nprint(lambda_2)", "['RQ', 'VP', 'QT', 'PE', 'TS', 'EI', 'SV', 'IR', 'VF', 'RP', 'FN', 'PI', 'NT', 'IN', 'TL', 'NC', 'LP', 'CF', 'PG', 'FE', 'GV', 'EF', 'VN', 'FA', 'NT', 'AR', 'TF', 'RA', 'FS', 'AV', 'SY', 'VA', 'YW', 'AN', 'WR', 'NK', 'RR', 'KI', 'RS', 'IN', 'SC', 'NV', 'CA', 'VD', 'AY', 'DS', 'YV', 'SL', 'VY', 'LN', 'YS', 'NA', 'SS', 'AF', 'SS', 'FT', 'SF', 'TK', 'FC', 'KY', 'CG', 'YV', 'GS', 'VP', 'ST', 'PK', 'TL', 'KN', 'LD', 'NL', 'DC', 'LF', 'CT', 'FN', 'TV', 'NY', 'VA', 'YD', 'AS', 'DF', 'SV', 'FI', 'VR', 'IG', 'RD', 'GE', 'DV', 'ER', 'VQ', 'RI', 'QA', 'IP', 'AG', 'PQ', 'GT', 'QG', 'TK', 'GI', 'KA', 'ID', 'AY', 'DN', 'YY', 'NK', 'YL', 'KP', 'LD', 'PD', 'DF', 'DT', 'FG', 'TC', 'GV', 'CI', 'VA', 'IW', 'AN', 'WS', 'NN', 'SN', 'NL', 'ND', 'LS', 'DK', 'SV', 'KG', 'VG', 'GN', 'GY', 'NN', 'YY', 'NL', 'YY', 'LR', 'YL', 'RF', 'LR', 'FK', 'RS', 'KN', 'SL', 'NK', 'LP', 'KF', 'PE', 'FR', 'ED', 'RI', 'DS', 'IT', 'SE', 'TI', 'EY', 'IQ', 'YA', 'QG', 'AS', 'GT', 'SP', 'TC', 'PN', 'CG', 'NV', 'GE', 'VG', 'EF', 'GN', 'FC', 'NY', 'CF', 'YP', 'FL', 'PQ', 'LS', 'QY', 'SG', 'YF', 'GQ', 'FP', 'QT', 'PN', 'TG', 'NV', 'GG', 'VY', 'GQ', 
'YP', 'QY', 'PR', 'YV', 'RV', 'VV', 'VL', 'VS', 'LF', 'SE', 'FL', 'EL', 'LH', 'LA', 'HP', 'AA', 'PT', 'AV', 'TC', 'VG', 'CP', 'GK', 'PK', 'KS', 'KT', 'SN', 'TL', 'NV', 'LK', 'VN', 'KK', 'NC', 'KV', 'CN', 'VF']\n" ], [ "s3 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_3 = []\n\nfor i in range(len(s3)):\n if(0<=i<220):\n c1 = s3[i]\n c2 = s3[i+3]\n lambda_3.append(c1+c2)\n else:\n break\nprint(lambda_3)", "['RP', 'VT', 'QE', 'PS', 'TI', 'EV', 'SR', 'IF', 'VP', 'RN', 'FI', 'PT', 'NN', 'IL', 'TC', 'NP', 'LF', 'CG', 'PE', 'FV', 'GF', 'EN', 'VA', 'FT', 'NR', 'AF', 'TA', 'RS', 'FV', 'AY', 'SA', 'VW', 'YN', 'AR', 'WK', 'NR', 'RI', 'KS', 'RN', 'IC', 'SV', 'NA', 'CD', 'VY', 'AS', 'DV', 'YL', 'SY', 'VN', 'LS', 'YA', 'NS', 'SF', 'AS', 'ST', 'FF', 'SK', 'TC', 'FY', 'KG', 'CV', 'YS', 'GP', 'VT', 'SK', 'PL', 'TN', 'KD', 'LL', 'NC', 'DF', 'LT', 'CN', 'FV', 'TY', 'NA', 'VD', 'YS', 'AF', 'DV', 'SI', 'FR', 'VG', 'ID', 'RE', 'GV', 'DR', 'EQ', 'VI', 'RA', 'QP', 'IG', 'AQ', 'PT', 'GG', 'QK', 'TI', 'GA', 'KD', 'IY', 'AN', 'DY', 'YK', 'NL', 'YP', 'KD', 'LD', 'PF', 'DT', 'DG', 'FC', 'TV', 'GI', 'CA', 'VW', 'IN', 'AS', 'WN', 'NN', 'SL', 'ND', 'NS', 'LK', 'DV', 'SG', 'KG', 'VN', 'GY', 'GN', 'NY', 'YL', 'NY', 'YR', 'LL', 'YF', 'RR', 'LK', 'FS', 'RN', 'KL', 'SK', 'NP', 'LF', 'KE', 'PR', 'FD', 'EI', 'RS', 'DT', 'IE', 'SI', 'TY', 'EQ', 'IA', 'YG', 'QS', 'AT', 'GP', 'SC', 'TN', 'PG', 'CV', 'NE', 'GG', 'VF', 'EN', 'GC', 'FY', 'NF', 'CP', 'YL', 'FQ', 'PS', 'LY', 'QG', 'SF', 'YQ', 'GP', 'FT', 'QN', 'PG', 'TV', 'NG', 'GY', 'VQ', 'GP', 'YY', 'QR', 'PV', 'YV', 'RV', 'VL', 'VS', 'VF', 'LE', 'SL', 'FL', 'EH', 'LA', 'LP', 'HA', 'AT', 'PV', 'AC', 'TG', 'VP', 'CK', 'GK', 'PS', 'KT', 'KN', 'SL', 'TV', 'NK', 'LN', 'VK', 'KC', 'NV', 'KN', 'CF']\n" ], [ "s4 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_4 = []\n\nfor i in range(len(s4)):\n if(0<=i<219):\n c1 = s4[i]\n c2 = s4[i+4]\n lambda_4.append(c1+c2)\n else:\n break\nprint(lambda_4)", "['RT', 'VE', 'QS', 'PI', 'TV', 'ER', 'SF', 'IP', 'VN', 'RI', 'FT', 'PN', 'NL', 'IC', 'TP', 'NF', 'LG', 'CE', 'PV', 'FF', 'GN', 'EA', 'VT', 'FR', 'NF', 'AA', 'TS', 'RV', 'FY', 'AA', 'SW', 'VN', 'YR', 'AK', 'WR', 'NI', 'RS', 'KN', 'RC', 'IV', 'SA', 'ND', 'CY', 'VS', 'AV', 'DL', 'YY', 'SN', 'VS', 'LA', 'YS', 'NF', 'SS', 'AT', 'SF', 'FK', 'SC', 'TY', 'FG', 'KV', 'CS', 'YP', 'GT', 'VK', 'SL', 'PN', 'TD', 'KL', 'LC', 'NF', 'DT', 'LN', 'CV', 'FY', 'TA', 'ND', 'VS', 'YF', 'AV', 'DI', 'SR', 'FG', 'VD', 'IE', 'RV', 'GR', 'DQ', 'EI', 'VA', 'RP', 'QG', 'IQ', 'AT', 'PG', 'GK', 'QI', 'TA', 'GD', 'KY', 'IN', 'AY', 'DK', 'YL', 'NP', 'YD', 'KD', 'LF', 'PT', 'DG', 'DC', 'FV', 'TI', 'GA', 'CW', 'VN', 'IS', 'AN', 'WN', 'NL', 'SD', 'NS', 'NK', 'LV', 'DG', 'SG', 'KN', 'VY', 'GN', 'GY', 'NL', 'YY', 'NR', 'YL', 'LF', 'YR', 'RK', 'LS', 'FN', 'RL', 'KK', 'SP', 'NF', 'LE', 'KR', 'PD', 'FI', 'ES', 'RT', 'DE', 'II', 'SY', 'TQ', 'EA', 'IG', 'YS', 'QT', 'AP', 'GC', 'SN', 'TG', 'PV', 'CE', 'NG', 'GF', 'VN', 'EC', 'GY', 'FF', 'NP', 'CL', 'YQ', 'FS', 'PY', 'LG', 'QF', 'SQ', 'YP', 'GT', 'FN', 'QG', 'PV', 'TG', 'NY', 'GQ', 'VP', 'GY', 'YR', 'QV', 'PV', 'YV', 'RL', 'VS', 'VF', 'VE', 'LL', 'SL', 'FH', 'EA', 'LP', 'LA', 'HT', 'AV', 'PC', 'AG', 'TP', 'VK', 
'CK', 'GS', 'PT', 'KN', 'KL', 'SV', 'TK', 'NN', 'LK', 'VC', 'KV', 'NN', 'KF']\n" ], [ "s5 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_5 = []\n\nfor i in range(len(s5)):\n if(0<=i<218):\n c1 = s5[i]\n c2 = s5[i+5]\n lambda_5.append(c1+c2)\n else:\n break\nprint(lambda_5)", "['RE', 'VS', 'QI', 'PV', 'TR', 'EF', 'SP', 'IN', 'VI', 'RT', 'FN', 'PL', 'NC', 'IP', 'TF', 'NG', 'LE', 'CV', 'PF', 'FN', 'GA', 'ET', 'VR', 'FF', 'NA', 'AS', 'TV', 'RY', 'FA', 'AW', 'SN', 'VR', 'YK', 'AR', 'WI', 'NS', 'RN', 'KC', 'RV', 'IA', 'SD', 'NY', 'CS', 'VV', 'AL', 'DY', 'YN', 'SS', 'VA', 'LS', 'YF', 'NS', 'ST', 'AF', 'SK', 'FC', 'SY', 'TG', 'FV', 'KS', 'CP', 'YT', 'GK', 'VL', 'SN', 'PD', 'TL', 'KC', 'LF', 'NT', 'DN', 'LV', 'CY', 'FA', 'TD', 'NS', 'VF', 'YV', 'AI', 'DR', 'SG', 'FD', 'VE', 'IV', 'RR', 'GQ', 'DI', 'EA', 'VP', 'RG', 'QQ', 'IT', 'AG', 'PK', 'GI', 'QA', 'TD', 'GY', 'KN', 'IY', 'AK', 'DL', 'YP', 'ND', 'YD', 'KF', 'LT', 'PG', 'DC', 'DV', 'FI', 'TA', 'GW', 'CN', 'VS', 'IN', 'AN', 'WL', 'ND', 'SS', 'NK', 'NV', 'LG', 'DG', 'SN', 'KY', 'VN', 'GY', 'GL', 'NY', 'YR', 'NL', 'YF', 'LR', 'YK', 'RS', 'LN', 'FL', 'RK', 'KP', 'SF', 'NE', 'LR', 'KD', 'PI', 'FS', 'ET', 'RE', 'DI', 'IY', 'SQ', 'TA', 'EG', 'IS', 'YT', 'QP', 'AC', 'GN', 'SG', 'TV', 'PE', 'CG', 'NF', 'GN', 'VC', 'EY', 'GF', 'FP', 'NL', 'CQ', 'YS', 'FY', 'PG', 'LF', 'QQ', 'SP', 'YT', 'GN', 'FG', 'QV', 'PG', 'TY', 'NQ', 'GP', 'VY', 'GR', 'YV', 'QV', 'PV', 'YL', 'RS', 'VF', 'VE', 'VL', 'LL', 'SH', 'FA', 'EP', 'LA', 'LT', 'HV', 'AC', 'PG', 'AP', 'TK', 'VK', 'CS', 'GT', 'PN', 'KL', 'KV', 'SK', 'TN', 'NK', 'LC', 'VV', 'KN', 'NF']\n" ], [ "s6 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_6 = []\n\nfor i in range(len(s6)):\n if(0<=i<217):\n c1 = s6[i]\n c2 = s6[i+6]\n lambda_6.append(c1+c2)\n else:\n break\nprint(lambda_6)", "['RS', 'VI', 'QV', 'PR', 'TF', 'EP', 'SN', 'II', 'VT', 'RN', 'FL', 'PC', 'NP', 'IF', 'TG', 'NE', 'LV', 'CF', 'PN', 'FA', 'GT', 'ER', 'VF', 'FA', 'NS', 'AV', 'TY', 'RA', 'FW', 'AN', 'SR', 'VK', 'YR', 'AI', 'WS', 'NN', 'RC', 'KV', 'RA', 'ID', 'SY', 'NS', 'CV', 'VL', 'AY', 'DN', 'YS', 'SA', 'VS', 'LF', 'YS', 'NT', 'SF', 'AK', 'SC', 'FY', 'SG', 'TV', 'FS', 'KP', 'CT', 'YK', 'GL', 'VN', 'SD', 'PL', 'TC', 'KF', 'LT', 'NN', 'DV', 'LY', 'CA', 'FD', 'TS', 'NF', 'VV', 'YI', 'AR', 'DG', 'SD', 'FE', 'VV', 'IR', 'RQ', 'GI', 'DA', 'EP', 'VG', 'RQ', 'QT', 'IG', 'AK', 'PI', 'GA', 'QD', 'TY', 'GN', 'KY', 'IK', 'AL', 'DP', 'YD', 'ND', 'YF', 'KT', 'LG', 'PC', 'DV', 'DI', 'FA', 'TW', 'GN', 'CS', 'VN', 'IN', 'AL', 'WD', 'NS', 'SK', 'NV', 'NG', 'LG', 'DN', 'SY', 'KN', 'VY', 'GL', 'GY', 'NR', 'YL', 'NF', 'YR', 'LK', 'YS', 'RN', 'LL', 'FK', 'RP', 'KF', 'SE', 'NR', 'LD', 'KI', 'PS', 'FT', 'EE', 'RI', 'DY', 'IQ', 'SA', 'TG', 'ES', 'IT', 'YP', 'QC', 'AN', 'GG', 'SV', 'TE', 'PG', 'CF', 'NN', 'GC', 'VY', 'EF', 'GP', 'FL', 'NQ', 'CS', 'YY', 'FG', 'PF', 'LQ', 'QP', 'ST', 'YN', 'GG', 'FV', 'QG', 'PY', 'TQ', 'NP', 'GY', 'VR', 'GV', 'YV', 'QV', 'PL', 'YS', 'RF', 'VE', 'VL', 'VL', 'LH', 'SA', 'FP', 'EA', 'LT', 'LV', 'HC', 'AG', 'PP', 'AK', 'TK', 'VS', 'CT', 'GN', 'PL', 'KV', 'KK', 'SN', 'TK', 'NC', 'LV', 'VN', 'KF']\n" ], [ "s7 = 
\"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_7 = []\n\nfor i in range(len(s7)):\n if(0<=i<216):\n c1 = s7[i]\n c2 = s7[i+7]\n lambda_7.append(c1+c2)\n else:\n break\nprint(lambda_7)", "['RI', 'VV', 'QR', 'PF', 'TP', 'EN', 'SI', 'IT', 'VN', 'RL', 'FC', 'PP', 'NF', 'IG', 'TE', 'NV', 'LF', 'CN', 'PA', 'FT', 'GR', 'EF', 'VA', 'FS', 'NV', 'AY', 'TA', 'RW', 'FN', 'AR', 'SK', 'VR', 'YI', 'AS', 'WN', 'NC', 'RV', 'KA', 'RD', 'IY', 'SS', 'NV', 'CL', 'VY', 'AN', 'DS', 'YA', 'SS', 'VF', 'LS', 'YT', 'NF', 'SK', 'AC', 'SY', 'FG', 'SV', 'TS', 'FP', 'KT', 'CK', 'YL', 'GN', 'VD', 'SL', 'PC', 'TF', 'KT', 'LN', 'NV', 'DY', 'LA', 'CD', 'FS', 'TF', 'NV', 'VI', 'YR', 'AG', 'DD', 'SE', 'FV', 'VR', 'IQ', 'RI', 'GA', 'DP', 'EG', 'VQ', 'RT', 'QG', 'IK', 'AI', 'PA', 'GD', 'QY', 'TN', 'GY', 'KK', 'IL', 'AP', 'DD', 'YD', 'NF', 'YT', 'KG', 'LC', 'PV', 'DI', 'DA', 'FW', 'TN', 'GS', 'CN', 'VN', 'IL', 'AD', 'WS', 'NK', 'SV', 'NG', 'NG', 'LN', 'DY', 'SN', 'KY', 'VL', 'GY', 'GR', 'NL', 'YF', 'NR', 'YK', 'LS', 'YN', 'RL', 'LK', 'FP', 'RF', 'KE', 'SR', 'ND', 'LI', 'KS', 'PT', 'FE', 'EI', 'RY', 'DQ', 'IA', 'SG', 'TS', 'ET', 'IP', 'YC', 'QN', 'AG', 'GV', 'SE', 'TG', 'PF', 'CN', 'NC', 'GY', 'VF', 'EP', 'GL', 'FQ', 'NS', 'CY', 'YG', 'FF', 'PQ', 'LP', 'QT', 'SN', 'YG', 'GV', 'FG', 'QY', 'PQ', 'TP', 'NY', 'GR', 'VV', 'GV', 'YV', 'QL', 'PS', 'YF', 'RE', 'VL', 'VL', 'VH', 'LA', 'SP', 'FA', 'ET', 'LV', 'LC', 'HG', 'AP', 'PK', 'AK', 'TS', 'VT', 'CN', 'GL', 'PV', 'KK', 'KN', 'SK', 'TC', 'NV', 'LN', 'VF']\n" ], [ "s8 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_8 = []\n\nfor i in range(len(s8)):\n if(0<=i<215):\n c1 = s8[i]\n c2 = s8[i+8]\n lambda_8.append(c1+c2)\n else:\n break\nprint(lambda_8)", "['RV', 'VR', 'QF', 'PP', 'TN', 'EI', 'ST', 'IN', 'VL', 'RC', 'FP', 'PF', 'NG', 'IE', 'TV', 'NF', 'LN', 'CA', 'PT', 'FR', 'GF', 'EA', 'VS', 'FV', 'NY', 'AA', 'TW', 'RN', 'FR', 'AK', 'SR', 'VI', 'YS', 'AN', 'WC', 'NV', 'RA', 'KD', 'RY', 'IS', 'SV', 'NL', 'CY', 'VN', 'AS', 'DA', 'YS', 'SF', 'VS', 'LT', 'YF', 'NK', 'SC', 'AY', 'SG', 'FV', 'SS', 'TP', 'FT', 'KK', 'CL', 'YN', 'GD', 'VL', 'SC', 'PF', 'TT', 'KN', 'LV', 'NY', 'DA', 'LD', 'CS', 'FF', 'TV', 'NI', 'VR', 'YG', 'AD', 'DE', 'SV', 'FR', 'VQ', 'II', 'RA', 'GP', 'DG', 'EQ', 'VT', 'RG', 'QK', 'II', 'AA', 'PD', 'GY', 'QN', 'TY', 'GK', 'KL', 'IP', 'AD', 'DD', 'YF', 'NT', 'YG', 'KC', 'LV', 'PI', 'DA', 'DW', 'FN', 'TS', 'GN', 'CN', 'VL', 'ID', 'AS', 'WK', 'NV', 'SG', 'NG', 'NN', 'LY', 'DN', 'SY', 'KL', 'VY', 'GR', 'GL', 'NF', 'YR', 'NK', 'YS', 'LN', 'YL', 'RK', 'LP', 'FF', 'RE', 'KR', 'SD', 'NI', 'LS', 'KT', 'PE', 'FI', 'EY', 'RQ', 'DA', 'IG', 'SS', 'TT', 'EP', 'IC', 'YN', 'QG', 'AV', 'GE', 'SG', 'TF', 'PN', 'CC', 'NY', 'GF', 'VP', 'EL', 'GQ', 'FS', 'NY', 'CG', 'YF', 'FQ', 'PP', 'LT', 'QN', 'SG', 'YV', 'GG', 'FY', 'QQ', 'PP', 'TY', 'NR', 'GV', 'VV', 'GV', 'YL', 'QS', 'PF', 'YE', 'RL', 'VL', 'VH', 'VA', 'LP', 'SA', 'FT', 'EV', 'LC', 'LG', 'HP', 'AK', 'PK', 'AS', 'TT', 'VN', 'CL', 'GV', 'PK', 'KN', 'KK', 'SC', 'TV', 'NN', 'LF']\n" ], [ "s9 = 
\"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_9 = []\n\nfor i in range(len(s9)):\n if(0<=i<214):\n c1 = s9[i]\n c2 = s9[i+9]\n lambda_9.append(c1+c2)\n else:\n break\nprint(lambda_9)", "['RR', 'VF', 'QP', 'PN', 'TI', 'ET', 'SN', 'IL', 'VC', 'RP', 'FF', 'PG', 'NE', 'IV', 'TF', 'NN', 'LA', 'CT', 'PR', 'FF', 'GA', 'ES', 'VV', 'FY', 'NA', 'AW', 'TN', 'RR', 'FK', 'AR', 'SI', 'VS', 'YN', 'AC', 'WV', 'NA', 'RD', 'KY', 'RS', 'IV', 'SL', 'NY', 'CN', 'VS', 'AA', 'DS', 'YF', 'SS', 'VT', 'LF', 'YK', 'NC', 'SY', 'AG', 'SV', 'FS', 'SP', 'TT', 'FK', 'KL', 'CN', 'YD', 'GL', 'VC', 'SF', 'PT', 'TN', 'KV', 'LY', 'NA', 'DD', 'LS', 'CF', 'FV', 'TI', 'NR', 'VG', 'YD', 'AE', 'DV', 'SR', 'FQ', 'VI', 'IA', 'RP', 'GG', 'DQ', 'ET', 'VG', 'RK', 'QI', 'IA', 'AD', 'PY', 'GN', 'QY', 'TK', 'GL', 'KP', 'ID', 'AD', 'DF', 'YT', 'NG', 'YC', 'KV', 'LI', 'PA', 'DW', 'DN', 'FS', 'TN', 'GN', 'CL', 'VD', 'IS', 'AK', 'WV', 'NG', 'SG', 'NN', 'NY', 'LN', 'DY', 'SL', 'KY', 'VR', 'GL', 'GF', 'NR', 'YK', 'NS', 'YN', 'LL', 'YK', 'RP', 'LF', 'FE', 'RR', 'KD', 'SI', 'NS', 'LT', 'KE', 'PI', 'FY', 'EQ', 'RA', 'DG', 'IS', 'ST', 'TP', 'EC', 'IN', 'YG', 'QV', 'AE', 'GG', 'SF', 'TN', 'PC', 'CY', 'NF', 'GP', 'VL', 'EQ', 'GS', 'FY', 'NG', 'CF', 'YQ', 'FP', 'PT', 'LN', 'QG', 'SV', 'YG', 'GY', 'FQ', 'QP', 'PY', 'TR', 'NV', 'GV', 'VV', 'GL', 'YS', 'QF', 'PE', 'YL', 'RL', 'VH', 'VA', 'VP', 'LA', 'ST', 'FV', 'EC', 'LG', 'LP', 'HK', 'AK', 'PS', 'AT', 'TN', 'VL', 'CV', 'GK', 'PN', 'KK', 'KC', 'SV', 'TN', 'NF']\n" ], [ "s10 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_10 = []\n\nfor i in range(len(s10)):\n if(0<=i<213):\n c1 = s10[i]\n c2 = s10[i+10]\n lambda_10.append(c1+c2)\n else:\n break\nprint(lambda_10)", "['RF', 'VP', 'QN', 'PI', 'TT', 'EN', 'SL', 'IC', 'VP', 'RF', 'FG', 'PE', 'NV', 'IF', 'TN', 'NA', 'LT', 'CR', 'PF', 'FA', 'GS', 'EV', 'VY', 'FA', 'NW', 'AN', 'TR', 'RK', 'FR', 'AI', 'SS', 'VN', 'YC', 'AV', 'WA', 'ND', 'RY', 'KS', 'RV', 'IL', 'SY', 'NN', 'CS', 'VA', 'AS', 'DF', 'YS', 'ST', 'VF', 'LK', 'YC', 'NY', 'SG', 'AV', 'SS', 'FP', 'ST', 'TK', 'FL', 'KN', 'CD', 'YL', 'GC', 'VF', 'ST', 'PN', 'TV', 'KY', 'LA', 'ND', 'DS', 'LF', 'CV', 'FI', 'TR', 'NG', 'VD', 'YE', 'AV', 'DR', 'SQ', 'FI', 'VA', 'IP', 'RG', 'GQ', 'DT', 'EG', 'VK', 'RI', 'QA', 'ID', 'AY', 'PN', 'GY', 'QK', 'TL', 'GP', 'KD', 'ID', 'AF', 'DT', 'YG', 'NC', 'YV', 'KI', 'LA', 'PW', 'DN', 'DS', 'FN', 'TN', 'GL', 'CD', 'VS', 'IK', 'AV', 'WG', 'NG', 'SN', 'NY', 'NN', 'LY', 'DL', 'SY', 'KR', 'VL', 'GF', 'GR', 'NK', 'YS', 'NN', 'YL', 'LK', 'YP', 'RF', 'LE', 'FR', 'RD', 'KI', 'SS', 'NT', 'LE', 'KI', 'PY', 'FQ', 'EA', 'RG', 'DS', 'IT', 'SP', 'TC', 'EN', 'IG', 'YV', 'QE', 'AG', 'GF', 'SN', 'TC', 'PY', 'CF', 'NP', 'GL', 'VQ', 'ES', 'GY', 'FG', 'NF', 'CQ', 'YP', 'FT', 'PN', 'LG', 'QV', 'SG', 'YY', 'GQ', 'FP', 'QY', 'PR', 'TV', 'NV', 'GV', 'VL', 'GS', 'YF', 'QE', 'PL', 'YL', 'RH', 'VA', 'VP', 'VA', 'LT', 'SV', 'FC', 'EG', 'LP', 'LK', 'HK', 'AS', 'PT', 'AN', 'TL', 'VV', 'CK', 'GN', 'PK', 'KC', 'KV', 'SN', 'TF']\n" ], [ "s11 = 
\"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_11 = []\n\nfor i in range(len(s11)):\n if(0<=i<212):\n c1 = s11[i]\n c2 = s11[i+11]\n lambda_11.append(c1+c2)\n else:\n break\nprint(lambda_11)", "['RP', 'VN', 'QI', 'PT', 'TN', 'EL', 'SC', 'IP', 'VF', 'RG', 'FE', 'PV', 'NF', 'IN', 'TA', 'NT', 'LR', 'CF', 'PA', 'FS', 'GV', 'EY', 'VA', 'FW', 'NN', 'AR', 'TK', 'RR', 'FI', 'AS', 'SN', 'VC', 'YV', 'AA', 'WD', 'NY', 'RS', 'KV', 'RL', 'IY', 'SN', 'NS', 'CA', 'VS', 'AF', 'DS', 'YT', 'SF', 'VK', 'LC', 'YY', 'NG', 'SV', 'AS', 'SP', 'FT', 'SK', 'TL', 'FN', 'KD', 'CL', 'YC', 'GF', 'VT', 'SN', 'PV', 'TY', 'KA', 'LD', 'NS', 'DF', 'LV', 'CI', 'FR', 'TG', 'ND', 'VE', 'YV', 'AR', 'DQ', 'SI', 'FA', 'VP', 'IG', 'RQ', 'GT', 'DG', 'EK', 'VI', 'RA', 'QD', 'IY', 'AN', 'PY', 'GK', 'QL', 'TP', 'GD', 'KD', 'IF', 'AT', 'DG', 'YC', 'NV', 'YI', 'KA', 'LW', 'PN', 'DS', 'DN', 'FN', 'TL', 'GD', 'CS', 'VK', 'IV', 'AG', 'WG', 'NN', 'SY', 'NN', 'NY', 'LL', 'DY', 'SR', 'KL', 'VF', 'GR', 'GK', 'NS', 'YN', 'NL', 'YK', 'LP', 'YF', 'RE', 'LR', 'FD', 'RI', 'KS', 'ST', 'NE', 'LI', 'KY', 'PQ', 'FA', 'EG', 'RS', 'DT', 'IP', 'SC', 'TN', 'EG', 'IV', 'YE', 'QG', 'AF', 'GN', 'SC', 'TY', 'PF', 'CP', 'NL', 'GQ', 'VS', 'EY', 'GG', 'FF', 'NQ', 'CP', 'YT', 'FN', 'PG', 'LV', 'QG', 'SY', 'YQ', 'GP', 'FY', 'QR', 'PV', 'TV', 'NV', 'GL', 'VS', 'GF', 'YE', 'QL', 'PL', 'YH', 'RA', 'VP', 'VA', 'VT', 'LV', 'SC', 'FG', 'EP', 'LK', 'LK', 'HS', 'AT', 'PN', 'AL', 'TV', 'VK', 'CN', 'GK', 'PC', 'KV', 'KN', 'SF']\n" ], [ "s12 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_12 = []\n\nfor i in range(len(s12)):\n if(0<=i<211):\n c1 = s12[i]\n c2 = s12[i+12]\n lambda_12.append(c1+c2)\n else:\n break\nprint(lambda_12)", "['RN', 'VI', 'QT', 'PN', 'TL', 'EC', 'SP', 'IF', 'VG', 'RE', 'FV', 'PF', 'NN', 'IA', 'TT', 'NR', 'LF', 'CA', 'PS', 'FV', 'GY', 'EA', 'VW', 'FN', 'NR', 'AK', 'TR', 'RI', 'FS', 'AN', 'SC', 'VV', 'YA', 'AD', 'WY', 'NS', 'RV', 'KL', 'RY', 'IN', 'SS', 'NA', 'CS', 'VF', 'AS', 'DT', 'YF', 'SK', 'VC', 'LY', 'YG', 'NV', 'SS', 'AP', 'ST', 'FK', 'SL', 'TN', 'FD', 'KL', 'CC', 'YF', 'GT', 'VN', 'SV', 'PY', 'TA', 'KD', 'LS', 'NF', 'DV', 'LI', 'CR', 'FG', 'TD', 'NE', 'VV', 'YR', 'AQ', 'DI', 'SA', 'FP', 'VG', 'IQ', 'RT', 'GG', 'DK', 'EI', 'VA', 'RD', 'QY', 'IN', 'AY', 'PK', 'GL', 'QP', 'TD', 'GD', 'KF', 'IT', 'AG', 'DC', 'YV', 'NI', 'YA', 'KW', 'LN', 'PS', 'DN', 'DN', 'FL', 'TD', 'GS', 'CK', 'VV', 'IG', 'AG', 'WN', 'NY', 'SN', 'NY', 'NL', 'LY', 'DR', 'SL', 'KF', 'VR', 'GK', 'GS', 'NN', 'YL', 'NK', 'YP', 'LF', 'YE', 'RR', 'LD', 'FI', 'RS', 'KT', 'SE', 'NI', 'LY', 'KQ', 'PA', 'FG', 'ES', 'RT', 'DP', 'IC', 'SN', 'TG', 'EV', 'IE', 'YG', 'QF', 'AN', 'GC', 'SY', 'TF', 'PP', 'CL', 'NQ', 'GS', 'VY', 'EG', 'GF', 'FQ', 'NP', 'CT', 'YN', 'FG', 'PV', 'LG', 'QY', 'SQ', 'YP', 'GY', 'FR', 'QV', 'PV', 'TV', 'NL', 'GS', 'VF', 'GE', 'YL', 'QL', 'PH', 'YA', 'RP', 'VA', 'VT', 'VV', 'LC', 'SG', 'FP', 'EK', 'LK', 'LS', 'HT', 'AN', 'PL', 'AV', 'TK', 'VN', 'CK', 'GC', 'PV', 'KN', 'KF']\n" ], [ "s13 = 
\"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_13 = []\n\nfor i in range(len(s13)):\n if(0<=i<210):\n c1 = s13[i]\n c2 = s13[i+13]\n lambda_13.append(c1+c2)\n else:\n break\nprint(lambda_13)", "['RI', 'VT', 'QN', 'PL', 'TC', 'EP', 'SF', 'IG', 'VE', 'RV', 'FF', 'PN', 'NA', 'IT', 'TR', 'NF', 'LA', 'CS', 'PV', 'FY', 'GA', 'EW', 'VN', 'FR', 'NK', 'AR', 'TI', 'RS', 'FN', 'AC', 'SV', 'VA', 'YD', 'AY', 'WS', 'NV', 'RL', 'KY', 'RN', 'IS', 'SA', 'NS', 'CF', 'VS', 'AT', 'DF', 'YK', 'SC', 'VY', 'LG', 'YV', 'NS', 'SP', 'AT', 'SK', 'FL', 'SN', 'TD', 'FL', 'KC', 'CF', 'YT', 'GN', 'VV', 'SY', 'PA', 'TD', 'KS', 'LF', 'NV', 'DI', 'LR', 'CG', 'FD', 'TE', 'NV', 'VR', 'YQ', 'AI', 'DA', 'SP', 'FG', 'VQ', 'IT', 'RG', 'GK', 'DI', 'EA', 'VD', 'RY', 'QN', 'IY', 'AK', 'PL', 'GP', 'QD', 'TD', 'GF', 'KT', 'IG', 'AC', 'DV', 'YI', 'NA', 'YW', 'KN', 'LS', 'PN', 'DN', 'DL', 'FD', 'TS', 'GK', 'CV', 'VG', 'IG', 'AN', 'WY', 'NN', 'SY', 'NL', 'NY', 'LR', 'DL', 'SF', 'KR', 'VK', 'GS', 'GN', 'NL', 'YK', 'NP', 'YF', 'LE', 'YR', 'RD', 'LI', 'FS', 'RT', 'KE', 'SI', 'NY', 'LQ', 'KA', 'PG', 'FS', 'ET', 'RP', 'DC', 'IN', 'SG', 'TV', 'EE', 'IG', 'YF', 'QN', 'AC', 'GY', 'SF', 'TP', 'PL', 'CQ', 'NS', 'GY', 'VG', 'EF', 'GQ', 'FP', 'NT', 'CN', 'YG', 'FV', 'PG', 'LY', 'QQ', 'SP', 'YY', 'GR', 'FV', 'QV', 'PV', 'TL', 'NS', 'GF', 'VE', 'GL', 'YL', 'QH', 'PA', 'YP', 'RA', 'VT', 'VV', 'VC', 'LG', 'SP', 'FK', 'EK', 'LS', 'LT', 'HN', 'AL', 'PV', 'AK', 'TN', 'VK', 'CC', 'GV', 'PN', 'KF']\n" ], [ "s14 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_14 = []\n\nfor i in range(len(s14)):\n if(0<=i<209):\n c1 = s14[i]\n c2 = s14[i+14]\n lambda_14.append(c1+c2)\n else:\n break\nprint(lambda_14)", "['RT', 'VN', 'QL', 'PC', 'TP', 'EF', 'SG', 'IE', 'VV', 'RF', 'FN', 'PA', 'NT', 'IR', 'TF', 'NA', 'LS', 'CV', 'PY', 'FA', 'GW', 'EN', 'VR', 'FK', 'NR', 'AI', 'TS', 'RN', 'FC', 'AV', 'SA', 'VD', 'YY', 'AS', 'WV', 'NL', 'RY', 'KN', 'RS', 'IA', 'SS', 'NF', 'CS', 'VT', 'AF', 'DK', 'YC', 'SY', 'VG', 'LV', 'YS', 'NP', 'ST', 'AK', 'SL', 'FN', 'SD', 'TL', 'FC', 'KF', 'CT', 'YN', 'GV', 'VY', 'SA', 'PD', 'TS', 'KF', 'LV', 'NI', 'DR', 'LG', 'CD', 'FE', 'TV', 'NR', 'VQ', 'YI', 'AA', 'DP', 'SG', 'FQ', 'VT', 'IG', 'RK', 'GI', 'DA', 'ED', 'VY', 'RN', 'QY', 'IK', 'AL', 'PP', 'GD', 'QD', 'TF', 'GT', 'KG', 'IC', 'AV', 'DI', 'YA', 'NW', 'YN', 'KS', 'LN', 'PN', 'DL', 'DD', 'FS', 'TK', 'GV', 'CG', 'VG', 'IN', 'AY', 'WN', 'NY', 'SL', 'NY', 'NR', 'LL', 'DF', 'SR', 'KK', 'VS', 'GN', 'GL', 'NK', 'YP', 'NF', 'YE', 'LR', 'YD', 'RI', 'LS', 'FT', 'RE', 'KI', 'SY', 'NQ', 'LA', 'KG', 'PS', 'FT', 'EP', 'RC', 'DN', 'IG', 'SV', 'TE', 'EG', 'IF', 'YN', 'QC', 'AY', 'GF', 'SP', 'TL', 'PQ', 'CS', 'NY', 'GG', 'VF', 'EQ', 'GP', 'FT', 'NN', 'CG', 'YV', 'FG', 'PY', 'LQ', 'QP', 'SY', 'YR', 'GV', 'FV', 'QV', 'PL', 'TS', 'NF', 'GE', 'VL', 'GL', 'YH', 'QA', 'PP', 'YA', 'RT', 'VV', 'VC', 'VG', 'LP', 'SK', 'FK', 'ES', 'LT', 'LN', 'HL', 'AV', 'PK', 'AN', 'TK', 'VC', 'CV', 'GN', 'PF']\n" ], [ "s15 = 
\"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_15 = []\n\nfor i in range(len(s15)):\n if(0<=i<208):\n c1 = s15[i]\n c2 = s15[i+15]\n lambda_15.append(c1+c2)\n else:\n break\nprint(lambda_15)", "['RN', 'VL', 'QC', 'PP', 'TF', 'EG', 'SE', 'IV', 'VF', 'RN', 'FA', 'PT', 'NR', 'IF', 'TA', 'NS', 'LV', 'CY', 'PA', 'FW', 'GN', 'ER', 'VK', 'FR', 'NI', 'AS', 'TN', 'RC', 'FV', 'AA', 'SD', 'VY', 'YS', 'AV', 'WL', 'NY', 'RN', 'KS', 'RA', 'IS', 'SF', 'NS', 'CT', 'VF', 'AK', 'DC', 'YY', 'SG', 'VV', 'LS', 'YP', 'NT', 'SK', 'AL', 'SN', 'FD', 'SL', 'TC', 'FF', 'KT', 'CN', 'YV', 'GY', 'VA', 'SD', 'PS', 'TF', 'KV', 'LI', 'NR', 'DG', 'LD', 'CE', 'FV', 'TR', 'NQ', 'VI', 'YA', 'AP', 'DG', 'SQ', 'FT', 'VG', 'IK', 'RI', 'GA', 'DD', 'EY', 'VN', 'RY', 'QK', 'IL', 'AP', 'PD', 'GD', 'QF', 'TT', 'GG', 'KC', 'IV', 'AI', 'DA', 'YW', 'NN', 'YS', 'KN', 'LN', 'PL', 'DD', 'DS', 'FK', 'TV', 'GG', 'CG', 'VN', 'IY', 'AN', 'WY', 'NL', 'SY', 'NR', 'NL', 'LF', 'DR', 'SK', 'KS', 'VN', 'GL', 'GK', 'NP', 'YF', 'NE', 'YR', 'LD', 'YI', 'RS', 'LT', 'FE', 'RI', 'KY', 'SQ', 'NA', 'LG', 'KS', 'PT', 'FP', 'EC', 'RN', 'DG', 'IV', 'SE', 'TG', 'EF', 'IN', 'YC', 'QY', 'AF', 'GP', 'SL', 'TQ', 'PS', 'CY', 'NG', 'GF', 'VQ', 'EP', 'GT', 'FN', 'NG', 'CV', 'YG', 'FY', 'PQ', 'LP', 'QY', 'SR', 'YV', 'GV', 'FV', 'QL', 'PS', 'TF', 'NE', 'GL', 'VL', 'GH', 'YA', 'QP', 'PA', 'YT', 'RV', 'VC', 'VG', 'VP', 'LK', 'SK', 'FS', 'ET', 'LN', 'LL', 'HV', 'AK', 'PN', 'AK', 'TC', 'VV', 'CN', 'GF']\n" ], [ "s16 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_16 = []\n\nfor i in range(len(s16)):\n if(0<=i<207):\n c1 = s16[i]\n c2 = s16[i+16]\n lambda_16.append(c1+c2)\n else:\n break\nprint(lambda_16)", "['RL', 'VC', 'QP', 'PF', 'TG', 'EE', 'SV', 'IF', 'VN', 'RA', 'FT', 'PR', 'NF', 'IA', 'TS', 'NV', 'LY', 'CA', 'PW', 'FN', 'GR', 'EK', 'VR', 'FI', 'NS', 'AN', 'TC', 'RV', 'FA', 'AD', 'SY', 'VS', 'YV', 'AL', 'WY', 'NN', 'RS', 'KA', 'RS', 'IF', 'SS', 'NT', 'CF', 'VK', 'AC', 'DY', 'YG', 'SV', 'VS', 'LP', 'YT', 'NK', 'SL', 'AN', 'SD', 'FL', 'SC', 'TF', 'FT', 'KN', 'CV', 'YY', 'GA', 'VD', 'SS', 'PF', 'TV', 'KI', 'LR', 'NG', 'DD', 'LE', 'CV', 'FR', 'TQ', 'NI', 'VA', 'YP', 'AG', 'DQ', 'ST', 'FG', 'VK', 'II', 'RA', 'GD', 'DY', 'EN', 'VY', 'RK', 'QL', 'IP', 'AD', 'PD', 'GF', 'QT', 'TG', 'GC', 'KV', 'II', 'AA', 'DW', 'YN', 'NS', 'YN', 'KN', 'LL', 'PD', 'DS', 'DK', 'FV', 'TG', 'GG', 'CN', 'VY', 'IN', 'AY', 'WL', 'NY', 'SR', 'NL', 'NF', 'LR', 'DK', 'SS', 'KN', 'VL', 'GK', 'GP', 'NF', 'YE', 'NR', 'YD', 'LI', 'YS', 'RT', 'LE', 'FI', 'RY', 'KQ', 'SA', 'NG', 'LS', 'KT', 'PP', 'FC', 'EN', 'RG', 'DV', 'IE', 'SG', 'TF', 'EN', 'IC', 'YY', 'QF', 'AP', 'GL', 'SQ', 'TS', 'PY', 'CG', 'NF', 'GQ', 'VP', 'ET', 'GN', 'FG', 'NV', 'CG', 'YY', 'FQ', 'PP', 'LY', 'QR', 'SV', 'YV', 'GV', 'FL', 'QS', 'PF', 'TE', 'NL', 'GL', 'VH', 'GA', 'YP', 'QA', 'PT', 'YV', 'RC', 'VG', 'VP', 'VK', 'LK', 'SS', 'FT', 'EN', 'LL', 'LV', 'HK', 'AN', 'PK', 'AC', 'TV', 'VN', 'CF']\n" ], [ "s17 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_17 = []\n\nfor i in 
range(len(s17)):\n if(0<=i<206):\n c1 = s17[i]\n c2 = s17[i+17]\n lambda_17.append(c1+c2)\n else:\n break\nprint(lambda_17)", "['RC', 'VP', 'QF', 'PG', 'TE', 'EV', 'SF', 'IN', 'VA', 'RT', 'FR', 'PF', 'NA', 'IS', 'TV', 'NY', 'LA', 'CW', 'PN', 'FR', 'GK', 'ER', 'VI', 'FS', 'NN', 'AC', 'TV', 'RA', 'FD', 'AY', 'SS', 'VV', 'YL', 'AY', 'WN', 'NS', 'RA', 'KS', 'RF', 'IS', 'ST', 'NF', 'CK', 'VC', 'AY', 'DG', 'YV', 'SS', 'VP', 'LT', 'YK', 'NL', 'SN', 'AD', 'SL', 'FC', 'SF', 'TT', 'FN', 'KV', 'CY', 'YA', 'GD', 'VS', 'SF', 'PV', 'TI', 'KR', 'LG', 'ND', 'DE', 'LV', 'CR', 'FQ', 'TI', 'NA', 'VP', 'YG', 'AQ', 'DT', 'SG', 'FK', 'VI', 'IA', 'RD', 'GY', 'DN', 'EY', 'VK', 'RL', 'QP', 'ID', 'AD', 'PF', 'GT', 'QG', 'TC', 'GV', 'KI', 'IA', 'AW', 'DN', 'YS', 'NN', 'YN', 'KL', 'LD', 'PS', 'DK', 'DV', 'FG', 'TG', 'GN', 'CY', 'VN', 'IY', 'AL', 'WY', 'NR', 'SL', 'NF', 'NR', 'LK', 'DS', 'SN', 'KL', 'VK', 'GP', 'GF', 'NE', 'YR', 'ND', 'YI', 'LS', 'YT', 'RE', 'LI', 'FY', 'RQ', 'KA', 'SG', 'NS', 'LT', 'KP', 'PC', 'FN', 'EG', 'RV', 'DE', 'IG', 'SF', 'TN', 'EC', 'IY', 'YF', 'QP', 'AL', 'GQ', 'SS', 'TY', 'PG', 'CF', 'NQ', 'GP', 'VT', 'EN', 'GG', 'FV', 'NG', 'CY', 'YQ', 'FP', 'PY', 'LR', 'QV', 'SV', 'YV', 'GL', 'FS', 'QF', 'PE', 'TL', 'NL', 'GH', 'VA', 'GP', 'YA', 'QT', 'PV', 'YC', 'RG', 'VP', 'VK', 'VK', 'LS', 'ST', 'FN', 'EL', 'LV', 'LK', 'HN', 'AK', 'PC', 'AV', 'TN', 'VF']\n" ], [ "s18 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_18 = []\n\nfor i in range(len(s18)):\n if(0<=i<205):\n c1 = s18[i]\n c2 = s18[i+18]\n lambda_18.append(c1+c2)\n else:\n break\nprint(lambda_18)", "['RP', 'VF', 'QG', 'PE', 'TV', 'EF', 'SN', 'IA', 'VT', 'RR', 'FF', 'PA', 'NS', 'IV', 'TY', 'NA', 'LW', 'CN', 'PR', 'FK', 'GR', 'EI', 'VS', 'FN', 'NC', 'AV', 'TA', 'RD', 'FY', 'AS', 'SV', 'VL', 'YY', 'AN', 'WS', 'NA', 'RS', 'KF', 'RS', 'IT', 'SF', 'NK', 'CC', 'VY', 'AG', 'DV', 'YS', 'SP', 'VT', 'LK', 'YL', 'NN', 'SD', 'AL', 'SC', 'FF', 'ST', 'TN', 'FV', 'KY', 'CA', 'YD', 'GS', 'VF', 'SV', 'PI', 'TR', 'KG', 'LD', 'NE', 'DV', 'LR', 'CQ', 'FI', 'TA', 'NP', 'VG', 'YQ', 'AT', 'DG', 'SK', 'FI', 'VA', 'ID', 'RY', 'GN', 'DY', 'EK', 'VL', 'RP', 'QD', 'ID', 'AF', 'PT', 'GG', 'QC', 'TV', 'GI', 'KA', 'IW', 'AN', 'DS', 'YN', 'NN', 'YL', 'KD', 'LS', 'PK', 'DV', 'DG', 'FG', 'TN', 'GY', 'CN', 'VY', 'IL', 'AY', 'WR', 'NL', 'SF', 'NR', 'NK', 'LS', 'DN', 'SL', 'KK', 'VP', 'GF', 'GE', 'NR', 'YD', 'NI', 'YS', 'LT', 'YE', 'RI', 'LY', 'FQ', 'RA', 'KG', 'SS', 'NT', 'LP', 'KC', 'PN', 'FG', 'EV', 'RE', 'DG', 'IF', 'SN', 'TC', 'EY', 'IF', 'YP', 'QL', 'AQ', 'GS', 'SY', 'TG', 'PF', 'CQ', 'NP', 'GT', 'VN', 'EG', 'GV', 'FG', 'NY', 'CQ', 'YP', 'FY', 'PR', 'LV', 'QV', 'SV', 'YL', 'GS', 'FF', 'QE', 'PL', 'TL', 'NH', 'GA', 'VP', 'GA', 'YT', 'QV', 'PC', 'YG', 'RP', 'VK', 'VK', 'VS', 'LT', 'SN', 'FL', 'EV', 'LK', 'LN', 'HK', 'AC', 'PV', 'AN', 'TF']\n" ], [ "s19 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_19 = []\n\nfor i in range(len(s19)):\n if(0<=i<204):\n c1 = s19[i]\n c2 = s19[i+19]\n lambda_19.append(c1+c2)\n else:\n break\nprint(lambda_19)", "['RF', 'VG', 'QE', 'PV', 'TF', 'EN', 'SA', 'IT', 'VR', 'RF', 'FA', 'PS', 'NV', 'IY', 'TA', 'NW', 'LN', 'CR', 'PK', 'FR', 'GI', 'ES', 'VN', 'FC', 'NV', 'AA', 
'TD', 'RY', 'FS', 'AV', 'SL', 'VY', 'YN', 'AS', 'WA', 'NS', 'RF', 'KS', 'RT', 'IF', 'SK', 'NC', 'CY', 'VG', 'AV', 'DS', 'YP', 'ST', 'VK', 'LL', 'YN', 'ND', 'SL', 'AC', 'SF', 'FT', 'SN', 'TV', 'FY', 'KA', 'CD', 'YS', 'GF', 'VV', 'SI', 'PR', 'TG', 'KD', 'LE', 'NV', 'DR', 'LQ', 'CI', 'FA', 'TP', 'NG', 'VQ', 'YT', 'AG', 'DK', 'SI', 'FA', 'VD', 'IY', 'RN', 'GY', 'DK', 'EL', 'VP', 'RD', 'QD', 'IF', 'AT', 'PG', 'GC', 'QV', 'TI', 'GA', 'KW', 'IN', 'AS', 'DN', 'YN', 'NL', 'YD', 'KS', 'LK', 'PV', 'DG', 'DG', 'FN', 'TY', 'GN', 'CY', 'VL', 'IY', 'AR', 'WL', 'NF', 'SR', 'NK', 'NS', 'LN', 'DL', 'SK', 'KP', 'VF', 'GE', 'GR', 'ND', 'YI', 'NS', 'YT', 'LE', 'YI', 'RY', 'LQ', 'FA', 'RG', 'KS', 'ST', 'NP', 'LC', 'KN', 'PG', 'FV', 'EE', 'RG', 'DF', 'IN', 'SC', 'TY', 'EF', 'IP', 'YL', 'QQ', 'AS', 'GY', 'SG', 'TF', 'PQ', 'CP', 'NT', 'GN', 'VG', 'EV', 'GG', 'FY', 'NQ', 'CP', 'YY', 'FR', 'PV', 'LV', 'QV', 'SL', 'YS', 'GF', 'FE', 'QL', 'PL', 'TH', 'NA', 'GP', 'VA', 'GT', 'YV', 'QC', 'PG', 'YP', 'RK', 'VK', 'VS', 'VT', 'LN', 'SL', 'FV', 'EK', 'LN', 'LK', 'HC', 'AV', 'PN', 'AF']\n" ], [ "s20 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_20 = []\n\nfor i in range(len(s20)):\n if(0<=i<203):\n c1 = s20[i]\n c2 = s20[i+20]\n lambda_20.append(c1+c2)\n else:\n break\nprint(lambda_20)", "['RG', 'VE', 'QV', 'PF', 'TN', 'EA', 'ST', 'IR', 'VF', 'RA', 'FS', 'PV', 'NY', 'IA', 'TW', 'NN', 'LR', 'CK', 'PR', 'FI', 'GS', 'EN', 'VC', 'FV', 'NA', 'AD', 'TY', 'RS', 'FV', 'AL', 'SY', 'VN', 'YS', 'AA', 'WS', 'NF', 'RS', 'KT', 'RF', 'IK', 'SC', 'NY', 'CG', 'VV', 'AS', 'DP', 'YT', 'SK', 'VL', 'LN', 'YD', 'NL', 'SC', 'AF', 'ST', 'FN', 'SV', 'TY', 'FA', 'KD', 'CS', 'YF', 'GV', 'VI', 'SR', 'PG', 'TD', 'KE', 'LV', 'NR', 'DQ', 'LI', 'CA', 'FP', 'TG', 'NQ', 'VT', 'YG', 'AK', 'DI', 'SA', 'FD', 'VY', 'IN', 'RY', 'GK', 'DL', 'EP', 'VD', 'RD', 'QF', 'IT', 'AG', 'PC', 'GV', 'QI', 'TA', 'GW', 'KN', 'IS', 'AN', 'DN', 'YL', 'ND', 'YS', 'KK', 'LV', 'PG', 'DG', 'DN', 'FY', 'TN', 'GY', 'CL', 'VY', 'IR', 'AL', 'WF', 'NR', 'SK', 'NS', 'NN', 'LL', 'DK', 'SP', 'KF', 'VE', 'GR', 'GD', 'NI', 'YS', 'NT', 'YE', 'LI', 'YY', 'RQ', 'LA', 'FG', 'RS', 'KT', 'SP', 'NC', 'LN', 'KG', 'PV', 'FE', 'EG', 'RF', 'DN', 'IC', 'SY', 'TF', 'EP', 'IL', 'YQ', 'QS', 'AY', 'GG', 'SF', 'TQ', 'PP', 'CT', 'NN', 'GG', 'VV', 'EG', 'GY', 'FQ', 'NP', 'CY', 'YR', 'FV', 'PV', 'LV', 'QL', 'SS', 'YF', 'GE', 'FL', 'QL', 'PH', 'TA', 'NP', 'GA', 'VT', 'GV', 'YC', 'QG', 'PP', 'YK', 'RK', 'VS', 'VT', 'VN', 'LL', 'SV', 'FK', 'EN', 'LK', 'LC', 'HV', 'AN', 'PF']\n" ], [ "s21 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_21 = []\n\nfor i in range(len(s21)):\n if(0<=i<202):\n c1 = s21[i]\n c2 = s21[i+21]\n lambda_21.append(c1+c2)\n else:\n break\nprint(lambda_21)", "['RE', 'VV', 'QF', 'PN', 'TA', 'ET', 'SR', 'IF', 'VA', 'RS', 'FV', 'PY', 'NA', 'IW', 'TN', 'NR', 'LK', 'CR', 'PI', 'FS', 'GN', 'EC', 'VV', 'FA', 'ND', 'AY', 'TS', 'RV', 'FL', 'AY', 'SN', 'VS', 'YA', 'AS', 'WF', 'NS', 'RT', 'KF', 'RK', 'IC', 'SY', 'NG', 'CV', 'VS', 'AP', 'DT', 'YK', 'SL', 'VN', 'LD', 'YL', 'NC', 'SF', 'AT', 'SN', 'FV', 'SY', 'TA', 'FD', 'KS', 'CF', 'YV', 'GI', 'VR', 'SG', 'PD', 'TE', 'KV', 'LR', 'NQ', 'DI', 'LA', 'CP', 'FG', 'TQ', 'NT', 'VG', 
'YK', 'AI', 'DA', 'SD', 'FY', 'VN', 'IY', 'RK', 'GL', 'DP', 'ED', 'VD', 'RF', 'QT', 'IG', 'AC', 'PV', 'GI', 'QA', 'TW', 'GN', 'KS', 'IN', 'AN', 'DL', 'YD', 'NS', 'YK', 'KV', 'LG', 'PG', 'DN', 'DY', 'FN', 'TY', 'GL', 'CY', 'VR', 'IL', 'AF', 'WR', 'NK', 'SS', 'NN', 'NL', 'LK', 'DP', 'SF', 'KE', 'VR', 'GD', 'GI', 'NS', 'YT', 'NE', 'YI', 'LY', 'YQ', 'RA', 'LG', 'FS', 'RT', 'KP', 'SC', 'NN', 'LG', 'KV', 'PE', 'FG', 'EF', 'RN', 'DC', 'IY', 'SF', 'TP', 'EL', 'IQ', 'YS', 'QY', 'AG', 'GF', 'SQ', 'TP', 'PT', 'CN', 'NG', 'GV', 'VG', 'EY', 'GQ', 'FP', 'NY', 'CR', 'YV', 'FV', 'PV', 'LL', 'QS', 'SF', 'YE', 'GL', 'FL', 'QH', 'PA', 'TP', 'NA', 'GT', 'VV', 'GC', 'YG', 'QP', 'PK', 'YK', 'RS', 'VT', 'VN', 'VL', 'LV', 'SK', 'FN', 'EK', 'LC', 'LV', 'HN', 'AF']\n" ], [ "s22 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_22 = []\n\nfor i in range(len(s22)):\n if(0<=i<201):\n c1 = s22[i]\n c2 = s22[i+22]\n lambda_22.append(c1+c2)\n else:\n break\nprint(lambda_22)", "['RV', 'VF', 'QN', 'PA', 'TT', 'ER', 'SF', 'IA', 'VS', 'RV', 'FY', 'PA', 'NW', 'IN', 'TR', 'NK', 'LR', 'CI', 'PS', 'FN', 'GC', 'EV', 'VA', 'FD', 'NY', 'AS', 'TV', 'RL', 'FY', 'AN', 'SS', 'VA', 'YS', 'AF', 'WS', 'NT', 'RF', 'KK', 'RC', 'IY', 'SG', 'NV', 'CS', 'VP', 'AT', 'DK', 'YL', 'SN', 'VD', 'LL', 'YC', 'NF', 'ST', 'AN', 'SV', 'FY', 'SA', 'TD', 'FS', 'KF', 'CV', 'YI', 'GR', 'VG', 'SD', 'PE', 'TV', 'KR', 'LQ', 'NI', 'DA', 'LP', 'CG', 'FQ', 'TT', 'NG', 'VK', 'YI', 'AA', 'DD', 'SY', 'FN', 'VY', 'IK', 'RL', 'GP', 'DD', 'ED', 'VF', 'RT', 'QG', 'IC', 'AV', 'PI', 'GA', 'QW', 'TN', 'GS', 'KN', 'IN', 'AL', 'DD', 'YS', 'NK', 'YV', 'KG', 'LG', 'PN', 'DY', 'DN', 'FY', 'TL', 'GY', 'CR', 'VL', 'IF', 'AR', 'WK', 'NS', 'SN', 'NL', 'NK', 'LP', 'DF', 'SE', 'KR', 'VD', 'GI', 'GS', 'NT', 'YE', 'NI', 'YY', 'LQ', 'YA', 'RG', 'LS', 'FT', 'RP', 'KC', 'SN', 'NG', 'LV', 'KE', 'PG', 'FF', 'EN', 'RC', 'DY', 'IF', 'SP', 'TL', 'EQ', 'IS', 'YY', 'QG', 'AF', 'GQ', 'SP', 'TT', 'PN', 'CG', 'NV', 'GG', 'VY', 'EQ', 'GP', 'FY', 'NR', 'CV', 'YV', 'FV', 'PL', 'LS', 'QF', 'SE', 'YL', 'GL', 'FH', 'QA', 'PP', 'TA', 'NT', 'GV', 'VC', 'GG', 'YP', 'QK', 'PK', 'YS', 'RT', 'VN', 'VL', 'VV', 'LK', 'SN', 'FK', 'EC', 'LV', 'LN', 'HF']\n" ], [ "s23 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_23 = []\n\nfor i in range(len(s23)):\n if(0<=i<200):\n c1 = s23[i]\n c2 = s23[i+23]\n lambda_23.append(c1+c2)\n else:\n break\nprint(lambda_23)", "['RF', 'VN', 'QA', 'PT', 'TR', 'EF', 'SA', 'IS', 'VV', 'RY', 'FA', 'PW', 'NN', 'IR', 'TK', 'NR', 'LI', 'CS', 'PN', 'FC', 'GV', 'EA', 'VD', 'FY', 'NS', 'AV', 'TL', 'RY', 'FN', 'AS', 'SA', 'VS', 'YF', 'AS', 'WT', 'NF', 'RK', 'KC', 'RY', 'IG', 'SV', 'NS', 'CP', 'VT', 'AK', 'DL', 'YN', 'SD', 'VL', 'LC', 'YF', 'NT', 'SN', 'AV', 'SY', 'FA', 'SD', 'TS', 'FF', 'KV', 'CI', 'YR', 'GG', 'VD', 'SE', 'PV', 'TR', 'KQ', 'LI', 'NA', 'DP', 'LG', 'CQ', 'FT', 'TG', 'NK', 'VI', 'YA', 'AD', 'DY', 'SN', 'FY', 'VK', 'IL', 'RP', 'GD', 'DD', 'EF', 'VT', 'RG', 'QC', 'IV', 'AI', 'PA', 'GW', 'QN', 'TS', 'GN', 'KN', 'IL', 'AD', 'DS', 'YK', 'NV', 'YG', 'KG', 'LN', 'PY', 'DN', 'DY', 'FL', 'TY', 'GR', 'CL', 'VF', 'IR', 'AK', 'WS', 'NN', 'SL', 'NK', 'NP', 'LF', 'DE', 'SR', 'KD', 'VI', 'GS', 'GT', 'NE', 'YI', 'NY', 
'YQ', 'LA', 'YG', 'RS', 'LT', 'FP', 'RC', 'KN', 'SG', 'NV', 'LE', 'KG', 'PF', 'FN', 'EC', 'RY', 'DF', 'IP', 'SL', 'TQ', 'ES', 'IY', 'YG', 'QF', 'AQ', 'GP', 'ST', 'TN', 'PG', 'CV', 'NG', 'GY', 'VQ', 'EP', 'GY', 'FR', 'NV', 'CV', 'YV', 'FL', 'PS', 'LF', 'QE', 'SL', 'YL', 'GH', 'FA', 'QP', 'PA', 'TT', 'NV', 'GC', 'VG', 'GP', 'YK', 'QK', 'PS', 'YT', 'RN', 'VL', 'VV', 'VK', 'LN', 'SK', 'FC', 'EV', 'LN', 'LF']\n" ], [ "s24 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_24 = []\n\nfor i in range(len(s24)):\n if(0<=i<199):\n c1 = s24[i]\n c2 = s24[i+24]\n lambda_24.append(c1+c2)\n else:\n break\nprint(lambda_24)", "['RN', 'VA', 'QT', 'PR', 'TF', 'EA', 'SS', 'IV', 'VY', 'RA', 'FW', 'PN', 'NR', 'IK', 'TR', 'NI', 'LS', 'CN', 'PC', 'FV', 'GA', 'ED', 'VY', 'FS', 'NV', 'AL', 'TY', 'RN', 'FS', 'AA', 'SS', 'VF', 'YS', 'AT', 'WF', 'NK', 'RC', 'KY', 'RG', 'IV', 'SS', 'NP', 'CT', 'VK', 'AL', 'DN', 'YD', 'SL', 'VC', 'LF', 'YT', 'NN', 'SV', 'AY', 'SA', 'FD', 'SS', 'TF', 'FV', 'KI', 'CR', 'YG', 'GD', 'VE', 'SV', 'PR', 'TQ', 'KI', 'LA', 'NP', 'DG', 'LQ', 'CT', 'FG', 'TK', 'NI', 'VA', 'YD', 'AY', 'DN', 'SY', 'FK', 'VL', 'IP', 'RD', 'GD', 'DF', 'ET', 'VG', 'RC', 'QV', 'II', 'AA', 'PW', 'GN', 'QS', 'TN', 'GN', 'KL', 'ID', 'AS', 'DK', 'YV', 'NG', 'YG', 'KN', 'LY', 'PN', 'DY', 'DL', 'FY', 'TR', 'GL', 'CF', 'VR', 'IK', 'AS', 'WN', 'NL', 'SK', 'NP', 'NF', 'LE', 'DR', 'SD', 'KI', 'VS', 'GT', 'GE', 'NI', 'YY', 'NQ', 'YA', 'LG', 'YS', 'RT', 'LP', 'FC', 'RN', 'KG', 'SV', 'NE', 'LG', 'KF', 'PN', 'FC', 'EY', 'RF', 'DP', 'IL', 'SQ', 'TS', 'EY', 'IG', 'YF', 'QQ', 'AP', 'GT', 'SN', 'TG', 'PV', 'CG', 'NY', 'GQ', 'VP', 'EY', 'GR', 'FV', 'NV', 'CV', 'YL', 'FS', 'PF', 'LE', 'QL', 'SL', 'YH', 'GA', 'FP', 'QA', 'PT', 'TV', 'NC', 'GG', 'VP', 'GK', 'YK', 'QS', 'PT', 'YN', 'RL', 'VV', 'VK', 'VN', 'LK', 'SC', 'FV', 'EN', 'LF']\n" ], [ "s25 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_25 = []\n\nfor i in range(len(s25)):\n if(0<=i<198):\n c1 = s25[i]\n c2 = s25[i+25]\n lambda_25.append(c1+c2)\n else:\n break\nprint(lambda_25)", "['RA', 'VT', 'QR', 'PF', 'TA', 'ES', 'SV', 'IY', 'VA', 'RW', 'FN', 'PR', 'NK', 'IR', 'TI', 'NS', 'LN', 'CC', 'PV', 'FA', 'GD', 'EY', 'VS', 'FV', 'NL', 'AY', 'TN', 'RS', 'FA', 'AS', 'SF', 'VS', 'YT', 'AF', 'WK', 'NC', 'RY', 'KG', 'RV', 'IS', 'SP', 'NT', 'CK', 'VL', 'AN', 'DD', 'YL', 'SC', 'VF', 'LT', 'YN', 'NV', 'SY', 'AA', 'SD', 'FS', 'SF', 'TV', 'FI', 'KR', 'CG', 'YD', 'GE', 'VV', 'SR', 'PQ', 'TI', 'KA', 'LP', 'NG', 'DQ', 'LT', 'CG', 'FK', 'TI', 'NA', 'VD', 'YY', 'AN', 'DY', 'SK', 'FL', 'VP', 'ID', 'RD', 'GF', 'DT', 'EG', 'VC', 'RV', 'QI', 'IA', 'AW', 'PN', 'GS', 'QN', 'TN', 'GL', 'KD', 'IS', 'AK', 'DV', 'YG', 'NG', 'YN', 'KY', 'LN', 'PY', 'DL', 'DY', 'FR', 'TL', 'GF', 'CR', 'VK', 'IS', 'AN', 'WL', 'NK', 'SP', 'NF', 'NE', 'LR', 'DD', 'SI', 'KS', 'VT', 'GE', 'GI', 'NY', 'YQ', 'NA', 'YG', 'LS', 'YT', 'RP', 'LC', 'FN', 'RG', 'KV', 'SE', 'NG', 'LF', 'KN', 'PC', 'FY', 'EF', 'RP', 'DL', 'IQ', 'SS', 'TY', 'EG', 'IF', 'YQ', 'QP', 'AT', 'GN', 'SG', 'TV', 'PG', 'CY', 'NQ', 'GP', 'VY', 'ER', 'GV', 'FV', 'NV', 'CL', 'YS', 'FF', 'PE', 'LL', 'QL', 'SH', 'YA', 'GP', 'FA', 'QT', 'PV', 'TC', 'NG', 'GP', 'VK', 'GK', 'YS', 'QT', 'PN', 'YL', 'RV', 
'VK', 'VN', 'VK', 'LC', 'SV', 'FN', 'EF']\n" ], [ "s26 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_26 = []\n\nfor i in range(len(s26)):\n if(0<=i<197):\n c1 = s26[i]\n c2 = s26[i+26]\n lambda_26.append(c1+c2)\n else:\n break\nprint(lambda_26)", "['RT', 'VR', 'QF', 'PA', 'TS', 'EV', 'SY', 'IA', 'VW', 'RN', 'FR', 'PK', 'NR', 'II', 'TS', 'NN', 'LC', 'CV', 'PA', 'FD', 'GY', 'ES', 'VV', 'FL', 'NY', 'AN', 'TS', 'RA', 'FS', 'AF', 'SS', 'VT', 'YF', 'AK', 'WC', 'NY', 'RG', 'KV', 'RS', 'IP', 'ST', 'NK', 'CL', 'VN', 'AD', 'DL', 'YC', 'SF', 'VT', 'LN', 'YV', 'NY', 'SA', 'AD', 'SS', 'FF', 'SV', 'TI', 'FR', 'KG', 'CD', 'YE', 'GV', 'VR', 'SQ', 'PI', 'TA', 'KP', 'LG', 'NQ', 'DT', 'LG', 'CK', 'FI', 'TA', 'ND', 'VY', 'YN', 'AY', 'DK', 'SL', 'FP', 'VD', 'ID', 'RF', 'GT', 'DG', 'EC', 'VV', 'RI', 'QA', 'IW', 'AN', 'PS', 'GN', 'QN', 'TL', 'GD', 'KS', 'IK', 'AV', 'DG', 'YG', 'NN', 'YY', 'KN', 'LY', 'PL', 'DY', 'DR', 'FL', 'TF', 'GR', 'CK', 'VS', 'IN', 'AL', 'WK', 'NP', 'SF', 'NE', 'NR', 'LD', 'DI', 'SS', 'KT', 'VE', 'GI', 'GY', 'NQ', 'YA', 'NG', 'YS', 'LT', 'YP', 'RC', 'LN', 'FG', 'RV', 'KE', 'SG', 'NF', 'LN', 'KC', 'PY', 'FF', 'EP', 'RL', 'DQ', 'IS', 'SY', 'TG', 'EF', 'IQ', 'YP', 'QT', 'AN', 'GG', 'SV', 'TG', 'PY', 'CQ', 'NP', 'GY', 'VR', 'EV', 'GV', 'FV', 'NL', 'CS', 'YF', 'FE', 'PL', 'LL', 'QH', 'SA', 'YP', 'GA', 'FT', 'QV', 'PC', 'TG', 'NP', 'GK', 'VK', 'GS', 'YT', 'QN', 'PL', 'YV', 'RK', 'VN', 'VK', 'VC', 'LV', 'SN', 'FF']\n" ], [ "s27 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_27 = []\n\nfor i in range(len(s27)):\n if(0<=i<196):\n c1 = s27[i]\n c2 = s27[i+27]\n lambda_27.append(c1+c2)\n else:\n break\nprint(lambda_27)", "['RR', 'VF', 'QA', 'PS', 'TV', 'EY', 'SA', 'IW', 'VN', 'RR', 'FK', 'PR', 'NI', 'IS', 'TN', 'NC', 'LV', 'CA', 'PD', 'FY', 'GS', 'EV', 'VL', 'FY', 'NN', 'AS', 'TA', 'RS', 'FF', 'AS', 'ST', 'VF', 'YK', 'AC', 'WY', 'NG', 'RV', 'KS', 'RP', 'IT', 'SK', 'NL', 'CN', 'VD', 'AL', 'DC', 'YF', 'ST', 'VN', 'LV', 'YY', 'NA', 'SD', 'AS', 'SF', 'FV', 'SI', 'TR', 'FG', 'KD', 'CE', 'YV', 'GR', 'VQ', 'SI', 'PA', 'TP', 'KG', 'LQ', 'NT', 'DG', 'LK', 'CI', 'FA', 'TD', 'NY', 'VN', 'YY', 'AK', 'DL', 'SP', 'FD', 'VD', 'IF', 'RT', 'GG', 'DC', 'EV', 'VI', 'RA', 'QW', 'IN', 'AS', 'PN', 'GN', 'QL', 'TD', 'GS', 'KK', 'IV', 'AG', 'DG', 'YN', 'NY', 'YN', 'KY', 'LL', 'PY', 'DR', 'DL', 'FF', 'TR', 'GK', 'CS', 'VN', 'IL', 'AK', 'WP', 'NF', 'SE', 'NR', 'ND', 'LI', 'DS', 'ST', 'KE', 'VI', 'GY', 'GQ', 'NA', 'YG', 'NS', 'YT', 'LP', 'YC', 'RN', 'LG', 'FV', 'RE', 'KG', 'SF', 'NN', 'LC', 'KY', 'PF', 'FP', 'EL', 'RQ', 'DS', 'IY', 'SG', 'TF', 'EQ', 'IP', 'YT', 'QN', 'AG', 'GV', 'SG', 'TY', 'PQ', 'CP', 'NY', 'GR', 'VV', 'EV', 'GV', 'FL', 'NS', 'CF', 'YE', 'FL', 'PL', 'LH', 'QA', 'SP', 'YA', 'GT', 'FV', 'QC', 'PG', 'TP', 'NK', 'GK', 'VS', 'GT', 'YN', 'QL', 'PV', 'YK', 'RN', 'VK', 'VC', 'VV', 'LN', 'SF']\n" ], [ "s28 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_28 = []\n\nfor i in range(len(s28)):\n if(0<=i<195):\n c1 = s28[i]\n c2 = s28[i+28]\n 
lambda_28.append(c1+c2)\n else:\n break\nprint(lambda_28)", "['RF', 'VA', 'QS', 'PV', 'TY', 'EA', 'SW', 'IN', 'VR', 'RK', 'FR', 'PI', 'NS', 'IN', 'TC', 'NV', 'LA', 'CD', 'PY', 'FS', 'GV', 'EL', 'VY', 'FN', 'NS', 'AA', 'TS', 'RF', 'FS', 'AT', 'SF', 'VK', 'YC', 'AY', 'WG', 'NV', 'RS', 'KP', 'RT', 'IK', 'SL', 'NN', 'CD', 'VL', 'AC', 'DF', 'YT', 'SN', 'VV', 'LY', 'YA', 'ND', 'SS', 'AF', 'SV', 'FI', 'SR', 'TG', 'FD', 'KE', 'CV', 'YR', 'GQ', 'VI', 'SA', 'PP', 'TG', 'KQ', 'LT', 'NG', 'DK', 'LI', 'CA', 'FD', 'TY', 'NN', 'VY', 'YK', 'AL', 'DP', 'SD', 'FD', 'VF', 'IT', 'RG', 'GC', 'DV', 'EI', 'VA', 'RW', 'QN', 'IS', 'AN', 'PN', 'GL', 'QD', 'TS', 'GK', 'KV', 'IG', 'AG', 'DN', 'YY', 'NN', 'YY', 'KL', 'LY', 'PR', 'DL', 'DF', 'FR', 'TK', 'GS', 'CN', 'VL', 'IK', 'AP', 'WF', 'NE', 'SR', 'ND', 'NI', 'LS', 'DT', 'SE', 'KI', 'VY', 'GQ', 'GA', 'NG', 'YS', 'NT', 'YP', 'LC', 'YN', 'RG', 'LV', 'FE', 'RG', 'KF', 'SN', 'NC', 'LY', 'KF', 'PP', 'FL', 'EQ', 'RS', 'DY', 'IG', 'SF', 'TQ', 'EP', 'IT', 'YN', 'QG', 'AV', 'GG', 'SY', 'TQ', 'PP', 'CY', 'NR', 'GV', 'VV', 'EV', 'GL', 'FS', 'NF', 'CE', 'YL', 'FL', 'PH', 'LA', 'QP', 'SA', 'YT', 'GV', 'FC', 'QG', 'PP', 'TK', 'NK', 'GS', 'VT', 'GN', 'YL', 'QV', 'PK', 'YN', 'RK', 'VC', 'VV', 'VN', 'LF']\n" ], [ "s29 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_29 = []\n\nfor i in range(len(s29)):\n if(0<=i<194):\n c1 = s29[i]\n c2 = s29[i+29]\n lambda_29.append(c1+c2)\n else:\n break\nprint(lambda_29)", "['RA', 'VS', 'QV', 'PY', 'TA', 'EW', 'SN', 'IR', 'VK', 'RR', 'FI', 'PS', 'NN', 'IC', 'TV', 'NA', 'LD', 'CY', 'PS', 'FV', 'GL', 'EY', 'VN', 'FS', 'NA', 'AS', 'TF', 'RS', 'FT', 'AF', 'SK', 'VC', 'YY', 'AG', 'WV', 'NS', 'RP', 'KT', 'RK', 'IL', 'SN', 'ND', 'CL', 'VC', 'AF', 'DT', 'YN', 'SV', 'VY', 'LA', 'YD', 'NS', 'SF', 'AV', 'SI', 'FR', 'SG', 'TD', 'FE', 'KV', 'CR', 'YQ', 'GI', 'VA', 'SP', 'PG', 'TQ', 'KT', 'LG', 'NK', 'DI', 'LA', 'CD', 'FY', 'TN', 'NY', 'VK', 'YL', 'AP', 'DD', 'SD', 'FF', 'VT', 'IG', 'RC', 'GV', 'DI', 'EA', 'VW', 'RN', 'QS', 'IN', 'AN', 'PL', 'GD', 'QS', 'TK', 'GV', 'KG', 'IG', 'AN', 'DY', 'YN', 'NY', 'YL', 'KY', 'LR', 'PL', 'DF', 'DR', 'FK', 'TS', 'GN', 'CL', 'VK', 'IP', 'AF', 'WE', 'NR', 'SD', 'NI', 'NS', 'LT', 'DE', 'SI', 'KY', 'VQ', 'GA', 'GG', 'NS', 'YT', 'NP', 'YC', 'LN', 'YG', 'RV', 'LE', 'FG', 'RF', 'KN', 'SC', 'NY', 'LF', 'KP', 'PL', 'FQ', 'ES', 'RY', 'DG', 'IF', 'SQ', 'TP', 'ET', 'IN', 'YG', 'QV', 'AG', 'GY', 'SQ', 'TP', 'PY', 'CR', 'NV', 'GV', 'VV', 'EL', 'GS', 'FF', 'NE', 'CL', 'YL', 'FH', 'PA', 'LP', 'QA', 'ST', 'YV', 'GC', 'FG', 'QP', 'PK', 'TK', 'NS', 'GT', 'VN', 'GL', 'YV', 'QK', 'PN', 'YK', 'RC', 'VV', 'VN', 'VF']\n" ], [ "s30 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_30 = []\n\nfor i in range(len(s30)):\n if(0<=i<193):\n c1 = s30[i]\n c2 = s30[i+30]\n lambda_30.append(c1+c2)\n else:\n break\nprint(lambda_30)", "['RS', 'VV', 'QY', 'PA', 'TW', 'EN', 'SR', 'IK', 'VR', 'RI', 'FS', 'PN', 'NC', 'IV', 'TA', 'ND', 'LY', 'CS', 'PV', 'FL', 'GY', 'EN', 'VS', 'FA', 'NS', 'AF', 'TS', 'RT', 'FF', 'AK', 'SC', 'VY', 'YG', 'AV', 'WS', 'NP', 'RT', 'KK', 'RL', 'IN', 'SD', 'NL', 'CC', 'VF', 'AT', 'DN', 'YV', 'SY', 'VA', 'LD', 'YS', 'NF', 'SV', 'AI', 'SR', 'FG', 'SD', 'TE', 'FV', 
'KR', 'CQ', 'YI', 'GA', 'VP', 'SG', 'PQ', 'TT', 'KG', 'LK', 'NI', 'DA', 'LD', 'CY', 'FN', 'TY', 'NK', 'VL', 'YP', 'AD', 'DD', 'SF', 'FT', 'VG', 'IC', 'RV', 'GI', 'DA', 'EW', 'VN', 'RS', 'QN', 'IN', 'AL', 'PD', 'GS', 'QK', 'TV', 'GG', 'KG', 'IN', 'AY', 'DN', 'YY', 'NL', 'YY', 'KR', 'LL', 'PF', 'DR', 'DK', 'FS', 'TN', 'GL', 'CK', 'VP', 'IF', 'AE', 'WR', 'ND', 'SI', 'NS', 'NT', 'LE', 'DI', 'SY', 'KQ', 'VA', 'GG', 'GS', 'NT', 'YP', 'NC', 'YN', 'LG', 'YV', 'RE', 'LG', 'FF', 'RN', 'KC', 'SY', 'NF', 'LP', 'KL', 'PQ', 'FS', 'EY', 'RG', 'DF', 'IQ', 'SP', 'TT', 'EN', 'IG', 'YV', 'QG', 'AY', 'GQ', 'SP', 'TY', 'PR', 'CV', 'NV', 'GV', 'VL', 'ES', 'GF', 'FE', 'NL', 'CL', 'YH', 'FA', 'PP', 'LA', 'QT', 'SV', 'YC', 'GG', 'FP', 'QK', 'PK', 'TS', 'NT', 'GN', 'VL', 'GV', 'YK', 'QN', 'PK', 'YC', 'RV', 'VN', 'VF']\n" ], [ "s31 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_31 = []\n\nfor i in range(len(s31)):\n if(0<=i<192):\n c1 = s31[i]\n c2 = s31[i+31]\n lambda_31.append(c1+c2)\n else:\n break\nprint(lambda_31)", "['RV', 'VY', 'QA', 'PW', 'TN', 'ER', 'SK', 'IR', 'VI', 'RS', 'FN', 'PC', 'NV', 'IA', 'TD', 'NY', 'LS', 'CV', 'PL', 'FY', 'GN', 'ES', 'VA', 'FS', 'NF', 'AS', 'TT', 'RF', 'FK', 'AC', 'SY', 'VG', 'YV', 'AS', 'WP', 'NT', 'RK', 'KL', 'RN', 'ID', 'SL', 'NC', 'CF', 'VT', 'AN', 'DV', 'YY', 'SA', 'VD', 'LS', 'YF', 'NV', 'SI', 'AR', 'SG', 'FD', 'SE', 'TV', 'FR', 'KQ', 'CI', 'YA', 'GP', 'VG', 'SQ', 'PT', 'TG', 'KK', 'LI', 'NA', 'DD', 'LY', 'CN', 'FY', 'TK', 'NL', 'VP', 'YD', 'AD', 'DF', 'ST', 'FG', 'VC', 'IV', 'RI', 'GA', 'DW', 'EN', 'VS', 'RN', 'QN', 'IL', 'AD', 'PS', 'GK', 'QV', 'TG', 'GG', 'KN', 'IY', 'AN', 'DY', 'YL', 'NY', 'YR', 'KL', 'LF', 'PR', 'DK', 'DS', 'FN', 'TL', 'GK', 'CP', 'VF', 'IE', 'AR', 'WD', 'NI', 'SS', 'NT', 'NE', 'LI', 'DY', 'SQ', 'KA', 'VG', 'GS', 'GT', 'NP', 'YC', 'NN', 'YG', 'LV', 'YE', 'RG', 'LF', 'FN', 'RC', 'KY', 'SF', 'NP', 'LL', 'KQ', 'PS', 'FY', 'EG', 'RF', 'DQ', 'IP', 'ST', 'TN', 'EG', 'IV', 'YG', 'QY', 'AQ', 'GP', 'SY', 'TR', 'PV', 'CV', 'NV', 'GL', 'VS', 'EF', 'GE', 'FL', 'NL', 'CH', 'YA', 'FP', 'PA', 'LT', 'QV', 'SC', 'YG', 'GP', 'FK', 'QK', 'PS', 'TT', 'NN', 'GL', 'VV', 'GK', 'YN', 'QK', 'PC', 'YV', 'RN', 'VF']\n" ], [ "s32 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_32 = []\n\nfor i in range(len(s32)):\n if(0<=i<191):\n c1 = s32[i]\n c2 = s32[i+32]\n lambda_32.append(c1+c2)\n else:\n break\nprint(lambda_32)", "['RY', 'VA', 'QW', 'PN', 'TR', 'EK', 'SR', 'II', 'VS', 'RN', 'FC', 'PV', 'NA', 'ID', 'TY', 'NS', 'LV', 'CL', 'PY', 'FN', 'GS', 'EA', 'VS', 'FF', 'NS', 'AT', 'TF', 'RK', 'FC', 'AY', 'SG', 'VV', 'YS', 'AP', 'WT', 'NK', 'RL', 'KN', 'RD', 'IL', 'SC', 'NF', 'CT', 'VN', 'AV', 'DY', 'YA', 'SD', 'VS', 'LF', 'YV', 'NI', 'SR', 'AG', 'SD', 'FE', 'SV', 'TR', 'FQ', 'KI', 'CA', 'YP', 'GG', 'VQ', 'ST', 'PG', 'TK', 'KI', 'LA', 'ND', 'DY', 'LN', 'CY', 'FK', 'TL', 'NP', 'VD', 'YD', 'AF', 'DT', 'SG', 'FC', 'VV', 'II', 'RA', 'GW', 'DN', 'ES', 'VN', 'RN', 'QL', 'ID', 'AS', 'PK', 'GV', 'QG', 'TG', 'GN', 'KY', 'IN', 'AY', 'DL', 'YY', 'NR', 'YL', 'KF', 'LR', 'PK', 'DS', 'DN', 'FL', 'TK', 'GP', 'CF', 'VE', 'IR', 'AD', 'WI', 'NS', 'ST', 'NE', 'NI', 'LY', 'DQ', 'SA', 'KG', 'VS', 'GT', 'GP', 'NC', 'YN', 'NG', 
'YV', 'LE', 'YG', 'RF', 'LN', 'FC', 'RY', 'KF', 'SP', 'NL', 'LQ', 'KS', 'PY', 'FG', 'EF', 'RQ', 'DP', 'IT', 'SN', 'TG', 'EV', 'IG', 'YY', 'QQ', 'AP', 'GY', 'SR', 'TV', 'PV', 'CV', 'NL', 'GS', 'VF', 'EE', 'GL', 'FL', 'NH', 'CA', 'YP', 'FA', 'PT', 'LV', 'QC', 'SG', 'YP', 'GK', 'FK', 'QS', 'PT', 'TN', 'NL', 'GV', 'VK', 'GN', 'YK', 'QC', 'PV', 'YN', 'RF']\n" ], [ "s33 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_33 = []\n\nfor i in range(len(s33)):\n if(0<=i<190):\n c1 = s33[i]\n c2 = s33[i+33]\n lambda_33.append(c1+c2)\n else:\n break\nprint(lambda_33)", "['RA', 'VW', 'QN', 'PR', 'TK', 'ER', 'SI', 'IS', 'VN', 'RC', 'FV', 'PA', 'ND', 'IY', 'TS', 'NV', 'LL', 'CY', 'PN', 'FS', 'GA', 'ES', 'VF', 'FS', 'NT', 'AF', 'TK', 'RC', 'FY', 'AG', 'SV', 'VS', 'YP', 'AT', 'WK', 'NL', 'RN', 'KD', 'RL', 'IC', 'SF', 'NT', 'CN', 'VV', 'AY', 'DA', 'YD', 'SS', 'VF', 'LV', 'YI', 'NR', 'SG', 'AD', 'SE', 'FV', 'SR', 'TQ', 'FI', 'KA', 'CP', 'YG', 'GQ', 'VT', 'SG', 'PK', 'TI', 'KA', 'LD', 'NY', 'DN', 'LY', 'CK', 'FL', 'TP', 'ND', 'VD', 'YF', 'AT', 'DG', 'SC', 'FV', 'VI', 'IA', 'RW', 'GN', 'DS', 'EN', 'VN', 'RL', 'QD', 'IS', 'AK', 'PV', 'GG', 'QG', 'TN', 'GY', 'KN', 'IY', 'AL', 'DY', 'YR', 'NL', 'YF', 'KR', 'LK', 'PS', 'DN', 'DL', 'FK', 'TP', 'GF', 'CE', 'VR', 'ID', 'AI', 'WS', 'NT', 'SE', 'NI', 'NY', 'LQ', 'DA', 'SG', 'KS', 'VT', 'GP', 'GC', 'NN', 'YG', 'NV', 'YE', 'LG', 'YF', 'RN', 'LC', 'FY', 'RF', 'KP', 'SL', 'NQ', 'LS', 'KY', 'PG', 'FF', 'EQ', 'RP', 'DT', 'IN', 'SG', 'TV', 'EG', 'IY', 'YQ', 'QP', 'AY', 'GR', 'SV', 'TV', 'PV', 'CL', 'NS', 'GF', 'VE', 'EL', 'GL', 'FH', 'NA', 'CP', 'YA', 'FT', 'PV', 'LC', 'QG', 'SP', 'YK', 'GK', 'FS', 'QT', 'PN', 'TL', 'NV', 'GK', 'VN', 'GK', 'YC', 'QV', 'PN', 'YF']\n" ], [ "s34 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_34 = []\n\nfor i in range(len(s34)):\n if(0<=i<189):\n c1 = s34[i]\n c2 = s34[i+34]\n lambda_34.append(c1+c2)\n else:\n break\nprint(lambda_34)", "['RW', 'VN', 'QR', 'PK', 'TR', 'EI', 'SS', 'IN', 'VC', 'RV', 'FA', 'PD', 'NY', 'IS', 'TV', 'NL', 'LY', 'CN', 'PS', 'FA', 'GS', 'EF', 'VS', 'FT', 'NF', 'AK', 'TC', 'RY', 'FG', 'AV', 'SS', 'VP', 'YT', 'AK', 'WL', 'NN', 'RD', 'KL', 'RC', 'IF', 'ST', 'NN', 'CV', 'VY', 'AA', 'DD', 'YS', 'SF', 'VV', 'LI', 'YR', 'NG', 'SD', 'AE', 'SV', 'FR', 'SQ', 'TI', 'FA', 'KP', 'CG', 'YQ', 'GT', 'VG', 'SK', 'PI', 'TA', 'KD', 'LY', 'NN', 'DY', 'LK', 'CL', 'FP', 'TD', 'ND', 'VF', 'YT', 'AG', 'DC', 'SV', 'FI', 'VA', 'IW', 'RN', 'GS', 'DN', 'EN', 'VL', 'RD', 'QS', 'IK', 'AV', 'PG', 'GG', 'QN', 'TY', 'GN', 'KY', 'IL', 'AY', 'DR', 'YL', 'NF', 'YR', 'KK', 'LS', 'PN', 'DL', 'DK', 'FP', 'TF', 'GE', 'CR', 'VD', 'II', 'AS', 'WT', 'NE', 'SI', 'NY', 'NQ', 'LA', 'DG', 'SS', 'KT', 'VP', 'GC', 'GN', 'NG', 'YV', 'NE', 'YG', 'LF', 'YN', 'RC', 'LY', 'FF', 'RP', 'KL', 'SQ', 'NS', 'LY', 'KG', 'PF', 'FQ', 'EP', 'RT', 'DN', 'IG', 'SV', 'TG', 'EY', 'IQ', 'YP', 'QY', 'AR', 'GV', 'SV', 'TV', 'PL', 'CS', 'NF', 'GE', 'VL', 'EL', 'GH', 'FA', 'NP', 'CA', 'YT', 'FV', 'PC', 'LG', 'QP', 'SK', 'YK', 'GS', 'FT', 'QN', 'PL', 'TV', 'NK', 'GN', 'VK', 'GC', 'YV', 'QN', 'PF']\n" ], [ "s35 = 
\"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_35 = []\n\nfor i in range(len(s35)):\n if(0<=i<188):\n c1 = s35[i]\n c2 = s35[i+35]\n lambda_35.append(c1+c2)\n else:\n break\nprint(lambda_35)", "['RN', 'VR', 'QK', 'PR', 'TI', 'ES', 'SN', 'IC', 'VV', 'RA', 'FD', 'PY', 'NS', 'IV', 'TL', 'NY', 'LN', 'CS', 'PA', 'FS', 'GF', 'ES', 'VT', 'FF', 'NK', 'AC', 'TY', 'RG', 'FV', 'AS', 'SP', 'VT', 'YK', 'AL', 'WN', 'ND', 'RL', 'KC', 'RF', 'IT', 'SN', 'NV', 'CY', 'VA', 'AD', 'DS', 'YF', 'SV', 'VI', 'LR', 'YG', 'ND', 'SE', 'AV', 'SR', 'FQ', 'SI', 'TA', 'FP', 'KG', 'CQ', 'YT', 'GG', 'VK', 'SI', 'PA', 'TD', 'KY', 'LN', 'NY', 'DK', 'LL', 'CP', 'FD', 'TD', 'NF', 'VT', 'YG', 'AC', 'DV', 'SI', 'FA', 'VW', 'IN', 'RS', 'GN', 'DN', 'EL', 'VD', 'RS', 'QK', 'IV', 'AG', 'PG', 'GN', 'QY', 'TN', 'GY', 'KL', 'IY', 'AR', 'DL', 'YF', 'NR', 'YK', 'KS', 'LN', 'PL', 'DK', 'DP', 'FF', 'TE', 'GR', 'CD', 'VI', 'IS', 'AT', 'WE', 'NI', 'SY', 'NQ', 'NA', 'LG', 'DS', 'ST', 'KP', 'VC', 'GN', 'GG', 'NV', 'YE', 'NG', 'YF', 'LN', 'YC', 'RY', 'LF', 'FP', 'RL', 'KQ', 'SS', 'NY', 'LG', 'KF', 'PQ', 'FP', 'ET', 'RN', 'DG', 'IV', 'SG', 'TY', 'EQ', 'IP', 'YY', 'QR', 'AV', 'GV', 'SV', 'TL', 'PS', 'CF', 'NE', 'GL', 'VL', 'EH', 'GA', 'FP', 'NA', 'CT', 'YV', 'FC', 'PG', 'LP', 'QK', 'SK', 'YS', 'GT', 'FN', 'QL', 'PV', 'TK', 'NN', 'GK', 'VC', 'GV', 'YN', 'QF']\n" ], [ "s36 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_36 = []\n\nfor i in range(len(s36)):\n if(0<=i<187):\n c1 = s36[i]\n c2 = s36[i+36]\n lambda_36.append(c1+c2)\n else:\n break\nprint(lambda_36)", "['RR', 'VK', 'QR', 'PI', 'TS', 'EN', 'SC', 'IV', 'VA', 'RD', 'FY', 'PS', 'NV', 'IL', 'TY', 'NN', 'LS', 'CA', 'PS', 'FF', 'GS', 'ET', 'VF', 'FK', 'NC', 'AY', 'TG', 'RV', 'FS', 'AP', 'ST', 'VK', 'YL', 'AN', 'WD', 'NL', 'RC', 'KF', 'RT', 'IN', 'SV', 'NY', 'CA', 'VD', 'AS', 'DF', 'YV', 'SI', 'VR', 'LG', 'YD', 'NE', 'SV', 'AR', 'SQ', 'FI', 'SA', 'TP', 'FG', 'KQ', 'CT', 'YG', 'GK', 'VI', 'SA', 'PD', 'TY', 'KN', 'LY', 'NK', 'DL', 'LP', 'CD', 'FD', 'TF', 'NT', 'VG', 'YC', 'AV', 'DI', 'SA', 'FW', 'VN', 'IS', 'RN', 'GN', 'DL', 'ED', 'VS', 'RK', 'QV', 'IG', 'AG', 'PN', 'GY', 'QN', 'TY', 'GL', 'KY', 'IR', 'AL', 'DF', 'YR', 'NK', 'YS', 'KN', 'LL', 'PK', 'DP', 'DF', 'FE', 'TR', 'GD', 'CI', 'VS', 'IT', 'AE', 'WI', 'NY', 'SQ', 'NA', 'NG', 'LS', 'DT', 'SP', 'KC', 'VN', 'GG', 'GV', 'NE', 'YG', 'NF', 'YN', 'LC', 'YY', 'RF', 'LP', 'FL', 'RQ', 'KS', 'SY', 'NG', 'LF', 'KQ', 'PP', 'FT', 'EN', 'RG', 'DV', 'IG', 'SY', 'TQ', 'EP', 'IY', 'YR', 'QV', 'AV', 'GV', 'SL', 'TS', 'PF', 'CE', 'NL', 'GL', 'VH', 'EA', 'GP', 'FA', 'NT', 'CV', 'YC', 'FG', 'PP', 'LK', 'QK', 'SS', 'YT', 'GN', 'FL', 'QV', 'PK', 'TN', 'NK', 'GC', 'VV', 'GN', 'YF']\n" ], [ "s37 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_37 = []\n\nfor i in range(len(s37)):\n if(0<=i<186):\n c1 = s37[i]\n c2 = s37[i+37]\n lambda_37.append(c1+c2)\n else:\n break\nprint(lambda_37)", "['RK', 'VR', 'QI', 'PS', 'TN', 'EC', 'SV', 'IA', 'VD', 'RY', 'FS', 'PV', 'NL', 'IY', 'TN', 'NS', 'LA', 'CS', 'PF', 
'FS', 'GT', 'EF', 'VK', 'FC', 'NY', 'AG', 'TV', 'RS', 'FP', 'AT', 'SK', 'VL', 'YN', 'AD', 'WL', 'NC', 'RF', 'KT', 'RN', 'IV', 'SY', 'NA', 'CD', 'VS', 'AF', 'DV', 'YI', 'SR', 'VG', 'LD', 'YE', 'NV', 'SR', 'AQ', 'SI', 'FA', 'SP', 'TG', 'FQ', 'KT', 'CG', 'YK', 'GI', 'VA', 'SD', 'PY', 'TN', 'KY', 'LK', 'NL', 'DP', 'LD', 'CD', 'FF', 'TT', 'NG', 'VC', 'YV', 'AI', 'DA', 'SW', 'FN', 'VS', 'IN', 'RN', 'GL', 'DD', 'ES', 'VK', 'RV', 'QG', 'IG', 'AN', 'PY', 'GN', 'QY', 'TL', 'GY', 'KR', 'IL', 'AF', 'DR', 'YK', 'NS', 'YN', 'KL', 'LK', 'PP', 'DF', 'DE', 'FR', 'TD', 'GI', 'CS', 'VT', 'IE', 'AI', 'WY', 'NQ', 'SA', 'NG', 'NS', 'LT', 'DP', 'SC', 'KN', 'VG', 'GV', 'GE', 'NG', 'YF', 'NN', 'YC', 'LY', 'YF', 'RP', 'LL', 'FQ', 'RS', 'KY', 'SG', 'NF', 'LQ', 'KP', 'PT', 'FN', 'EG', 'RV', 'DG', 'IY', 'SQ', 'TP', 'EY', 'IR', 'YV', 'QV', 'AV', 'GL', 'SS', 'TF', 'PE', 'CL', 'NL', 'GH', 'VA', 'EP', 'GA', 'FT', 'NV', 'CC', 'YG', 'FP', 'PK', 'LK', 'QS', 'ST', 'YN', 'GL', 'FV', 'QK', 'PN', 'TK', 'NC', 'GV', 'VN', 'GF']\n" ], [ "s38 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_38 = []\n\nfor i in range(len(s38)):\n if(0<=i<185):\n c1 = s38[i]\n c2 = s38[i+38]\n lambda_38.append(c1+c2)\n else:\n break\nprint(lambda_38)", "['RR', 'VI', 'QS', 'PN', 'TC', 'EV', 'SA', 'ID', 'VY', 'RS', 'FV', 'PL', 'NY', 'IN', 'TS', 'NA', 'LS', 'CF', 'PS', 'FT', 'GF', 'EK', 'VC', 'FY', 'NG', 'AV', 'TS', 'RP', 'FT', 'AK', 'SL', 'VN', 'YD', 'AL', 'WC', 'NF', 'RT', 'KN', 'RV', 'IY', 'SA', 'ND', 'CS', 'VF', 'AV', 'DI', 'YR', 'SG', 'VD', 'LE', 'YV', 'NR', 'SQ', 'AI', 'SA', 'FP', 'SG', 'TQ', 'FT', 'KG', 'CK', 'YI', 'GA', 'VD', 'SY', 'PN', 'TY', 'KK', 'LL', 'NP', 'DD', 'LD', 'CF', 'FT', 'TG', 'NC', 'VV', 'YI', 'AA', 'DW', 'SN', 'FS', 'VN', 'IN', 'RL', 'GD', 'DS', 'EK', 'VV', 'RG', 'QG', 'IN', 'AY', 'PN', 'GY', 'QL', 'TY', 'GR', 'KL', 'IF', 'AR', 'DK', 'YS', 'NN', 'YL', 'KK', 'LP', 'PF', 'DE', 'DR', 'FD', 'TI', 'GS', 'CT', 'VE', 'II', 'AY', 'WQ', 'NA', 'SG', 'NS', 'NT', 'LP', 'DC', 'SN', 'KG', 'VV', 'GE', 'GG', 'NF', 'YN', 'NC', 'YY', 'LF', 'YP', 'RL', 'LQ', 'FS', 'RY', 'KG', 'SF', 'NQ', 'LP', 'KT', 'PN', 'FG', 'EV', 'RG', 'DY', 'IQ', 'SP', 'TY', 'ER', 'IV', 'YV', 'QV', 'AL', 'GS', 'SF', 'TE', 'PL', 'CL', 'NH', 'GA', 'VP', 'EA', 'GT', 'FV', 'NC', 'CG', 'YP', 'FK', 'PK', 'LS', 'QT', 'SN', 'YL', 'GV', 'FK', 'QN', 'PK', 'TC', 'NV', 'GN', 'VF']\n" ], [ "s39 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_39 = []\n\nfor i in range(len(s39)):\n if(0<=i<184):\n c1 = s39[i]\n c2 = s39[i+39]\n lambda_39.append(c1+c2)\n else:\n break\nprint(lambda_39)", "['RI', 'VS', 'QN', 'PC', 'TV', 'EA', 'SD', 'IY', 'VS', 'RV', 'FL', 'PY', 'NN', 'IS', 'TA', 'NS', 'LF', 'CS', 'PT', 'FF', 'GK', 'EC', 'VY', 'FG', 'NV', 'AS', 'TP', 'RT', 'FK', 'AL', 'SN', 'VD', 'YL', 'AC', 'WF', 'NT', 'RN', 'KV', 'RY', 'IA', 'SD', 'NS', 'CF', 'VV', 'AI', 'DR', 'YG', 'SD', 'VE', 'LV', 'YR', 'NQ', 'SI', 'AA', 'SP', 'FG', 'SQ', 'TT', 'FG', 'KK', 'CI', 'YA', 'GD', 'VY', 'SN', 'PY', 'TK', 'KL', 'LP', 'ND', 'DD', 'LF', 'CT', 'FG', 'TC', 'NV', 'VI', 'YA', 'AW', 'DN', 'SS', 'FN', 'VN', 'IL', 'RD', 'GS', 'DK', 'EV', 'VG', 'RG', 'QN', 'IY', 'AN', 'PY', 'GL', 'QY', 'TR', 'GL', 'KF', 'IR', 'AK', 'DS', 'YN', 'NL', 'YK', 'KP', 
'LF', 'PE', 'DR', 'DD', 'FI', 'TS', 'GT', 'CE', 'VI', 'IY', 'AQ', 'WA', 'NG', 'SS', 'NT', 'NP', 'LC', 'DN', 'SG', 'KV', 'VE', 'GG', 'GF', 'NN', 'YC', 'NY', 'YF', 'LP', 'YL', 'RQ', 'LS', 'FY', 'RG', 'KF', 'SQ', 'NP', 'LT', 'KN', 'PG', 'FV', 'EG', 'RY', 'DQ', 'IP', 'SY', 'TR', 'EV', 'IV', 'YV', 'QL', 'AS', 'GF', 'SE', 'TL', 'PL', 'CH', 'NA', 'GP', 'VA', 'ET', 'GV', 'FC', 'NG', 'CP', 'YK', 'FK', 'PS', 'LT', 'QN', 'SL', 'YV', 'GK', 'FN', 'QK', 'PC', 'TV', 'NN', 'GF']\n" ], [ "s40 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_40 = []\n\nfor i in range(len(s40)):\n if(0<=i<183):\n c1 = s40[i]\n c2 = s40[i+40]\n lambda_40.append(c1+c2)\n else:\n break\nprint(lambda_40)", "['RS', 'VN', 'QC', 'PV', 'TA', 'ED', 'SY', 'IS', 'VV', 'RL', 'FY', 'PN', 'NS', 'IA', 'TS', 'NF', 'LS', 'CT', 'PF', 'FK', 'GC', 'EY', 'VG', 'FV', 'NS', 'AP', 'TT', 'RK', 'FL', 'AN', 'SD', 'VL', 'YC', 'AF', 'WT', 'NN', 'RV', 'KY', 'RA', 'ID', 'SS', 'NF', 'CV', 'VI', 'AR', 'DG', 'YD', 'SE', 'VV', 'LR', 'YQ', 'NI', 'SA', 'AP', 'SG', 'FQ', 'ST', 'TG', 'FK', 'KI', 'CA', 'YD', 'GY', 'VN', 'SY', 'PK', 'TL', 'KP', 'LD', 'ND', 'DF', 'LT', 'CG', 'FC', 'TV', 'NI', 'VA', 'YW', 'AN', 'DS', 'SN', 'FN', 'VL', 'ID', 'RS', 'GK', 'DV', 'EG', 'VG', 'RN', 'QY', 'IN', 'AY', 'PL', 'GY', 'QR', 'TL', 'GF', 'KR', 'IK', 'AS', 'DN', 'YL', 'NK', 'YP', 'KF', 'LE', 'PR', 'DD', 'DI', 'FS', 'TT', 'GE', 'CI', 'VY', 'IQ', 'AA', 'WG', 'NS', 'ST', 'NP', 'NC', 'LN', 'DG', 'SV', 'KE', 'VG', 'GF', 'GN', 'NC', 'YY', 'NF', 'YP', 'LL', 'YQ', 'RS', 'LY', 'FG', 'RF', 'KQ', 'SP', 'NT', 'LN', 'KG', 'PV', 'FG', 'EY', 'RQ', 'DP', 'IY', 'SR', 'TV', 'EV', 'IV', 'YL', 'QS', 'AF', 'GE', 'SL', 'TL', 'PH', 'CA', 'NP', 'GA', 'VT', 'EV', 'GC', 'FG', 'NP', 'CK', 'YK', 'FS', 'PT', 'LN', 'QL', 'SV', 'YK', 'GN', 'FK', 'QC', 'PV', 'TN', 'NF']\n" ], [ "s41 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_41 = []\n\nfor i in range(len(s41)):\n if(0<=i<182):\n c1 = s41[i]\n c2 = s41[i+41]\n lambda_41.append(c1+c2)\n else:\n break\nprint(lambda_41)", "['RN', 'VC', 'QV', 'PA', 'TD', 'EY', 'SS', 'IV', 'VL', 'RY', 'FN', 'PS', 'NA', 'IS', 'TF', 'NS', 'LT', 'CF', 'PK', 'FC', 'GY', 'EG', 'VV', 'FS', 'NP', 'AT', 'TK', 'RL', 'FN', 'AD', 'SL', 'VC', 'YF', 'AT', 'WN', 'NV', 'RY', 'KA', 'RD', 'IS', 'SF', 'NV', 'CI', 'VR', 'AG', 'DD', 'YE', 'SV', 'VR', 'LQ', 'YI', 'NA', 'SP', 'AG', 'SQ', 'FT', 'SG', 'TK', 'FI', 'KA', 'CD', 'YY', 'GN', 'VY', 'SK', 'PL', 'TP', 'KD', 'LD', 'NF', 'DT', 'LG', 'CC', 'FV', 'TI', 'NA', 'VW', 'YN', 'AS', 'DN', 'SN', 'FL', 'VD', 'IS', 'RK', 'GV', 'DG', 'EG', 'VN', 'RY', 'QN', 'IY', 'AL', 'PY', 'GR', 'QL', 'TF', 'GR', 'KK', 'IS', 'AN', 'DL', 'YK', 'NP', 'YF', 'KE', 'LR', 'PD', 'DI', 'DS', 'FT', 'TE', 'GI', 'CY', 'VQ', 'IA', 'AG', 'WS', 'NT', 'SP', 'NC', 'NN', 'LG', 'DV', 'SE', 'KG', 'VF', 'GN', 'GC', 'NY', 'YF', 'NP', 'YL', 'LQ', 'YS', 'RY', 'LG', 'FF', 'RQ', 'KP', 'ST', 'NN', 'LG', 'KV', 'PG', 'FY', 'EQ', 'RP', 'DY', 'IR', 'SV', 'TV', 'EV', 'IL', 'YS', 'QF', 'AE', 'GL', 'SL', 'TH', 'PA', 'CP', 'NA', 'GT', 'VV', 'EC', 'GG', 'FP', 'NK', 'CK', 'YS', 'FT', 'PN', 'LL', 'QV', 'SK', 'YN', 'GK', 'FC', 'QV', 'PN', 'TF']\n" ], [ "s42 = 
\"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_42 = []\n\nfor i in range(len(s42)):\n if(0<=i<181):\n c1 = s42[i]\n c2 = s42[i+42]\n lambda_42.append(c1+c2)\n else:\n break\nprint(lambda_42)", "['RC', 'VV', 'QA', 'PD', 'TY', 'ES', 'SV', 'IL', 'VY', 'RN', 'FS', 'PA', 'NS', 'IF', 'TS', 'NT', 'LF', 'CK', 'PC', 'FY', 'GG', 'EV', 'VS', 'FP', 'NT', 'AK', 'TL', 'RN', 'FD', 'AL', 'SC', 'VF', 'YT', 'AN', 'WV', 'NY', 'RA', 'KD', 'RS', 'IF', 'SV', 'NI', 'CR', 'VG', 'AD', 'DE', 'YV', 'SR', 'VQ', 'LI', 'YA', 'NP', 'SG', 'AQ', 'ST', 'FG', 'SK', 'TI', 'FA', 'KD', 'CY', 'YN', 'GY', 'VK', 'SL', 'PP', 'TD', 'KD', 'LF', 'NT', 'DG', 'LC', 'CV', 'FI', 'TA', 'NW', 'VN', 'YS', 'AN', 'DN', 'SL', 'FD', 'VS', 'IK', 'RV', 'GG', 'DG', 'EN', 'VY', 'RN', 'QY', 'IL', 'AY', 'PR', 'GL', 'QF', 'TR', 'GK', 'KS', 'IN', 'AL', 'DK', 'YP', 'NF', 'YE', 'KR', 'LD', 'PI', 'DS', 'DT', 'FE', 'TI', 'GY', 'CQ', 'VA', 'IG', 'AS', 'WT', 'NP', 'SC', 'NN', 'NG', 'LV', 'DE', 'SG', 'KF', 'VN', 'GC', 'GY', 'NF', 'YP', 'NL', 'YQ', 'LS', 'YY', 'RG', 'LF', 'FQ', 'RP', 'KT', 'SN', 'NG', 'LV', 'KG', 'PY', 'FQ', 'EP', 'RY', 'DR', 'IV', 'SV', 'TV', 'EL', 'IS', 'YF', 'QE', 'AL', 'GL', 'SH', 'TA', 'PP', 'CA', 'NT', 'GV', 'VC', 'EG', 'GP', 'FK', 'NK', 'CS', 'YT', 'FN', 'PL', 'LV', 'QK', 'SN', 'YK', 'GC', 'FV', 'QN', 'PF']\n" ], [ "s43 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_43 = []\n\nfor i in range(len(s43)):\n if(0<=i<180):\n c1 = s43[i]\n c2 = s43[i+43]\n lambda_43.append(c1+c2)\n else:\n break\nprint(lambda_43)", "['RV', 'VA', 'QD', 'PY', 'TS', 'EV', 'SL', 'IY', 'VN', 'RS', 'FA', 'PS', 'NF', 'IS', 'TT', 'NF', 'LK', 'CC', 'PY', 'FG', 'GV', 'ES', 'VP', 'FT', 'NK', 'AL', 'TN', 'RD', 'FL', 'AC', 'SF', 'VT', 'YN', 'AV', 'WY', 'NA', 'RD', 'KS', 'RF', 'IV', 'SI', 'NR', 'CG', 'VD', 'AE', 'DV', 'YR', 'SQ', 'VI', 'LA', 'YP', 'NG', 'SQ', 'AT', 'SG', 'FK', 'SI', 'TA', 'FD', 'KY', 'CN', 'YY', 'GK', 'VL', 'SP', 'PD', 'TD', 'KF', 'LT', 'NG', 'DC', 'LV', 'CI', 'FA', 'TW', 'NN', 'VS', 'YN', 'AN', 'DL', 'SD', 'FS', 'VK', 'IV', 'RG', 'GG', 'DN', 'EY', 'VN', 'RY', 'QL', 'IY', 'AR', 'PL', 'GF', 'QR', 'TK', 'GS', 'KN', 'IL', 'AK', 'DP', 'YF', 'NE', 'YR', 'KD', 'LI', 'PS', 'DT', 'DE', 'FI', 'TY', 'GQ', 'CA', 'VG', 'IS', 'AT', 'WP', 'NC', 'SN', 'NG', 'NV', 'LE', 'DG', 'SF', 'KN', 'VC', 'GY', 'GF', 'NP', 'YL', 'NQ', 'YS', 'LY', 'YG', 'RF', 'LQ', 'FP', 'RT', 'KN', 'SG', 'NV', 'LG', 'KY', 'PQ', 'FP', 'EY', 'RR', 'DV', 'IV', 'SV', 'TL', 'ES', 'IF', 'YE', 'QL', 'AL', 'GH', 'SA', 'TP', 'PA', 'CT', 'NV', 'GC', 'VG', 'EP', 'GK', 'FK', 'NS', 'CT', 'YN', 'FL', 'PV', 'LK', 'QN', 'SK', 'YC', 'GV', 'FN', 'QF']\n" ], [ "s44 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_44 = []\n\nfor i in range(len(s44)):\n if(0<=i<179):\n c1 = s44[i]\n c2 = s44[i+44]\n lambda_44.append(c1+c2)\n else:\n break\nprint(lambda_44)", "['RA', 'VD', 'QY', 'PS', 'TV', 'EL', 'SY', 'IN', 'VS', 'RA', 'FS', 'PF', 'NS', 'IT', 'TF', 'NK', 'LC', 'CY', 'PG', 'FV', 'GS', 'EP', 'VT', 'FK', 'NL', 'AN', 'TD', 'RL', 'FC', 'AF', 'ST', 'VN', 'YV', 
'AY', 'WA', 'ND', 'RS', 'KF', 'RV', 'II', 'SR', 'NG', 'CD', 'VE', 'AV', 'DR', 'YQ', 'SI', 'VA', 'LP', 'YG', 'NQ', 'ST', 'AG', 'SK', 'FI', 'SA', 'TD', 'FY', 'KN', 'CY', 'YK', 'GL', 'VP', 'SD', 'PD', 'TF', 'KT', 'LG', 'NC', 'DV', 'LI', 'CA', 'FW', 'TN', 'NS', 'VN', 'YN', 'AL', 'DD', 'SS', 'FK', 'VV', 'IG', 'RG', 'GN', 'DY', 'EN', 'VY', 'RL', 'QY', 'IR', 'AL', 'PF', 'GR', 'QK', 'TS', 'GN', 'KL', 'IK', 'AP', 'DF', 'YE', 'NR', 'YD', 'KI', 'LS', 'PT', 'DE', 'DI', 'FY', 'TQ', 'GA', 'CG', 'VS', 'IT', 'AP', 'WC', 'NN', 'SG', 'NV', 'NE', 'LG', 'DF', 'SN', 'KC', 'VY', 'GF', 'GP', 'NL', 'YQ', 'NS', 'YY', 'LG', 'YF', 'RQ', 'LP', 'FT', 'RN', 'KG', 'SV', 'NG', 'LY', 'KQ', 'PP', 'FY', 'ER', 'RV', 'DV', 'IV', 'SL', 'TS', 'EF', 'IE', 'YL', 'QL', 'AH', 'GA', 'SP', 'TA', 'PT', 'CV', 'NC', 'GG', 'VP', 'EK', 'GK', 'FS', 'NT', 'CN', 'YL', 'FV', 'PK', 'LN', 'QK', 'SC', 'YV', 'GN', 'FF']\n" ], [ "s45 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_45 = []\n\nfor i in range(len(s45)):\n if(0<=i<178):\n c1 = s45[i]\n c2 = s45[i+45]\n lambda_45.append(c1+c2)\n else:\n break\nprint(lambda_45)", "['RD', 'VY', 'QS', 'PV', 'TL', 'EY', 'SN', 'IS', 'VA', 'RS', 'FF', 'PS', 'NT', 'IF', 'TK', 'NC', 'LY', 'CG', 'PV', 'FS', 'GP', 'ET', 'VK', 'FL', 'NN', 'AD', 'TL', 'RC', 'FF', 'AT', 'SN', 'VV', 'YY', 'AA', 'WD', 'NS', 'RF', 'KV', 'RI', 'IR', 'SG', 'ND', 'CE', 'VV', 'AR', 'DQ', 'YI', 'SA', 'VP', 'LG', 'YQ', 'NT', 'SG', 'AK', 'SI', 'FA', 'SD', 'TY', 'FN', 'KY', 'CK', 'YL', 'GP', 'VD', 'SD', 'PF', 'TT', 'KG', 'LC', 'NV', 'DI', 'LA', 'CW', 'FN', 'TS', 'NN', 'VN', 'YL', 'AD', 'DS', 'SK', 'FV', 'VG', 'IG', 'RN', 'GY', 'DN', 'EY', 'VL', 'RY', 'QR', 'IL', 'AF', 'PR', 'GK', 'QS', 'TN', 'GL', 'KK', 'IP', 'AF', 'DE', 'YR', 'ND', 'YI', 'KS', 'LT', 'PE', 'DI', 'DY', 'FQ', 'TA', 'GG', 'CS', 'VT', 'IP', 'AC', 'WN', 'NG', 'SV', 'NE', 'NG', 'LF', 'DN', 'SC', 'KY', 'VF', 'GP', 'GL', 'NQ', 'YS', 'NY', 'YG', 'LF', 'YQ', 'RP', 'LT', 'FN', 'RG', 'KV', 'SG', 'NY', 'LQ', 'KP', 'PY', 'FR', 'EV', 'RV', 'DV', 'IL', 'SS', 'TF', 'EE', 'IL', 'YL', 'QH', 'AA', 'GP', 'SA', 'TT', 'PV', 'CC', 'NG', 'GP', 'VK', 'EK', 'GS', 'FT', 'NN', 'CL', 'YV', 'FK', 'PN', 'LK', 'QC', 'SV', 'YN', 'GF']\n" ], [ "s46 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_46 = []\n\nfor i in range(len(s46)):\n if(0<=i<177):\n c1 = s46[i]\n c2 = s46[i+46]\n lambda_46.append(c1+c2)\n else:\n break\nprint(lambda_46)", "['RY', 'VS', 'QV', 'PL', 'TY', 'EN', 'SS', 'IA', 'VS', 'RF', 'FS', 'PT', 'NF', 'IK', 'TC', 'NY', 'LG', 'CV', 'PS', 'FP', 'GT', 'EK', 'VL', 'FN', 'ND', 'AL', 'TC', 'RF', 'FT', 'AN', 'SV', 'VY', 'YA', 'AD', 'WS', 'NF', 'RV', 'KI', 'RR', 'IG', 'SD', 'NE', 'CV', 'VR', 'AQ', 'DI', 'YA', 'SP', 'VG', 'LQ', 'YT', 'NG', 'SK', 'AI', 'SA', 'FD', 'SY', 'TN', 'FY', 'KK', 'CL', 'YP', 'GD', 'VD', 'SF', 'PT', 'TG', 'KC', 'LV', 'NI', 'DA', 'LW', 'CN', 'FS', 'TN', 'NN', 'VL', 'YD', 'AS', 'DK', 'SV', 'FG', 'VG', 'IN', 'RY', 'GN', 'DY', 'EL', 'VY', 'RR', 'QL', 'IF', 'AR', 'PK', 'GS', 'QN', 'TL', 'GK', 'KP', 'IF', 'AE', 'DR', 'YD', 'NI', 'YS', 'KT', 'LE', 'PI', 'DY', 'DQ', 'FA', 'TG', 'GS', 'CT', 'VP', 'IC', 'AN', 'WG', 'NV', 'SE', 'NG', 'NF', 'LN', 'DC', 'SY', 'KF', 'VP', 'GL', 'GQ', 'NS', 'YY', 'NG', 'YF', 'LQ', 
'YP', 'RT', 'LN', 'FG', 'RV', 'KG', 'SY', 'NQ', 'LP', 'KY', 'PR', 'FV', 'EV', 'RV', 'DL', 'IS', 'SF', 'TE', 'EL', 'IL', 'YH', 'QA', 'AP', 'GA', 'ST', 'TV', 'PC', 'CG', 'NP', 'GK', 'VK', 'ES', 'GT', 'FN', 'NL', 'CV', 'YK', 'FN', 'PK', 'LC', 'QV', 'SN', 'YF']\n" ], [ "s47 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_47 = []\n\nfor i in range(len(s47)):\n if(0<=i<176):\n c1 = s47[i]\n c2 = s47[i+47]\n lambda_47.append(c1+c2)\n else:\n break\nprint(lambda_47)", "['RS', 'VV', 'QL', 'PY', 'TN', 'ES', 'SA', 'IS', 'VF', 'RS', 'FT', 'PF', 'NK', 'IC', 'TY', 'NG', 'LV', 'CS', 'PP', 'FT', 'GK', 'EL', 'VN', 'FD', 'NL', 'AC', 'TF', 'RT', 'FN', 'AV', 'SY', 'VA', 'YD', 'AS', 'WF', 'NV', 'RI', 'KR', 'RG', 'ID', 'SE', 'NV', 'CR', 'VQ', 'AI', 'DA', 'YP', 'SG', 'VQ', 'LT', 'YG', 'NK', 'SI', 'AA', 'SD', 'FY', 'SN', 'TY', 'FK', 'KL', 'CP', 'YD', 'GD', 'VF', 'ST', 'PG', 'TC', 'KV', 'LI', 'NA', 'DW', 'LN', 'CS', 'FN', 'TN', 'NL', 'VD', 'YS', 'AK', 'DV', 'SG', 'FG', 'VN', 'IY', 'RN', 'GY', 'DL', 'EY', 'VR', 'RL', 'QF', 'IR', 'AK', 'PS', 'GN', 'QL', 'TK', 'GP', 'KF', 'IE', 'AR', 'DD', 'YI', 'NS', 'YT', 'KE', 'LI', 'PY', 'DQ', 'DA', 'FG', 'TS', 'GT', 'CP', 'VC', 'IN', 'AG', 'WV', 'NE', 'SG', 'NF', 'NN', 'LC', 'DY', 'SF', 'KP', 'VL', 'GQ', 'GS', 'NY', 'YG', 'NF', 'YQ', 'LP', 'YT', 'RN', 'LG', 'FV', 'RG', 'KY', 'SQ', 'NP', 'LY', 'KR', 'PV', 'FV', 'EV', 'RL', 'DS', 'IF', 'SE', 'TL', 'EL', 'IH', 'YA', 'QP', 'AA', 'GT', 'SV', 'TC', 'PG', 'CP', 'NK', 'GK', 'VS', 'ET', 'GN', 'FL', 'NV', 'CK', 'YN', 'FK', 'PC', 'LV', 'QN', 'SF']\n" ], [ "s48 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_48 = []\n\nfor i in range(len(s48)):\n if(0<=i<175):\n c1 = s48[i]\n c2 = s48[i+48]\n lambda_48.append(c1+c2)\n else:\n break\nprint(lambda_48)", "['RV', 'VL', 'QY', 'PN', 'TS', 'EA', 'SS', 'IF', 'VS', 'RT', 'FF', 'PK', 'NC', 'IY', 'TG', 'NV', 'LS', 'CP', 'PT', 'FK', 'GL', 'EN', 'VD', 'FL', 'NC', 'AF', 'TT', 'RN', 'FV', 'AY', 'SA', 'VD', 'YS', 'AF', 'WV', 'NI', 'RR', 'KG', 'RD', 'IE', 'SV', 'NR', 'CQ', 'VI', 'AA', 'DP', 'YG', 'SQ', 'VT', 'LG', 'YK', 'NI', 'SA', 'AD', 'SY', 'FN', 'SY', 'TK', 'FL', 'KP', 'CD', 'YD', 'GF', 'VT', 'SG', 'PC', 'TV', 'KI', 'LA', 'NW', 'DN', 'LS', 'CN', 'FN', 'TL', 'ND', 'VS', 'YK', 'AV', 'DG', 'SG', 'FN', 'VY', 'IN', 'RY', 'GL', 'DY', 'ER', 'VL', 'RF', 'QR', 'IK', 'AS', 'PN', 'GL', 'QK', 'TP', 'GF', 'KE', 'IR', 'AD', 'DI', 'YS', 'NT', 'YE', 'KI', 'LY', 'PQ', 'DA', 'DG', 'FS', 'TT', 'GP', 'CC', 'VN', 'IG', 'AV', 'WE', 'NG', 'SF', 'NN', 'NC', 'LY', 'DF', 'SP', 'KL', 'VQ', 'GS', 'GY', 'NG', 'YF', 'NQ', 'YP', 'LT', 'YN', 'RG', 'LV', 'FG', 'RY', 'KQ', 'SP', 'NY', 'LR', 'KV', 'PV', 'FV', 'EL', 'RS', 'DF', 'IE', 'SL', 'TL', 'EH', 'IA', 'YP', 'QA', 'AT', 'GV', 'SC', 'TG', 'PP', 'CK', 'NK', 'GS', 'VT', 'EN', 'GL', 'FV', 'NK', 'CN', 'YK', 'FC', 'PV', 'LN', 'QF']\n" ], [ "s49 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_49 = []\n\nfor i in range(len(s49)):\n if(0<=i<174):\n c1 = s49[i]\n c2 = s49[i+49]\n lambda_49.append(c1+c2)\n else:\n 
break\nprint(lambda_49)", "['RL', 'VY', 'QN', 'PS', 'TA', 'ES', 'SF', 'IS', 'VT', 'RF', 'FK', 'PC', 'NY', 'IG', 'TV', 'NS', 'LP', 'CT', 'PK', 'FL', 'GN', 'ED', 'VL', 'FC', 'NF', 'AT', 'TN', 'RV', 'FY', 'AA', 'SD', 'VS', 'YF', 'AV', 'WI', 'NR', 'RG', 'KD', 'RE', 'IV', 'SR', 'NQ', 'CI', 'VA', 'AP', 'DG', 'YQ', 'ST', 'VG', 'LK', 'YI', 'NA', 'SD', 'AY', 'SN', 'FY', 'SK', 'TL', 'FP', 'KD', 'CD', 'YF', 'GT', 'VG', 'SC', 'PV', 'TI', 'KA', 'LW', 'NN', 'DS', 'LN', 'CN', 'FL', 'TD', 'NS', 'VK', 'YV', 'AG', 'DG', 'SN', 'FY', 'VN', 'IY', 'RL', 'GY', 'DR', 'EL', 'VF', 'RR', 'QK', 'IS', 'AN', 'PL', 'GK', 'QP', 'TF', 'GE', 'KR', 'ID', 'AI', 'DS', 'YT', 'NE', 'YI', 'KY', 'LQ', 'PA', 'DG', 'DS', 'FT', 'TP', 'GC', 'CN', 'VG', 'IV', 'AE', 'WG', 'NF', 'SN', 'NC', 'NY', 'LF', 'DP', 'SL', 'KQ', 'VS', 'GY', 'GG', 'NF', 'YQ', 'NP', 'YT', 'LN', 'YG', 'RV', 'LG', 'FY', 'RQ', 'KP', 'SY', 'NR', 'LV', 'KV', 'PV', 'FL', 'ES', 'RF', 'DE', 'IL', 'SL', 'TH', 'EA', 'IP', 'YA', 'QT', 'AV', 'GC', 'SG', 'TP', 'PK', 'CK', 'NS', 'GT', 'VN', 'EL', 'GV', 'FK', 'NN', 'CK', 'YC', 'FV', 'PN', 'LF']\n" ], [ "s50 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_50 = []\n\nfor i in range(len(s50)):\n if(0<=i<173):\n c1 = s50[i]\n c2 = s50[i+50]\n lambda_50.append(c1+c2)\n else:\n break\nprint(lambda_50)", "['RY', 'VN', 'QS', 'PA', 'TS', 'EF', 'SS', 'IT', 'VF', 'RK', 'FC', 'PY', 'NG', 'IV', 'TS', 'NP', 'LT', 'CK', 'PL', 'FN', 'GD', 'EL', 'VC', 'FF', 'NT', 'AN', 'TV', 'RY', 'FA', 'AD', 'SS', 'VF', 'YV', 'AI', 'WR', 'NG', 'RD', 'KE', 'RV', 'IR', 'SQ', 'NI', 'CA', 'VP', 'AG', 'DQ', 'YT', 'SG', 'VK', 'LI', 'YA', 'ND', 'SY', 'AN', 'SY', 'FK', 'SL', 'TP', 'FD', 'KD', 'CF', 'YT', 'GG', 'VC', 'SV', 'PI', 'TA', 'KW', 'LN', 'NS', 'DN', 'LN', 'CL', 'FD', 'TS', 'NK', 'VV', 'YG', 'AG', 'DN', 'SY', 'FN', 'VY', 'IL', 'RY', 'GR', 'DL', 'EF', 'VR', 'RK', 'QS', 'IN', 'AL', 'PK', 'GP', 'QF', 'TE', 'GR', 'KD', 'II', 'AS', 'DT', 'YE', 'NI', 'YY', 'KQ', 'LA', 'PG', 'DS', 'DT', 'FP', 'TC', 'GN', 'CG', 'VV', 'IE', 'AG', 'WF', 'NN', 'SC', 'NY', 'NF', 'LP', 'DL', 'SQ', 'KS', 'VY', 'GG', 'GF', 'NQ', 'YP', 'NT', 'YN', 'LG', 'YV', 'RG', 'LY', 'FQ', 'RP', 'KY', 'SR', 'NV', 'LV', 'KV', 'PL', 'FS', 'EF', 'RE', 'DL', 'IL', 'SH', 'TA', 'EP', 'IA', 'YT', 'QV', 'AC', 'GG', 'SP', 'TK', 'PK', 'CS', 'NT', 'GN', 'VL', 'EV', 'GK', 'FN', 'NK', 'CC', 'YV', 'FN', 'PF']\n" ], [ "s51 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_51 = []\n\nfor i in range(len(s51)):\n if(0<=i<172):\n c1 = s51[i]\n c2 = s51[i+51]\n lambda_51.append(c1+c2)\n else:\n break\nprint(lambda_51)", "['RN', 'VS', 'QA', 'PS', 'TF', 'ES', 'ST', 'IF', 'VK', 'RC', 'FY', 'PG', 'NV', 'IS', 'TP', 'NT', 'LK', 'CL', 'PN', 'FD', 'GL', 'EC', 'VF', 'FT', 'NN', 'AV', 'TY', 'RA', 'FD', 'AS', 'SF', 'VV', 'YI', 'AR', 'WG', 'ND', 'RE', 'KV', 'RR', 'IQ', 'SI', 'NA', 'CP', 'VG', 'AQ', 'DT', 'YG', 'SK', 'VI', 'LA', 'YD', 'NY', 'SN', 'AY', 'SK', 'FL', 'SP', 'TD', 'FD', 'KF', 'CT', 'YG', 'GC', 'VV', 'SI', 'PA', 'TW', 'KN', 'LS', 'NN', 'DN', 'LL', 'CD', 'FS', 'TK', 'NV', 'VG', 'YG', 'AN', 'DY', 'SN', 'FY', 'VL', 'IY', 'RR', 'GL', 'DF', 'ER', 'VK', 'RS', 'QN', 'IL', 'AK', 'PP', 'GF', 'QE', 'TR', 'GD', 'KI', 'IS', 'AT', 'DE', 'YI', 'NY', 'YQ', 'KA', 'LG', 
'PS', 'DT', 'DP', 'FC', 'TN', 'GG', 'CV', 'VE', 'IG', 'AF', 'WN', 'NC', 'SY', 'NF', 'NP', 'LL', 'DQ', 'SS', 'KY', 'VG', 'GF', 'GQ', 'NP', 'YT', 'NN', 'YG', 'LV', 'YG', 'RY', 'LQ', 'FP', 'RY', 'KR', 'SV', 'NV', 'LV', 'KL', 'PS', 'FF', 'EE', 'RL', 'DL', 'IH', 'SA', 'TP', 'EA', 'IT', 'YV', 'QC', 'AG', 'GP', 'SK', 'TK', 'PS', 'CT', 'NN', 'GL', 'VV', 'EK', 'GN', 'FK', 'NC', 'CV', 'YN', 'FF']\n" ], [ "s52 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_52 = []\n\nfor i in range(len(s52)):\n if(0<=i<171):\n c1 = s52[i]\n c2 = s52[i+52]\n lambda_52.append(c1+c2)\n else:\n break\nprint(lambda_52)", "['RS', 'VA', 'QS', 'PF', 'TS', 'ET', 'SF', 'IK', 'VC', 'RY', 'FG', 'PV', 'NS', 'IP', 'TT', 'NK', 'LL', 'CN', 'PD', 'FL', 'GC', 'EF', 'VT', 'FN', 'NV', 'AY', 'TA', 'RD', 'FS', 'AF', 'SV', 'VI', 'YR', 'AG', 'WD', 'NE', 'RV', 'KR', 'RQ', 'II', 'SA', 'NP', 'CG', 'VQ', 'AT', 'DG', 'YK', 'SI', 'VA', 'LD', 'YY', 'NN', 'SY', 'AK', 'SL', 'FP', 'SD', 'TD', 'FF', 'KT', 'CG', 'YC', 'GV', 'VI', 'SA', 'PW', 'TN', 'KS', 'LN', 'NN', 'DL', 'LD', 'CS', 'FK', 'TV', 'NG', 'VG', 'YN', 'AY', 'DN', 'SY', 'FL', 'VY', 'IR', 'RL', 'GF', 'DR', 'EK', 'VS', 'RN', 'QL', 'IK', 'AP', 'PF', 'GE', 'QR', 'TD', 'GI', 'KS', 'IT', 'AE', 'DI', 'YY', 'NQ', 'YA', 'KG', 'LS', 'PT', 'DP', 'DC', 'FN', 'TG', 'GV', 'CE', 'VG', 'IF', 'AN', 'WC', 'NY', 'SF', 'NP', 'NL', 'LQ', 'DS', 'SY', 'KG', 'VF', 'GQ', 'GP', 'NT', 'YN', 'NG', 'YV', 'LG', 'YY', 'RQ', 'LP', 'FY', 'RR', 'KV', 'SV', 'NV', 'LL', 'KS', 'PF', 'FE', 'EL', 'RL', 'DH', 'IA', 'SP', 'TA', 'ET', 'IV', 'YC', 'QG', 'AP', 'GK', 'SK', 'TS', 'PT', 'CN', 'NL', 'GV', 'VK', 'EN', 'GK', 'FC', 'NV', 'CN', 'YF']\n" ], [ "s53 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_53 = []\n\nfor i in range(len(s53)):\n if(0<=i<170):\n c1 = s53[i]\n c2 = s53[i+53]\n lambda_53.append(c1+c2)\n else:\n break\nprint(lambda_53)", "['RA', 'VS', 'QF', 'PS', 'TT', 'EF', 'SK', 'IC', 'VY', 'RG', 'FV', 'PS', 'NP', 'IT', 'TK', 'NL', 'LN', 'CD', 'PL', 'FC', 'GF', 'ET', 'VN', 'FV', 'NY', 'AA', 'TD', 'RS', 'FF', 'AV', 'SI', 'VR', 'YG', 'AD', 'WE', 'NV', 'RR', 'KQ', 'RI', 'IA', 'SP', 'NG', 'CQ', 'VT', 'AG', 'DK', 'YI', 'SA', 'VD', 'LY', 'YN', 'NY', 'SK', 'AL', 'SP', 'FD', 'SD', 'TF', 'FT', 'KG', 'CC', 'YV', 'GI', 'VA', 'SW', 'PN', 'TS', 'KN', 'LN', 'NL', 'DD', 'LS', 'CK', 'FV', 'TG', 'NG', 'VN', 'YY', 'AN', 'DY', 'SL', 'FY', 'VR', 'IL', 'RF', 'GR', 'DK', 'ES', 'VN', 'RL', 'QK', 'IP', 'AF', 'PE', 'GR', 'QD', 'TI', 'GS', 'KT', 'IE', 'AI', 'DY', 'YQ', 'NA', 'YG', 'KS', 'LT', 'PP', 'DC', 'DN', 'FG', 'TV', 'GE', 'CG', 'VF', 'IN', 'AC', 'WY', 'NF', 'SP', 'NL', 'NQ', 'LS', 'DY', 'SG', 'KF', 'VQ', 'GP', 'GT', 'NN', 'YG', 'NV', 'YG', 'LY', 'YQ', 'RP', 'LY', 'FR', 'RV', 'KV', 'SV', 'NL', 'LS', 'KF', 'PE', 'FL', 'EL', 'RH', 'DA', 'IP', 'SA', 'TT', 'EV', 'IC', 'YG', 'QP', 'AK', 'GK', 'SS', 'TT', 'PN', 'CL', 'NV', 'GK', 'VN', 'EK', 'GC', 'FV', 'NN', 'CF']\n" ], [ "s54 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_54 = []\n\nfor i in range(len(s54)):\n if(0<=i<169):\n c1 = 
s54[i]\n c2 = s54[i+54]\n lambda_54.append(c1+c2)\n else:\n break\nprint(lambda_54)", "['RS', 'VF', 'QS', 'PT', 'TF', 'EK', 'SC', 'IY', 'VG', 'RV', 'FS', 'PP', 'NT', 'IK', 'TL', 'NN', 'LD', 'CL', 'PC', 'FF', 'GT', 'EN', 'VV', 'FY', 'NA', 'AD', 'TS', 'RF', 'FV', 'AI', 'SR', 'VG', 'YD', 'AE', 'WV', 'NR', 'RQ', 'KI', 'RA', 'IP', 'SG', 'NQ', 'CT', 'VG', 'AK', 'DI', 'YA', 'SD', 'VY', 'LN', 'YY', 'NK', 'SL', 'AP', 'SD', 'FD', 'SF', 'TT', 'FG', 'KC', 'CV', 'YI', 'GA', 'VW', 'SN', 'PS', 'TN', 'KN', 'LL', 'ND', 'DS', 'LK', 'CV', 'FG', 'TG', 'NN', 'VY', 'YN', 'AY', 'DL', 'SY', 'FR', 'VL', 'IF', 'RR', 'GK', 'DS', 'EN', 'VL', 'RK', 'QP', 'IF', 'AE', 'PR', 'GD', 'QI', 'TS', 'GT', 'KE', 'II', 'AY', 'DQ', 'YA', 'NG', 'YS', 'KT', 'LP', 'PC', 'DN', 'DG', 'FV', 'TE', 'GG', 'CF', 'VN', 'IC', 'AY', 'WF', 'NP', 'SL', 'NQ', 'NS', 'LY', 'DG', 'SF', 'KQ', 'VP', 'GT', 'GN', 'NG', 'YV', 'NG', 'YY', 'LQ', 'YP', 'RY', 'LR', 'FV', 'RV', 'KV', 'SL', 'NS', 'LF', 'KE', 'PL', 'FL', 'EH', 'RA', 'DP', 'IA', 'ST', 'TV', 'EC', 'IG', 'YP', 'QK', 'AK', 'GS', 'ST', 'TN', 'PL', 'CV', 'NK', 'GN', 'VK', 'EC', 'GV', 'FN', 'NF']\n" ], [ "s55 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_55 = []\n\nfor i in range(len(s55)):\n if(0<=i<168):\n c1 = s55[i]\n c2 = s55[i+55]\n lambda_55.append(c1+c2)\n else:\n break\nprint(lambda_55)", "['RF', 'VS', 'QT', 'PF', 'TK', 'EC', 'SY', 'IG', 'VV', 'RS', 'FP', 'PT', 'NK', 'IL', 'TN', 'ND', 'LL', 'CC', 'PF', 'FT', 'GN', 'EV', 'VY', 'FA', 'ND', 'AS', 'TF', 'RV', 'FI', 'AR', 'SG', 'VD', 'YE', 'AV', 'WR', 'NQ', 'RI', 'KA', 'RP', 'IG', 'SQ', 'NT', 'CG', 'VK', 'AI', 'DA', 'YD', 'SY', 'VN', 'LY', 'YK', 'NL', 'SP', 'AD', 'SD', 'FF', 'ST', 'TG', 'FC', 'KV', 'CI', 'YA', 'GW', 'VN', 'SS', 'PN', 'TN', 'KL', 'LD', 'NS', 'DK', 'LV', 'CG', 'FG', 'TN', 'NY', 'VN', 'YY', 'AL', 'DY', 'SR', 'FL', 'VF', 'IR', 'RK', 'GS', 'DN', 'EL', 'VK', 'RP', 'QF', 'IE', 'AR', 'PD', 'GI', 'QS', 'TT', 'GE', 'KI', 'IY', 'AQ', 'DA', 'YG', 'NS', 'YT', 'KP', 'LC', 'PN', 'DG', 'DV', 'FE', 'TG', 'GF', 'CN', 'VC', 'IY', 'AF', 'WP', 'NL', 'SQ', 'NS', 'NY', 'LG', 'DF', 'SQ', 'KP', 'VT', 'GN', 'GG', 'NV', 'YG', 'NY', 'YQ', 'LP', 'YY', 'RR', 'LV', 'FV', 'RV', 'KL', 'SS', 'NF', 'LE', 'KL', 'PL', 'FH', 'EA', 'RP', 'DA', 'IT', 'SV', 'TC', 'EG', 'IP', 'YK', 'QK', 'AS', 'GT', 'SN', 'TL', 'PV', 'CK', 'NN', 'GK', 'VC', 'EV', 'GN', 'FF']\n" ], [ "s56 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_56 = []\n\nfor i in range(len(s56)):\n if(0<=i<167):\n c1 = s56[i]\n c2 = s56[i+56]\n lambda_56.append(c1+c2)\n else:\n break\nprint(lambda_56)", "['RS', 'VT', 'QF', 'PK', 'TC', 'EY', 'SG', 'IV', 'VS', 'RP', 'FT', 'PK', 'NL', 'IN', 'TD', 'NL', 'LC', 'CF', 'PT', 'FN', 'GV', 'EY', 'VA', 'FD', 'NS', 'AF', 'TV', 'RI', 'FR', 'AG', 'SD', 'VE', 'YV', 'AR', 'WQ', 'NI', 'RA', 'KP', 'RG', 'IQ', 'ST', 'NG', 'CK', 'VI', 'AA', 'DD', 'YY', 'SN', 'VY', 'LK', 'YL', 'NP', 'SD', 'AD', 'SF', 'FT', 'SG', 'TC', 'FV', 'KI', 'CA', 'YW', 'GN', 'VS', 'SN', 'PN', 'TL', 'KD', 'LS', 'NK', 'DV', 'LG', 'CG', 'FN', 'TY', 'NN', 'VY', 'YL', 'AY', 'DR', 'SL', 'FF', 'VR', 'IK', 'RS', 'GN', 'DL', 'EK', 'VP', 'RF', 'QE', 'IR', 'AD', 'PI', 'GS', 'QT', 'TE', 'GI', 'KY', 'IQ', 'AA', 'DG', 'YS', 'NT', 'YP', 'KC', 'LN', 
'PG', 'DV', 'DE', 'FG', 'TF', 'GN', 'CC', 'VY', 'IF', 'AP', 'WL', 'NQ', 'SS', 'NY', 'NG', 'LF', 'DQ', 'SP', 'KT', 'VN', 'GG', 'GV', 'NG', 'YY', 'NQ', 'YP', 'LY', 'YR', 'RV', 'LV', 'FV', 'RL', 'KS', 'SF', 'NE', 'LL', 'KL', 'PH', 'FA', 'EP', 'RA', 'DT', 'IV', 'SC', 'TG', 'EP', 'IK', 'YK', 'QS', 'AT', 'GN', 'SL', 'TV', 'PK', 'CN', 'NK', 'GC', 'VV', 'EN', 'GF']\n" ], [ "s57 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_57 = []\n\nfor i in range(len(s57)):\n if(0<=i<166):\n c1 = s57[i]\n c2 = s57[i+57]\n lambda_57.append(c1+c2)\n else:\n break\nprint(lambda_57)", "['RT', 'VF', 'QK', 'PC', 'TY', 'EG', 'SV', 'IS', 'VP', 'RT', 'FK', 'PL', 'NN', 'ID', 'TL', 'NC', 'LF', 'CT', 'PN', 'FV', 'GY', 'EA', 'VD', 'FS', 'NF', 'AV', 'TI', 'RR', 'FG', 'AD', 'SE', 'VV', 'YR', 'AQ', 'WI', 'NA', 'RP', 'KG', 'RQ', 'IT', 'SG', 'NK', 'CI', 'VA', 'AD', 'DY', 'YN', 'SY', 'VK', 'LL', 'YP', 'ND', 'SD', 'AF', 'ST', 'FG', 'SC', 'TV', 'FI', 'KA', 'CW', 'YN', 'GS', 'VN', 'SN', 'PL', 'TD', 'KS', 'LK', 'NV', 'DG', 'LG', 'CN', 'FY', 'TN', 'NY', 'VL', 'YY', 'AR', 'DL', 'SF', 'FR', 'VK', 'IS', 'RN', 'GL', 'DK', 'EP', 'VF', 'RE', 'QR', 'ID', 'AI', 'PS', 'GT', 'QE', 'TI', 'GY', 'KQ', 'IA', 'AG', 'DS', 'YT', 'NP', 'YC', 'KN', 'LG', 'PV', 'DE', 'DG', 'FF', 'TN', 'GC', 'CY', 'VF', 'IP', 'AL', 'WQ', 'NS', 'SY', 'NG', 'NF', 'LQ', 'DP', 'ST', 'KN', 'VG', 'GV', 'GG', 'NY', 'YQ', 'NP', 'YY', 'LR', 'YV', 'RV', 'LV', 'FL', 'RS', 'KF', 'SE', 'NL', 'LL', 'KH', 'PA', 'FP', 'EA', 'RT', 'DV', 'IC', 'SG', 'TP', 'EK', 'IK', 'YS', 'QT', 'AN', 'GL', 'SV', 'TK', 'PN', 'CK', 'NC', 'GV', 'VN', 'EF']\n" ], [ "s58 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_58 = []\n\nfor i in range(len(s58)):\n if(0<=i<165):\n c1 = s58[i]\n c2 = s58[i+58]\n lambda_58.append(c1+c2)\n else:\n break\nprint(lambda_58)", "['RF', 'VK', 'QC', 'PY', 'TG', 'EV', 'SS', 'IP', 'VT', 'RK', 'FL', 'PN', 'ND', 'IL', 'TC', 'NF', 'LT', 'CN', 'PV', 'FY', 'GA', 'ED', 'VS', 'FF', 'NV', 'AI', 'TR', 'RG', 'FD', 'AE', 'SV', 'VR', 'YQ', 'AI', 'WA', 'NP', 'RG', 'KQ', 'RT', 'IG', 'SK', 'NI', 'CA', 'VD', 'AY', 'DN', 'YY', 'SK', 'VL', 'LP', 'YD', 'ND', 'SF', 'AT', 'SG', 'FC', 'SV', 'TI', 'FA', 'KW', 'CN', 'YS', 'GN', 'VN', 'SL', 'PD', 'TS', 'KK', 'LV', 'NG', 'DG', 'LN', 'CY', 'FN', 'TY', 'NL', 'VY', 'YR', 'AL', 'DF', 'SR', 'FK', 'VS', 'IN', 'RL', 'GK', 'DP', 'EF', 'VE', 'RR', 'QD', 'II', 'AS', 'PT', 'GE', 'QI', 'TY', 'GQ', 'KA', 'IG', 'AS', 'DT', 'YP', 'NC', 'YN', 'KG', 'LV', 'PE', 'DG', 'DF', 'FN', 'TC', 'GY', 'CF', 'VP', 'IL', 'AQ', 'WS', 'NY', 'SG', 'NF', 'NQ', 'LP', 'DT', 'SN', 'KG', 'VV', 'GG', 'GY', 'NQ', 'YP', 'NY', 'YR', 'LV', 'YV', 'RV', 'LL', 'FS', 'RF', 'KE', 'SL', 'NL', 'LH', 'KA', 'PP', 'FA', 'ET', 'RV', 'DC', 'IG', 'SP', 'TK', 'EK', 'IS', 'YT', 'QN', 'AL', 'GV', 'SK', 'TN', 'PK', 'CC', 'NV', 'GN', 'VF']\n" ], [ "s59 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_59 = []\n\nfor i in range(len(s59)):\n if(0<=i<164):\n c1 = s59[i]\n c2 = s59[i+59]\n lambda_59.append(c1+c2)\n else:\n break\nprint(lambda_59)", "['RK', 
'VC', 'QY', 'PG', 'TV', 'ES', 'SP', 'IT', 'VK', 'RL', 'FN', 'PD', 'NL', 'IC', 'TF', 'NT', 'LN', 'CV', 'PY', 'FA', 'GD', 'ES', 'VF', 'FV', 'NI', 'AR', 'TG', 'RD', 'FE', 'AV', 'SR', 'VQ', 'YI', 'AA', 'WP', 'NG', 'RQ', 'KT', 'RG', 'IK', 'SI', 'NA', 'CD', 'VY', 'AN', 'DY', 'YK', 'SL', 'VP', 'LD', 'YD', 'NF', 'ST', 'AG', 'SC', 'FV', 'SI', 'TA', 'FW', 'KN', 'CS', 'YN', 'GN', 'VL', 'SD', 'PS', 'TK', 'KV', 'LG', 'NG', 'DN', 'LY', 'CN', 'FY', 'TL', 'NY', 'VR', 'YL', 'AF', 'DR', 'SK', 'FS', 'VN', 'IL', 'RK', 'GP', 'DF', 'EE', 'VR', 'RD', 'QI', 'IS', 'AT', 'PE', 'GI', 'QY', 'TQ', 'GA', 'KG', 'IS', 'AT', 'DP', 'YC', 'NN', 'YG', 'KV', 'LE', 'PG', 'DF', 'DN', 'FC', 'TY', 'GF', 'CP', 'VL', 'IQ', 'AS', 'WY', 'NG', 'SF', 'NQ', 'NP', 'LT', 'DN', 'SG', 'KV', 'VG', 'GY', 'GQ', 'NP', 'YY', 'NR', 'YV', 'LV', 'YV', 'RL', 'LS', 'FF', 'RE', 'KL', 'SL', 'NH', 'LA', 'KP', 'PA', 'FT', 'EV', 'RC', 'DG', 'IP', 'SK', 'TK', 'ES', 'IT', 'YN', 'QL', 'AV', 'GK', 'SN', 'TK', 'PC', 'CV', 'NN', 'GF']\n" ], [ "s60 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_60 = []\n\nfor i in range(len(s60)):\n if(0<=i<163):\n c1 = s60[i]\n c2 = s60[i+60]\n lambda_60.append(c1+c2)\n else:\n break\nprint(lambda_60)", "['RC', 'VY', 'QG', 'PV', 'TS', 'EP', 'ST', 'IK', 'VL', 'RN', 'FD', 'PL', 'NC', 'IF', 'TT', 'NN', 'LV', 'CY', 'PA', 'FD', 'GS', 'EF', 'VV', 'FI', 'NR', 'AG', 'TD', 'RE', 'FV', 'AR', 'SQ', 'VI', 'YA', 'AP', 'WG', 'NQ', 'RT', 'KG', 'RK', 'II', 'SA', 'ND', 'CY', 'VN', 'AY', 'DK', 'YL', 'SP', 'VD', 'LD', 'YF', 'NT', 'SG', 'AC', 'SV', 'FI', 'SA', 'TW', 'FN', 'KS', 'CN', 'YN', 'GL', 'VD', 'SS', 'PK', 'TV', 'KG', 'LG', 'NN', 'DY', 'LN', 'CY', 'FL', 'TY', 'NR', 'VL', 'YF', 'AR', 'DK', 'SS', 'FN', 'VL', 'IK', 'RP', 'GF', 'DE', 'ER', 'VD', 'RI', 'QS', 'IT', 'AE', 'PI', 'GY', 'QQ', 'TA', 'GG', 'KS', 'IT', 'AP', 'DC', 'YN', 'NG', 'YV', 'KE', 'LG', 'PF', 'DN', 'DC', 'FY', 'TF', 'GP', 'CL', 'VQ', 'IS', 'AY', 'WG', 'NF', 'SQ', 'NP', 'NT', 'LN', 'DG', 'SV', 'KG', 'VY', 'GQ', 'GP', 'NY', 'YR', 'NV', 'YV', 'LV', 'YL', 'RS', 'LF', 'FE', 'RL', 'KL', 'SH', 'NA', 'LP', 'KA', 'PT', 'FV', 'EC', 'RG', 'DP', 'IK', 'SK', 'TS', 'ET', 'IN', 'YL', 'QV', 'AK', 'GN', 'SK', 'TC', 'PV', 'CN', 'NF']\n" ], [ "s61 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_61 = []\n\nfor i in range(len(s61)):\n if(0<=i<162):\n c1 = s61[i]\n c2 = s61[i+61]\n lambda_61.append(c1+c2)\n else:\n break\nprint(lambda_61)", "['RY', 'VG', 'QV', 'PS', 'TP', 'ET', 'SK', 'IL', 'VN', 'RD', 'FL', 'PC', 'NF', 'IT', 'TN', 'NV', 'LY', 'CA', 'PD', 'FS', 'GF', 'EV', 'VI', 'FR', 'NG', 'AD', 'TE', 'RV', 'FR', 'AQ', 'SI', 'VA', 'YP', 'AG', 'WQ', 'NT', 'RG', 'KK', 'RI', 'IA', 'SD', 'NY', 'CN', 'VY', 'AK', 'DL', 'YP', 'SD', 'VD', 'LF', 'YT', 'NG', 'SC', 'AV', 'SI', 'FA', 'SW', 'TN', 'FS', 'KN', 'CN', 'YL', 'GD', 'VS', 'SK', 'PV', 'TG', 'KG', 'LN', 'NY', 'DN', 'LY', 'CL', 'FY', 'TR', 'NL', 'VF', 'YR', 'AK', 'DS', 'SN', 'FL', 'VK', 'IP', 'RF', 'GE', 'DR', 'ED', 'VI', 'RS', 'QT', 'IE', 'AI', 'PY', 'GQ', 'QA', 'TG', 'GS', 'KT', 'IP', 'AC', 'DN', 'YG', 'NV', 'YE', 'KG', 'LF', 'PN', 'DC', 'DY', 'FF', 'TP', 'GL', 'CQ', 'VS', 'IY', 'AG', 'WF', 'NQ', 'SP', 'NT', 'NN', 'LG', 'DV', 'SG', 'KY', 'VQ', 'GP', 'GY', 'NR', 'YV', 'NV', 
'YV', 'LL', 'YS', 'RF', 'LE', 'FL', 'RL', 'KH', 'SA', 'NP', 'LA', 'KT', 'PV', 'FC', 'EG', 'RP', 'DK', 'IK', 'SS', 'TT', 'EN', 'IL', 'YV', 'QK', 'AN', 'GK', 'SC', 'TV', 'PN', 'CF']\n" ], [ "s62 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_62 = []\n\nfor i in range(len(s62)):\n if(0<=i<161):\n c1 = s62[i]\n c2 = s62[i+62]\n lambda_62.append(c1+c2)\n else:\n break\nprint(lambda_62)", "['RG', 'VV', 'QS', 'PP', 'TT', 'EK', 'SL', 'IN', 'VD', 'RL', 'FC', 'PF', 'NT', 'IN', 'TV', 'NY', 'LA', 'CD', 'PS', 'FF', 'GV', 'EI', 'VR', 'FG', 'ND', 'AE', 'TV', 'RR', 'FQ', 'AI', 'SA', 'VP', 'YG', 'AQ', 'WT', 'NG', 'RK', 'KI', 'RA', 'ID', 'SY', 'NN', 'CY', 'VK', 'AL', 'DP', 'YD', 'SD', 'VF', 'LT', 'YG', 'NC', 'SV', 'AI', 'SA', 'FW', 'SN', 'TS', 'FN', 'KN', 'CL', 'YD', 'GS', 'VK', 'SV', 'PG', 'TG', 'KN', 'LY', 'NN', 'DY', 'LL', 'CY', 'FR', 'TL', 'NF', 'VR', 'YK', 'AS', 'DN', 'SL', 'FK', 'VP', 'IF', 'RE', 'GR', 'DD', 'EI', 'VS', 'RT', 'QE', 'II', 'AY', 'PQ', 'GA', 'QG', 'TS', 'GT', 'KP', 'IC', 'AN', 'DG', 'YV', 'NE', 'YG', 'KF', 'LN', 'PC', 'DY', 'DF', 'FP', 'TL', 'GQ', 'CS', 'VY', 'IG', 'AF', 'WQ', 'NP', 'ST', 'NN', 'NG', 'LV', 'DG', 'SY', 'KQ', 'VP', 'GY', 'GR', 'NV', 'YV', 'NV', 'YL', 'LS', 'YF', 'RE', 'LL', 'FL', 'RH', 'KA', 'SP', 'NA', 'LT', 'KV', 'PC', 'FG', 'EP', 'RK', 'DK', 'IS', 'ST', 'TN', 'EL', 'IV', 'YK', 'QN', 'AK', 'GC', 'SV', 'TN', 'PF']\n" ], [ "s63 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_63 = []\n\nfor i in range(len(s63)):\n if(0<=i<160):\n c1 = s63[i]\n c2 = s63[i+63]\n lambda_63.append(c1+c2)\n else:\n break\nprint(lambda_63)", "['RV', 'VS', 'QP', 'PT', 'TK', 'EL', 'SN', 'ID', 'VL', 'RC', 'FF', 'PT', 'NN', 'IV', 'TY', 'NA', 'LD', 'CS', 'PF', 'FV', 'GI', 'ER', 'VG', 'FD', 'NE', 'AV', 'TR', 'RQ', 'FI', 'AA', 'SP', 'VG', 'YQ', 'AT', 'WG', 'NK', 'RI', 'KA', 'RD', 'IY', 'SN', 'NY', 'CK', 'VL', 'AP', 'DD', 'YD', 'SF', 'VT', 'LG', 'YC', 'NV', 'SI', 'AA', 'SW', 'FN', 'SS', 'TN', 'FN', 'KL', 'CD', 'YS', 'GK', 'VV', 'SG', 'PG', 'TN', 'KY', 'LN', 'NY', 'DL', 'LY', 'CR', 'FL', 'TF', 'NR', 'VK', 'YS', 'AN', 'DL', 'SK', 'FP', 'VF', 'IE', 'RR', 'GD', 'DI', 'ES', 'VT', 'RE', 'QI', 'IY', 'AQ', 'PA', 'GG', 'QS', 'TT', 'GP', 'KC', 'IN', 'AG', 'DV', 'YE', 'NG', 'YF', 'KN', 'LC', 'PY', 'DF', 'DP', 'FL', 'TQ', 'GS', 'CY', 'VG', 'IF', 'AQ', 'WP', 'NT', 'SN', 'NG', 'NV', 'LG', 'DY', 'SQ', 'KP', 'VY', 'GR', 'GV', 'NV', 'YV', 'NL', 'YS', 'LF', 'YE', 'RL', 'LL', 'FH', 'RA', 'KP', 'SA', 'NT', 'LV', 'KC', 'PG', 'FP', 'EK', 'RK', 'DS', 'IT', 'SN', 'TL', 'EV', 'IK', 'YN', 'QK', 'AC', 'GV', 'SN', 'TF']\n" ], [ "s64 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_64 = []\n\nfor i in range(len(s64)):\n if(0<=i<159):\n c1 = s64[i]\n c2 = s64[i+64]\n lambda_64.append(c1+c2)\n else:\n break\nprint(lambda_64)", "['RS', 'VP', 'QT', 'PK', 'TL', 'EN', 'SD', 'IL', 'VC', 'RF', 'FT', 'PN', 'NV', 'IY', 'TA', 'ND', 'LS', 'CF', 'PV', 'FI', 'GR', 'EG', 'VD', 'FE', 'NV', 'AR', 'TQ', 'RI', 'FA', 'AP', 'SG', 'VQ', 'YT', 'AG', 'WK', 'NI', 'RA', 'KD', 'RY', 'IN', 'SY', 
'NK', 'CL', 'VP', 'AD', 'DD', 'YF', 'ST', 'VG', 'LC', 'YV', 'NI', 'SA', 'AW', 'SN', 'FS', 'SN', 'TN', 'FL', 'KD', 'CS', 'YK', 'GV', 'VG', 'SG', 'PN', 'TY', 'KN', 'LY', 'NL', 'DY', 'LR', 'CL', 'FF', 'TR', 'NK', 'VS', 'YN', 'AL', 'DK', 'SP', 'FF', 'VE', 'IR', 'RD', 'GI', 'DS', 'ET', 'VE', 'RI', 'QY', 'IQ', 'AA', 'PG', 'GS', 'QT', 'TP', 'GC', 'KN', 'IG', 'AV', 'DE', 'YG', 'NF', 'YN', 'KC', 'LY', 'PF', 'DP', 'DL', 'FQ', 'TS', 'GY', 'CG', 'VF', 'IQ', 'AP', 'WT', 'NN', 'SG', 'NV', 'NG', 'LY', 'DQ', 'SP', 'KY', 'VR', 'GV', 'GV', 'NV', 'YL', 'NS', 'YF', 'LE', 'YL', 'RL', 'LH', 'FA', 'RP', 'KA', 'ST', 'NV', 'LC', 'KG', 'PP', 'FK', 'EK', 'RS', 'DT', 'IN', 'SL', 'TV', 'EK', 'IN', 'YK', 'QC', 'AV', 'GN', 'SF']\n" ], [ "s65 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_65 = []\n\nfor i in range(len(s65)):\n if(0<=i<158):\n c1 = s65[i]\n c2 = s65[i+65]\n lambda_65.append(c1+c2)\n else:\n break\nprint(lambda_65)", "['RP', 'VT', 'QK', 'PL', 'TN', 'ED', 'SL', 'IC', 'VF', 'RT', 'FN', 'PV', 'NY', 'IA', 'TD', 'NS', 'LF', 'CV', 'PI', 'FR', 'GG', 'ED', 'VE', 'FV', 'NR', 'AQ', 'TI', 'RA', 'FP', 'AG', 'SQ', 'VT', 'YG', 'AK', 'WI', 'NA', 'RD', 'KY', 'RN', 'IY', 'SK', 'NL', 'CP', 'VD', 'AD', 'DF', 'YT', 'SG', 'VC', 'LV', 'YI', 'NA', 'SW', 'AN', 'SS', 'FN', 'SN', 'TL', 'FD', 'KS', 'CK', 'YV', 'GG', 'VG', 'SN', 'PY', 'TN', 'KY', 'LL', 'NY', 'DR', 'LL', 'CF', 'FR', 'TK', 'NS', 'VN', 'YL', 'AK', 'DP', 'SF', 'FE', 'VR', 'ID', 'RI', 'GS', 'DT', 'EE', 'VI', 'RY', 'QQ', 'IA', 'AG', 'PS', 'GT', 'QP', 'TC', 'GN', 'KG', 'IV', 'AE', 'DG', 'YF', 'NN', 'YC', 'KY', 'LF', 'PP', 'DL', 'DQ', 'FS', 'TY', 'GG', 'CF', 'VQ', 'IP', 'AT', 'WN', 'NG', 'SV', 'NG', 'NY', 'LQ', 'DP', 'SY', 'KR', 'VV', 'GV', 'GV', 'NL', 'YS', 'NF', 'YE', 'LL', 'YL', 'RH', 'LA', 'FP', 'RA', 'KT', 'SV', 'NC', 'LG', 'KP', 'PK', 'FK', 'ES', 'RT', 'DN', 'IL', 'SV', 'TK', 'EN', 'IK', 'YC', 'QV', 'AN', 'GF']\n" ], [ "s66 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_66 = []\n\nfor i in range(len(s66)):\n if(0<=i<157):\n c1 = s66[i]\n c2 = s66[i+66]\n lambda_66.append(c1+c2)\n else:\n break\nprint(lambda_66)", "['RT', 'VK', 'QL', 'PN', 'TD', 'EL', 'SC', 'IF', 'VT', 'RN', 'FV', 'PY', 'NA', 'ID', 'TS', 'NF', 'LV', 'CI', 'PR', 'FG', 'GD', 'EE', 'VV', 'FR', 'NQ', 'AI', 'TA', 'RP', 'FG', 'AQ', 'ST', 'VG', 'YK', 'AI', 'WA', 'ND', 'RY', 'KN', 'RY', 'IK', 'SL', 'NP', 'CD', 'VD', 'AF', 'DT', 'YG', 'SC', 'VV', 'LI', 'YA', 'NW', 'SN', 'AS', 'SN', 'FN', 'SL', 'TD', 'FS', 'KK', 'CV', 'YG', 'GG', 'VN', 'SY', 'PN', 'TY', 'KL', 'LY', 'NR', 'DL', 'LF', 'CR', 'FK', 'TS', 'NN', 'VL', 'YK', 'AP', 'DF', 'SE', 'FR', 'VD', 'II', 'RS', 'GT', 'DE', 'EI', 'VY', 'RQ', 'QA', 'IG', 'AS', 'PT', 'GP', 'QC', 'TN', 'GG', 'KV', 'IE', 'AG', 'DF', 'YN', 'NC', 'YY', 'KF', 'LP', 'PL', 'DQ', 'DS', 'FY', 'TG', 'GF', 'CQ', 'VP', 'IT', 'AN', 'WG', 'NV', 'SG', 'NY', 'NQ', 'LP', 'DY', 'SR', 'KV', 'VV', 'GV', 'GL', 'NS', 'YF', 'NE', 'YL', 'LL', 'YH', 'RA', 'LP', 'FA', 'RT', 'KV', 'SC', 'NG', 'LP', 'KK', 'PK', 'FS', 'ET', 'RN', 'DL', 'IV', 'SK', 'TN', 'EK', 'IC', 'YV', 'QN', 'AF']\n" ], [ "s67 = 
\"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_67 = []\n\nfor i in range(len(s67)):\n if(0<=i<156):\n c1 = s67[i]\n c2 = s67[i+67]\n lambda_67.append(c1+c2)\n else:\n break\nprint(lambda_67)", "['RK', 'VL', 'QN', 'PD', 'TL', 'EC', 'SF', 'IT', 'VN', 'RV', 'FY', 'PA', 'ND', 'IS', 'TF', 'NV', 'LI', 'CR', 'PG', 'FD', 'GE', 'EV', 'VR', 'FQ', 'NI', 'AA', 'TP', 'RG', 'FQ', 'AT', 'SG', 'VK', 'YI', 'AA', 'WD', 'NY', 'RN', 'KY', 'RK', 'IL', 'SP', 'ND', 'CD', 'VF', 'AT', 'DG', 'YC', 'SV', 'VI', 'LA', 'YW', 'NN', 'SS', 'AN', 'SN', 'FL', 'SD', 'TS', 'FK', 'KV', 'CG', 'YG', 'GN', 'VY', 'SN', 'PY', 'TL', 'KY', 'LR', 'NL', 'DF', 'LR', 'CK', 'FS', 'TN', 'NL', 'VK', 'YP', 'AF', 'DE', 'SR', 'FD', 'VI', 'IS', 'RT', 'GE', 'DI', 'EY', 'VQ', 'RA', 'QG', 'IS', 'AT', 'PP', 'GC', 'QN', 'TG', 'GV', 'KE', 'IG', 'AF', 'DN', 'YC', 'NY', 'YF', 'KP', 'LL', 'PQ', 'DS', 'DY', 'FG', 'TF', 'GQ', 'CP', 'VT', 'IN', 'AG', 'WV', 'NG', 'SY', 'NQ', 'NP', 'LY', 'DR', 'SV', 'KV', 'VV', 'GL', 'GS', 'NF', 'YE', 'NL', 'YL', 'LH', 'YA', 'RP', 'LA', 'FT', 'RV', 'KC', 'SG', 'NP', 'LK', 'KK', 'PS', 'FT', 'EN', 'RL', 'DV', 'IK', 'SN', 'TK', 'EC', 'IV', 'YN', 'QF']\n" ], [ "s68 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_68 = []\n\nfor i in range(len(s68)):\n if(0<=i<155):\n c1 = s68[i]\n c2 = s68[i+68]\n lambda_68.append(c1+c2)\n else:\n break\nprint(lambda_68)", "['RL', 'VN', 'QD', 'PL', 'TC', 'EF', 'ST', 'IN', 'VV', 'RY', 'FA', 'PD', 'NS', 'IF', 'TV', 'NI', 'LR', 'CG', 'PD', 'FE', 'GV', 'ER', 'VQ', 'FI', 'NA', 'AP', 'TG', 'RQ', 'FT', 'AG', 'SK', 'VI', 'YA', 'AD', 'WY', 'NN', 'RY', 'KK', 'RL', 'IP', 'SD', 'ND', 'CF', 'VT', 'AG', 'DC', 'YV', 'SI', 'VA', 'LW', 'YN', 'NS', 'SN', 'AN', 'SL', 'FD', 'SS', 'TK', 'FV', 'KG', 'CG', 'YN', 'GY', 'VN', 'SY', 'PL', 'TY', 'KR', 'LL', 'NF', 'DR', 'LK', 'CS', 'FN', 'TL', 'NK', 'VP', 'YF', 'AE', 'DR', 'SD', 'FI', 'VS', 'IT', 'RE', 'GI', 'DY', 'EQ', 'VA', 'RG', 'QS', 'IT', 'AP', 'PC', 'GN', 'QG', 'TV', 'GE', 'KG', 'IF', 'AN', 'DC', 'YY', 'NF', 'YP', 'KL', 'LQ', 'PS', 'DY', 'DG', 'FF', 'TQ', 'GP', 'CT', 'VN', 'IG', 'AV', 'WG', 'NY', 'SQ', 'NP', 'NY', 'LR', 'DV', 'SV', 'KV', 'VL', 'GS', 'GF', 'NE', 'YL', 'NL', 'YH', 'LA', 'YP', 'RA', 'LT', 'FV', 'RC', 'KG', 'SP', 'NK', 'LK', 'KS', 'PT', 'FN', 'EL', 'RV', 'DK', 'IN', 'SK', 'TC', 'EV', 'IN', 'YF']\n" ], [ "s69 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_69 = []\n\nfor i in range(len(s69)):\n if(0<=i<154):\n c1 = s69[i]\n c2 = s69[i+69]\n lambda_69.append(c1+c2)\n else:\n break\nprint(lambda_69)", "['RN', 'VD', 'QL', 'PC', 'TF', 'ET', 'SN', 'IV', 'VY', 'RA', 'FD', 'PS', 'NF', 'IV', 'TI', 'NR', 'LG', 'CD', 'PE', 'FV', 'GR', 'EQ', 'VI', 'FA', 'NP', 'AG', 'TQ', 'RT', 'FG', 'AK', 'SI', 'VA', 'YD', 'AY', 'WN', 'NY', 'RK', 'KL', 'RP', 'ID', 'SD', 'NF', 'CT', 'VG', 'AC', 'DV', 'YI', 'SA', 'VW', 'LN', 'YS', 'NN', 'SN', 'AL', 'SD', 'FS', 'SK', 'TV', 'FG', 'KG', 'CN', 'YY', 'GN', 'VY', 'SL', 'PY', 'TR', 'KL', 'LF', 'NR', 'DK', 'LS', 'CN', 'FL', 'TK', 'NP', 'VF', 'YE', 'AR', 'DD', 'SI', 'FS', 'VT', 
'IE', 'RI', 'GY', 'DQ', 'EA', 'VG', 'RS', 'QT', 'IP', 'AC', 'PN', 'GG', 'QV', 'TE', 'GG', 'KF', 'IN', 'AC', 'DY', 'YF', 'NP', 'YL', 'KQ', 'LS', 'PY', 'DG', 'DF', 'FQ', 'TP', 'GT', 'CN', 'VG', 'IV', 'AG', 'WY', 'NQ', 'SP', 'NY', 'NR', 'LV', 'DV', 'SV', 'KL', 'VS', 'GF', 'GE', 'NL', 'YL', 'NH', 'YA', 'LP', 'YA', 'RT', 'LV', 'FC', 'RG', 'KP', 'SK', 'NK', 'LS', 'KT', 'PN', 'FL', 'EV', 'RK', 'DN', 'IK', 'SC', 'TV', 'EN', 'IF']\n" ], [ "s70 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_70 = []\n\nfor i in range(len(s70)):\n if(0<=i<153):\n c1 = s70[i]\n c2 = s70[i+70]\n lambda_70.append(c1+c2)\n else:\n break\nprint(lambda_70)", "['RD', 'VL', 'QC', 'PF', 'TT', 'EN', 'SV', 'IY', 'VA', 'RD', 'FS', 'PF', 'NV', 'II', 'TR', 'NG', 'LD', 'CE', 'PV', 'FR', 'GQ', 'EI', 'VA', 'FP', 'NG', 'AQ', 'TT', 'RG', 'FK', 'AI', 'SA', 'VD', 'YY', 'AN', 'WY', 'NK', 'RL', 'KP', 'RD', 'ID', 'SF', 'NT', 'CG', 'VC', 'AV', 'DI', 'YA', 'SW', 'VN', 'LS', 'YN', 'NN', 'SL', 'AD', 'SS', 'FK', 'SV', 'TG', 'FG', 'KN', 'CY', 'YN', 'GY', 'VL', 'SY', 'PR', 'TL', 'KF', 'LR', 'NK', 'DS', 'LN', 'CL', 'FK', 'TP', 'NF', 'VE', 'YR', 'AD', 'DI', 'SS', 'FT', 'VE', 'II', 'RY', 'GQ', 'DA', 'EG', 'VS', 'RT', 'QP', 'IC', 'AN', 'PG', 'GV', 'QE', 'TG', 'GF', 'KN', 'IC', 'AY', 'DF', 'YP', 'NL', 'YQ', 'KS', 'LY', 'PG', 'DF', 'DQ', 'FP', 'TT', 'GN', 'CG', 'VV', 'IG', 'AY', 'WQ', 'NP', 'SY', 'NR', 'NV', 'LV', 'DV', 'SL', 'KS', 'VF', 'GE', 'GL', 'NL', 'YH', 'NA', 'YP', 'LA', 'YT', 'RV', 'LC', 'FG', 'RP', 'KK', 'SK', 'NS', 'LT', 'KN', 'PL', 'FV', 'EK', 'RN', 'DK', 'IC', 'SV', 'TN', 'EF']\n" ], [ "s71 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_71 = []\n\nfor i in range(len(s71)):\n if(0<=i<152):\n c1 = s71[i]\n c2 = s71[i+71]\n lambda_71.append(c1+c2)\n else:\n break\nprint(lambda_71)", "['RL', 'VC', 'QF', 'PT', 'TN', 'EV', 'SY', 'IA', 'VD', 'RS', 'FF', 'PV', 'NI', 'IR', 'TG', 'ND', 'LE', 'CV', 'PR', 'FQ', 'GI', 'EA', 'VP', 'FG', 'NQ', 'AT', 'TG', 'RK', 'FI', 'AA', 'SD', 'VY', 'YN', 'AY', 'WK', 'NL', 'RP', 'KD', 'RD', 'IF', 'ST', 'NG', 'CC', 'VV', 'AI', 'DA', 'YW', 'SN', 'VS', 'LN', 'YN', 'NL', 'SD', 'AS', 'SK', 'FV', 'SG', 'TG', 'FN', 'KY', 'CN', 'YY', 'GL', 'VY', 'SR', 'PL', 'TF', 'KR', 'LK', 'NS', 'DN', 'LL', 'CK', 'FP', 'TF', 'NE', 'VR', 'YD', 'AI', 'DS', 'ST', 'FE', 'VI', 'IY', 'RQ', 'GA', 'DG', 'ES', 'VT', 'RP', 'QC', 'IN', 'AG', 'PV', 'GE', 'QG', 'TF', 'GN', 'KC', 'IY', 'AF', 'DP', 'YL', 'NQ', 'YS', 'KY', 'LG', 'PF', 'DQ', 'DP', 'FT', 'TN', 'GG', 'CV', 'VG', 'IY', 'AQ', 'WP', 'NY', 'SR', 'NV', 'NV', 'LV', 'DL', 'SS', 'KF', 'VE', 'GL', 'GL', 'NH', 'YA', 'NP', 'YA', 'LT', 'YV', 'RC', 'LG', 'FP', 'RK', 'KK', 'SS', 'NT', 'LN', 'KL', 'PV', 'FK', 'EN', 'RK', 'DC', 'IV', 'SN', 'TF']\n" ], [ "s72 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_72 = []\n\nfor i in range(len(s72)):\n if(0<=i<151):\n c1 = s72[i]\n c2 = s72[i+72]\n lambda_72.append(c1+c2)\n else:\n break\nprint(lambda_72)", "['RC', 'VF', 'QT', 'PN', 'TV', 'EY', 'SA', 'ID', 'VS', 'RF', 'FV', 'PI', 'NR', 'IG', 'TD', 'NE', 
'LV', 'CR', 'PQ', 'FI', 'GA', 'EP', 'VG', 'FQ', 'NT', 'AG', 'TK', 'RI', 'FA', 'AD', 'SY', 'VN', 'YY', 'AK', 'WL', 'NP', 'RD', 'KD', 'RF', 'IT', 'SG', 'NC', 'CV', 'VI', 'AA', 'DW', 'YN', 'SS', 'VN', 'LN', 'YL', 'ND', 'SS', 'AK', 'SV', 'FG', 'SG', 'TN', 'FY', 'KN', 'CY', 'YL', 'GY', 'VR', 'SL', 'PF', 'TR', 'KK', 'LS', 'NN', 'DL', 'LK', 'CP', 'FF', 'TE', 'NR', 'VD', 'YI', 'AS', 'DT', 'SE', 'FI', 'VY', 'IQ', 'RA', 'GG', 'DS', 'ET', 'VP', 'RC', 'QN', 'IG', 'AV', 'PE', 'GG', 'QF', 'TN', 'GC', 'KY', 'IF', 'AP', 'DL', 'YQ', 'NS', 'YY', 'KG', 'LF', 'PQ', 'DP', 'DT', 'FN', 'TG', 'GV', 'CG', 'VY', 'IQ', 'AP', 'WY', 'NR', 'SV', 'NV', 'NV', 'LL', 'DS', 'SF', 'KE', 'VL', 'GL', 'GH', 'NA', 'YP', 'NA', 'YT', 'LV', 'YC', 'RG', 'LP', 'FK', 'RK', 'KS', 'ST', 'NN', 'LL', 'KV', 'PK', 'FN', 'EK', 'RC', 'DV', 'IN', 'SF']\n" ], [ "s73 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_73 = []\n\nfor i in range(len(s73)):\n if(0<=i<150):\n c1 = s73[i]\n c2 = s73[i+73]\n lambda_73.append(c1+c2)\n else:\n break\nprint(lambda_73)", "['RF', 'VT', 'QN', 'PV', 'TY', 'EA', 'SD', 'IS', 'VF', 'RV', 'FI', 'PR', 'NG', 'ID', 'TE', 'NV', 'LR', 'CQ', 'PI', 'FA', 'GP', 'EG', 'VQ', 'FT', 'NG', 'AK', 'TI', 'RA', 'FD', 'AY', 'SN', 'VY', 'YK', 'AL', 'WP', 'ND', 'RD', 'KF', 'RT', 'IG', 'SC', 'NV', 'CI', 'VA', 'AW', 'DN', 'YS', 'SN', 'VN', 'LL', 'YD', 'NS', 'SK', 'AV', 'SG', 'FG', 'SN', 'TY', 'FN', 'KY', 'CL', 'YY', 'GR', 'VL', 'SF', 'PR', 'TK', 'KS', 'LN', 'NL', 'DK', 'LP', 'CF', 'FE', 'TR', 'ND', 'VI', 'YS', 'AT', 'DE', 'SI', 'FY', 'VQ', 'IA', 'RG', 'GS', 'DT', 'EP', 'VC', 'RN', 'QG', 'IV', 'AE', 'PG', 'GF', 'QN', 'TC', 'GY', 'KF', 'IP', 'AL', 'DQ', 'YS', 'NY', 'YG', 'KF', 'LQ', 'PP', 'DT', 'DN', 'FG', 'TV', 'GG', 'CY', 'VQ', 'IP', 'AY', 'WR', 'NV', 'SV', 'NV', 'NL', 'LS', 'DF', 'SE', 'KL', 'VL', 'GH', 'GA', 'NP', 'YA', 'NT', 'YV', 'LC', 'YG', 'RP', 'LK', 'FK', 'RS', 'KT', 'SN', 'NL', 'LV', 'KK', 'PN', 'FK', 'EC', 'RV', 'DN', 'IF']\n" ], [ "s74 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_74 = []\n\nfor i in range(len(s74)):\n if(0<=i<149):\n c1 = s74[i]\n c2 = s74[i+74]\n lambda_74.append(c1+c2)\n else:\n break\nprint(lambda_74)", "['RT', 'VN', 'QV', 'PY', 'TA', 'ED', 'SS', 'IF', 'VV', 'RI', 'FR', 'PG', 'ND', 'IE', 'TV', 'NR', 'LQ', 'CI', 'PA', 'FP', 'GG', 'EQ', 'VT', 'FG', 'NK', 'AI', 'TA', 'RD', 'FY', 'AN', 'SY', 'VK', 'YL', 'AP', 'WD', 'ND', 'RF', 'KT', 'RG', 'IC', 'SV', 'NI', 'CA', 'VW', 'AN', 'DS', 'YN', 'SN', 'VL', 'LD', 'YS', 'NK', 'SV', 'AG', 'SG', 'FN', 'SY', 'TN', 'FY', 'KL', 'CY', 'YR', 'GL', 'VF', 'SR', 'PK', 'TS', 'KN', 'LL', 'NK', 'DP', 'LF', 'CE', 'FR', 'TD', 'NI', 'VS', 'YT', 'AE', 'DI', 'SY', 'FQ', 'VA', 'IG', 'RS', 'GT', 'DP', 'EC', 'VN', 'RG', 'QV', 'IE', 'AG', 'PF', 'GN', 'QC', 'TY', 'GF', 'KP', 'IL', 'AQ', 'DS', 'YY', 'NG', 'YF', 'KQ', 'LP', 'PT', 'DN', 'DG', 'FV', 'TG', 'GY', 'CQ', 'VP', 'IY', 'AR', 'WV', 'NV', 'SV', 'NL', 'NS', 'LF', 'DE', 'SL', 'KL', 'VH', 'GA', 'GP', 'NA', 'YT', 'NV', 'YC', 'LG', 'YP', 'RK', 'LK', 'FS', 'RT', 'KN', 'SL', 'NV', 'LK', 'KN', 'PK', 'FC', 'EV', 'RN', 'DF']\n" ], [ "s75 = 
\"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_75 = []\n\nfor i in range(len(s75)):\n if(0<=i<148):\n c1 = s75[i]\n c2 = s75[i+75]\n lambda_75.append(c1+c2)\n else:\n break\nprint(lambda_75)", "['RN', 'VV', 'QY', 'PA', 'TD', 'ES', 'SF', 'IV', 'VI', 'RR', 'FG', 'PD', 'NE', 'IV', 'TR', 'NQ', 'LI', 'CA', 'PP', 'FG', 'GQ', 'ET', 'VG', 'FK', 'NI', 'AA', 'TD', 'RY', 'FN', 'AY', 'SK', 'VL', 'YP', 'AD', 'WD', 'NF', 'RT', 'KG', 'RC', 'IV', 'SI', 'NA', 'CW', 'VN', 'AS', 'DN', 'YN', 'SL', 'VD', 'LS', 'YK', 'NV', 'SG', 'AG', 'SN', 'FY', 'SN', 'TY', 'FL', 'KY', 'CR', 'YL', 'GF', 'VR', 'SK', 'PS', 'TN', 'KL', 'LK', 'NP', 'DF', 'LE', 'CR', 'FD', 'TI', 'NS', 'VT', 'YE', 'AI', 'DY', 'SQ', 'FA', 'VG', 'IS', 'RT', 'GP', 'DC', 'EN', 'VG', 'RV', 'QE', 'IG', 'AF', 'PN', 'GC', 'QY', 'TF', 'GP', 'KL', 'IQ', 'AS', 'DY', 'YG', 'NF', 'YQ', 'KP', 'LT', 'PN', 'DG', 'DV', 'FG', 'TY', 'GQ', 'CP', 'VY', 'IR', 'AV', 'WV', 'NV', 'SL', 'NS', 'NF', 'LE', 'DL', 'SL', 'KH', 'VA', 'GP', 'GA', 'NT', 'YV', 'NC', 'YG', 'LP', 'YK', 'RK', 'LS', 'FT', 'RN', 'KL', 'SV', 'NK', 'LN', 'KK', 'PC', 'FV', 'EN', 'RF']\n" ], [ "s76 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_76 = []\n\nfor i in range(len(s76)):\n if(0<=i<147):\n c1 = s76[i]\n c2 = s76[i+76]\n lambda_76.append(c1+c2)\n else:\n break\nprint(lambda_76)", "['RV', 'VY', 'QA', 'PD', 'TS', 'EF', 'SV', 'II', 'VR', 'RG', 'FD', 'PE', 'NV', 'IR', 'TQ', 'NI', 'LA', 'CP', 'PG', 'FQ', 'GT', 'EG', 'VK', 'FI', 'NA', 'AD', 'TY', 'RN', 'FY', 'AK', 'SL', 'VP', 'YD', 'AD', 'WF', 'NT', 'RG', 'KC', 'RV', 'II', 'SA', 'NW', 'CN', 'VS', 'AN', 'DN', 'YL', 'SD', 'VS', 'LK', 'YV', 'NG', 'SG', 'AN', 'SY', 'FN', 'SY', 'TL', 'FY', 'KR', 'CL', 'YF', 'GR', 'VK', 'SS', 'PN', 'TL', 'KK', 'LP', 'NF', 'DE', 'LR', 'CD', 'FI', 'TS', 'NT', 'VE', 'YI', 'AY', 'DQ', 'SA', 'FG', 'VS', 'IT', 'RP', 'GC', 'DN', 'EG', 'VV', 'RE', 'QG', 'IF', 'AN', 'PC', 'GY', 'QF', 'TP', 'GL', 'KQ', 'IS', 'AY', 'DG', 'YF', 'NQ', 'YP', 'KT', 'LN', 'PG', 'DV', 'DG', 'FY', 'TQ', 'GP', 'CY', 'VR', 'IV', 'AV', 'WV', 'NL', 'SS', 'NF', 'NE', 'LL', 'DL', 'SH', 'KA', 'VP', 'GA', 'GT', 'NV', 'YC', 'NG', 'YP', 'LK', 'YK', 'RS', 'LT', 'FN', 'RL', 'KV', 'SK', 'NN', 'LK', 'KC', 'PV', 'FN', 'EF']\n" ], [ "s77 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_77 = []\n\nfor i in range(len(s77)):\n if(0<=i<146):\n c1 = s77[i]\n c2 = s77[i+77]\n lambda_77.append(c1+c2)\n else:\n break\nprint(lambda_77)", "['RY', 'VA', 'QD', 'PS', 'TF', 'EV', 'SI', 'IR', 'VG', 'RD', 'FE', 'PV', 'NR', 'IQ', 'TI', 'NA', 'LP', 'CG', 'PQ', 'FT', 'GG', 'EK', 'VI', 'FA', 'ND', 'AY', 'TN', 'RY', 'FK', 'AL', 'SP', 'VD', 'YD', 'AF', 'WT', 'NG', 'RC', 'KV', 'RI', 'IA', 'SW', 'NN', 'CS', 'VN', 'AN', 'DL', 'YD', 'SS', 'VK', 'LV', 'YG', 'NG', 'SN', 'AY', 'SN', 'FY', 'SL', 'TY', 'FR', 'KL', 'CF', 'YR', 'GK', 'VS', 'SN', 'PL', 'TK', 'KP', 'LF', 'NE', 'DR', 'LD', 'CI', 'FS', 'TT', 'NE', 'VI', 'YY', 'AQ', 'DA', 'SG', 'FS', 'VT', 'IP', 'RC', 'GN', 'DG', 'EV', 'VE', 'RG', 'QF', 'IN', 'AC', 'PY', 'GF', 'QP', 'TL', 'GQ', 'KS', 
'IY', 'AG', 'DF', 'YQ', 'NP', 'YT', 'KN', 'LG', 'PV', 'DG', 'DY', 'FQ', 'TP', 'GY', 'CR', 'VV', 'IV', 'AV', 'WL', 'NS', 'SF', 'NE', 'NL', 'LL', 'DH', 'SA', 'KP', 'VA', 'GT', 'GV', 'NC', 'YG', 'NP', 'YK', 'LK', 'YS', 'RT', 'LN', 'FL', 'RV', 'KK', 'SN', 'NK', 'LC', 'KV', 'PN', 'FF']\n" ], [ "s78 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_78 = []\n\nfor i in range(len(s78)):\n if(0<=i<145):\n c1 = s78[i]\n c2 = s78[i+78]\n lambda_78.append(c1+c2)\n else:\n break\nprint(lambda_78)", "['RA', 'VD', 'QS', 'PF', 'TV', 'EI', 'SR', 'IG', 'VD', 'RE', 'FV', 'PR', 'NQ', 'II', 'TA', 'NP', 'LG', 'CQ', 'PT', 'FG', 'GK', 'EI', 'VA', 'FD', 'NY', 'AN', 'TY', 'RK', 'FL', 'AP', 'SD', 'VD', 'YF', 'AT', 'WG', 'NC', 'RV', 'KI', 'RA', 'IW', 'SN', 'NS', 'CN', 'VN', 'AL', 'DD', 'YS', 'SK', 'VV', 'LG', 'YG', 'NN', 'SY', 'AN', 'SY', 'FL', 'SY', 'TR', 'FL', 'KF', 'CR', 'YK', 'GS', 'VN', 'SL', 'PK', 'TP', 'KF', 'LE', 'NR', 'DD', 'LI', 'CS', 'FT', 'TE', 'NI', 'VY', 'YQ', 'AA', 'DG', 'SS', 'FT', 'VP', 'IC', 'RN', 'GG', 'DV', 'EE', 'VG', 'RF', 'QN', 'IC', 'AY', 'PF', 'GP', 'QL', 'TQ', 'GS', 'KY', 'IG', 'AF', 'DQ', 'YP', 'NT', 'YN', 'KG', 'LV', 'PG', 'DY', 'DQ', 'FP', 'TY', 'GR', 'CV', 'VV', 'IV', 'AL', 'WS', 'NF', 'SE', 'NL', 'NL', 'LH', 'DA', 'SP', 'KA', 'VT', 'GV', 'GC', 'NG', 'YP', 'NK', 'YK', 'LS', 'YT', 'RN', 'LL', 'FV', 'RK', 'KN', 'SK', 'NC', 'LV', 'KN', 'PF']\n" ], [ "s79 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_79 = []\n\nfor i in range(len(s79)):\n if(0<=i<144):\n c1 = s79[i]\n c2 = s79[i+79]\n lambda_79.append(c1+c2)\n else:\n break\nprint(lambda_79)", "['RD', 'VS', 'QF', 'PV', 'TI', 'ER', 'SG', 'ID', 'VE', 'RV', 'FR', 'PQ', 'NI', 'IA', 'TP', 'NG', 'LQ', 'CT', 'PG', 'FK', 'GI', 'EA', 'VD', 'FY', 'NN', 'AY', 'TK', 'RL', 'FP', 'AD', 'SD', 'VF', 'YT', 'AG', 'WC', 'NV', 'RI', 'KA', 'RW', 'IN', 'SS', 'NN', 'CN', 'VL', 'AD', 'DS', 'YK', 'SV', 'VG', 'LG', 'YN', 'NY', 'SN', 'AY', 'SL', 'FY', 'SR', 'TL', 'FF', 'KR', 'CK', 'YS', 'GN', 'VL', 'SK', 'PP', 'TF', 'KE', 'LR', 'ND', 'DI', 'LS', 'CT', 'FE', 'TI', 'NY', 'VQ', 'YA', 'AG', 'DS', 'ST', 'FP', 'VC', 'IN', 'RG', 'GV', 'DE', 'EG', 'VF', 'RN', 'QC', 'IY', 'AF', 'PP', 'GL', 'QQ', 'TS', 'GY', 'KG', 'IF', 'AQ', 'DP', 'YT', 'NN', 'YG', 'KV', 'LG', 'PY', 'DQ', 'DP', 'FY', 'TR', 'GV', 'CV', 'VV', 'IL', 'AS', 'WF', 'NE', 'SL', 'NL', 'NH', 'LA', 'DP', 'SA', 'KT', 'VV', 'GC', 'GG', 'NP', 'YK', 'NK', 'YS', 'LT', 'YN', 'RL', 'LV', 'FK', 'RN', 'KK', 'SC', 'NV', 'LN', 'KF']\n" ], [ "s80 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_80 = []\n\nfor i in range(len(s80)):\n if(0<=i<143):\n c1 = s80[i]\n c2 = s80[i+80]\n lambda_80.append(c1+c2)\n else:\n break\nprint(lambda_80)", "['RS', 'VF', 'QV', 'PI', 'TR', 'EG', 'SD', 'IE', 'VV', 'RR', 'FQ', 'PI', 'NA', 'IP', 'TG', 'NQ', 'LT', 'CG', 'PK', 'FI', 'GA', 'ED', 'VY', 'FN', 'NY', 'AK', 'TL', 'RP', 'FD', 'AD', 'SF', 'VT', 'YG', 'AC', 'WV', 'NI', 'RA', 'KW', 'RN', 'IS', 'SN', 'NN', 'CL', 'VD', 'AS', 'DK', 'YV', 'SG', 'VG', 'LN', 'YY', 'NN', 'SY', 'AL', 'SY', 'FR', 
'SL', 'TF', 'FR', 'KK', 'CS', 'YN', 'GL', 'VK', 'SP', 'PF', 'TE', 'KR', 'LD', 'NI', 'DS', 'LT', 'CE', 'FI', 'TY', 'NQ', 'VA', 'YG', 'AS', 'DT', 'SP', 'FC', 'VN', 'IG', 'RV', 'GE', 'DG', 'EF', 'VN', 'RC', 'QY', 'IF', 'AP', 'PL', 'GQ', 'QS', 'TY', 'GG', 'KF', 'IQ', 'AP', 'DT', 'YN', 'NG', 'YV', 'KG', 'LY', 'PQ', 'DP', 'DY', 'FR', 'TV', 'GV', 'CV', 'VL', 'IS', 'AF', 'WE', 'NL', 'SL', 'NH', 'NA', 'LP', 'DA', 'ST', 'KV', 'VC', 'GG', 'GP', 'NK', 'YK', 'NS', 'YT', 'LN', 'YL', 'RV', 'LK', 'FN', 'RK', 'KC', 'SV', 'NN', 'LF']\n" ], [ "s81 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_81 = []\n\nfor i in range(len(s81)):\n if(0<=i<142):\n c1 = s81[i]\n c2 = s81[i+81]\n lambda_81.append(c1+c2)\n else:\n break\nprint(lambda_81)", "['RF', 'VV', 'QI', 'PR', 'TG', 'ED', 'SE', 'IV', 'VR', 'RQ', 'FI', 'PA', 'NP', 'IG', 'TQ', 'NT', 'LG', 'CK', 'PI', 'FA', 'GD', 'EY', 'VN', 'FY', 'NK', 'AL', 'TP', 'RD', 'FD', 'AF', 'ST', 'VG', 'YC', 'AV', 'WI', 'NA', 'RW', 'KN', 'RS', 'IN', 'SN', 'NL', 'CD', 'VS', 'AK', 'DV', 'YG', 'SG', 'VN', 'LY', 'YN', 'NY', 'SL', 'AY', 'SR', 'FL', 'SF', 'TR', 'FK', 'KS', 'CN', 'YL', 'GK', 'VP', 'SF', 'PE', 'TR', 'KD', 'LI', 'NS', 'DT', 'LE', 'CI', 'FY', 'TQ', 'NA', 'VG', 'YS', 'AT', 'DP', 'SC', 'FN', 'VG', 'IV', 'RE', 'GG', 'DF', 'EN', 'VC', 'RY', 'QF', 'IP', 'AL', 'PQ', 'GS', 'QY', 'TG', 'GF', 'KQ', 'IP', 'AT', 'DN', 'YG', 'NV', 'YG', 'KY', 'LQ', 'PP', 'DY', 'DR', 'FV', 'TV', 'GV', 'CL', 'VS', 'IF', 'AE', 'WL', 'NL', 'SH', 'NA', 'NP', 'LA', 'DT', 'SV', 'KC', 'VG', 'GP', 'GK', 'NK', 'YS', 'NT', 'YN', 'LL', 'YV', 'RK', 'LN', 'FK', 'RC', 'KV', 'SN', 'NF']\n" ], [ "s82 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_82 = []\n\nfor i in range(len(s82)):\n if(0<=i<141):\n c1 = s82[i]\n c2 = s82[i+82]\n lambda_82.append(c1+c2)\n else:\n break\nprint(lambda_82)", "['RV', 'VI', 'QR', 'PG', 'TD', 'EE', 'SV', 'IR', 'VQ', 'RI', 'FA', 'PP', 'NG', 'IQ', 'TT', 'NG', 'LK', 'CI', 'PA', 'FD', 'GY', 'EN', 'VY', 'FK', 'NL', 'AP', 'TD', 'RD', 'FF', 'AT', 'SG', 'VC', 'YV', 'AI', 'WA', 'NW', 'RN', 'KS', 'RN', 'IN', 'SL', 'ND', 'CS', 'VK', 'AV', 'DG', 'YG', 'SN', 'VY', 'LN', 'YY', 'NL', 'SY', 'AR', 'SL', 'FF', 'SR', 'TK', 'FS', 'KN', 'CL', 'YK', 'GP', 'VF', 'SE', 'PR', 'TD', 'KI', 'LS', 'NT', 'DE', 'LI', 'CY', 'FQ', 'TA', 'NG', 'VS', 'YT', 'AP', 'DC', 'SN', 'FG', 'VV', 'IE', 'RG', 'GF', 'DN', 'EC', 'VY', 'RF', 'QP', 'IL', 'AQ', 'PS', 'GY', 'QG', 'TF', 'GQ', 'KP', 'IT', 'AN', 'DG', 'YV', 'NG', 'YY', 'KQ', 'LP', 'PY', 'DR', 'DV', 'FV', 'TV', 'GL', 'CS', 'VF', 'IE', 'AL', 'WL', 'NH', 'SA', 'NP', 'NA', 'LT', 'DV', 'SC', 'KG', 'VP', 'GK', 'GK', 'NS', 'YT', 'NN', 'YL', 'LV', 'YK', 'RN', 'LK', 'FC', 'RV', 'KN', 'SF']\n" ], [ "s83 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_83 = []\n\nfor i in range(len(s83)):\n if(0<=i<140):\n c1 = s83[i]\n c2 = s83[i+83]\n lambda_83.append(c1+c2)\n else:\n break\nprint(lambda_83)", "['RI', 'VR', 'QG', 'PD', 'TE', 'EV', 'SR', 'IQ', 'VI', 'RA', 'FP', 'PG', 'NQ', 'IT', 'TG', 'NK', 'LI', 'CA', 'PD', 'FY', 'GN', 'EY', 
'VK', 'FL', 'NP', 'AD', 'TD', 'RF', 'FT', 'AG', 'SC', 'VV', 'YI', 'AA', 'WW', 'NN', 'RS', 'KN', 'RN', 'IL', 'SD', 'NS', 'CK', 'VV', 'AG', 'DG', 'YN', 'SY', 'VN', 'LY', 'YL', 'NY', 'SR', 'AL', 'SF', 'FR', 'SK', 'TS', 'FN', 'KL', 'CK', 'YP', 'GF', 'VE', 'SR', 'PD', 'TI', 'KS', 'LT', 'NE', 'DI', 'LY', 'CQ', 'FA', 'TG', 'NS', 'VT', 'YP', 'AC', 'DN', 'SG', 'FV', 'VE', 'IG', 'RF', 'GN', 'DC', 'EY', 'VF', 'RP', 'QL', 'IQ', 'AS', 'PY', 'GG', 'QF', 'TQ', 'GP', 'KT', 'IN', 'AG', 'DV', 'YG', 'NY', 'YQ', 'KP', 'LY', 'PR', 'DV', 'DV', 'FV', 'TL', 'GS', 'CF', 'VE', 'IL', 'AL', 'WH', 'NA', 'SP', 'NA', 'NT', 'LV', 'DC', 'SG', 'KP', 'VK', 'GK', 'GS', 'NT', 'YN', 'NL', 'YV', 'LK', 'YN', 'RK', 'LC', 'FV', 'RN', 'KF']\n" ], [ "s84 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_84 = []\n\nfor i in range(len(s84)):\n if(0<=i<139):\n c1 = s84[i]\n c2 = s84[i+84]\n lambda_84.append(c1+c2)\n else:\n break\nprint(lambda_84)", "['RR', 'VG', 'QD', 'PE', 'TV', 'ER', 'SQ', 'II', 'VA', 'RP', 'FG', 'PQ', 'NT', 'IG', 'TK', 'NI', 'LA', 'CD', 'PY', 'FN', 'GY', 'EK', 'VL', 'FP', 'ND', 'AD', 'TF', 'RT', 'FG', 'AC', 'SV', 'VI', 'YA', 'AW', 'WN', 'NS', 'RN', 'KN', 'RL', 'ID', 'SS', 'NK', 'CV', 'VG', 'AG', 'DN', 'YY', 'SN', 'VY', 'LL', 'YY', 'NR', 'SL', 'AF', 'SR', 'FK', 'SS', 'TN', 'FL', 'KK', 'CP', 'YF', 'GE', 'VR', 'SD', 'PI', 'TS', 'KT', 'LE', 'NI', 'DY', 'LQ', 'CA', 'FG', 'TS', 'NT', 'VP', 'YC', 'AN', 'DG', 'SV', 'FE', 'VG', 'IF', 'RN', 'GC', 'DY', 'EF', 'VP', 'RL', 'QQ', 'IS', 'AY', 'PG', 'GF', 'QQ', 'TP', 'GT', 'KN', 'IG', 'AV', 'DG', 'YY', 'NQ', 'YP', 'KY', 'LR', 'PV', 'DV', 'DV', 'FL', 'TS', 'GF', 'CE', 'VL', 'IL', 'AH', 'WA', 'NP', 'SA', 'NT', 'NV', 'LC', 'DG', 'SP', 'KK', 'VK', 'GS', 'GT', 'NN', 'YL', 'NV', 'YK', 'LN', 'YK', 'RC', 'LV', 'FN', 'RF']\n" ], [ "s85 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_85 = []\n\nfor i in range(len(s85)):\n if(0<=i<138):\n c1 = s85[i]\n c2 = s85[i+85]\n lambda_85.append(c1+c2)\n else:\n break\nprint(lambda_85)", "['RG', 'VD', 'QE', 'PV', 'TR', 'EQ', 'SI', 'IA', 'VP', 'RG', 'FQ', 'PT', 'NG', 'IK', 'TI', 'NA', 'LD', 'CY', 'PN', 'FY', 'GK', 'EL', 'VP', 'FD', 'ND', 'AF', 'TT', 'RG', 'FC', 'AV', 'SI', 'VA', 'YW', 'AN', 'WS', 'NN', 'RN', 'KL', 'RD', 'IS', 'SK', 'NV', 'CG', 'VG', 'AN', 'DY', 'YN', 'SY', 'VL', 'LY', 'YR', 'NL', 'SF', 'AR', 'SK', 'FS', 'SN', 'TL', 'FK', 'KP', 'CF', 'YE', 'GR', 'VD', 'SI', 'PS', 'TT', 'KE', 'LI', 'NY', 'DQ', 'LA', 'CG', 'FS', 'TT', 'NP', 'VC', 'YN', 'AG', 'DV', 'SE', 'FG', 'VF', 'IN', 'RC', 'GY', 'DF', 'EP', 'VL', 'RQ', 'QS', 'IY', 'AG', 'PF', 'GQ', 'QP', 'TT', 'GN', 'KG', 'IV', 'AG', 'DY', 'YQ', 'NP', 'YY', 'KR', 'LV', 'PV', 'DV', 'DL', 'FS', 'TF', 'GE', 'CL', 'VL', 'IH', 'AA', 'WP', 'NA', 'ST', 'NV', 'NC', 'LG', 'DP', 'SK', 'KK', 'VS', 'GT', 'GN', 'NL', 'YV', 'NK', 'YN', 'LK', 'YC', 'RV', 'LN', 'FF']\n" ], [ "s86 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_86 = []\n\nfor i in range(len(s86)):\n if(0<=i<137):\n c1 = s86[i]\n c2 = s86[i+86]\n lambda_86.append(c1+c2)\n else:\n 
break\nprint(lambda_86)", "['RD', 'VE', 'QV', 'PR', 'TQ', 'EI', 'SA', 'IP', 'VG', 'RQ', 'FT', 'PG', 'NK', 'II', 'TA', 'ND', 'LY', 'CN', 'PY', 'FK', 'GL', 'EP', 'VD', 'FD', 'NF', 'AT', 'TG', 'RC', 'FV', 'AI', 'SA', 'VW', 'YN', 'AS', 'WN', 'NN', 'RL', 'KD', 'RS', 'IK', 'SV', 'NG', 'CG', 'VN', 'AY', 'DN', 'YY', 'SL', 'VY', 'LR', 'YL', 'NF', 'SR', 'AK', 'SS', 'FN', 'SL', 'TK', 'FP', 'KF', 'CE', 'YR', 'GD', 'VI', 'SS', 'PT', 'TE', 'KI', 'LY', 'NQ', 'DA', 'LG', 'CS', 'FT', 'TP', 'NC', 'VN', 'YG', 'AV', 'DE', 'SG', 'FF', 'VN', 'IC', 'RY', 'GF', 'DP', 'EL', 'VQ', 'RS', 'QY', 'IG', 'AF', 'PQ', 'GP', 'QT', 'TN', 'GG', 'KV', 'IG', 'AY', 'DQ', 'YP', 'NY', 'YR', 'KV', 'LV', 'PV', 'DL', 'DS', 'FF', 'TE', 'GL', 'CL', 'VH', 'IA', 'AP', 'WA', 'NT', 'SV', 'NC', 'NG', 'LP', 'DK', 'SK', 'KS', 'VT', 'GN', 'GL', 'NV', 'YK', 'NN', 'YK', 'LC', 'YV', 'RN', 'LF']\n" ], [ "s87 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_87 = []\n\nfor i in range(len(s87)):\n if(0<=i<136):\n c1 = s87[i]\n c2 = s87[i+87]\n lambda_87.append(c1+c2)\n else:\n break\nprint(lambda_87)", "['RE', 'VV', 'QR', 'PQ', 'TI', 'EA', 'SP', 'IG', 'VQ', 'RT', 'FG', 'PK', 'NI', 'IA', 'TD', 'NY', 'LN', 'CY', 'PK', 'FL', 'GP', 'ED', 'VD', 'FF', 'NT', 'AG', 'TC', 'RV', 'FI', 'AA', 'SW', 'VN', 'YS', 'AN', 'WN', 'NL', 'RD', 'KS', 'RK', 'IV', 'SG', 'NG', 'CN', 'VY', 'AN', 'DY', 'YL', 'SY', 'VR', 'LL', 'YF', 'NR', 'SK', 'AS', 'SN', 'FL', 'SK', 'TP', 'FF', 'KE', 'CR', 'YD', 'GI', 'VS', 'ST', 'PE', 'TI', 'KY', 'LQ', 'NA', 'DG', 'LS', 'CT', 'FP', 'TC', 'NN', 'VG', 'YV', 'AE', 'DG', 'SF', 'FN', 'VC', 'IY', 'RF', 'GP', 'DL', 'EQ', 'VS', 'RY', 'QG', 'IF', 'AQ', 'PP', 'GT', 'QN', 'TG', 'GV', 'KG', 'IY', 'AQ', 'DP', 'YY', 'NR', 'YV', 'KV', 'LV', 'PL', 'DS', 'DF', 'FE', 'TL', 'GL', 'CH', 'VA', 'IP', 'AA', 'WT', 'NV', 'SC', 'NG', 'NP', 'LK', 'DK', 'SS', 'KT', 'VN', 'GL', 'GV', 'NK', 'YN', 'NK', 'YC', 'LV', 'YN', 'RF']\n" ], [ "s88 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_88 = []\n\nfor i in range(len(s88)):\n if(0<=i<135):\n c1 = s88[i]\n c2 = s88[i+88]\n lambda_88.append(c1+c2)\n else:\n break\nprint(lambda_88)", "['RV', 'VR', 'QQ', 'PI', 'TA', 'EP', 'SG', 'IQ', 'VT', 'RG', 'FK', 'PI', 'NA', 'ID', 'TY', 'NN', 'LY', 'CK', 'PL', 'FP', 'GD', 'ED', 'VF', 'FT', 'NG', 'AC', 'TV', 'RI', 'FA', 'AW', 'SN', 'VS', 'YN', 'AN', 'WL', 'ND', 'RS', 'KK', 'RV', 'IG', 'SG', 'NN', 'CY', 'VN', 'AY', 'DL', 'YY', 'SR', 'VL', 'LF', 'YR', 'NK', 'SS', 'AN', 'SL', 'FK', 'SP', 'TF', 'FE', 'KR', 'CD', 'YI', 'GS', 'VT', 'SE', 'PI', 'TY', 'KQ', 'LA', 'NG', 'DS', 'LT', 'CP', 'FC', 'TN', 'NG', 'VV', 'YE', 'AG', 'DF', 'SN', 'FC', 'VY', 'IF', 'RP', 'GL', 'DQ', 'ES', 'VY', 'RG', 'QF', 'IQ', 'AP', 'PT', 'GN', 'QG', 'TV', 'GG', 'KY', 'IQ', 'AP', 'DY', 'YR', 'NV', 'YV', 'KV', 'LL', 'PS', 'DF', 'DE', 'FL', 'TL', 'GH', 'CA', 'VP', 'IA', 'AT', 'WV', 'NC', 'SG', 'NP', 'NK', 'LK', 'DS', 'ST', 'KN', 'VL', 'GV', 'GK', 'NN', 'YK', 'NC', 'YV', 'LN', 'YF']\n" ], [ "s89 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_89 = []\n\nfor i in 
range(len(s89)):\n if(0<=i<134):\n c1 = s89[i]\n c2 = s89[i+89]\n lambda_89.append(c1+c2)\n else:\n break\nprint(lambda_89)", "['RR', 'VQ', 'QI', 'PA', 'TP', 'EG', 'SQ', 'IT', 'VG', 'RK', 'FI', 'PA', 'ND', 'IY', 'TN', 'NY', 'LK', 'CL', 'PP', 'FD', 'GD', 'EF', 'VT', 'FG', 'NC', 'AV', 'TI', 'RA', 'FW', 'AN', 'SS', 'VN', 'YN', 'AL', 'WD', 'NS', 'RK', 'KV', 'RG', 'IG', 'SN', 'NY', 'CN', 'VY', 'AL', 'DY', 'YR', 'SL', 'VF', 'LR', 'YK', 'NS', 'SN', 'AL', 'SK', 'FP', 'SF', 'TE', 'FR', 'KD', 'CI', 'YS', 'GT', 'VE', 'SI', 'PY', 'TQ', 'KA', 'LG', 'NS', 'DT', 'LP', 'CC', 'FN', 'TG', 'NV', 'VE', 'YG', 'AF', 'DN', 'SC', 'FY', 'VF', 'IP', 'RL', 'GQ', 'DS', 'EY', 'VG', 'RF', 'QQ', 'IP', 'AT', 'PN', 'GG', 'QV', 'TG', 'GY', 'KQ', 'IP', 'AY', 'DR', 'YV', 'NV', 'YV', 'KL', 'LS', 'PF', 'DE', 'DL', 'FL', 'TH', 'GA', 'CP', 'VA', 'IT', 'AV', 'WC', 'NG', 'SP', 'NK', 'NK', 'LS', 'DT', 'SN', 'KL', 'VV', 'GK', 'GN', 'NK', 'YC', 'NV', 'YN', 'LF']\n" ], [ "s90 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_90 = []\n\nfor i in range(len(s90)):\n if(0<=i<133):\n c1 = s90[i]\n c2 = s90[i+90]\n lambda_90.append(c1+c2)\n else:\n break\nprint(lambda_90)", "['RQ', 'VI', 'QA', 'PP', 'TG', 'EQ', 'ST', 'IG', 'VK', 'RI', 'FA', 'PD', 'NY', 'IN', 'TY', 'NK', 'LL', 'CP', 'PD', 'FD', 'GF', 'ET', 'VG', 'FC', 'NV', 'AI', 'TA', 'RW', 'FN', 'AS', 'SN', 'VN', 'YL', 'AD', 'WS', 'NK', 'RV', 'KG', 'RG', 'IN', 'SY', 'NN', 'CY', 'VL', 'AY', 'DR', 'YL', 'SF', 'VR', 'LK', 'YS', 'NN', 'SL', 'AK', 'SP', 'FF', 'SE', 'TR', 'FD', 'KI', 'CS', 'YT', 'GE', 'VI', 'SY', 'PQ', 'TA', 'KG', 'LS', 'NT', 'DP', 'LC', 'CN', 'FG', 'TV', 'NE', 'VG', 'YF', 'AN', 'DC', 'SY', 'FF', 'VP', 'IL', 'RQ', 'GS', 'DY', 'EG', 'VF', 'RQ', 'QP', 'IT', 'AN', 'PG', 'GV', 'QG', 'TY', 'GQ', 'KP', 'IY', 'AR', 'DV', 'YV', 'NV', 'YL', 'KS', 'LF', 'PE', 'DL', 'DL', 'FH', 'TA', 'GP', 'CA', 'VT', 'IV', 'AC', 'WG', 'NP', 'SK', 'NK', 'NS', 'LT', 'DN', 'SL', 'KV', 'VK', 'GN', 'GK', 'NC', 'YV', 'NN', 'YF']\n" ], [ "s91 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_91 = []\n\nfor i in range(len(s91)):\n if(0<=i<132):\n c1 = s91[i]\n c2 = s91[i+91]\n lambda_91.append(c1+c2)\n else:\n break\nprint(lambda_91)", "['RI', 'VA', 'QP', 'PG', 'TQ', 'ET', 'SG', 'IK', 'VI', 'RA', 'FD', 'PY', 'NN', 'IY', 'TK', 'NL', 'LP', 'CD', 'PD', 'FF', 'GT', 'EG', 'VC', 'FV', 'NI', 'AA', 'TW', 'RN', 'FS', 'AN', 'SN', 'VL', 'YD', 'AS', 'WK', 'NV', 'RG', 'KG', 'RN', 'IY', 'SN', 'NY', 'CL', 'VY', 'AR', 'DL', 'YF', 'SR', 'VK', 'LS', 'YN', 'NL', 'SK', 'AP', 'SF', 'FE', 'SR', 'TD', 'FI', 'KS', 'CT', 'YE', 'GI', 'VY', 'SQ', 'PA', 'TG', 'KS', 'LT', 'NP', 'DC', 'LN', 'CG', 'FV', 'TE', 'NG', 'VF', 'YN', 'AC', 'DY', 'SF', 'FP', 'VL', 'IQ', 'RS', 'GY', 'DG', 'EF', 'VQ', 'RP', 'QT', 'IN', 'AG', 'PV', 'GG', 'QY', 'TQ', 'GP', 'KY', 'IR', 'AV', 'DV', 'YV', 'NL', 'YS', 'KF', 'LE', 'PL', 'DL', 'DH', 'FA', 'TP', 'GA', 'CT', 'VV', 'IC', 'AG', 'WP', 'NK', 'SK', 'NS', 'NT', 'LN', 'DL', 'SV', 'KK', 'VN', 'GK', 'GC', 'NV', 'YN', 'NF']\n" ], [ "s92 = 
\"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_92 = []\n\nfor i in range(len(s92)):\n if(0<=i<131):\n c1 = s92[i]\n c2 = s92[i+92]\n lambda_92.append(c1+c2)\n else:\n break\nprint(lambda_92)", "['RA', 'VP', 'QG', 'PQ', 'TT', 'EG', 'SK', 'II', 'VA', 'RD', 'FY', 'PN', 'NY', 'IK', 'TL', 'NP', 'LD', 'CD', 'PF', 'FT', 'GG', 'EC', 'VV', 'FI', 'NA', 'AW', 'TN', 'RS', 'FN', 'AN', 'SL', 'VD', 'YS', 'AK', 'WV', 'NG', 'RG', 'KN', 'RY', 'IN', 'SY', 'NL', 'CY', 'VR', 'AL', 'DF', 'YR', 'SK', 'VS', 'LN', 'YL', 'NK', 'SP', 'AF', 'SE', 'FR', 'SD', 'TI', 'FS', 'KT', 'CE', 'YI', 'GY', 'VQ', 'SA', 'PG', 'TS', 'KT', 'LP', 'NC', 'DN', 'LG', 'CV', 'FE', 'TG', 'NF', 'VN', 'YC', 'AY', 'DF', 'SP', 'FL', 'VQ', 'IS', 'RY', 'GG', 'DF', 'EQ', 'VP', 'RT', 'QN', 'IG', 'AV', 'PG', 'GY', 'QQ', 'TP', 'GY', 'KR', 'IV', 'AV', 'DV', 'YL', 'NS', 'YF', 'KE', 'LL', 'PL', 'DH', 'DA', 'FP', 'TA', 'GT', 'CV', 'VC', 'IG', 'AP', 'WK', 'NK', 'SS', 'NT', 'NN', 'LL', 'DV', 'SK', 'KN', 'VK', 'GC', 'GV', 'NN', 'YF']\n" ], [ "s93 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_93 = []\n\nfor i in range(len(s93)):\n if(0<=i<130):\n c1 = s93[i]\n c2 = s93[i+93]\n lambda_93.append(c1+c2)\n else:\n break\nprint(lambda_93)", "['RP', 'VG', 'QQ', 'PT', 'TG', 'EK', 'SI', 'IA', 'VD', 'RY', 'FN', 'PY', 'NK', 'IL', 'TP', 'ND', 'LD', 'CF', 'PT', 'FG', 'GC', 'EV', 'VI', 'FA', 'NW', 'AN', 'TS', 'RN', 'FN', 'AL', 'SD', 'VS', 'YK', 'AV', 'WG', 'NG', 'RN', 'KY', 'RN', 'IY', 'SL', 'NY', 'CR', 'VL', 'AF', 'DR', 'YK', 'SS', 'VN', 'LL', 'YK', 'NP', 'SF', 'AE', 'SR', 'FD', 'SI', 'TS', 'FT', 'KE', 'CI', 'YY', 'GQ', 'VA', 'SG', 'PS', 'TT', 'KP', 'LC', 'NN', 'DG', 'LV', 'CE', 'FG', 'TF', 'NN', 'VC', 'YY', 'AF', 'DP', 'SL', 'FQ', 'VS', 'IY', 'RG', 'GF', 'DQ', 'EP', 'VT', 'RN', 'QG', 'IV', 'AG', 'PY', 'GQ', 'QP', 'TY', 'GR', 'KV', 'IV', 'AV', 'DL', 'YS', 'NF', 'YE', 'KL', 'LL', 'PH', 'DA', 'DP', 'FA', 'TT', 'GV', 'CC', 'VG', 'IP', 'AK', 'WK', 'NS', 'ST', 'NN', 'NL', 'LV', 'DK', 'SN', 'KK', 'VC', 'GV', 'GN', 'NF']\n" ], [ "s94 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_94 = []\n\nfor i in range(len(s94)):\n if(0<=i<129):\n c1 = s94[i]\n c2 = s94[i+94]\n lambda_94.append(c1+c2)\n else:\n break\nprint(lambda_94)", "['RG', 'VQ', 'QT', 'PG', 'TK', 'EI', 'SA', 'ID', 'VY', 'RN', 'FY', 'PK', 'NL', 'IP', 'TD', 'ND', 'LF', 'CT', 'PG', 'FC', 'GV', 'EI', 'VA', 'FW', 'NN', 'AS', 'TN', 'RN', 'FL', 'AD', 'SS', 'VK', 'YV', 'AG', 'WG', 'NN', 'RY', 'KN', 'RY', 'IL', 'SY', 'NR', 'CL', 'VF', 'AR', 'DK', 'YS', 'SN', 'VL', 'LK', 'YP', 'NF', 'SE', 'AR', 'SD', 'FI', 'SS', 'TT', 'FE', 'KI', 'CY', 'YQ', 'GA', 'VG', 'SS', 'PT', 'TP', 'KC', 'LN', 'NG', 'DV', 'LE', 'CG', 'FF', 'TN', 'NC', 'VY', 'YF', 'AP', 'DL', 'SQ', 'FS', 'VY', 'IG', 'RF', 'GQ', 'DP', 'ET', 'VN', 'RG', 'QV', 'IG', 'AY', 'PQ', 'GP', 'QY', 'TR', 'GV', 'KV', 'IV', 'AL', 'DS', 'YF', 'NE', 'YL', 'KL', 'LH', 'PA', 'DP', 'DA', 'FT', 'TV', 'GC', 'CG', 'VP', 'IK', 'AK', 'WS', 'NT', 'SN', 'NL', 'NV', 'LK', 'DN', 'SK', 'KC', 'VV', 'GN', 'GF']\n" ], [ "s95 = 
\"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_95 = []\n\nfor i in range(len(s95)):\n if(0<=i<128):\n c1 = s95[i]\n c2 = s95[i+95]\n lambda_95.append(c1+c2)\n else:\n break\nprint(lambda_95)", "['RQ', 'VT', 'QG', 'PK', 'TI', 'EA', 'SD', 'IY', 'VN', 'RY', 'FK', 'PL', 'NP', 'ID', 'TD', 'NF', 'LT', 'CG', 'PC', 'FV', 'GI', 'EA', 'VW', 'FN', 'NS', 'AN', 'TN', 'RL', 'FD', 'AS', 'SK', 'VV', 'YG', 'AG', 'WN', 'NY', 'RN', 'KY', 'RL', 'IY', 'SR', 'NL', 'CF', 'VR', 'AK', 'DS', 'YN', 'SL', 'VK', 'LP', 'YF', 'NE', 'SR', 'AD', 'SI', 'FS', 'ST', 'TE', 'FI', 'KY', 'CQ', 'YA', 'GG', 'VS', 'ST', 'PP', 'TC', 'KN', 'LG', 'NV', 'DE', 'LG', 'CF', 'FN', 'TC', 'NY', 'VF', 'YP', 'AL', 'DQ', 'SS', 'FY', 'VG', 'IF', 'RQ', 'GP', 'DT', 'EN', 'VG', 'RV', 'QG', 'IY', 'AQ', 'PP', 'GY', 'QR', 'TV', 'GV', 'KV', 'IL', 'AS', 'DF', 'YE', 'NL', 'YL', 'KH', 'LA', 'PP', 'DA', 'DT', 'FV', 'TC', 'GG', 'CP', 'VK', 'IK', 'AS', 'WT', 'NN', 'SL', 'NV', 'NK', 'LN', 'DK', 'SC', 'KV', 'VN', 'GF']\n" ], [ "s96 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_96 = []\n\nfor i in range(len(s96)):\n if(0<=i<127):\n c1 = s96[i]\n c2 = s96[i+96]\n lambda_96.append(c1+c2)\n else:\n break\nprint(lambda_96)", "['RT', 'VG', 'QK', 'PI', 'TA', 'ED', 'SY', 'IN', 'VY', 'RK', 'FL', 'PP', 'ND', 'ID', 'TF', 'NT', 'LG', 'CC', 'PV', 'FI', 'GA', 'EW', 'VN', 'FS', 'NN', 'AN', 'TL', 'RD', 'FS', 'AK', 'SV', 'VG', 'YG', 'AN', 'WY', 'NN', 'RY', 'KL', 'RY', 'IR', 'SL', 'NF', 'CR', 'VK', 'AS', 'DN', 'YL', 'SK', 'VP', 'LF', 'YE', 'NR', 'SD', 'AI', 'SS', 'FT', 'SE', 'TI', 'FY', 'KQ', 'CA', 'YG', 'GS', 'VT', 'SP', 'PC', 'TN', 'KG', 'LV', 'NE', 'DG', 'LF', 'CN', 'FC', 'TY', 'NF', 'VP', 'YL', 'AQ', 'DS', 'SY', 'FG', 'VF', 'IQ', 'RP', 'GT', 'DN', 'EG', 'VV', 'RG', 'QY', 'IQ', 'AP', 'PY', 'GR', 'QV', 'TV', 'GV', 'KL', 'IS', 'AF', 'DE', 'YL', 'NL', 'YH', 'KA', 'LP', 'PA', 'DT', 'DV', 'FC', 'TG', 'GP', 'CK', 'VK', 'IS', 'AT', 'WN', 'NL', 'SV', 'NK', 'NN', 'LK', 'DC', 'SV', 'KN', 'VF']\n" ], [ "s97 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_97 = []\n\nfor i in range(len(s97)):\n if(0<=i<126):\n c1 = s97[i]\n c2 = s97[i+97]\n lambda_97.append(c1+c2)\n else:\n break\nprint(lambda_97)", "['RG', 'VK', 'QI', 'PA', 'TD', 'EY', 'SN', 'IY', 'VK', 'RL', 'FP', 'PD', 'ND', 'IF', 'TT', 'NG', 'LC', 'CV', 'PI', 'FA', 'GW', 'EN', 'VS', 'FN', 'NN', 'AL', 'TD', 'RS', 'FK', 'AV', 'SG', 'VG', 'YN', 'AY', 'WN', 'NY', 'RL', 'KY', 'RR', 'IL', 'SF', 'NR', 'CK', 'VS', 'AN', 'DL', 'YK', 'SP', 'VF', 'LE', 'YR', 'ND', 'SI', 'AS', 'ST', 'FE', 'SI', 'TY', 'FQ', 'KA', 'CG', 'YS', 'GT', 'VP', 'SC', 'PN', 'TG', 'KV', 'LE', 'NG', 'DF', 'LN', 'CC', 'FY', 'TF', 'NP', 'VL', 'YQ', 'AS', 'DY', 'SG', 'FF', 'VQ', 'IP', 'RT', 'GN', 'DG', 'EV', 'VG', 'RY', 'QQ', 'IP', 'AY', 'PR', 'GV', 'QV', 'TV', 'GL', 'KS', 'IF', 'AE', 'DL', 'YL', 'NH', 'YA', 'KP', 'LA', 'PT', 'DV', 'DC', 'FG', 'TP', 'GK', 'CK', 'VS', 'IT', 'AN', 'WL', 'NV', 'SK', 'NN', 'NK', 'LC', 'DV', 'SN', 'KF']\n" ], [ "s98 = 
\"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_98 = []\n\nfor i in range(len(s98)):\n if(0<=i<125):\n c1 = s98[i]\n c2 = s98[i+98]\n lambda_98.append(c1+c2)\n else:\n break\nprint(lambda_98)", "['RK', 'VI', 'QA', 'PD', 'TY', 'EN', 'SY', 'IK', 'VL', 'RP', 'FD', 'PD', 'NF', 'IT', 'TG', 'NC', 'LV', 'CI', 'PA', 'FW', 'GN', 'ES', 'VN', 'FN', 'NL', 'AD', 'TS', 'RK', 'FV', 'AG', 'SG', 'VN', 'YY', 'AN', 'WY', 'NL', 'RY', 'KR', 'RL', 'IF', 'SR', 'NK', 'CS', 'VN', 'AL', 'DK', 'YP', 'SF', 'VE', 'LR', 'YD', 'NI', 'SS', 'AT', 'SE', 'FI', 'SY', 'TQ', 'FA', 'KG', 'CS', 'YT', 'GP', 'VC', 'SN', 'PG', 'TV', 'KE', 'LG', 'NF', 'DN', 'LC', 'CY', 'FF', 'TP', 'NL', 'VQ', 'YS', 'AY', 'DG', 'SF', 'FQ', 'VP', 'IT', 'RN', 'GG', 'DV', 'EG', 'VY', 'RQ', 'QP', 'IY', 'AR', 'PV', 'GV', 'QV', 'TL', 'GS', 'KF', 'IE', 'AL', 'DL', 'YH', 'NA', 'YP', 'KA', 'LT', 'PV', 'DC', 'DG', 'FP', 'TK', 'GK', 'CS', 'VT', 'IN', 'AL', 'WV', 'NK', 'SN', 'NK', 'NC', 'LV', 'DN', 'SF']\n" ], [ "s99 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_99 = []\n\nfor i in range(len(s99)):\n if(0<=i<124):\n c1 = s99[i]\n c2 = s99[i+99]\n lambda_99.append(c1+c2)\n else:\n break\nprint(lambda_99)", "['RI', 'VA', 'QD', 'PY', 'TN', 'EY', 'SK', 'IL', 'VP', 'RD', 'FD', 'PF', 'NT', 'IG', 'TC', 'NV', 'LI', 'CA', 'PW', 'FN', 'GS', 'EN', 'VN', 'FL', 'ND', 'AS', 'TK', 'RV', 'FG', 'AG', 'SN', 'VY', 'YN', 'AY', 'WL', 'NY', 'RR', 'KL', 'RF', 'IR', 'SK', 'NS', 'CN', 'VL', 'AK', 'DP', 'YF', 'SE', 'VR', 'LD', 'YI', 'NS', 'ST', 'AE', 'SI', 'FY', 'SQ', 'TA', 'FG', 'KS', 'CT', 'YP', 'GC', 'VN', 'SG', 'PV', 'TE', 'KG', 'LF', 'NN', 'DC', 'LY', 'CF', 'FP', 'TL', 'NQ', 'VS', 'YY', 'AG', 'DF', 'SQ', 'FP', 'VT', 'IN', 'RG', 'GV', 'DG', 'EY', 'VQ', 'RP', 'QY', 'IR', 'AV', 'PV', 'GV', 'QL', 'TS', 'GF', 'KE', 'IL', 'AL', 'DH', 'YA', 'NP', 'YA', 'KT', 'LV', 'PC', 'DG', 'DP', 'FK', 'TK', 'GS', 'CT', 'VN', 'IL', 'AV', 'WK', 'NN', 'SK', 'NC', 'NV', 'LN', 'DF']\n" ], [ "s100 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_100 = []\n\nfor i in range(len(s100)):\n if(0<=i<123):\n c1 = s100[i]\n c2 = s100[i+100]\n lambda_100.append(c1+c2)\n else:\n break\nprint(lambda_100)", "['RA', 'VD', 'QY', 'PN', 'TY', 'EK', 'SL', 'IP', 'VD', 'RD', 'FF', 'PT', 'NG', 'IC', 'TV', 'NI', 'LA', 'CW', 'PN', 'FS', 'GN', 'EN', 'VL', 'FD', 'NS', 'AK', 'TV', 'RG', 'FG', 'AN', 'SY', 'VN', 'YY', 'AL', 'WY', 'NR', 'RL', 'KF', 'RR', 'IK', 'SS', 'NN', 'CL', 'VK', 'AP', 'DF', 'YE', 'SR', 'VD', 'LI', 'YS', 'NT', 'SE', 'AI', 'SY', 'FQ', 'SA', 'TG', 'FS', 'KT', 'CP', 'YC', 'GN', 'VG', 'SV', 'PE', 'TG', 'KF', 'LN', 'NC', 'DY', 'LF', 'CP', 'FL', 'TQ', 'NS', 'VY', 'YG', 'AF', 'DQ', 'SP', 'FT', 'VN', 'IG', 'RV', 'GG', 'DY', 'EQ', 'VP', 'RY', 'QR', 'IV', 'AV', 'PV', 'GL', 'QS', 'TF', 'GE', 'KL', 'IL', 'AH', 'DA', 'YP', 'NA', 'YT', 'KV', 'LC', 'PG', 'DP', 'DK', 'FK', 'TS', 'GT', 'CN', 'VL', 'IV', 'AK', 'WN', 'NK', 'SC', 'NV', 'NN', 'LF']\n" ], [ "s101 = 
\"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_101 = []\n\nfor i in range(len(s101)):\n if(0<=i<122):\n c1 = s101[i]\n c2 = s101[i+101]\n lambda_101.append(c1+c2)\n else:\n break\nprint(lambda_101)", "['RD', 'VY', 'QN', 'PY', 'TK', 'EL', 'SP', 'ID', 'VD', 'RF', 'FT', 'PG', 'NC', 'IV', 'TI', 'NA', 'LW', 'CN', 'PS', 'FN', 'GN', 'EL', 'VD', 'FS', 'NK', 'AV', 'TG', 'RG', 'FN', 'AY', 'SN', 'VY', 'YL', 'AY', 'WR', 'NL', 'RF', 'KR', 'RK', 'IS', 'SN', 'NL', 'CK', 'VP', 'AF', 'DE', 'YR', 'SD', 'VI', 'LS', 'YT', 'NE', 'SI', 'AY', 'SQ', 'FA', 'SG', 'TS', 'FT', 'KP', 'CC', 'YN', 'GG', 'VV', 'SE', 'PG', 'TF', 'KN', 'LC', 'NY', 'DF', 'LP', 'CL', 'FQ', 'TS', 'NY', 'VG', 'YF', 'AQ', 'DP', 'ST', 'FN', 'VG', 'IV', 'RG', 'GY', 'DQ', 'EP', 'VY', 'RR', 'QV', 'IV', 'AV', 'PL', 'GS', 'QF', 'TE', 'GL', 'KL', 'IH', 'AA', 'DP', 'YA', 'NT', 'YV', 'KC', 'LG', 'PP', 'DK', 'DK', 'FS', 'TT', 'GN', 'CL', 'VV', 'IK', 'AN', 'WK', 'NC', 'SV', 'NN', 'NF']\n" ], [ "s102 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_102 = []\n\nfor i in range(len(s102)):\n if(0<=i<121):\n c1 = s102[i]\n c2 = s102[i+102]\n lambda_102.append(c1+c2)\n else:\n break\nprint(lambda_102)", "['RY', 'VN', 'QY', 'PK', 'TL', 'EP', 'SD', 'ID', 'VF', 'RT', 'FG', 'PC', 'NV', 'II', 'TA', 'NW', 'LN', 'CS', 'PN', 'FN', 'GL', 'ED', 'VS', 'FK', 'NV', 'AG', 'TG', 'RN', 'FY', 'AN', 'SY', 'VL', 'YY', 'AR', 'WL', 'NF', 'RR', 'KK', 'RS', 'IN', 'SL', 'NK', 'CP', 'VF', 'AE', 'DR', 'YD', 'SI', 'VS', 'LT', 'YE', 'NI', 'SY', 'AQ', 'SA', 'FG', 'SS', 'TT', 'FP', 'KC', 'CN', 'YG', 'GV', 'VE', 'SG', 'PF', 'TN', 'KC', 'LY', 'NF', 'DP', 'LL', 'CQ', 'FS', 'TY', 'NG', 'VF', 'YQ', 'AP', 'DT', 'SN', 'FG', 'VV', 'IG', 'RY', 'GQ', 'DP', 'EY', 'VR', 'RV', 'QV', 'IV', 'AL', 'PS', 'GF', 'QE', 'TL', 'GL', 'KH', 'IA', 'AP', 'DA', 'YT', 'NV', 'YC', 'KG', 'LP', 'PK', 'DK', 'DS', 'FT', 'TN', 'GL', 'CV', 'VK', 'IN', 'AK', 'WC', 'NV', 'SN', 'NF']\n" ], [ "s103 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_103 = []\n\nfor i in range(len(s103)):\n if(0<=i<120):\n c1 = s103[i]\n c2 = s103[i+103]\n lambda_103.append(c1+c2)\n else:\n break\nprint(lambda_103)", "['RN', 'VY', 'QK', 'PL', 'TP', 'ED', 'SD', 'IF', 'VT', 'RG', 'FC', 'PV', 'NI', 'IA', 'TW', 'NN', 'LS', 'CN', 'PN', 'FL', 'GD', 'ES', 'VK', 'FV', 'NG', 'AG', 'TN', 'RY', 'FN', 'AY', 'SL', 'VY', 'YR', 'AL', 'WF', 'NR', 'RK', 'KS', 'RN', 'IL', 'SK', 'NP', 'CF', 'VE', 'AR', 'DD', 'YI', 'SS', 'VT', 'LE', 'YI', 'NY', 'SQ', 'AA', 'SG', 'FS', 'ST', 'TP', 'FC', 'KN', 'CG', 'YV', 'GE', 'VG', 'SF', 'PN', 'TC', 'KY', 'LF', 'NP', 'DL', 'LQ', 'CS', 'FY', 'TG', 'NF', 'VQ', 'YP', 'AT', 'DN', 'SG', 'FV', 'VG', 'IY', 'RQ', 'GP', 'DY', 'ER', 'VV', 'RV', 'QV', 'IL', 'AS', 'PF', 'GE', 'QL', 'TL', 'GH', 'KA', 'IP', 'AA', 'DT', 'YV', 'NC', 'YG', 'KP', 'LK', 'PK', 'DS', 'DT', 'FN', 'TL', 'GV', 'CK', 'VN', 'IK', 'AC', 'WV', 'NN', 'SF']\n" ], [ "s104 = 
\"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_104 = []\n\nfor i in range(len(s104)):\n if(0<=i<119):\n c1 = s104[i]\n c2 = s104[i+104]\n lambda_104.append(c1+c2)\n else:\n break\nprint(lambda_104)", "['RY', 'VK', 'QL', 'PP', 'TD', 'ED', 'SF', 'IT', 'VG', 'RC', 'FV', 'PI', 'NA', 'IW', 'TN', 'NS', 'LN', 'CN', 'PL', 'FD', 'GS', 'EK', 'VV', 'FG', 'NG', 'AN', 'TY', 'RN', 'FY', 'AL', 'SY', 'VR', 'YL', 'AF', 'WR', 'NK', 'RS', 'KN', 'RL', 'IK', 'SP', 'NF', 'CE', 'VR', 'AD', 'DI', 'YS', 'ST', 'VE', 'LI', 'YY', 'NQ', 'SA', 'AG', 'SS', 'FT', 'SP', 'TC', 'FN', 'KG', 'CV', 'YE', 'GG', 'VF', 'SN', 'PC', 'TY', 'KF', 'LP', 'NL', 'DQ', 'LS', 'CY', 'FG', 'TF', 'NQ', 'VP', 'YT', 'AN', 'DG', 'SV', 'FG', 'VY', 'IQ', 'RP', 'GY', 'DR', 'EV', 'VV', 'RV', 'QL', 'IS', 'AF', 'PE', 'GL', 'QL', 'TH', 'GA', 'KP', 'IA', 'AT', 'DV', 'YC', 'NG', 'YP', 'KK', 'LK', 'PS', 'DT', 'DN', 'FL', 'TV', 'GK', 'CN', 'VK', 'IC', 'AV', 'WN', 'NF']\n" ], [ "s105 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_105 = []\n\nfor i in range(len(s105)):\n if(0<=i<118):\n c1 = s105[i]\n c2 = s105[i+105]\n lambda_105.append(c1+c2)\n else:\n break\nprint(lambda_105)", "['RK', 'VL', 'QP', 'PD', 'TD', 'EF', 'ST', 'IG', 'VC', 'RV', 'FI', 'PA', 'NW', 'IN', 'TS', 'NN', 'LN', 'CL', 'PD', 'FS', 'GK', 'EV', 'VG', 'FG', 'NN', 'AY', 'TN', 'RY', 'FL', 'AY', 'SR', 'VL', 'YF', 'AR', 'WK', 'NS', 'RN', 'KL', 'RK', 'IP', 'SF', 'NE', 'CR', 'VD', 'AI', 'DS', 'YT', 'SE', 'VI', 'LY', 'YQ', 'NA', 'SG', 'AS', 'ST', 'FP', 'SC', 'TN', 'FG', 'KV', 'CE', 'YG', 'GF', 'VN', 'SC', 'PY', 'TF', 'KP', 'LL', 'NQ', 'DS', 'LY', 'CG', 'FF', 'TQ', 'NP', 'VT', 'YN', 'AG', 'DV', 'SG', 'FY', 'VQ', 'IP', 'RY', 'GR', 'DV', 'EV', 'VV', 'RL', 'QS', 'IF', 'AE', 'PL', 'GL', 'QH', 'TA', 'GP', 'KA', 'IT', 'AV', 'DC', 'YG', 'NP', 'YK', 'KK', 'LS', 'PT', 'DN', 'DL', 'FV', 'TK', 'GN', 'CK', 'VC', 'IV', 'AN', 'WF']\n" ], [ "s106 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_106 = []\n\nfor i in range(len(s106)):\n if(0<=i<117):\n c1 = s106[i]\n c2 = s106[i+106]\n lambda_106.append(c1+c2)\n else:\n break\nprint(lambda_106)", "['RL', 'VP', 'QD', 'PD', 'TF', 'ET', 'SG', 'IC', 'VV', 'RI', 'FA', 'PW', 'NN', 'IS', 'TN', 'NN', 'LL', 'CD', 'PS', 'FK', 'GV', 'EG', 'VG', 'FN', 'NY', 'AN', 'TY', 'RL', 'FY', 'AR', 'SL', 'VF', 'YR', 'AK', 'WS', 'NN', 'RL', 'KK', 'RP', 'IF', 'SE', 'NR', 'CD', 'VI', 'AS', 'DT', 'YE', 'SI', 'VY', 'LQ', 'YA', 'NG', 'SS', 'AT', 'SP', 'FC', 'SN', 'TG', 'FV', 'KE', 'CG', 'YF', 'GN', 'VC', 'SY', 'PF', 'TP', 'KL', 'LQ', 'NS', 'DY', 'LG', 'CF', 'FQ', 'TP', 'NT', 'VN', 'YG', 'AV', 'DG', 'SY', 'FQ', 'VP', 'IY', 'RR', 'GV', 'DV', 'EV', 'VL', 'RS', 'QF', 'IE', 'AL', 'PL', 'GH', 'QA', 'TP', 'GA', 'KT', 'IV', 'AC', 'DG', 'YP', 'NK', 'YK', 'KS', 'LT', 'PN', 'DL', 'DV', 'FK', 'TN', 'GK', 'CC', 'VV', 'IN', 'AF']\n" ], [ "s107 = 
\"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_107 = []\n\nfor i in range(len(s107)):\n if(0<=i<116):\n c1 = s107[i]\n c2 = s107[i+107]\n lambda_107.append(c1+c2)\n else:\n break\nprint(lambda_107)", "['RP', 'VD', 'QD', 'PF', 'TT', 'EG', 'SC', 'IV', 'VI', 'RA', 'FW', 'PN', 'NS', 'IN', 'TN', 'NL', 'LD', 'CS', 'PK', 'FV', 'GG', 'EG', 'VN', 'FY', 'NN', 'AY', 'TL', 'RY', 'FR', 'AL', 'SF', 'VR', 'YK', 'AS', 'WN', 'NL', 'RK', 'KP', 'RF', 'IE', 'SR', 'ND', 'CI', 'VS', 'AT', 'DE', 'YI', 'SY', 'VQ', 'LA', 'YG', 'NS', 'ST', 'AP', 'SC', 'FN', 'SG', 'TV', 'FE', 'KG', 'CF', 'YN', 'GC', 'VY', 'SF', 'PP', 'TL', 'KQ', 'LS', 'NY', 'DG', 'LF', 'CQ', 'FP', 'TT', 'NN', 'VG', 'YV', 'AG', 'DY', 'SQ', 'FP', 'VY', 'IR', 'RV', 'GV', 'DV', 'EL', 'VS', 'RF', 'QE', 'IL', 'AL', 'PH', 'GA', 'QP', 'TA', 'GT', 'KV', 'IC', 'AG', 'DP', 'YK', 'NK', 'YS', 'KT', 'LN', 'PL', 'DV', 'DK', 'FN', 'TK', 'GC', 'CV', 'VN', 'IF']\n" ], [ "s108 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_108 = []\n\nfor i in range(len(s108)):\n if(0<=i<115):\n c1 = s108[i]\n c2 = s108[i+108]\n lambda_108.append(c1+c2)\n else:\n break\nprint(lambda_108)", "['RD', 'VD', 'QF', 'PT', 'TG', 'EC', 'SV', 'II', 'VA', 'RW', 'FN', 'PS', 'NN', 'IN', 'TL', 'ND', 'LS', 'CK', 'PV', 'FG', 'GG', 'EN', 'VY', 'FN', 'NY', 'AL', 'TY', 'RR', 'FL', 'AF', 'SR', 'VK', 'YS', 'AN', 'WL', 'NK', 'RP', 'KF', 'RE', 'IR', 'SD', 'NI', 'CS', 'VT', 'AE', 'DI', 'YY', 'SQ', 'VA', 'LG', 'YS', 'NT', 'SP', 'AC', 'SN', 'FG', 'SV', 'TE', 'FG', 'KF', 'CN', 'YC', 'GY', 'VF', 'SP', 'PL', 'TQ', 'KS', 'LY', 'NG', 'DF', 'LQ', 'CP', 'FT', 'TN', 'NG', 'VV', 'YG', 'AY', 'DQ', 'SP', 'FY', 'VR', 'IV', 'RV', 'GV', 'DL', 'ES', 'VF', 'RE', 'QL', 'IL', 'AH', 'PA', 'GP', 'QA', 'TT', 'GV', 'KC', 'IG', 'AP', 'DK', 'YK', 'NS', 'YT', 'KN', 'LL', 'PV', 'DK', 'DN', 'FK', 'TC', 'GV', 'CN', 'VF']\n" ], [ "s109 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_109 = []\n\nfor i in range(len(s109)):\n if(0<=i<114):\n c1 = s109[i]\n c2 = s109[i+109]\n lambda_109.append(c1+c2)\n else:\n break\nprint(lambda_109)", "['RD', 'VF', 'QT', 'PG', 'TC', 'EV', 'SI', 'IA', 'VW', 'RN', 'FS', 'PN', 'NN', 'IL', 'TD', 'NS', 'LK', 'CV', 'PG', 'FG', 'GN', 'EY', 'VN', 'FY', 'NL', 'AY', 'TR', 'RL', 'FF', 'AR', 'SK', 'VS', 'YN', 'AL', 'WK', 'NP', 'RF', 'KE', 'RR', 'ID', 'SI', 'NS', 'CT', 'VE', 'AI', 'DY', 'YQ', 'SA', 'VG', 'LS', 'YT', 'NP', 'SC', 'AN', 'SG', 'FV', 'SE', 'TG', 'FF', 'KN', 'CC', 'YY', 'GF', 'VP', 'SL', 'PQ', 'TS', 'KY', 'LG', 'NF', 'DQ', 'LP', 'CT', 'FN', 'TG', 'NV', 'VG', 'YY', 'AQ', 'DP', 'SY', 'FR', 'VV', 'IV', 'RV', 'GL', 'DS', 'EF', 'VE', 'RL', 'QL', 'IH', 'AA', 'PP', 'GA', 'QT', 'TV', 'GC', 'KG', 'IP', 'AK', 'DK', 'YS', 'NT', 'YN', 'KL', 'LV', 'PK', 'DN', 'DK', 'FC', 'TV', 'GN', 'CF']\n" ], [ "s110 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_110 = []\n\nfor i 
in range(len(s110)):\n if(0<=i<113):\n c1 = s110[i]\n c2 = s110[i+110]\n lambda_110.append(c1+c2)\n else:\n break\nprint(lambda_110)", "['RF', 'VT', 'QG', 'PC', 'TV', 'EI', 'SA', 'IW', 'VN', 'RS', 'FN', 'PN', 'NL', 'ID', 'TS', 'NK', 'LV', 'CG', 'PG', 'FN', 'GY', 'EN', 'VY', 'FL', 'NY', 'AR', 'TL', 'RF', 'FR', 'AK', 'SS', 'VN', 'YL', 'AK', 'WP', 'NF', 'RE', 'KR', 'RD', 'II', 'SS', 'NT', 'CE', 'VI', 'AY', 'DQ', 'YA', 'SG', 'VS', 'LT', 'YP', 'NC', 'SN', 'AG', 'SV', 'FE', 'SG', 'TF', 'FN', 'KC', 'CY', 'YF', 'GP', 'VL', 'SQ', 'PS', 'TY', 'KG', 'LF', 'NQ', 'DP', 'LT', 'CN', 'FG', 'TV', 'NG', 'VY', 'YQ', 'AP', 'DY', 'SR', 'FV', 'VV', 'IV', 'RL', 'GS', 'DF', 'EE', 'VL', 'RL', 'QH', 'IA', 'AP', 'PA', 'GT', 'QV', 'TC', 'GG', 'KP', 'IK', 'AK', 'DS', 'YT', 'NN', 'YL', 'KV', 'LK', 'PN', 'DK', 'DC', 'FV', 'TN', 'GF']\n" ], [ "s111 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_111 = []\n\nfor i in range(len(s111)):\n if(0<=i<112):\n c1 = s111[i]\n c2 = s111[i+111]\n lambda_111.append(c1+c2)\n else:\n break\nprint(lambda_111)", "['RT', 'VG', 'QC', 'PV', 'TI', 'EA', 'SW', 'IN', 'VS', 'RN', 'FN', 'PL', 'ND', 'IS', 'TK', 'NV', 'LG', 'CG', 'PN', 'FY', 'GN', 'EY', 'VL', 'FY', 'NR', 'AL', 'TF', 'RR', 'FK', 'AS', 'SN', 'VL', 'YK', 'AP', 'WF', 'NE', 'RR', 'KD', 'RI', 'IS', 'ST', 'NE', 'CI', 'VY', 'AQ', 'DA', 'YG', 'SS', 'VT', 'LP', 'YC', 'NN', 'SG', 'AV', 'SE', 'FG', 'SF', 'TN', 'FC', 'KY', 'CF', 'YP', 'GL', 'VQ', 'SS', 'PY', 'TG', 'KF', 'LQ', 'NP', 'DT', 'LN', 'CG', 'FV', 'TG', 'NY', 'VQ', 'YP', 'AY', 'DR', 'SV', 'FV', 'VV', 'IL', 'RS', 'GF', 'DE', 'EL', 'VL', 'RH', 'QA', 'IP', 'AA', 'PT', 'GV', 'QC', 'TG', 'GP', 'KK', 'IK', 'AS', 'DT', 'YN', 'NL', 'YV', 'KK', 'LN', 'PK', 'DC', 'DV', 'FN', 'TF']\n" ], [ "s112 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_112 = []\n\nfor i in range(len(s112)):\n if(0<=i<111):\n c1 = s112[i]\n c2 = s112[i+112]\n lambda_112.append(c1+c2)\n else:\n break\nprint(lambda_112)", "['RG', 'VC', 'QV', 'PI', 'TA', 'EW', 'SN', 'IS', 'VN', 'RN', 'FL', 'PD', 'NS', 'IK', 'TV', 'NG', 'LG', 'CN', 'PY', 'FN', 'GY', 'EL', 'VY', 'FR', 'NL', 'AF', 'TR', 'RK', 'FS', 'AN', 'SL', 'VK', 'YP', 'AF', 'WE', 'NR', 'RD', 'KI', 'RS', 'IT', 'SE', 'NI', 'CY', 'VQ', 'AA', 'DG', 'YS', 'ST', 'VP', 'LC', 'YN', 'NG', 'SV', 'AE', 'SG', 'FF', 'SN', 'TC', 'FY', 'KF', 'CP', 'YL', 'GQ', 'VS', 'SY', 'PG', 'TF', 'KQ', 'LP', 'NT', 'DN', 'LG', 'CV', 'FG', 'TY', 'NQ', 'VP', 'YY', 'AR', 'DV', 'SV', 'FV', 'VL', 'IS', 'RF', 'GE', 'DL', 'EL', 'VH', 'RA', 'QP', 'IA', 'AT', 'PV', 'GC', 'QG', 'TP', 'GK', 'KK', 'IS', 'AT', 'DN', 'YL', 'NV', 'YK', 'KN', 'LK', 'PC', 'DV', 'DN', 'FF']\n" ], [ "s113 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_113 = []\n\nfor i in range(len(s113)):\n if(0<=i<110):\n c1 = s113[i]\n c2 = s113[i+113]\n lambda_113.append(c1+c2)\n else:\n break\nprint(lambda_113)", "['RC', 'VV', 'QI', 'PA', 'TW', 'EN', 'SS', 'IN', 'VN', 'RL', 'FD', 'PS', 'NK', 'IV', 'TG', 'NG', 'LN', 'CY', 'PN', 'FY', 'GL', 'EY', 'VR', 'FL', 'NF', 'AR', 'TK', 'RS', 'FN', 
'AL', 'SK', 'VP', 'YF', 'AE', 'WR', 'ND', 'RI', 'KS', 'RT', 'IE', 'SI', 'NY', 'CQ', 'VA', 'AG', 'DS', 'YT', 'SP', 'VC', 'LN', 'YG', 'NV', 'SE', 'AG', 'SF', 'FN', 'SC', 'TY', 'FF', 'KP', 'CL', 'YQ', 'GS', 'VY', 'SG', 'PF', 'TQ', 'KP', 'LT', 'NN', 'DG', 'LV', 'CG', 'FY', 'TQ', 'NP', 'VY', 'YR', 'AV', 'DV', 'SV', 'FL', 'VS', 'IF', 'RE', 'GL', 'DL', 'EH', 'VA', 'RP', 'QA', 'IT', 'AV', 'PC', 'GG', 'QP', 'TK', 'GK', 'KS', 'IT', 'AN', 'DL', 'YV', 'NK', 'YN', 'KK', 'LC', 'PV', 'DN', 'DF']\n" ], [ "s114 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_114 = []\n\nfor i in range(len(s114)):\n if(0<=i<109):\n c1 = s114[i]\n c2 = s114[i+114]\n lambda_114.append(c1+c2)\n else:\n break\nprint(lambda_114)", "['RV', 'VI', 'QA', 'PW', 'TN', 'ES', 'SN', 'IN', 'VL', 'RD', 'FS', 'PK', 'NV', 'IG', 'TG', 'NN', 'LY', 'CN', 'PY', 'FL', 'GY', 'ER', 'VL', 'FF', 'NR', 'AK', 'TS', 'RN', 'FL', 'AK', 'SP', 'VF', 'YE', 'AR', 'WD', 'NI', 'RS', 'KT', 'RE', 'II', 'SY', 'NQ', 'CA', 'VG', 'AS', 'DT', 'YP', 'SC', 'VN', 'LG', 'YV', 'NE', 'SG', 'AF', 'SN', 'FC', 'SY', 'TF', 'FP', 'KL', 'CQ', 'YS', 'GY', 'VG', 'SF', 'PQ', 'TP', 'KT', 'LN', 'NG', 'DV', 'LG', 'CY', 'FQ', 'TP', 'NY', 'VR', 'YV', 'AV', 'DV', 'SL', 'FS', 'VF', 'IE', 'RL', 'GL', 'DH', 'EA', 'VP', 'RA', 'QT', 'IV', 'AC', 'PG', 'GP', 'QK', 'TK', 'GS', 'KT', 'IN', 'AL', 'DV', 'YK', 'NN', 'YK', 'KC', 'LV', 'PN', 'DF']\n" ], [ "s115 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_115 = []\n\nfor i in range(len(s115)):\n if(0<=i<108):\n c1 = s115[i]\n c2 = s115[i+115]\n lambda_115.append(c1+c2)\n else:\n break\nprint(lambda_115)", "['RI', 'VA', 'QW', 'PN', 'TS', 'EN', 'SN', 'IL', 'VD', 'RS', 'FK', 'PV', 'NG', 'IG', 'TN', 'NY', 'LN', 'CY', 'PL', 'FY', 'GR', 'EL', 'VF', 'FR', 'NK', 'AS', 'TN', 'RL', 'FK', 'AP', 'SF', 'VE', 'YR', 'AD', 'WI', 'NS', 'RT', 'KE', 'RI', 'IY', 'SQ', 'NA', 'CG', 'VS', 'AT', 'DP', 'YC', 'SN', 'VG', 'LV', 'YE', 'NG', 'SF', 'AN', 'SC', 'FY', 'SF', 'TP', 'FL', 'KQ', 'CS', 'YY', 'GG', 'VF', 'SQ', 'PP', 'TT', 'KN', 'LG', 'NV', 'DG', 'LY', 'CQ', 'FP', 'TY', 'NR', 'VV', 'YV', 'AV', 'DL', 'SS', 'FF', 'VE', 'IL', 'RL', 'GH', 'DA', 'EP', 'VA', 'RT', 'QV', 'IC', 'AG', 'PP', 'GK', 'QK', 'TS', 'GT', 'KN', 'IL', 'AV', 'DK', 'YN', 'NK', 'YC', 'KV', 'LN', 'PF']\n" ], [ "s116 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_116 = []\n\nfor i in range(len(s116)):\n if(0<=i<107):\n c1 = s116[i]\n c2 = s116[i+116]\n lambda_116.append(c1+c2)\n else:\n break\nprint(lambda_116)", "['RA', 'VW', 'QN', 'PS', 'TN', 'EN', 'SL', 'ID', 'VS', 'RK', 'FV', 'PG', 'NG', 'IN', 'TY', 'NN', 'LY', 'CL', 'PY', 'FR', 'GL', 'EF', 'VR', 'FK', 'NS', 'AN', 'TL', 'RK', 'FP', 'AF', 'SE', 'VR', 'YD', 'AI', 'WS', 'NT', 'RE', 'KI', 'RY', 'IQ', 'SA', 'NG', 'CS', 'VT', 'AP', 'DC', 'YN', 'SG', 'VV', 'LE', 'YG', 'NF', 'SN', 'AC', 'SY', 'FF', 'SP', 'TL', 'FQ', 'KS', 'CY', 'YG', 'GF', 'VQ', 'SP', 'PT', 'TN', 'KG', 'LV', 'NG', 'DY', 'LQ', 'CP', 'FY', 'TR', 'NV', 'VV', 'YV', 'AL', 'DS', 'SF', 'FE', 'VL', 'IL', 'RH', 'GA', 'DP', 'EA', 'VT', 'RV', 
'QC', 'IG', 'AP', 'PK', 'GK', 'QS', 'TT', 'GN', 'KL', 'IV', 'AK', 'DN', 'YK', 'NC', 'YV', 'KN', 'LF']\n" ], [ "s117 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_117 = []\n\nfor i in range(len(s117)):\n if(0<=i<106):\n c1 = s117[i]\n c2 = s117[i+117]\n lambda_117.append(c1+c2)\n else:\n break\nprint(lambda_117)", "['RW', 'VN', 'QS', 'PN', 'TN', 'EL', 'SD', 'IS', 'VK', 'RV', 'FG', 'PG', 'NN', 'IY', 'TN', 'NY', 'LL', 'CY', 'PR', 'FL', 'GF', 'ER', 'VK', 'FS', 'NN', 'AL', 'TK', 'RP', 'FF', 'AE', 'SR', 'VD', 'YI', 'AS', 'WT', 'NE', 'RI', 'KY', 'RQ', 'IA', 'SG', 'NS', 'CT', 'VP', 'AC', 'DN', 'YG', 'SV', 'VE', 'LG', 'YF', 'NN', 'SC', 'AY', 'SF', 'FP', 'SL', 'TQ', 'FS', 'KY', 'CG', 'YF', 'GQ', 'VP', 'ST', 'PN', 'TG', 'KV', 'LG', 'NY', 'DQ', 'LP', 'CY', 'FR', 'TV', 'NV', 'VV', 'YL', 'AS', 'DF', 'SE', 'FL', 'VL', 'IH', 'RA', 'GP', 'DA', 'ET', 'VV', 'RC', 'QG', 'IP', 'AK', 'PK', 'GS', 'QT', 'TN', 'GL', 'KV', 'IK', 'AN', 'DK', 'YC', 'NV', 'YN', 'KF']\n" ], [ "s118 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_118 = []\n\nfor i in range(len(s118)):\n if(0<=i<105):\n c1 = s118[i]\n c2 = s118[i+118]\n lambda_118.append(c1+c2)\n else:\n break\nprint(lambda_118)", "['RN', 'VS', 'QN', 'PN', 'TL', 'ED', 'SS', 'IK', 'VV', 'RG', 'FG', 'PN', 'NY', 'IN', 'TY', 'NL', 'LY', 'CR', 'PL', 'FF', 'GR', 'EK', 'VS', 'FN', 'NL', 'AK', 'TP', 'RF', 'FE', 'AR', 'SD', 'VI', 'YS', 'AT', 'WE', 'NI', 'RY', 'KQ', 'RA', 'IG', 'SS', 'NT', 'CP', 'VC', 'AN', 'DG', 'YV', 'SE', 'VG', 'LF', 'YN', 'NC', 'SY', 'AF', 'SP', 'FL', 'SQ', 'TS', 'FY', 'KG', 'CF', 'YQ', 'GP', 'VT', 'SN', 'PG', 'TV', 'KG', 'LY', 'NQ', 'DP', 'LY', 'CR', 'FV', 'TV', 'NV', 'VL', 'YS', 'AF', 'DE', 'SL', 'FL', 'VH', 'IA', 'RP', 'GA', 'DT', 'EV', 'VC', 'RG', 'QP', 'IK', 'AK', 'PS', 'GT', 'QN', 'TL', 'GV', 'KK', 'IN', 'AK', 'DC', 'YV', 'NN', 'YF']\n" ], [ "s119 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_119 = []\n\nfor i in range(len(s119)):\n if(0<=i<104):\n c1 = s119[i]\n c2 = s119[i+119]\n lambda_119.append(c1+c2)\n else:\n break\nprint(lambda_119)", "['RS', 'VN', 'QN', 'PL', 'TD', 'ES', 'SK', 'IV', 'VG', 'RG', 'FN', 'PY', 'NN', 'IY', 'TL', 'NY', 'LR', 'CL', 'PF', 'FR', 'GK', 'ES', 'VN', 'FL', 'NK', 'AP', 'TF', 'RE', 'FR', 'AD', 'SI', 'VS', 'YT', 'AE', 'WI', 'NY', 'RQ', 'KA', 'RG', 'IS', 'ST', 'NP', 'CC', 'VN', 'AG', 'DV', 'YE', 'SG', 'VF', 'LN', 'YC', 'NY', 'SF', 'AP', 'SL', 'FQ', 'SS', 'TY', 'FG', 'KF', 'CQ', 'YP', 'GT', 'VN', 'SG', 'PV', 'TG', 'KY', 'LQ', 'NP', 'DY', 'LR', 'CV', 'FV', 'TV', 'NL', 'VS', 'YF', 'AE', 'DL', 'SL', 'FH', 'VA', 'IP', 'RA', 'GT', 'DV', 'EC', 'VG', 'RP', 'QK', 'IK', 'AS', 'PT', 'GN', 'QL', 'TV', 'GK', 'KN', 'IK', 'AC', 'DV', 'YN', 'NF']\n" ], [ "s120 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_120 = []\n\nfor i in range(len(s120)):\n if(0<=i<103):\n c1 = s120[i]\n c2 = 
s120[i+120]\n lambda_120.append(c1+c2)\n else:\n break\nprint(lambda_120)", "['RN', 'VN', 'QL', 'PD', 'TS', 'EK', 'SV', 'IG', 'VG', 'RN', 'FY', 'PN', 'NY', 'IL', 'TY', 'NR', 'LL', 'CF', 'PR', 'FK', 'GS', 'EN', 'VL', 'FK', 'NP', 'AF', 'TE', 'RR', 'FD', 'AI', 'SS', 'VT', 'YE', 'AI', 'WY', 'NQ', 'RA', 'KG', 'RS', 'IT', 'SP', 'NC', 'CN', 'VG', 'AV', 'DE', 'YG', 'SF', 'VN', 'LC', 'YY', 'NF', 'SP', 'AL', 'SQ', 'FS', 'SY', 'TG', 'FF', 'KQ', 'CP', 'YT', 'GN', 'VG', 'SV', 'PG', 'TY', 'KQ', 'LP', 'NY', 'DR', 'LV', 'CV', 'FV', 'TL', 'NS', 'VF', 'YE', 'AL', 'DL', 'SH', 'FA', 'VP', 'IA', 'RT', 'GV', 'DC', 'EG', 'VP', 'RK', 'QK', 'IS', 'AT', 'PN', 'GL', 'QV', 'TK', 'GN', 'KK', 'IC', 'AV', 'DN', 'YF']\n" ], [ "s121 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_121 = []\n\nfor i in range(len(s121)):\n if(0<=i<102):\n c1 = s121[i]\n c2 = s121[i+121]\n lambda_121.append(c1+c2)\n else:\n break\nprint(lambda_121)", "['RN', 'VL', 'QD', 'PS', 'TK', 'EV', 'SG', 'IG', 'VN', 'RY', 'FN', 'PY', 'NL', 'IY', 'TR', 'NL', 'LF', 'CR', 'PK', 'FS', 'GN', 'EL', 'VK', 'FP', 'NF', 'AE', 'TR', 'RD', 'FI', 'AS', 'ST', 'VE', 'YI', 'AY', 'WQ', 'NA', 'RG', 'KS', 'RT', 'IP', 'SC', 'NN', 'CG', 'VV', 'AE', 'DG', 'YF', 'SN', 'VC', 'LY', 'YF', 'NP', 'SL', 'AQ', 'SS', 'FY', 'SG', 'TF', 'FQ', 'KP', 'CT', 'YN', 'GG', 'VV', 'SG', 'PY', 'TQ', 'KP', 'LY', 'NR', 'DV', 'LV', 'CV', 'FL', 'TS', 'NF', 'VE', 'YL', 'AL', 'DH', 'SA', 'FP', 'VA', 'IT', 'RV', 'GC', 'DG', 'EP', 'VK', 'RK', 'QS', 'IT', 'AN', 'PL', 'GV', 'QK', 'TN', 'GK', 'KC', 'IV', 'AN', 'DF']\n" ], [ "s122 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_122 = []\n\nfor i in range(len(s122)):\n if(0<=i<101):\n c1 = s122[i]\n c2 = s122[i+122]\n lambda_122.append(c1+c2)\n else:\n break\nprint(lambda_122)", "['RL', 'VD', 'QS', 'PK', 'TV', 'EG', 'SG', 'IN', 'VY', 'RN', 'FY', 'PL', 'NY', 'IR', 'TL', 'NF', 'LR', 'CK', 'PS', 'FN', 'GL', 'EK', 'VP', 'FF', 'NE', 'AR', 'TD', 'RI', 'FS', 'AT', 'SE', 'VI', 'YY', 'AQ', 'WA', 'NG', 'RS', 'KT', 'RP', 'IC', 'SN', 'NG', 'CV', 'VE', 'AG', 'DF', 'YN', 'SC', 'VY', 'LF', 'YP', 'NL', 'SQ', 'AS', 'SY', 'FG', 'SF', 'TQ', 'FP', 'KT', 'CN', 'YG', 'GV', 'VG', 'SY', 'PQ', 'TP', 'KY', 'LR', 'NV', 'DV', 'LV', 'CL', 'FS', 'TF', 'NE', 'VL', 'YL', 'AH', 'DA', 'SP', 'FA', 'VT', 'IV', 'RC', 'GG', 'DP', 'EK', 'VK', 'RS', 'QT', 'IN', 'AL', 'PV', 'GK', 'QN', 'TK', 'GC', 'KV', 'IN', 'AF']\n" ], [ "s123 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_123 = []\n\nfor i in range(len(s123)):\n if(0<=i<100):\n c1 = s123[i]\n c2 = s123[i+123]\n lambda_123.append(c1+c2)\n else:\n break\nprint(lambda_123)", "['RD', 'VS', 'QK', 'PV', 'TG', 'EG', 'SN', 'IY', 'VN', 'RY', 'FL', 'PY', 'NR', 'IL', 'TF', 'NR', 'LK', 'CS', 'PN', 'FL', 'GK', 'EP', 'VF', 'FE', 'NR', 'AD', 'TI', 'RS', 'FT', 'AE', 'SI', 'VY', 'YQ', 'AA', 'WG', 'NS', 'RT', 'KP', 'RC', 'IN', 'SG', 'NV', 'CE', 'VG', 'AF', 'DN', 'YC', 'SY', 'VF', 'LP', 'YL', 'NQ', 'SS', 'AY', 'SG', 'FF', 'SQ', 'TP', 'FT', 'KN', 'CG', 'YV', 'GG', 'VY', 'SQ', 'PP', 'TY', 'KR', 'LV', 
'NV', 'DV', 'LL', 'CS', 'FF', 'TE', 'NL', 'VL', 'YH', 'AA', 'DP', 'SA', 'FT', 'VV', 'IC', 'RG', 'GP', 'DK', 'EK', 'VS', 'RT', 'QN', 'IL', 'AV', 'PK', 'GN', 'QK', 'TC', 'GV', 'KN', 'IF']\n" ], [ "s124 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_124 = []\n\nfor i in range(len(s124)):\n if(0<=i<99):\n c1 = s124[i]\n c2 = s124[i+124]\n lambda_124.append(c1+c2)\n else:\n break\nprint(lambda_124)", "['RS', 'VK', 'QV', 'PG', 'TG', 'EN', 'SY', 'IN', 'VY', 'RL', 'FY', 'PR', 'NL', 'IF', 'TR', 'NK', 'LS', 'CN', 'PL', 'FK', 'GP', 'EF', 'VE', 'FR', 'ND', 'AI', 'TS', 'RT', 'FE', 'AI', 'SY', 'VQ', 'YA', 'AG', 'WS', 'NT', 'RP', 'KC', 'RN', 'IG', 'SV', 'NE', 'CG', 'VF', 'AN', 'DC', 'YY', 'SF', 'VP', 'LL', 'YQ', 'NS', 'SY', 'AG', 'SF', 'FQ', 'SP', 'TT', 'FN', 'KG', 'CV', 'YG', 'GY', 'VQ', 'SP', 'PY', 'TR', 'KV', 'LV', 'NV', 'DL', 'LS', 'CF', 'FE', 'TL', 'NL', 'VH', 'YA', 'AP', 'DA', 'ST', 'FV', 'VC', 'IG', 'RP', 'GK', 'DK', 'ES', 'VT', 'RN', 'QL', 'IV', 'AK', 'PN', 'GK', 'QC', 'TV', 'GN', 'KF']\n" ], [ "s125 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_125 = []\n\nfor i in range(len(s125)):\n if(0<=i<98):\n c1 = s125[i]\n c2 = s125[i+125]\n lambda_125.append(c1+c2)\n else:\n break\nprint(lambda_125)", "['RK', 'VV', 'QG', 'PG', 'TN', 'EY', 'SN', 'IY', 'VL', 'RY', 'FR', 'PL', 'NF', 'IR', 'TK', 'NS', 'LN', 'CL', 'PK', 'FP', 'GF', 'EE', 'VR', 'FD', 'NI', 'AS', 'TT', 'RE', 'FI', 'AY', 'SQ', 'VA', 'YG', 'AS', 'WT', 'NP', 'RC', 'KN', 'RG', 'IV', 'SE', 'NG', 'CF', 'VN', 'AC', 'DY', 'YF', 'SP', 'VL', 'LQ', 'YS', 'NY', 'SG', 'AF', 'SQ', 'FP', 'ST', 'TN', 'FG', 'KV', 'CG', 'YY', 'GQ', 'VP', 'SY', 'PR', 'TV', 'KV', 'LV', 'NL', 'DS', 'LF', 'CE', 'FL', 'TL', 'NH', 'VA', 'YP', 'AA', 'DT', 'SV', 'FC', 'VG', 'IP', 'RK', 'GK', 'DS', 'ET', 'VN', 'RL', 'QV', 'IK', 'AN', 'PK', 'GC', 'QV', 'TN', 'GF']\n" ], [ "s126 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_126 = []\n\nfor i in range(len(s126)):\n if(0<=i<97):\n c1 = s126[i]\n c2 = s126[i+126]\n lambda_126.append(c1+c2)\n else:\n break\nprint(lambda_126)", "['RV', 'VG', 'QG', 'PN', 'TY', 'EN', 'SY', 'IL', 'VY', 'RR', 'FL', 'PF', 'NR', 'IK', 'TS', 'NN', 'LL', 'CK', 'PP', 'FF', 'GE', 'ER', 'VD', 'FI', 'NS', 'AT', 'TE', 'RI', 'FY', 'AQ', 'SA', 'VG', 'YS', 'AT', 'WP', 'NC', 'RN', 'KG', 'RV', 'IE', 'SG', 'NF', 'CN', 'VC', 'AY', 'DF', 'YP', 'SL', 'VQ', 'LS', 'YY', 'NG', 'SF', 'AQ', 'SP', 'FT', 'SN', 'TG', 'FV', 'KG', 'CY', 'YQ', 'GP', 'VY', 'SR', 'PV', 'TV', 'KV', 'LL', 'NS', 'DF', 'LE', 'CL', 'FL', 'TH', 'NA', 'VP', 'YA', 'AT', 'DV', 'SC', 'FG', 'VP', 'IK', 'RK', 'GS', 'DT', 'EN', 'VL', 'RV', 'QK', 'IN', 'AK', 'PC', 'GV', 'QN', 'TF']\n" ], [ "s127 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_127 = []\n\nfor i in range(len(s127)):\n if(0<=i<96):\n c1 = s127[i]\n c2 = s127[i+127]\n lambda_127.append(c1+c2)\n else:\n 
break\nprint(lambda_127)", "['RG', 'VG', 'QN', 'PY', 'TN', 'EY', 'SL', 'IY', 'VR', 'RL', 'FF', 'PR', 'NK', 'IS', 'TN', 'NL', 'LK', 'CP', 'PF', 'FE', 'GR', 'ED', 'VI', 'FS', 'NT', 'AE', 'TI', 'RY', 'FQ', 'AA', 'SG', 'VS', 'YT', 'AP', 'WC', 'NN', 'RG', 'KV', 'RE', 'IG', 'SF', 'NN', 'CC', 'VY', 'AF', 'DP', 'YL', 'SQ', 'VS', 'LY', 'YG', 'NF', 'SQ', 'AP', 'ST', 'FN', 'SG', 'TV', 'FG', 'KY', 'CQ', 'YP', 'GY', 'VR', 'SV', 'PV', 'TV', 'KL', 'LS', 'NF', 'DE', 'LL', 'CL', 'FH', 'TA', 'NP', 'VA', 'YT', 'AV', 'DC', 'SG', 'FP', 'VK', 'IK', 'RS', 'GT', 'DN', 'EL', 'VV', 'RK', 'QN', 'IK', 'AC', 'PV', 'GN', 'QF']\n" ], [ "s128 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_128 = []\n\nfor i in range(len(s128)):\n if(0<=i<95):\n c1 = s128[i]\n c2 = s128[i+128]\n lambda_128.append(c1+c2)\n else:\n break\nprint(lambda_128)", "['RG', 'VN', 'QY', 'PN', 'TY', 'EL', 'SY', 'IR', 'VL', 'RF', 'FR', 'PK', 'NS', 'IN', 'TL', 'NK', 'LP', 'CF', 'PE', 'FR', 'GD', 'EI', 'VS', 'FT', 'NE', 'AI', 'TY', 'RQ', 'FA', 'AG', 'SS', 'VT', 'YP', 'AC', 'WN', 'NG', 'RV', 'KE', 'RG', 'IF', 'SN', 'NC', 'CY', 'VF', 'AP', 'DL', 'YQ', 'SS', 'VY', 'LG', 'YF', 'NQ', 'SP', 'AT', 'SN', 'FG', 'SV', 'TG', 'FY', 'KQ', 'CP', 'YY', 'GR', 'VV', 'SV', 'PV', 'TL', 'KS', 'LF', 'NE', 'DL', 'LL', 'CH', 'FA', 'TP', 'NA', 'VT', 'YV', 'AC', 'DG', 'SP', 'FK', 'VK', 'IS', 'RT', 'GN', 'DL', 'EV', 'VK', 'RN', 'QK', 'IC', 'AV', 'PN', 'GF']\n" ], [ "s129 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_129 = []\n\nfor i in range(len(s129)):\n if(0<=i<94):\n c1 = s129[i]\n c2 = s129[i+129]\n lambda_129.append(c1+c2)\n else:\n break\nprint(lambda_129)", "['RN', 'VY', 'QN', 'PY', 'TL', 'EY', 'SR', 'IL', 'VF', 'RR', 'FK', 'PS', 'NN', 'IL', 'TK', 'NP', 'LF', 'CE', 'PR', 'FD', 'GI', 'ES', 'VT', 'FE', 'NI', 'AY', 'TQ', 'RA', 'FG', 'AS', 'ST', 'VP', 'YC', 'AN', 'WG', 'NV', 'RE', 'KG', 'RF', 'IN', 'SC', 'NY', 'CF', 'VP', 'AL', 'DQ', 'YS', 'SY', 'VG', 'LF', 'YQ', 'NP', 'ST', 'AN', 'SG', 'FV', 'SG', 'TY', 'FQ', 'KP', 'CY', 'YR', 'GV', 'VV', 'SV', 'PL', 'TS', 'KF', 'LE', 'NL', 'DL', 'LH', 'CA', 'FP', 'TA', 'NT', 'VV', 'YC', 'AG', 'DP', 'SK', 'FK', 'VS', 'IT', 'RN', 'GL', 'DV', 'EK', 'VN', 'RK', 'QC', 'IV', 'AN', 'PF']\n" ], [ "s130 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_130 = []\n\nfor i in range(len(s130)):\n if(0<=i<93):\n c1 = s130[i]\n c2 = s130[i+130]\n lambda_130.append(c1+c2)\n else:\n break\nprint(lambda_130)", "['RY', 'VN', 'QY', 'PL', 'TY', 'ER', 'SL', 'IF', 'VR', 'RK', 'FS', 'PN', 'NL', 'IK', 'TP', 'NF', 'LE', 'CR', 'PD', 'FI', 'GS', 'ET', 'VE', 'FI', 'NY', 'AQ', 'TA', 'RG', 'FS', 'AT', 'SP', 'VC', 'YN', 'AG', 'WV', 'NE', 'RG', 'KF', 'RN', 'IC', 'SY', 'NF', 'CP', 'VL', 'AQ', 'DS', 'YY', 'SG', 'VF', 'LQ', 'YP', 'NT', 'SN', 'AG', 'SV', 'FG', 'SY', 'TQ', 'FP', 'KY', 'CR', 'YV', 'GV', 'VV', 'SL', 'PS', 'TF', 'KE', 'LL', 'NL', 'DH', 'LA', 'CP', 'FA', 'TT', 'NV', 'VC', 'YG', 'AP', 'DK', 'SK', 'FS', 'VT', 'IN', 'RL', 'GV', 'DK', 'EN', 'VK', 'RC', 'QV', 'IN', 'AF']\n" ], [ "s131 = 
\"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_131 = []\n\nfor i in range(len(s131)):\n if(0<=i<92):\n c1 = s131[i]\n c2 = s131[i+131]\n lambda_131.append(c1+c2)\n else:\n break\nprint(lambda_131)", "['RN', 'VY', 'QL', 'PY', 'TR', 'EL', 'SF', 'IR', 'VK', 'RS', 'FN', 'PL', 'NK', 'IP', 'TF', 'NE', 'LR', 'CD', 'PI', 'FS', 'GT', 'EE', 'VI', 'FY', 'NQ', 'AA', 'TG', 'RS', 'FT', 'AP', 'SC', 'VN', 'YG', 'AV', 'WE', 'NG', 'RF', 'KN', 'RC', 'IY', 'SF', 'NP', 'CL', 'VQ', 'AS', 'DY', 'YG', 'SF', 'VQ', 'LP', 'YT', 'NN', 'SG', 'AV', 'SG', 'FY', 'SQ', 'TP', 'FY', 'KR', 'CV', 'YV', 'GV', 'VL', 'SS', 'PF', 'TE', 'KL', 'LL', 'NH', 'DA', 'LP', 'CA', 'FT', 'TV', 'NC', 'VG', 'YP', 'AK', 'DK', 'SS', 'FT', 'VN', 'IL', 'RV', 'GK', 'DN', 'EK', 'VC', 'RV', 'QN', 'IF']\n" ], [ "s132 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_132 = []\n\nfor i in range(len(s132)):\n if(0<=i<91):\n c1 = s132[i]\n c2 = s132[i+132]\n lambda_132.append(c1+c2)\n else:\n break\nprint(lambda_132)", "['RY', 'VL', 'QY', 'PR', 'TL', 'EF', 'SR', 'IK', 'VS', 'RN', 'FL', 'PK', 'NP', 'IF', 'TE', 'NR', 'LD', 'CI', 'PS', 'FT', 'GE', 'EI', 'VY', 'FQ', 'NA', 'AG', 'TS', 'RT', 'FP', 'AC', 'SN', 'VG', 'YV', 'AE', 'WG', 'NF', 'RN', 'KC', 'RY', 'IF', 'SP', 'NL', 'CQ', 'VS', 'AY', 'DG', 'YF', 'SQ', 'VP', 'LT', 'YN', 'NG', 'SV', 'AG', 'SY', 'FQ', 'SP', 'TY', 'FR', 'KV', 'CV', 'YV', 'GL', 'VS', 'SF', 'PE', 'TL', 'KL', 'LH', 'NA', 'DP', 'LA', 'CT', 'FV', 'TC', 'NG', 'VP', 'YK', 'AK', 'DS', 'ST', 'FN', 'VL', 'IV', 'RK', 'GN', 'DK', 'EC', 'VV', 'RN', 'QF']\n" ], [ "s133 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_133 = []\n\nfor i in range(len(s133)):\n if(0<=i<90):\n c1 = s133[i]\n c2 = s133[i+133]\n lambda_133.append(c1+c2)\n else:\n break\nprint(lambda_133)", "['RL', 'VY', 'QR', 'PL', 'TF', 'ER', 'SK', 'IS', 'VN', 'RL', 'FK', 'PP', 'NF', 'IE', 'TR', 'ND', 'LI', 'CS', 'PT', 'FE', 'GI', 'EY', 'VQ', 'FA', 'NG', 'AS', 'TT', 'RP', 'FC', 'AN', 'SG', 'VV', 'YE', 'AG', 'WF', 'NN', 'RC', 'KY', 'RF', 'IP', 'SL', 'NQ', 'CS', 'VY', 'AG', 'DF', 'YQ', 'SP', 'VT', 'LN', 'YG', 'NV', 'SG', 'AY', 'SQ', 'FP', 'SY', 'TR', 'FV', 'KV', 'CV', 'YL', 'GS', 'VF', 'SE', 'PL', 'TL', 'KH', 'LA', 'NP', 'DA', 'LT', 'CV', 'FC', 'TG', 'NP', 'VK', 'YK', 'AS', 'DT', 'SN', 'FL', 'VV', 'IK', 'RN', 'GK', 'DC', 'EV', 'VN', 'RF']\n" ], [ "s134 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_134 = []\n\nfor i in range(len(s134)):\n if(0<=i<89):\n c1 = s134[i]\n c2 = s134[i+134]\n lambda_134.append(c1+c2)\n else:\n break\nprint(lambda_134)", "['RY', 'VR', 'QL', 'PF', 'TR', 'EK', 'SS', 'IN', 'VL', 'RK', 'FP', 'PF', 'NE', 'IR', 'TD', 'NI', 'LS', 'CT', 'PE', 'FI', 'GY', 'EQ', 'VA', 'FG', 'NS', 'AT', 'TP', 'RC', 'FN', 'AG', 'SV', 'VE', 'YG', 'AF', 'WN', 'NC', 'RY', 'KF', 'RP', 'IL', 'SQ', 'NS', 'CY', 'VG', 'AF', 'DQ', 'YP', 'ST', 'VN', 'LG', 
'YV', 'NG', 'SY', 'AQ', 'SP', 'FY', 'SR', 'TV', 'FV', 'KV', 'CL', 'YS', 'GF', 'VE', 'SL', 'PL', 'TH', 'KA', 'LP', 'NA', 'DT', 'LV', 'CC', 'FG', 'TP', 'NK', 'VK', 'YS', 'AT', 'DN', 'SL', 'FV', 'VK', 'IN', 'RK', 'GC', 'DV', 'EN', 'VF']\n" ], [ "s135 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_135 = []\n\nfor i in range(len(s135)):\n if(0<=i<88):\n c1 = s135[i]\n c2 = s135[i+135]\n lambda_135.append(c1+c2)\n else:\n break\nprint(lambda_135)", "['RR', 'VL', 'QF', 'PR', 'TK', 'ES', 'SN', 'IL', 'VK', 'RP', 'FF', 'PE', 'NR', 'ID', 'TI', 'NS', 'LT', 'CE', 'PI', 'FY', 'GQ', 'EA', 'VG', 'FS', 'NT', 'AP', 'TC', 'RN', 'FG', 'AV', 'SE', 'VG', 'YF', 'AN', 'WC', 'NY', 'RF', 'KP', 'RL', 'IQ', 'SS', 'NY', 'CG', 'VF', 'AQ', 'DP', 'YT', 'SN', 'VG', 'LV', 'YG', 'NY', 'SQ', 'AP', 'SY', 'FR', 'SV', 'TV', 'FV', 'KL', 'CS', 'YF', 'GE', 'VL', 'SL', 'PH', 'TA', 'KP', 'LA', 'NT', 'DV', 'LC', 'CG', 'FP', 'TK', 'NK', 'VS', 'YT', 'AN', 'DL', 'SV', 'FK', 'VN', 'IK', 'RC', 'GV', 'DN', 'EF']\n" ], [ "s136 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_136 = []\n\nfor i in range(len(s136)):\n if(0<=i<87):\n c1 = s136[i]\n c2 = s136[i+136]\n lambda_136.append(c1+c2)\n else:\n break\nprint(lambda_136)", "['RL', 'VF', 'QR', 'PK', 'TS', 'EN', 'SL', 'IK', 'VP', 'RF', 'FE', 'PR', 'ND', 'II', 'TS', 'NT', 'LE', 'CI', 'PY', 'FQ', 'GA', 'EG', 'VS', 'FT', 'NP', 'AC', 'TN', 'RG', 'FV', 'AE', 'SG', 'VF', 'YN', 'AC', 'WY', 'NF', 'RP', 'KL', 'RQ', 'IS', 'SY', 'NG', 'CF', 'VQ', 'AP', 'DT', 'YN', 'SG', 'VV', 'LG', 'YY', 'NQ', 'SP', 'AY', 'SR', 'FV', 'SV', 'TV', 'FL', 'KS', 'CF', 'YE', 'GL', 'VL', 'SH', 'PA', 'TP', 'KA', 'LT', 'NV', 'DC', 'LG', 'CP', 'FK', 'TK', 'NS', 'VT', 'YN', 'AL', 'DV', 'SK', 'FN', 'VK', 'IC', 'RV', 'GN', 'DF']\n" ], [ "s137 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_137 = []\n\nfor i in range(len(s137)):\n if(0<=i<86):\n c1 = s137[i]\n c2 = s137[i+137]\n lambda_137.append(c1+c2)\n else:\n break\nprint(lambda_137)", "['RF', 'VR', 'QK', 'PS', 'TN', 'EL', 'SK', 'IP', 'VF', 'RE', 'FR', 'PD', 'NI', 'IS', 'TT', 'NE', 'LI', 'CY', 'PQ', 'FA', 'GG', 'ES', 'VT', 'FP', 'NC', 'AN', 'TG', 'RV', 'FE', 'AG', 'SF', 'VN', 'YC', 'AY', 'WF', 'NP', 'RL', 'KQ', 'RS', 'IY', 'SG', 'NF', 'CQ', 'VP', 'AT', 'DN', 'YG', 'SV', 'VG', 'LY', 'YQ', 'NP', 'SY', 'AR', 'SV', 'FV', 'SV', 'TL', 'FS', 'KF', 'CE', 'YL', 'GL', 'VH', 'SA', 'PP', 'TA', 'KT', 'LV', 'NC', 'DG', 'LP', 'CK', 'FK', 'TS', 'NT', 'VN', 'YL', 'AV', 'DK', 'SN', 'FK', 'VC', 'IV', 'RN', 'GF']\n" ], [ "s138 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_138 = []\n\nfor i in range(len(s138)):\n if(0<=i<85):\n c1 = s138[i]\n c2 = s138[i+138]\n lambda_138.append(c1+c2)\n else:\n break\nprint(lambda_138)", "['RR', 'VK', 'QS', 'PN', 'TL', 'EK', 'SP', 'IF', 'VE', 'RR', 'FD', 'PI', 'NS', 'IT', 'TE', 'NI', 'LY', 'CQ', 'PA', 'FG', 
'GS', 'ET', 'VP', 'FC', 'NN', 'AG', 'TV', 'RE', 'FG', 'AF', 'SN', 'VC', 'YY', 'AF', 'WP', 'NL', 'RQ', 'KS', 'RY', 'IG', 'SF', 'NQ', 'CP', 'VT', 'AN', 'DG', 'YV', 'SG', 'VY', 'LQ', 'YP', 'NY', 'SR', 'AV', 'SV', 'FV', 'SL', 'TS', 'FF', 'KE', 'CL', 'YL', 'GH', 'VA', 'SP', 'PA', 'TT', 'KV', 'LC', 'NG', 'DP', 'LK', 'CK', 'FS', 'TT', 'NN', 'VL', 'YV', 'AK', 'DN', 'SK', 'FC', 'VV', 'IN', 'RF']\n" ], [ "s139 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_139 = []\n\nfor i in range(len(s139)):\n if(0<=i<84):\n c1 = s139[i]\n c2 = s139[i+139]\n lambda_139.append(c1+c2)\n else:\n break\nprint(lambda_139)", "['RK', 'VS', 'QN', 'PL', 'TK', 'EP', 'SF', 'IE', 'VR', 'RD', 'FI', 'PS', 'NT', 'IE', 'TI', 'NY', 'LQ', 'CA', 'PG', 'FS', 'GT', 'EP', 'VC', 'FN', 'NG', 'AV', 'TE', 'RG', 'FF', 'AN', 'SC', 'VY', 'YF', 'AP', 'WL', 'NQ', 'RS', 'KY', 'RG', 'IF', 'SQ', 'NP', 'CT', 'VN', 'AG', 'DV', 'YG', 'SY', 'VQ', 'LP', 'YY', 'NR', 'SV', 'AV', 'SV', 'FL', 'SS', 'TF', 'FE', 'KL', 'CL', 'YH', 'GA', 'VP', 'SA', 'PT', 'TV', 'KC', 'LG', 'NP', 'DK', 'LK', 'CS', 'FT', 'TN', 'NL', 'VV', 'YK', 'AN', 'DK', 'SC', 'FV', 'VN', 'IF']\n" ], [ "s140 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_140 = []\n\nfor i in range(len(s140)):\n if(0<=i<83):\n c1 = s140[i]\n c2 = s140[i+140]\n lambda_140.append(c1+c2)\n else:\n break\nprint(lambda_140)", "['RS', 'VN', 'QL', 'PK', 'TP', 'EF', 'SE', 'IR', 'VD', 'RI', 'FS', 'PT', 'NE', 'II', 'TY', 'NQ', 'LA', 'CG', 'PS', 'FT', 'GP', 'EC', 'VN', 'FG', 'NV', 'AE', 'TG', 'RF', 'FN', 'AC', 'SY', 'VF', 'YP', 'AL', 'WQ', 'NS', 'RY', 'KG', 'RF', 'IQ', 'SP', 'NT', 'CN', 'VG', 'AV', 'DG', 'YY', 'SQ', 'VP', 'LY', 'YR', 'NV', 'SV', 'AV', 'SL', 'FS', 'SF', 'TE', 'FL', 'KL', 'CH', 'YA', 'GP', 'VA', 'ST', 'PV', 'TC', 'KG', 'LP', 'NK', 'DK', 'LS', 'CT', 'FN', 'TL', 'NV', 'VK', 'YN', 'AK', 'DC', 'SV', 'FN', 'VF']\n" ], [ "s141 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_141 = []\n\nfor i in range(len(s141)):\n if(0<=i<82):\n c1 = s141[i]\n c2 = s141[i+141]\n lambda_141.append(c1+c2)\n else:\n break\nprint(lambda_141)", "['RN', 'VL', 'QK', 'PP', 'TF', 'EE', 'SR', 'ID', 'VI', 'RS', 'FT', 'PE', 'NI', 'IY', 'TQ', 'NA', 'LG', 'CS', 'PT', 'FP', 'GC', 'EN', 'VG', 'FV', 'NE', 'AG', 'TF', 'RN', 'FC', 'AY', 'SF', 'VP', 'YL', 'AQ', 'WS', 'NY', 'RG', 'KF', 'RQ', 'IP', 'ST', 'NN', 'CG', 'VV', 'AG', 'DY', 'YQ', 'SP', 'VY', 'LR', 'YV', 'NV', 'SV', 'AL', 'SS', 'FF', 'SE', 'TL', 'FL', 'KH', 'CA', 'YP', 'GA', 'VT', 'SV', 'PC', 'TG', 'KP', 'LK', 'NK', 'DS', 'LT', 'CN', 'FL', 'TV', 'NK', 'VN', 'YK', 'AC', 'DV', 'SN', 'FF']\n" ], [ "s142 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_142 = []\n\nfor i in range(len(s142)):\n if(0<=i<81):\n c1 = s142[i]\n c2 = s142[i+142]\n lambda_142.append(c1+c2)\n else:\n break\nprint(lambda_142)", "['RL', 'VK', 'QP', 'PF', 'TE', 'ER', 
'SD', 'II', 'VS', 'RT', 'FE', 'PI', 'NY', 'IQ', 'TA', 'NG', 'LS', 'CT', 'PP', 'FC', 'GN', 'EG', 'VV', 'FE', 'NG', 'AF', 'TN', 'RC', 'FY', 'AF', 'SP', 'VL', 'YQ', 'AS', 'WY', 'NG', 'RF', 'KQ', 'RP', 'IT', 'SN', 'NG', 'CV', 'VG', 'AY', 'DQ', 'YP', 'SY', 'VR', 'LV', 'YV', 'NV', 'SL', 'AS', 'SF', 'FE', 'SL', 'TL', 'FH', 'KA', 'CP', 'YA', 'GT', 'VV', 'SC', 'PG', 'TP', 'KK', 'LK', 'NS', 'DT', 'LN', 'CL', 'FV', 'TK', 'NN', 'VK', 'YC', 'AV', 'DN', 'SF']\n" ], [ "s143 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_143 = []\n\nfor i in range(len(s143)):\n if(0<=i<80):\n c1 = s143[i]\n c2 = s143[i+143]\n lambda_143.append(c1+c2)\n else:\n break\nprint(lambda_143)", "['RK', 'VP', 'QF', 'PE', 'TR', 'ED', 'SI', 'IS', 'VT', 'RE', 'FI', 'PY', 'NQ', 'IA', 'TG', 'NS', 'LT', 'CP', 'PC', 'FN', 'GG', 'EV', 'VE', 'FG', 'NF', 'AN', 'TC', 'RY', 'FF', 'AP', 'SL', 'VQ', 'YS', 'AY', 'WG', 'NF', 'RQ', 'KP', 'RT', 'IN', 'SG', 'NV', 'CG', 'VY', 'AQ', 'DP', 'YY', 'SR', 'VV', 'LV', 'YV', 'NL', 'SS', 'AF', 'SE', 'FL', 'SL', 'TH', 'FA', 'KP', 'CA', 'YT', 'GV', 'VC', 'SG', 'PP', 'TK', 'KK', 'LS', 'NT', 'DN', 'LL', 'CV', 'FK', 'TN', 'NK', 'VC', 'YV', 'AN', 'DF']\n" ], [ "s144 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_144 = []\n\nfor i in range(len(s144)):\n if(0<=i<79):\n c1 = s144[i]\n c2 = s144[i+144]\n lambda_144.append(c1+c2)\n else:\n break\nprint(lambda_144)", "['RP', 'VF', 'QE', 'PR', 'TD', 'EI', 'SS', 'IT', 'VE', 'RI', 'FY', 'PQ', 'NA', 'IG', 'TS', 'NT', 'LP', 'CC', 'PN', 'FG', 'GV', 'EE', 'VG', 'FF', 'NN', 'AC', 'TY', 'RF', 'FP', 'AL', 'SQ', 'VS', 'YY', 'AG', 'WF', 'NQ', 'RP', 'KT', 'RN', 'IG', 'SV', 'NG', 'CY', 'VQ', 'AP', 'DY', 'YR', 'SV', 'VV', 'LV', 'YL', 'NS', 'SF', 'AE', 'SL', 'FL', 'SH', 'TA', 'FP', 'KA', 'CT', 'YV', 'GC', 'VG', 'SP', 'PK', 'TK', 'KS', 'LT', 'NN', 'DL', 'LV', 'CK', 'FN', 'TK', 'NC', 'VV', 'YN', 'AF']\n" ], [ "s145 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_145 = []\n\nfor i in range(len(s145)):\n if(0<=i<78):\n c1 = s145[i]\n c2 = s145[i+145]\n lambda_145.append(c1+c2)\n else:\n break\nprint(lambda_145)", "['RF', 'VE', 'QR', 'PD', 'TI', 'ES', 'ST', 'IE', 'VI', 'RY', 'FQ', 'PA', 'NG', 'IS', 'TT', 'NP', 'LC', 'CN', 'PG', 'FV', 'GE', 'EG', 'VF', 'FN', 'NC', 'AY', 'TF', 'RP', 'FL', 'AQ', 'SS', 'VY', 'YG', 'AF', 'WQ', 'NP', 'RT', 'KN', 'RG', 'IV', 'SG', 'NY', 'CQ', 'VP', 'AY', 'DR', 'YV', 'SV', 'VV', 'LL', 'YS', 'NF', 'SE', 'AL', 'SL', 'FH', 'SA', 'TP', 'FA', 'KT', 'CV', 'YC', 'GG', 'VP', 'SK', 'PK', 'TS', 'KT', 'LN', 'NL', 'DV', 'LK', 'CN', 'FK', 'TC', 'NV', 'VN', 'YF']\n" ], [ "s146 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_146 = []\n\nfor i in range(len(s146)):\n if(0<=i<77):\n c1 = s146[i]\n c2 = s146[i+146]\n lambda_146.append(c1+c2)\n else:\n break\nprint(lambda_146)", "['RE', 'VR', 'QD', 'PI', 'TS', 'ET', 'SE', 'II', 
'VY', 'RQ', 'FA', 'PG', 'NS', 'IT', 'TP', 'NC', 'LN', 'CG', 'PV', 'FE', 'GG', 'EF', 'VN', 'FC', 'NY', 'AF', 'TP', 'RL', 'FQ', 'AS', 'SY', 'VG', 'YF', 'AQ', 'WP', 'NT', 'RN', 'KG', 'RV', 'IG', 'SY', 'NQ', 'CP', 'VY', 'AR', 'DV', 'YV', 'SV', 'VL', 'LS', 'YF', 'NE', 'SL', 'AL', 'SH', 'FA', 'SP', 'TA', 'FT', 'KV', 'CC', 'YG', 'GP', 'VK', 'SK', 'PS', 'TT', 'KN', 'LL', 'NV', 'DK', 'LN', 'CK', 'FC', 'TV', 'NN', 'VF']\n" ], [ "s147 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_147 = []\n\nfor i in range(len(s147)):\n if(0<=i<76):\n c1 = s147[i]\n c2 = s147[i+147]\n lambda_147.append(c1+c2)\n else:\n break\nprint(lambda_147)", "['RR', 'VD', 'QI', 'PS', 'TT', 'EE', 'SI', 'IY', 'VQ', 'RA', 'FG', 'PS', 'NT', 'IP', 'TC', 'NN', 'LG', 'CV', 'PE', 'FG', 'GF', 'EN', 'VC', 'FY', 'NF', 'AP', 'TL', 'RQ', 'FS', 'AY', 'SG', 'VF', 'YQ', 'AP', 'WT', 'NN', 'RG', 'KV', 'RG', 'IY', 'SQ', 'NP', 'CY', 'VR', 'AV', 'DV', 'YV', 'SL', 'VS', 'LF', 'YE', 'NL', 'SL', 'AH', 'SA', 'FP', 'SA', 'TT', 'FV', 'KC', 'CG', 'YP', 'GK', 'VK', 'SS', 'PT', 'TN', 'KL', 'LV', 'NK', 'DN', 'LK', 'CC', 'FV', 'TN', 'NF']\n" ], [ "s148 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_148 = []\n\nfor i in range(len(s148)):\n if(0<=i<75):\n c1 = s148[i]\n c2 = s148[i+148]\n lambda_148.append(c1+c2)\n else:\n break\nprint(lambda_148)", "['RD', 'VI', 'QS', 'PT', 'TE', 'EI', 'SY', 'IQ', 'VA', 'RG', 'FS', 'PT', 'NP', 'IC', 'TN', 'NG', 'LV', 'CE', 'PG', 'FF', 'GN', 'EC', 'VY', 'FF', 'NP', 'AL', 'TQ', 'RS', 'FY', 'AG', 'SF', 'VQ', 'YP', 'AT', 'WN', 'NG', 'RV', 'KG', 'RY', 'IQ', 'SP', 'NY', 'CR', 'VV', 'AV', 'DV', 'YL', 'SS', 'VF', 'LE', 'YL', 'NL', 'SH', 'AA', 'SP', 'FA', 'ST', 'TV', 'FC', 'KG', 'CP', 'YK', 'GK', 'VS', 'ST', 'PN', 'TL', 'KV', 'LK', 'NN', 'DK', 'LC', 'CV', 'FN', 'TF']\n" ], [ "s149 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_149 = []\n\nfor i in range(len(s149)):\n if(0<=i<74):\n c1 = s149[i]\n c2 = s149[i+149]\n lambda_149.append(c1+c2)\n else:\n break\nprint(lambda_149)", "['RI', 'VS', 'QT', 'PE', 'TI', 'EY', 'SQ', 'IA', 'VG', 'RS', 'FT', 'PP', 'NC', 'IN', 'TG', 'NV', 'LE', 'CG', 'PF', 'FN', 'GC', 'EY', 'VF', 'FP', 'NL', 'AQ', 'TS', 'RY', 'FG', 'AF', 'SQ', 'VP', 'YT', 'AN', 'WG', 'NV', 'RG', 'KY', 'RQ', 'IP', 'SY', 'NR', 'CV', 'VV', 'AV', 'DL', 'YS', 'SF', 'VE', 'LL', 'YL', 'NH', 'SA', 'AP', 'SA', 'FT', 'SV', 'TC', 'FG', 'KP', 'CK', 'YK', 'GS', 'VT', 'SN', 'PL', 'TV', 'KK', 'LN', 'NK', 'DC', 'LV', 'CN', 'FF']\n" ], [ "s150 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_150 = []\n\nfor i in range(len(s150)):\n if(0<=i<73):\n c1 = s150[i]\n c2 = s150[i+150]\n lambda_150.append(c1+c2)\n else:\n break\nprint(lambda_150)", "['RS', 'VT', 'QE', 'PI', 'TY', 'EQ', 'SA', 'IG', 'VS', 'RT', 'FP', 'PC', 'NN', 'IG', 'TV', 'NE', 'LG', 'CF', 'PN', 'FC', 'GY', 'EF', 'VP', 'FL', 'NQ', 'AS', 
'TY', 'RG', 'FF', 'AQ', 'SP', 'VT', 'YN', 'AG', 'WV', 'NG', 'RY', 'KQ', 'RP', 'IY', 'SR', 'NV', 'CV', 'VV', 'AL', 'DS', 'YF', 'SE', 'VL', 'LL', 'YH', 'NA', 'SP', 'AA', 'ST', 'FV', 'SC', 'TG', 'FP', 'KK', 'CK', 'YS', 'GT', 'VN', 'SL', 'PV', 'TK', 'KN', 'LK', 'NC', 'DV', 'LN', 'CF']\n" ], [ "s151 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_151 = []\n\nfor i in range(len(s151)):\n if(0<=i<72):\n c1 = s151[i]\n c2 = s151[i+151]\n lambda_151.append(c1+c2)\n else:\n break\nprint(lambda_151)", "['RT', 'VE', 'QI', 'PY', 'TQ', 'EA', 'SG', 'IS', 'VT', 'RP', 'FC', 'PN', 'NG', 'IV', 'TE', 'NG', 'LF', 'CN', 'PC', 'FY', 'GF', 'EP', 'VL', 'FQ', 'NS', 'AY', 'TG', 'RF', 'FQ', 'AP', 'ST', 'VN', 'YG', 'AV', 'WG', 'NY', 'RQ', 'KP', 'RY', 'IR', 'SV', 'NV', 'CV', 'VL', 'AS', 'DF', 'YE', 'SL', 'VL', 'LH', 'YA', 'NP', 'SA', 'AT', 'SV', 'FC', 'SG', 'TP', 'FK', 'KK', 'CS', 'YT', 'GN', 'VL', 'SV', 'PK', 'TN', 'KK', 'LC', 'NV', 'DN', 'LF']\n" ], [ "s152 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_152 = []\n\nfor i in range(len(s152)):\n if(0<=i<71):\n c1 = s152[i]\n c2 = s152[i+152]\n lambda_152.append(c1+c2)\n else:\n break\nprint(lambda_152)", "['RE', 'VI', 'QY', 'PQ', 'TA', 'EG', 'SS', 'IT', 'VP', 'RC', 'FN', 'PG', 'NV', 'IE', 'TG', 'NF', 'LN', 'CC', 'PY', 'FF', 'GP', 'EL', 'VQ', 'FS', 'NY', 'AG', 'TF', 'RQ', 'FP', 'AT', 'SN', 'VG', 'YV', 'AG', 'WY', 'NQ', 'RP', 'KY', 'RR', 'IV', 'SV', 'NV', 'CL', 'VS', 'AF', 'DE', 'YL', 'SL', 'VH', 'LA', 'YP', 'NA', 'ST', 'AV', 'SC', 'FG', 'SP', 'TK', 'FK', 'KS', 'CT', 'YN', 'GL', 'VV', 'SK', 'PN', 'TK', 'KC', 'LV', 'NN', 'DF']\n" ], [ "s153 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_153 = []\n\nfor i in range(len(s153)):\n if(0<=i<70):\n c1 = s153[i]\n c2 = s153[i+153]\n lambda_153.append(c1+c2)\n else:\n break\nprint(lambda_153)", "['RI', 'VY', 'QQ', 'PA', 'TG', 'ES', 'ST', 'IP', 'VC', 'RN', 'FG', 'PV', 'NE', 'IG', 'TF', 'NN', 'LC', 'CY', 'PF', 'FP', 'GL', 'EQ', 'VS', 'FY', 'NG', 'AF', 'TQ', 'RP', 'FT', 'AN', 'SG', 'VV', 'YG', 'AY', 'WQ', 'NP', 'RY', 'KR', 'RV', 'IV', 'SV', 'NL', 'CS', 'VF', 'AE', 'DL', 'YL', 'SH', 'VA', 'LP', 'YA', 'NT', 'SV', 'AC', 'SG', 'FP', 'SK', 'TK', 'FS', 'KT', 'CN', 'YL', 'GV', 'VK', 'SN', 'PK', 'TC', 'KV', 'LN', 'NF']\n" ], [ "s154 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_154 = []\n\nfor i in range(len(s154)):\n if(0<=i<69):\n c1 = s154[i]\n c2 = s154[i+154]\n lambda_154.append(c1+c2)\n else:\n break\nprint(lambda_154)", "['RY', 'VQ', 'QA', 'PG', 'TS', 'ET', 'SP', 'IC', 'VN', 'RG', 'FV', 'PE', 'NG', 'IF', 'TN', 'NC', 'LY', 'CF', 'PP', 'FL', 'GQ', 'ES', 'VY', 'FG', 'NF', 'AQ', 'TP', 'RT', 'FN', 'AG', 'SV', 'VG', 'YY', 'AQ', 'WP', 'NY', 'RR', 'KV', 'RV', 'IV', 'SL', 'NS', 'CF', 'VE', 'AL', 'DL', 'YH', 'SA', 'VP', 'LA', 'YT', 'NV', 'SC', 'AG', 'SP', 'FK', 'SK', 'TS', 'FT', 'KN', 
'CL', 'YV', 'GK', 'VN', 'SK', 'PC', 'TV', 'KN', 'LF']\n" ], [ "s155 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_155 = []\n\nfor i in range(len(s155)):\n if(0<=i<68):\n c1 = s155[i]\n c2 = s155[i+155]\n lambda_155.append(c1+c2)\n else:\n break\nprint(lambda_155)", "['RQ', 'VA', 'QG', 'PS', 'TT', 'EP', 'SC', 'IN', 'VG', 'RV', 'FE', 'PG', 'NF', 'IN', 'TC', 'NY', 'LF', 'CP', 'PL', 'FQ', 'GS', 'EY', 'VG', 'FF', 'NQ', 'AP', 'TT', 'RN', 'FG', 'AV', 'SG', 'VY', 'YQ', 'AP', 'WY', 'NR', 'RV', 'KV', 'RV', 'IL', 'SS', 'NF', 'CE', 'VL', 'AL', 'DH', 'YA', 'SP', 'VA', 'LT', 'YV', 'NC', 'SG', 'AP', 'SK', 'FK', 'SS', 'TT', 'FN', 'KL', 'CV', 'YK', 'GN', 'VK', 'SC', 'PV', 'TN', 'KF']\n" ], [ "s156 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_156 = []\n\nfor i in range(len(s156)):\n if(0<=i<67):\n c1 = s156[i]\n c2 = s156[i+156]\n lambda_156.append(c1+c2)\n else:\n break\nprint(lambda_156)", "['RA', 'VG', 'QS', 'PT', 'TP', 'EC', 'SN', 'IG', 'VV', 'RE', 'FG', 'PF', 'NN', 'IC', 'TY', 'NF', 'LP', 'CL', 'PQ', 'FS', 'GY', 'EG', 'VF', 'FQ', 'NP', 'AT', 'TN', 'RG', 'FV', 'AG', 'SY', 'VQ', 'YP', 'AY', 'WR', 'NV', 'RV', 'KV', 'RL', 'IS', 'SF', 'NE', 'CL', 'VL', 'AH', 'DA', 'YP', 'SA', 'VT', 'LV', 'YC', 'NG', 'SP', 'AK', 'SK', 'FS', 'ST', 'TN', 'FL', 'KV', 'CK', 'YN', 'GK', 'VC', 'SV', 'PN', 'TF']\n" ], [ "s157 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_157 = []\n\nfor i in range(len(s157)):\n if(0<=i<66):\n c1 = s157[i]\n c2 = s157[i+157]\n lambda_157.append(c1+c2)\n else:\n break\nprint(lambda_157)", "['RG', 'VS', 'QT', 'PP', 'TC', 'EN', 'SG', 'IV', 'VE', 'RG', 'FF', 'PN', 'NC', 'IY', 'TF', 'NP', 'LL', 'CQ', 'PS', 'FY', 'GG', 'EF', 'VQ', 'FP', 'NT', 'AN', 'TG', 'RV', 'FG', 'AY', 'SQ', 'VP', 'YY', 'AR', 'WV', 'NV', 'RV', 'KL', 'RS', 'IF', 'SE', 'NL', 'CL', 'VH', 'AA', 'DP', 'YA', 'ST', 'VV', 'LC', 'YG', 'NP', 'SK', 'AK', 'SS', 'FT', 'SN', 'TL', 'FV', 'KK', 'CN', 'YK', 'GC', 'VV', 'SN', 'PF']\n" ], [ "s158 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_158 = []\n\nfor i in range(len(s158)):\n if(0<=i<65):\n c1 = s158[i]\n c2 = s158[i+158]\n lambda_158.append(c1+c2)\n else:\n break\nprint(lambda_158)", "['RS', 'VT', 'QP', 'PC', 'TN', 'EG', 'SV', 'IE', 'VG', 'RF', 'FN', 'PC', 'NY', 'IF', 'TP', 'NL', 'LQ', 'CS', 'PY', 'FG', 'GF', 'EQ', 'VP', 'FT', 'NN', 'AG', 'TV', 'RG', 'FY', 'AQ', 'SP', 'VY', 'YR', 'AV', 'WV', 'NV', 'RL', 'KS', 'RF', 'IE', 'SL', 'NL', 'CH', 'VA', 'AP', 'DA', 'YT', 'SV', 'VC', 'LG', 'YP', 'NK', 'SK', 'AS', 'ST', 'FN', 'SL', 'TV', 'FK', 'KN', 'CK', 'YC', 'GV', 'VN', 'SF']\n" ], [ "s159 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_159 = []\n\nfor i 
in range(len(s159)):\n if(0<=i<64):\n c1 = s159[i]\n c2 = s159[i+159]\n lambda_159.append(c1+c2)\n else:\n break\nprint(lambda_159)", "['RT', 'VP', 'QC', 'PN', 'TG', 'EV', 'SE', 'IG', 'VF', 'RN', 'FC', 'PY', 'NF', 'IP', 'TL', 'NQ', 'LS', 'CY', 'PG', 'FF', 'GQ', 'EP', 'VT', 'FN', 'NG', 'AV', 'TG', 'RY', 'FQ', 'AP', 'SY', 'VR', 'YV', 'AV', 'WV', 'NL', 'RS', 'KF', 'RE', 'IL', 'SL', 'NH', 'CA', 'VP', 'AA', 'DT', 'YV', 'SC', 'VG', 'LP', 'YK', 'NK', 'SS', 'AT', 'SN', 'FL', 'SV', 'TK', 'FN', 'KK', 'CC', 'YV', 'GN', 'VF']\n" ], [ "s160 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_160 = []\n\nfor i in range(len(s160)):\n if(0<=i<63):\n c1 = s160[i]\n c2 = s160[i+160]\n lambda_160.append(c1+c2)\n else:\n break\nprint(lambda_160)", "['RP', 'VC', 'QN', 'PG', 'TV', 'EE', 'SG', 'IF', 'VN', 'RC', 'FY', 'PF', 'NP', 'IL', 'TQ', 'NS', 'LY', 'CG', 'PF', 'FQ', 'GP', 'ET', 'VN', 'FG', 'NV', 'AG', 'TY', 'RQ', 'FP', 'AY', 'SR', 'VV', 'YV', 'AV', 'WL', 'NS', 'RF', 'KE', 'RL', 'IL', 'SH', 'NA', 'CP', 'VA', 'AT', 'DV', 'YC', 'SG', 'VP', 'LK', 'YK', 'NS', 'ST', 'AN', 'SL', 'FV', 'SK', 'TN', 'FK', 'KC', 'CV', 'YN', 'GF']\n" ], [ "s161 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_161 = []\n\nfor i in range(len(s161)):\n if(0<=i<62):\n c1 = s161[i]\n c2 = s161[i+161]\n lambda_161.append(c1+c2)\n else:\n break\nprint(lambda_161)", "['RC', 'VN', 'QG', 'PV', 'TE', 'EG', 'SF', 'IN', 'VC', 'RY', 'FF', 'PP', 'NL', 'IQ', 'TS', 'NY', 'LG', 'CF', 'PQ', 'FP', 'GT', 'EN', 'VG', 'FV', 'NG', 'AY', 'TQ', 'RP', 'FY', 'AR', 'SV', 'VV', 'YV', 'AL', 'WS', 'NF', 'RE', 'KL', 'RL', 'IH', 'SA', 'NP', 'CA', 'VT', 'AV', 'DC', 'YG', 'SP', 'VK', 'LK', 'YS', 'NT', 'SN', 'AL', 'SV', 'FK', 'SN', 'TK', 'FC', 'KV', 'CN', 'YF']\n" ], [ "s162 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_162 = []\n\nfor i in range(len(s162)):\n if(0<=i<61):\n c1 = s162[i]\n c2 = s162[i+162]\n lambda_162.append(c1+c2)\n else:\n break\nprint(lambda_162)", "['RN', 'VG', 'QV', 'PE', 'TG', 'EF', 'SN', 'IC', 'VY', 'RF', 'FP', 'PL', 'NQ', 'IS', 'TY', 'NG', 'LF', 'CQ', 'PP', 'FT', 'GN', 'EG', 'VV', 'FG', 'NY', 'AQ', 'TP', 'RY', 'FR', 'AV', 'SV', 'VV', 'YL', 'AS', 'WF', 'NE', 'RL', 'KL', 'RH', 'IA', 'SP', 'NA', 'CT', 'VV', 'AC', 'DG', 'YP', 'SK', 'VK', 'LS', 'YT', 'NN', 'SL', 'AV', 'SK', 'FN', 'SK', 'TC', 'FV', 'KN', 'CF']\n" ], [ "s163 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_163 = []\n\nfor i in range(len(s163)):\n if(0<=i<60):\n c1 = s163[i]\n c2 = s163[i+163]\n lambda_163.append(c1+c2)\n else:\n break\nprint(lambda_163)", "['RG', 'VV', 'QE', 'PG', 'TF', 'EN', 'SC', 'IY', 'VF', 'RP', 'FL', 'PQ', 'NS', 'IY', 'TG', 'NF', 'LQ', 'CP', 'PT', 'FN', 'GG', 'EV', 'VG', 'FY', 'NQ', 'AP', 'TY', 'RR', 'FV', 'AV', 'SV', 'VL', 'YS', 'AF', 'WE', 'NL', 'RL', 'KH', 'RA', 'IP', 'SA', 'NT', 'CV', 'VC', 'AG', 'DP', 'YK', 
'SK', 'VS', 'LT', 'YN', 'NL', 'SV', 'AK', 'SN', 'FK', 'SC', 'TV', 'FN', 'KF']\n" ], [ "s164 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_164 = []\n\nfor i in range(len(s164)):\n if(0<=i<59):\n c1 = s164[i]\n c2 = s164[i+164]\n lambda_164.append(c1+c2)\n else:\n break\nprint(lambda_164)", "['RV', 'VE', 'QG', 'PF', 'TN', 'EC', 'SY', 'IF', 'VP', 'RL', 'FQ', 'PS', 'NY', 'IG', 'TF', 'NQ', 'LP', 'CT', 'PN', 'FG', 'GV', 'EG', 'VY', 'FQ', 'NP', 'AY', 'TR', 'RV', 'FV', 'AV', 'SL', 'VS', 'YF', 'AE', 'WL', 'NL', 'RH', 'KA', 'RP', 'IA', 'ST', 'NV', 'CC', 'VG', 'AP', 'DK', 'YK', 'SS', 'VT', 'LN', 'YL', 'NV', 'SK', 'AN', 'SK', 'FC', 'SV', 'TN', 'FF']\n" ], [ "s165 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_165 = []\n\nfor i in range(len(s165)):\n if(0<=i<58):\n c1 = s165[i]\n c2 = s165[i+165]\n lambda_165.append(c1+c2)\n else:\n break\nprint(lambda_165)", "['RE', 'VG', 'QF', 'PN', 'TC', 'EY', 'SF', 'IP', 'VL', 'RQ', 'FS', 'PY', 'NG', 'IF', 'TQ', 'NP', 'LT', 'CN', 'PG', 'FV', 'GG', 'EY', 'VQ', 'FP', 'NY', 'AR', 'TV', 'RV', 'FV', 'AL', 'SS', 'VF', 'YE', 'AL', 'WL', 'NH', 'RA', 'KP', 'RA', 'IT', 'SV', 'NC', 'CG', 'VP', 'AK', 'DK', 'YS', 'ST', 'VN', 'LL', 'YV', 'NK', 'SN', 'AK', 'SC', 'FV', 'SN', 'TF']\n" ], [ "s166 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_166 = []\n\nfor i in range(len(s166)):\n if(0<=i<57):\n c1 = s166[i]\n c2 = s166[i+166]\n lambda_166.append(c1+c2)\n else:\n break\nprint(lambda_166)", "['RG', 'VF', 'QN', 'PC', 'TY', 'EF', 'SP', 'IL', 'VQ', 'RS', 'FY', 'PG', 'NF', 'IQ', 'TP', 'NT', 'LN', 'CG', 'PV', 'FG', 'GY', 'EQ', 'VP', 'FY', 'NR', 'AV', 'TV', 'RV', 'FL', 'AS', 'SF', 'VE', 'YL', 'AL', 'WH', 'NA', 'RP', 'KA', 'RT', 'IV', 'SC', 'NG', 'CP', 'VK', 'AK', 'DS', 'YT', 'SN', 'VL', 'LV', 'YK', 'NN', 'SK', 'AC', 'SV', 'FN', 'SF']\n" ], [ "s167 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_167 = []\n\nfor i in range(len(s167)):\n if(0<=i<56):\n c1 = s167[i]\n c2 = s167[i+167]\n lambda_167.append(c1+c2)\n else:\n break\nprint(lambda_167)", "['RF', 'VN', 'QC', 'PY', 'TF', 'EP', 'SL', 'IQ', 'VS', 'RY', 'FG', 'PF', 'NQ', 'IP', 'TT', 'NN', 'LG', 'CV', 'PG', 'FY', 'GQ', 'EP', 'VY', 'FR', 'NV', 'AV', 'TV', 'RL', 'FS', 'AF', 'SE', 'VL', 'YL', 'AH', 'WA', 'NP', 'RA', 'KT', 'RV', 'IC', 'SG', 'NP', 'CK', 'VK', 'AS', 'DT', 'YN', 'SL', 'VV', 'LK', 'YN', 'NK', 'SC', 'AV', 'SN', 'FF']\n" ], [ "s168 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_168 = []\n\nfor i in range(len(s168)):\n if(0<=i<55):\n c1 = s168[i]\n c2 = s168[i+168]\n lambda_168.append(c1+c2)\n else:\n break\nprint(lambda_168)", "['RN', 'VC', 'QY', 'PF', 'TP', 'EL', 'SQ', 'IS', 'VY', 
'RG', 'FF', 'PQ', 'NP', 'IT', 'TN', 'NG', 'LV', 'CG', 'PY', 'FQ', 'GP', 'EY', 'VR', 'FV', 'NV', 'AV', 'TL', 'RS', 'FF', 'AE', 'SL', 'VL', 'YH', 'AA', 'WP', 'NA', 'RT', 'KV', 'RC', 'IG', 'SP', 'NK', 'CK', 'VS', 'AT', 'DN', 'YL', 'SV', 'VK', 'LN', 'YK', 'NC', 'SV', 'AN', 'SF']\n" ], [ "s169 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_169 = []\n\nfor i in range(len(s169)):\n if(0<=i<54):\n c1 = s169[i]\n c2 = s169[i+169]\n lambda_169.append(c1+c2)\n else:\n break\nprint(lambda_169)", "['RC', 'VY', 'QF', 'PP', 'TL', 'EQ', 'SS', 'IY', 'VG', 'RF', 'FQ', 'PP', 'NT', 'IN', 'TG', 'NV', 'LG', 'CY', 'PQ', 'FP', 'GY', 'ER', 'VV', 'FV', 'NV', 'AL', 'TS', 'RF', 'FE', 'AL', 'SL', 'VH', 'YA', 'AP', 'WA', 'NT', 'RV', 'KC', 'RG', 'IP', 'SK', 'NK', 'CS', 'VT', 'AN', 'DL', 'YV', 'SK', 'VN', 'LK', 'YC', 'NV', 'SN', 'AF']\n" ], [ "s170 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_170 = []\n\nfor i in range(len(s170)):\n if(0<=i<53):\n c1 = s170[i]\n c2 = s170[i+170]\n lambda_170.append(c1+c2)\n else:\n break\nprint(lambda_170)", "['RY', 'VF', 'QP', 'PL', 'TQ', 'ES', 'SY', 'IG', 'VF', 'RQ', 'FP', 'PT', 'NN', 'IG', 'TV', 'NG', 'LY', 'CQ', 'PP', 'FY', 'GR', 'EV', 'VV', 'FV', 'NL', 'AS', 'TF', 'RE', 'FL', 'AL', 'SH', 'VA', 'YP', 'AA', 'WT', 'NV', 'RC', 'KG', 'RP', 'IK', 'SK', 'NS', 'CT', 'VN', 'AL', 'DV', 'YK', 'SN', 'VK', 'LC', 'YV', 'NN', 'SF']\n" ], [ "s171 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_171 = []\n\nfor i in range(len(s171)):\n if(0<=i<52):\n c1 = s171[i]\n c2 = s171[i+171]\n lambda_171.append(c1+c2)\n else:\n break\nprint(lambda_171)", "['RF', 'VP', 'QL', 'PQ', 'TS', 'EY', 'SG', 'IF', 'VQ', 'RP', 'FT', 'PN', 'NG', 'IV', 'TG', 'NY', 'LQ', 'CP', 'PY', 'FR', 'GV', 'EV', 'VV', 'FL', 'NS', 'AF', 'TE', 'RL', 'FL', 'AH', 'SA', 'VP', 'YA', 'AT', 'WV', 'NC', 'RG', 'KP', 'RK', 'IK', 'SS', 'NT', 'CN', 'VL', 'AV', 'DK', 'YN', 'SK', 'VC', 'LV', 'YN', 'NF']\n" ], [ "s172 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_172 = []\n\nfor i in range(len(s172)):\n if(0<=i<51):\n c1 = s172[i]\n c2 = s172[i+172]\n lambda_172.append(c1+c2)\n else:\n break\nprint(lambda_172)", "['RP', 'VL', 'QQ', 'PS', 'TY', 'EG', 'SF', 'IQ', 'VP', 'RT', 'FN', 'PG', 'NV', 'IG', 'TY', 'NQ', 'LP', 'CY', 'PR', 'FV', 'GV', 'EV', 'VL', 'FS', 'NF', 'AE', 'TL', 'RL', 'FH', 'AA', 'SP', 'VA', 'YT', 'AV', 'WC', 'NG', 'RP', 'KK', 'RK', 'IS', 'ST', 'NN', 'CL', 'VV', 'AK', 'DN', 'YK', 'SC', 'VV', 'LN', 'YF']\n" ], [ "s173 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_173 = []\n\nfor i in range(len(s173)):\n if(0<=i<50):\n c1 = s173[i]\n c2 = s173[i+173]\n lambda_173.append(c1+c2)\n else:\n 
break\nprint(lambda_173)", "['RL', 'VQ', 'QS', 'PY', 'TG', 'EF', 'SQ', 'IP', 'VT', 'RN', 'FG', 'PV', 'NG', 'IY', 'TQ', 'NP', 'LY', 'CR', 'PV', 'FV', 'GV', 'EL', 'VS', 'FF', 'NE', 'AL', 'TL', 'RH', 'FA', 'AP', 'SA', 'VT', 'YV', 'AC', 'WG', 'NP', 'RK', 'KK', 'RS', 'IT', 'SN', 'NL', 'CV', 'VK', 'AN', 'DK', 'YC', 'SV', 'VN', 'LF']\n" ], [ "s174 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_174 = []\n\nfor i in range(len(s174)):\n if(0<=i<49):\n c1 = s174[i]\n c2 = s174[i+174]\n lambda_174.append(c1+c2)\n else:\n break\nprint(lambda_174)", "['RQ', 'VS', 'QY', 'PG', 'TF', 'EQ', 'SP', 'IT', 'VN', 'RG', 'FV', 'PG', 'NY', 'IQ', 'TP', 'NY', 'LR', 'CV', 'PV', 'FV', 'GL', 'ES', 'VF', 'FE', 'NL', 'AL', 'TH', 'RA', 'FP', 'AA', 'ST', 'VV', 'YC', 'AG', 'WP', 'NK', 'RK', 'KS', 'RT', 'IN', 'SL', 'NV', 'CK', 'VN', 'AK', 'DC', 'YV', 'SN', 'VF']\n" ], [ "s175 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_175 = []\n\nfor i in range(len(s175)):\n if(0<=i<48):\n c1 = s175[i]\n c2 = s175[i+175]\n lambda_175.append(c1+c2)\n else:\n break\nprint(lambda_175)", "['RS', 'VY', 'QG', 'PF', 'TQ', 'EP', 'ST', 'IN', 'VG', 'RV', 'FG', 'PY', 'NQ', 'IP', 'TY', 'NR', 'LV', 'CV', 'PV', 'FL', 'GS', 'EF', 'VE', 'FL', 'NL', 'AH', 'TA', 'RP', 'FA', 'AT', 'SV', 'VC', 'YG', 'AP', 'WK', 'NK', 'RS', 'KT', 'RN', 'IL', 'SV', 'NK', 'CN', 'VK', 'AC', 'DV', 'YN', 'SF']\n" ], [ "s176 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_176 = []\n\nfor i in range(len(s176)):\n if(0<=i<47):\n c1 = s176[i]\n c2 = s176[i+176]\n lambda_176.append(c1+c2)\n else:\n break\nprint(lambda_176)", "['RY', 'VG', 'QF', 'PQ', 'TP', 'ET', 'SN', 'IG', 'VV', 'RG', 'FY', 'PQ', 'NP', 'IY', 'TR', 'NV', 'LV', 'CV', 'PL', 'FS', 'GF', 'EE', 'VL', 'FL', 'NH', 'AA', 'TP', 'RA', 'FT', 'AV', 'SC', 'VG', 'YP', 'AK', 'WK', 'NS', 'RT', 'KN', 'RL', 'IV', 'SK', 'NN', 'CK', 'VC', 'AV', 'DN', 'YF']\n" ], [ "s177 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_177 = []\n\nfor i in range(len(s177)):\n if(0<=i<46):\n c1 = s177[i]\n c2 = s177[i+177]\n lambda_177.append(c1+c2)\n else:\n break\nprint(lambda_177)", "['RG', 'VF', 'QQ', 'PP', 'TT', 'EN', 'SG', 'IV', 'VG', 'RY', 'FQ', 'PP', 'NY', 'IR', 'TV', 'NV', 'LV', 'CL', 'PS', 'FF', 'GE', 'EL', 'VL', 'FH', 'NA', 'AP', 'TA', 'RT', 'FV', 'AC', 'SG', 'VP', 'YK', 'AK', 'WS', 'NT', 'RN', 'KL', 'RV', 'IK', 'SN', 'NK', 'CC', 'VV', 'AN', 'DF']\n" ], [ "s178 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_178 = []\n\nfor i in range(len(s178)):\n if(0<=i<45):\n c1 = s178[i]\n c2 = s178[i+178]\n lambda_178.append(c1+c2)\n else:\n break\nprint(lambda_178)", "['RF', 'VQ', 'QP', 'PT', 'TN', 'EG', 'SV', 
'IG', 'VY', 'RQ', 'FP', 'PY', 'NR', 'IV', 'TV', 'NV', 'LL', 'CS', 'PF', 'FE', 'GL', 'EL', 'VH', 'FA', 'NP', 'AA', 'TT', 'RV', 'FC', 'AG', 'SP', 'VK', 'YK', 'AS', 'WT', 'NN', 'RL', 'KV', 'RK', 'IN', 'SK', 'NC', 'CV', 'VN', 'AF']\n" ], [ "s179 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_179 = []\n\nfor i in range(len(s179)):\n    if(0<=i<44):\n        c1 = s179[i]\n        c2 = s179[i+179]\n        lambda_179.append(c1+c2)\n    else:\n        break\nprint(lambda_179)", "['RQ', 'VP', 'QT', 'PN', 'TG', 'EV', 'SG', 'IY', 'VQ', 'RP', 'FY', 'PR', 'NV', 'IV', 'TV', 'NL', 'LS', 'CF', 'PE', 'FL', 'GL', 'EH', 'VA', 'FP', 'NA', 'AT', 'TV', 'RC', 'FG', 'AP', 'SK', 'VK', 'YS', 'AT', 'WN', 'NL', 'RV', 'KK', 'RN', 'IK', 'SC', 'NV', 'CN', 'VF']\n" ], [ "s180 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_180 = []\n\nfor i in range(len(s180)):\n    if(0<=i<43):\n        c1 = s180[i]\n        c2 = s180[i+180]\n        lambda_180.append(c1+c2)\n    else:\n        break\nprint(lambda_180)", "['RP', 'VT', 'QN', 'PG', 'TV', 'EG', 'SY', 'IQ', 'VP', 'RY', 'FR', 'PV', 'NV', 'IV', 'TL', 'NS', 'LF', 'CE', 'PL', 'FL', 'GH', 'EA', 'VP', 'FA', 'NT', 'AV', 'TC', 'RG', 'FP', 'AK', 'SK', 'VS', 'YT', 'AN', 'WL', 'NV', 'RK', 'KN', 'RK', 'IC', 'SV', 'NN', 'CF']\n" ], [ "s181 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_181 = []\n\nfor i in range(len(s181)):\n    if(0<=i<42):\n        c1 = s181[i]\n        c2 = s181[i+181]\n        lambda_181.append(c1+c2)\n    else:\n        break\nprint(lambda_181)", "['RT', 'VN', 'QG', 'PV', 'TG', 'EY', 'SQ', 'IP', 'VY', 'RR', 'FV', 'PV', 'NV', 'IL', 'TS', 'NF', 'LE', 'CL', 'PL', 'FH', 'GA', 'EP', 'VA', 'FT', 'NV', 'AC', 'TG', 'RP', 'FK', 'AK', 'SS', 'VT', 'YN', 'AL', 'WV', 'NK', 'RN', 'KK', 'RC', 'IV', 'SN', 'NF']\n" ], [ "s182 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_182 = []\n\nfor i in range(len(s182)):\n    if(0<=i<41):\n        c1 = s182[i]\n        c2 = s182[i+182]\n        lambda_182.append(c1+c2)\n    else:\n        break\nprint(lambda_182)", "['RN', 'VG', 'QV', 'PG', 'TY', 'EQ', 'SP', 'IY', 'VR', 'RV', 'FV', 'PV', 'NL', 'IS', 'TF', 'NE', 'LL', 'CL', 'PH', 'FA', 'GP', 'EA', 'VT', 'FV', 'NC', 'AG', 'TP', 'RK', 'FK', 'AS', 'ST', 'VN', 'YL', 'AV', 'WK', 'NN', 'RK', 'KC', 'RV', 'IN', 'SF']\n" ], [ "s183 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_183 = []\n\nfor i in range(len(s183)):\n    if(0<=i<40):\n        c1 = s183[i]\n        c2 = s183[i+183]\n        lambda_183.append(c1+c2)\n    else:\n        break\nprint(lambda_183)", "['RG', 'VV', 'QG', 'PY', 'TQ', 'EP', 'SY', 'IR', 'VV', 'RV', 'FV', 'PL', 'NS', 'IF', 'TE', 'NL', 'LL', 'CH', 'PA', 'FP', 'GA', 'ET', 'VV', 'FC', 'NG', 'AP', 'TK', 'RK', 'FS', 'AT', 'SN', 'VL', 'YV', 'AK', 'WN', 'NK', 'RC', 'KV', 'RN', 'IF']\n" ], [ "s184 = 
\"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_184 = []\n\nfor i in range(len(s184)):\n if(0<=i<39):\n c1 = s184[i]\n c2 = s184[i+184]\n lambda_184.append(c1+c2)\n else:\n break\nprint(lambda_184)", "['RV', 'VG', 'QY', 'PQ', 'TP', 'EY', 'SR', 'IV', 'VV', 'RV', 'FL', 'PS', 'NF', 'IE', 'TL', 'NL', 'LH', 'CA', 'PP', 'FA', 'GT', 'EV', 'VC', 'FG', 'NP', 'AK', 'TK', 'RS', 'FT', 'AN', 'SL', 'VV', 'YK', 'AN', 'WK', 'NC', 'RV', 'KN', 'RF']\n" ], [ "s185 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_185 = []\n\nfor i in range(len(s185)):\n if(0<=i<38):\n c1 = s185[i]\n c2 = s185[i+185]\n lambda_185.append(c1+c2)\n else:\n break\nprint(lambda_185)", "['RG', 'VY', 'QQ', 'PP', 'TY', 'ER', 'SV', 'IV', 'VV', 'RL', 'FS', 'PF', 'NE', 'IL', 'TL', 'NH', 'LA', 'CP', 'PA', 'FT', 'GV', 'EC', 'VG', 'FP', 'NK', 'AK', 'TS', 'RT', 'FN', 'AL', 'SV', 'VK', 'YN', 'AK', 'WC', 'NV', 'RN', 'KF']\n" ], [ "s186 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_186 = []\n\nfor i in range(len(s186)):\n if(0<=i<37):\n c1 = s186[i]\n c2 = s186[i+186]\n lambda_186.append(c1+c2)\n else:\n break\nprint(lambda_186)", "['RY', 'VQ', 'QP', 'PY', 'TR', 'EV', 'SV', 'IV', 'VL', 'RS', 'FF', 'PE', 'NL', 'IL', 'TH', 'NA', 'LP', 'CA', 'PT', 'FV', 'GC', 'EG', 'VP', 'FK', 'NK', 'AS', 'TT', 'RN', 'FL', 'AV', 'SK', 'VN', 'YK', 'AC', 'WV', 'NN', 'RF']\n" ], [ "s187 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_187 = []\n\nfor i in range(len(s187)):\n if(0<=i<36):\n c1 = s187[i]\n c2 = s187[i+187]\n lambda_187.append(c1+c2)\n else:\n break\nprint(lambda_187)", "['RQ', 'VP', 'QY', 'PR', 'TV', 'EV', 'SV', 'IL', 'VS', 'RF', 'FE', 'PL', 'NL', 'IH', 'TA', 'NP', 'LA', 'CT', 'PV', 'FC', 'GG', 'EP', 'VK', 'FK', 'NS', 'AT', 'TN', 'RL', 'FV', 'AK', 'SN', 'VK', 'YC', 'AV', 'WN', 'NF']\n" ], [ "s188 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_188 = []\n\nfor i in range(len(s188)):\n if(0<=i<35):\n c1 = s188[i]\n c2 = s188[i+188]\n lambda_188.append(c1+c2)\n else:\n break\nprint(lambda_188)", "['RP', 'VY', 'QR', 'PV', 'TV', 'EV', 'SL', 'IS', 'VF', 'RE', 'FL', 'PL', 'NH', 'IA', 'TP', 'NA', 'LT', 'CV', 'PC', 'FG', 'GP', 'EK', 'VK', 'FS', 'NT', 'AN', 'TL', 'RV', 'FK', 'AN', 'SK', 'VC', 'YV', 'AN', 'WF']\n" ], [ "s189 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_189 = []\n\nfor i in range(len(s189)):\n if(0<=i<34):\n c1 = s189[i]\n c2 = s189[i+189]\n lambda_189.append(c1+c2)\n else:\n break\nprint(lambda_189)", "['RY', 'VR', 
'QV', 'PV', 'TV', 'EL', 'SS', 'IF', 'VE', 'RL', 'FL', 'PH', 'NA', 'IP', 'TA', 'NT', 'LV', 'CC', 'PG', 'FP', 'GK', 'EK', 'VS', 'FT', 'NN', 'AL', 'TV', 'RK', 'FN', 'AK', 'SC', 'VV', 'YN', 'AF']\n" ], [ "s190 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_190 = []\n\nfor i in range(len(s190)):\n if(0<=i<33):\n c1 = s190[i]\n c2 = s190[i+190]\n lambda_190.append(c1+c2)\n else:\n break\nprint(lambda_190)", "['RR', 'VV', 'QV', 'PV', 'TL', 'ES', 'SF', 'IE', 'VL', 'RL', 'FH', 'PA', 'NP', 'IA', 'TT', 'NV', 'LC', 'CG', 'PP', 'FK', 'GK', 'ES', 'VT', 'FN', 'NL', 'AV', 'TK', 'RN', 'FK', 'AC', 'SV', 'VN', 'YF']\n" ], [ "s191 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_191 = []\n\nfor i in range(len(s191)):\n if(0<=i<32):\n c1 = s191[i]\n c2 = s191[i+191]\n lambda_191.append(c1+c2)\n else:\n break\nprint(lambda_191)", "['RV', 'VV', 'QV', 'PL', 'TS', 'EF', 'SE', 'IL', 'VL', 'RH', 'FA', 'PP', 'NA', 'IT', 'TV', 'NC', 'LG', 'CP', 'PK', 'FK', 'GS', 'ET', 'VN', 'FL', 'NV', 'AK', 'TN', 'RK', 'FC', 'AV', 'SN', 'VF']\n" ], [ "s192 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_192 = []\n\nfor i in range(len(s192)):\n if(0<=i<31):\n c1 = s192[i]\n c2 = s192[i+192]\n lambda_192.append(c1+c2)\n else:\n break\nprint(lambda_192)", "['RV', 'VV', 'QL', 'PS', 'TF', 'EE', 'SL', 'IL', 'VH', 'RA', 'FP', 'PA', 'NT', 'IV', 'TC', 'NG', 'LP', 'CK', 'PK', 'FS', 'GT', 'EN', 'VL', 'FV', 'NK', 'AN', 'TK', 'RC', 'FV', 'AN', 'SF']\n" ], [ "s193 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_193 = []\n\nfor i in range(len(s193)):\n if(0<=i<30):\n c1 = s193[i]\n c2 = s193[i+193]\n lambda_193.append(c1+c2)\n else:\n break\nprint(lambda_193)", "['RV', 'VL', 'QS', 'PF', 'TE', 'EL', 'SL', 'IH', 'VA', 'RP', 'FA', 'PT', 'NV', 'IC', 'TG', 'NP', 'LK', 'CK', 'PS', 'FT', 'GN', 'EL', 'VV', 'FK', 'NN', 'AK', 'TC', 'RV', 'FN', 'AF']\n" ], [ "s194 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_194 = []\n\nfor i in range(len(s194)):\n if(0<=i<29):\n c1 = s194[i]\n c2 = s194[i+194]\n lambda_194.append(c1+c2)\n else:\n break\nprint(lambda_194)", "['RL', 'VS', 'QF', 'PE', 'TL', 'EL', 'SH', 'IA', 'VP', 'RA', 'FT', 'PV', 'NC', 'IG', 'TP', 'NK', 'LK', 'CS', 'PT', 'FN', 'GL', 'EV', 'VK', 'FN', 'NK', 'AC', 'TV', 'RN', 'FF']\n" ], [ "s195 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_195 = []\n\nfor i in range(len(s195)):\n if(0<=i<28):\n c1 = s195[i]\n c2 = s195[i+195]\n lambda_195.append(c1+c2)\n else:\n 
break\nprint(lambda_195)", "['RS', 'VF', 'QE', 'PL', 'TL', 'EH', 'SA', 'IP', 'VA', 'RT', 'FV', 'PC', 'NG', 'IP', 'TK', 'NK', 'LS', 'CT', 'PN', 'FL', 'GV', 'EK', 'VN', 'FK', 'NC', 'AV', 'TN', 'RF']\n" ], [ "s196 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_196 = []\n\nfor i in range(len(s196)):\n if(0<=i<27):\n c1 = s196[i]\n c2 = s196[i+196]\n lambda_196.append(c1+c2)\n else:\n break\nprint(lambda_196)", "['RF', 'VE', 'QL', 'PL', 'TH', 'EA', 'SP', 'IA', 'VT', 'RV', 'FC', 'PG', 'NP', 'IK', 'TK', 'NS', 'LT', 'CN', 'PL', 'FV', 'GK', 'EN', 'VK', 'FC', 'NV', 'AN', 'TF']\n" ], [ "s197 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_197 = []\n\nfor i in range(len(s197)):\n if(0<=i<26):\n c1 = s197[i]\n c2 = s197[i+197]\n lambda_197.append(c1+c2)\n else:\n break\nprint(lambda_197)", "['RE', 'VL', 'QL', 'PH', 'TA', 'EP', 'SA', 'IT', 'VV', 'RC', 'FG', 'PP', 'NK', 'IK', 'TS', 'NT', 'LN', 'CL', 'PV', 'FK', 'GN', 'EK', 'VC', 'FV', 'NN', 'AF']\n" ], [ "s198 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_198 = []\n\nfor i in range(len(s198)):\n if(0<=i<25):\n c1 = s198[i]\n c2 = s198[i+198]\n lambda_198.append(c1+c2)\n else:\n break\nprint(lambda_198)", "['RL', 'VL', 'QH', 'PA', 'TP', 'EA', 'ST', 'IV', 'VC', 'RG', 'FP', 'PK', 'NK', 'IS', 'TT', 'NN', 'LL', 'CV', 'PK', 'FN', 'GK', 'EC', 'VV', 'FN', 'NF']\n" ], [ "s199 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_199 = []\n\nfor i in range(len(s199)):\n if(0<=i<24):\n c1 = s199[i]\n c2 = s199[i+199]\n lambda_199.append(c1+c2)\n else:\n break\nprint(lambda_199)", "['RL', 'VH', 'QA', 'PP', 'TA', 'ET', 'SV', 'IC', 'VG', 'RP', 'FK', 'PK', 'NS', 'IT', 'TN', 'NL', 'LV', 'CK', 'PN', 'FK', 'GC', 'EV', 'VN', 'FF']\n" ], [ "s200 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_200 = []\n\nfor i in range(len(s200)):\n if(0<=i<23):\n c1 = s200[i]\n c2 = s200[i+200]\n lambda_200.append(c1+c2)\n else:\n break\nprint(lambda_200)", "['RH', 'VA', 'QP', 'PA', 'TT', 'EV', 'SC', 'IG', 'VP', 'RK', 'FK', 'PS', 'NT', 'IN', 'TL', 'NV', 'LK', 'CN', 'PK', 'FC', 'GV', 'EN', 'VF']\n" ], [ "s201 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_201 = []\n\nfor i in range(len(s201)):\n if(0<=i<22):\n c1 = s201[i]\n c2 = s201[i+201]\n lambda_201.append(c1+c2)\n else:\n break\nprint(lambda_201)", "['RA', 'VP', 'QA', 'PT', 'TV', 'EC', 'SG', 'IP', 'VK', 'RK', 'FS', 'PT', 'NN', 'IL', 'TV', 'NK', 'LN', 'CK', 'PC', 'FV', 'GN', 'EF']\n" ], [ "s202 = 
\"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_202 = []\n\nfor i in range(len(s202)):\n if(0<=i<21):\n c1 = s202[i]\n c2 = s202[i+202]\n lambda_202.append(c1+c2)\n else:\n break\nprint(lambda_202)", "['RP', 'VA', 'QT', 'PV', 'TC', 'EG', 'SP', 'IK', 'VK', 'RS', 'FT', 'PN', 'NL', 'IV', 'TK', 'NN', 'LK', 'CC', 'PV', 'FN', 'GF']\n" ], [ "s203 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_203 = []\n\nfor i in range(len(s203)):\n if(0<=i<20):\n c1 = s203[i]\n c2 = s203[i+203]\n lambda_203.append(c1+c2)\n else:\n break\nprint(lambda_203)", "['RA', 'VT', 'QV', 'PC', 'TG', 'EP', 'SK', 'IK', 'VS', 'RT', 'FN', 'PL', 'NV', 'IK', 'TN', 'NK', 'LC', 'CV', 'PN', 'FF']\n" ], [ "s204 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_204 = []\n\nfor i in range(len(s204)):\n if(0<=i<19):\n c1 = s204[i]\n c2 = s204[i+204]\n lambda_204.append(c1+c2)\n else:\n break\nprint(lambda_204)", "['RT', 'VV', 'QC', 'PG', 'TP', 'EK', 'SK', 'IS', 'VT', 'RN', 'FL', 'PV', 'NK', 'IN', 'TK', 'NC', 'LV', 'CN', 'PF']\n" ], [ "s205 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_205 = []\n\nfor i in range(len(s205)):\n if(0<=i<18):\n c1 = s205[i]\n c2 = s205[i+205]\n lambda_205.append(c1+c2)\n else:\n break\nprint(lambda_205)", "['RV', 'VC', 'QG', 'PP', 'TK', 'EK', 'SS', 'IT', 'VN', 'RL', 'FV', 'PK', 'NN', 'IK', 'TC', 'NV', 'LN', 'CF']\n" ], [ "s206 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_206 = []\n\nfor i in range(len(s206)):\n if(0<=i<17):\n c1 = s206[i]\n c2 = s206[i+206]\n lambda_206.append(c1+c2)\n else:\n break\nprint(lambda_206)", "['RC', 'VG', 'QP', 'PK', 'TK', 'ES', 'ST', 'IN', 'VL', 'RV', 'FK', 'PN', 'NK', 'IC', 'TV', 'NN', 'LF']\n" ], [ "s207 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_207 = []\n\nfor i in range(len(s207)):\n if(0<=i<16):\n c1 = s207[i]\n c2 = s207[i+207]\n lambda_207.append(c1+c2)\n else:\n break\nprint(lambda_207)", "['RG', 'VP', 'QK', 'PK', 'TS', 'ET', 'SN', 'IL', 'VV', 'RK', 'FN', 'PK', 'NC', 'IV', 'TN', 'NF']\n" ], [ "s208 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_208 = []\n\nfor i in range(len(s208)):\n if(0<=i<15):\n c1 = s208[i]\n c2 = s208[i+208]\n lambda_208.append(c1+c2)\n else:\n break\nprint(lambda_208)", "['RP', 'VK', 'QK', 'PS', 'TT', 'EN', 'SL', 'IV', 
'VK', 'RN', 'FK', 'PC', 'NV', 'IN', 'TF']\n" ], [ "s209 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_209 = []\n\nfor i in range(len(s209)):\n if(0<=i<14):\n c1 = s209[i]\n c2 = s209[i+209]\n lambda_209.append(c1+c2)\n else:\n break\nprint(lambda_209)", "['RK', 'VK', 'QS', 'PT', 'TN', 'EL', 'SV', 'IK', 'VN', 'RK', 'FC', 'PV', 'NN', 'IF']\n" ], [ "s210 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_210 = []\n\nfor i in range(len(s210)):\n if(0<=i<13):\n c1 = s210[i]\n c2 = s210[i+210]\n lambda_210.append(c1+c2)\n else:\n break\nprint(lambda_210)", "['RK', 'VS', 'QT', 'PN', 'TL', 'EV', 'SK', 'IN', 'VK', 'RC', 'FV', 'PN', 'NF']\n" ], [ "s211 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_211 = []\n\nfor i in range(len(s211)):\n if(0<=i<12):\n c1 = s211[i]\n c2 = s211[i+211]\n lambda_211.append(c1+c2)\n else:\n break\nprint(lambda_211)", "['RS', 'VT', 'QN', 'PL', 'TV', 'EK', 'SN', 'IK', 'VC', 'RV', 'FN', 'PF']\n" ], [ "s212 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_212 = []\n\nfor i in range(len(s212)):\n if(0<=i<11):\n c1 = s212[i]\n c2 = s212[i+212]\n lambda_212.append(c1+c2)\n else:\n break\nprint(lambda_212)", "['RT', 'VN', 'QL', 'PV', 'TK', 'EN', 'SK', 'IC', 'VV', 'RN', 'FF']\n" ], [ "s213 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_213 = []\n\nfor i in range(len(s213)):\n if(0<=i<10):\n c1 = s213[i]\n c2 = s213[i+213]\n lambda_213.append(c1+c2)\n else:\n break\nprint(lambda_213)", "['RN', 'VL', 'QV', 'PK', 'TN', 'EK', 'SC', 'IV', 'VN', 'RF']\n" ], [ "s214 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_214 = []\n\nfor i in range(len(s214)):\n if(0<=i<9):\n c1 = s214[i]\n c2 = s214[i+214]\n lambda_214.append(c1+c2)\n else:\n break\nprint(lambda_214)", "['RL', 'VV', 'QK', 'PN', 'TK', 'EC', 'SV', 'IN', 'VF']\n" ], [ "s215 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_215 = []\n\nfor i in range(len(s215)):\n if(0<=i<8):\n c1 = s215[i]\n c2 = s215[i+215]\n lambda_215.append(c1+c2)\n else:\n break\nprint(lambda_215)", "['RV', 'VK', 'QN', 'PK', 'TC', 'EV', 'SN', 'IF']\n" ], [ "s216 = 
\"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_216 = []\n\nfor i in range(len(s216)):\n if(0<=i<7):\n c1 = s216[i]\n c2 = s216[i+216]\n lambda_216.append(c1+c2)\n else:\n break\nprint(lambda_216)", "['RK', 'VN', 'QK', 'PC', 'TV', 'EN', 'SF']\n" ], [ "s217 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_217 = []\n\nfor i in range(len(s217)):\n if(0<=i<6):\n c1 = s217[i]\n c2 = s217[i+217]\n lambda_217.append(c1+c2)\n else:\n break\nprint(lambda_217)", "['RN', 'VK', 'QC', 'PV', 'TN', 'EF']\n" ], [ "s218 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_218 = []\n\nfor i in range(len(s218)):\n if(0<=i<5):\n c1 = s218[i]\n c2 = s218[i+218]\n lambda_218.append(c1+c2)\n else:\n break\nprint(lambda_218)", "['RK', 'VC', 'QV', 'PN', 'TF']\n" ], [ "s219 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_219 = []\n\nfor i in range(len(s219)):\n if(0<=i<4):\n c1 = s219[i]\n c2 = s219[i+219]\n lambda_219.append(c1+c2)\n else:\n break\nprint(lambda_219)", "['RC', 'VV', 'QN', 'PF']\n" ], [ "s220 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_220 = []\n\nfor i in range(len(s220)):\n if(0<=i<3):\n c1 = s220[i]\n c2 = s220[i+220]\n lambda_220.append(c1+c2)\n else:\n break\nprint(lambda_220)", "['RV', 'VN', 'QF']\n" ], [ "s221 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_221 = []\n\nfor i in range(len(s221)):\n if(0<=i<2):\n c1 = s221[i]\n c2 = s221[i+221]\n lambda_221.append(c1+c2)\n else:\n break\nprint(lambda_221)", "['RN', 'VF']\n" ], [ "s222 = \"RVQPTESIVRFPNITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNSASFSTFKCYGVSPTKLNDLCFTNVYADSFVIRGDEVRQIAPGQTGKIADYNYKLPDDFTGCVIAWNSNNLDSKVGGNYNYLYRLFRKSNLKPFERDISTEIYQAGSTPCNGVEGFNCYFPLQSYGFQPTNGVGYQPYRVVVLSFELLHAPATVCGPKKSTNLVKNKCVNF\"\n\nlambda_222 = []\n\nfor i in range(len(s222)):\n if(0<=i<1):\n c1 = s222[i]\n c2 = s222[i+222]\n lambda_222.append(c1+c2)\n else:\n break\nprint(lambda_222)", "['RF']\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ec99467553de3d1ff3b119f197dbda28e4392afe
7,709
ipynb
Jupyter Notebook
answer_final_rot13_fix.ipynb
davidwilby/software_engineering_best_practices
691b9b6fbe6485f7ae690b60a540e4befeb454e6
[ "CC-BY-4.0" ]
1
2021-07-13T09:33:28.000Z
2021-07-13T09:33:28.000Z
answer_final_rot13_fix.ipynb
davidwilby/software_engineering_best_practices
691b9b6fbe6485f7ae690b60a540e4befeb454e6
[ "CC-BY-4.0" ]
31
2021-07-13T09:33:23.000Z
2021-10-05T14:15:39.000Z
answer_final_rot13_fix.ipynb
davidwilby/software_engineering_best_practices
691b9b6fbe6485f7ae690b60a540e4befeb454e6
[ "CC-BY-4.0" ]
1
2021-08-06T13:29:25.000Z
2021-08-06T13:29:25.000Z
37.241546
146
0.528603
[ [ [ "%%writefile rot13.py\n\nimport string\n\n_lower_cipher = string.ascii_lowercase[13:] + string.ascii_lowercase[:13]\n_upper_cipher = string.ascii_uppercase[13:] + string.ascii_uppercase[:13]\n\ndef encode(message):\n \"\"\"\n Encode a message from English to ROT13\n \n Args:\n message (str): the English message to encode\n \n Returns:\n str: The encoded message\n \n Examples:\n >>> encode(\"Secretmessage\")\n 'Frpergzrffntr'\n \"\"\"\n output = []\n for letter in message:\n if letter in string.ascii_lowercase:\n i = string.ascii_lowercase.find(letter)\n output.append(_lower_cipher[i])\n elif letter in string.ascii_uppercase:\n i = string.ascii_uppercase.find(letter)\n output.append(_upper_cipher[i])\n else: # Add this else statement\n raise ValueError(f\"Cannot encode \\\"{message}\\\". Character \\\"{letter}\\\" not valid\")\n \n return \"\".join(output)\n\n\ndef decode(message):\n \"\"\"\n Encode a message from ROT13 to English\n \n Args:\n message (str): the ROT13 message to encode\n \n Returns:\n str: The decoded message\n \n Examples:\n >>> encode(\"Frpergzrffntr\")\n 'Secretmessage'\n \"\"\"\n output = []\n for letter in message:\n if letter in _lower_cipher:\n i = _lower_cipher.find(letter)\n output.append(string.ascii_lowercase[i]) # ascii_uppercase → ascii_lowercase\n elif letter in _upper_cipher:\n i = _upper_cipher.find(letter)\n output.append(string.ascii_uppercase[i])\n else: # Add this else statement\n raise ValueError(f\"Cannot decode \\\"{message}\\\". Character \\\"{letter}\\\" not valid\")\n \n return \"\".join(output)\n\n # An alternate \"clever\" solution is to exploit the fact that rot13 is its own inverse\n # and simply call the encode function again. The entirety of this function would then\n # just become:\n #\n # return encode(message)", "Overwriting rot13.py\n" ], [ "%%writefile test_rot13.py\n\nimport pytest\n\nfrom rot13 import encode, decode\n\[email protected](\"message, expected\", [\n (\"SECRET\", \"FRPERG\"),\n (\"secret\", \"frperg\"),\n])\ndef test_encode(message, expected):\n assert encode(message) == expected\n\[email protected](\"message, expected\", [\n (\"FRPERG\", \"SECRET\"),\n (\"frperg\", \"secret\"),\n])\ndef test_decode(message, expected):\n assert decode(message) == expected\n\ndef test_encode_spaces_error():\n with pytest.raises(ValueError):\n encode(\"Secret message for you\")", "Overwriting test_rot13.py\n" ], [ "!COLUMNS=60 pytest -v --doctest-modules morse.py rot13.py test_morse.py test_rot13.py", "\u001b[1m=================== test session starts ====================\u001b[0m\nplatform linux -- Python 3.8.5, pytest-6.0.1, py-1.9.0, pluggy-0.13.1 -- /usr/bin/python3\ncachedir: .pytest_cache\nrootdir: /home/matt/projects/courses/software_engineering_best_practices\nplugins: requests-mock-1.8.0\ncollected 21 items \u001b[0m\n\nmorse.py::morse.decode \u001b[32mPASSED\u001b[0m\u001b[32m [ 4%]\u001b[0m\nmorse.py::morse.encode \u001b[32mPASSED\u001b[0m\u001b[32m [ 9%]\u001b[0m\nrot13.py::rot13.decode \u001b[32mPASSED\u001b[0m\u001b[32m [ 14%]\u001b[0m\nrot13.py::rot13.encode \u001b[32mPASSED\u001b[0m\u001b[32m [ 19%]\u001b[0m\ntest_morse.py::test_encode[SOS-... --- ...] \u001b[32mPASSED\u001b[0m\u001b[32m [ 23%]\u001b[0m\ntest_morse.py::test_encode[help-.... . .-.. .--.] \u001b[32mPASSED\u001b[0m\u001b[32m [ 28%]\u001b[0m\ntest_morse.py::test_encode[-] \u001b[32mPASSED\u001b[0m\u001b[32m [ 33%]\u001b[0m\ntest_morse.py::test_encode[ -/] \u001b[32mPASSED\u001b[0m\u001b[32m [ 38%]\u001b[0m\ntest_morse.py::test_decode[... 
--- ...-sos] \u001b[32mPASSED\u001b[0m\u001b[32m [ 42%]\u001b[0m\ntest_morse.py::test_decode[.... . .-.. .--.-help] \u001b[32mPASSED\u001b[0m\u001b[32m [ 47%]\u001b[0m\ntest_morse.py::test_decode[/- ] \u001b[32mPASSED\u001b[0m\u001b[32m [ 52%]\u001b[0m\ntest_morse.py::test_error \u001b[32mPASSED\u001b[0m\u001b[32m [ 57%]\u001b[0m\ntest_morse.py::test_errors[It's sinking] \u001b[32mPASSED\u001b[0m\u001b[32m [ 61%]\u001b[0m\ntest_morse.py::test_errors[Titanic & Olympic] \u001b[32mPASSED\u001b[0m\u001b[32m [ 66%]\u001b[0m\ntest_morse.py::test_errors[This boat is expensive \\xa3\\xa3\\xa3] \u001b[32mPASSED\u001b[0m\u001b[32m [ 71%]\u001b[0m\ntest_morse.py::test_errors[Help!] \u001b[32mPASSED\u001b[0m\u001b[32m [ 76%]\u001b[0m\ntest_rot13.py::test_encode[SECRET-FRPERG] \u001b[32mPASSED\u001b[0m\u001b[32m [ 80%]\u001b[0m\ntest_rot13.py::test_encode[secret-frperg] \u001b[32mPASSED\u001b[0m\u001b[32m [ 85%]\u001b[0m\ntest_rot13.py::test_decode[FRPERG-SECRET] \u001b[32mPASSED\u001b[0m\u001b[32m [ 90%]\u001b[0m\ntest_rot13.py::test_decode[frperg-secret] \u001b[32mPASSED\u001b[0m\u001b[32m [ 95%]\u001b[0m\ntest_rot13.py::test_encode_spaces_error \u001b[32mPASSED\u001b[0m\u001b[32m [100%]\u001b[0m\n\n\u001b[32m==================== \u001b[32m\u001b[1m21 passed\u001b[0m\u001b[32m in 0.06s\u001b[0m\u001b[32m ====================\u001b[0m\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
ec994a3f157151a3b5ad086a43fb5c3874778a80
80,206
ipynb
Jupyter Notebook
RobertsonMSBR/scripts/detector_plotter.ipynb
ZoeRichter/msr-neutronics
8818be0958d0589171a84d9e6462265a27a28208
[ "BSD-3-Clause" ]
1
2022-02-28T20:05:42.000Z
2022-02-28T20:05:42.000Z
RobertsonMSBR/scripts/detector_plotter.ipynb
ZoeRichter/msr-neutronics
8818be0958d0589171a84d9e6462265a27a28208
[ "BSD-3-Clause" ]
null
null
null
RobertsonMSBR/scripts/detector_plotter.ipynb
ZoeRichter/msr-neutronics
8818be0958d0589171a84d9e6462265a27a28208
[ "BSD-3-Clause" ]
null
null
null
89.91704
42,407
0.734309
[ [ [ "%matplotlib notebook \n# Import modules\nimport math\nimport numpy as np\nimport matplotlib.pyplot\nfrom pyne import serpent\nfrom pyne import nucname\n\n#det0 = serpent.parse_det('../neutronics_paper/reproc/core_det0.m')\ndet0 = serpent.parse_det('../library_tests/core_det0.m')\n\ndet10 = serpent.parse_det('../library_tests/jeff32_core_det0.m')\n#det10 = serpent.parse_det('/home/andrei2/Desktop/git/msr-neutronics/RobertsonMSBR/neutronics_paper/reproc/eoc/core_det0.m')\nenergy_grid = det0['DET1E']\nspectrum_grid = det0['DET1']\nspectrum_grid2 = det10['DET1']\nenergy = energy_grid[:,2]\nflux_spectrum = spectrum_grid[:,10]\nprint list(det0.keys())\nprint np.amax(spectrum_grid[:,10])\nprint np.sum(spectrum_grid[:,10])\nprint np.trapz(spectrum_grid[:,10],energy)\ny = spectrum_grid[:,10]/np.trapz(spectrum_grid[:,10],energy)\nprint np.trapz(y,energy)\n#print energy\n#print flux_spectrum\ncsv = np.genfromtxt ('park_spectra_2.csv', delimiter=\",\")\nenergy_p = csv[:,0]\nspectrum_p = csv[:,1]\n\n# Initialize figure\nfig_1 = matplotlib.pyplot.figure(1)\nax = fig_1.add_subplot(111)\nax.grid(True)\n#ax.set_ylim(0,1.25)\nax.set_xlim([1e-8,10])\nax.semilogx(energy, spectrum_grid[:,10], '--', label='ENDF/B-VII', color='red')\nax.semilogx(energy, spectrum_grid2[:,10], '--', label='Generated JEFF-3.2')\n#ax.semilogx(energy_p, spectrum_p, '-', label='MCNP6 (Park et al. 2015 [4])')\nax.legend(loc=0)\nax.set_ylabel('Neutron flux per lethargy')\nax.set_xlabel('Energy [MeV]')\n#ax.set_title(plot_title)\nfig_1.show()\n#fig_1.savefig('/home/andrei2/Desktop/git/publications/2017-msbr-reproc/figures/spectrum.png',bbox_inches='tight')", "['np', 'DET1', 'DET1E']\n88.0914\n16642.8770419\n63.2378742286\n1.0\n" ] ] ]
[ "code" ]
[ [ "code" ] ]
ec9956d1ee6c33c05f79fee02349d01888f33991
20,566
ipynb
Jupyter Notebook
GEE/jupyer_notebooks/.ipynb_checkpoints/atcor+geetoolscloudmask-checkpoint.ipynb
mayastn/S2DataProcessing
54bc967430eb26a471df087cf0139034b7fba600
[ "MIT" ]
4
2019-01-21T04:51:21.000Z
2021-12-01T15:05:32.000Z
GEE/jupyer_notebooks/.ipynb_checkpoints/atcor+geetoolscloudmask-checkpoint.ipynb
mayastn/S2DataProcessing
54bc967430eb26a471df087cf0139034b7fba600
[ "MIT" ]
8
2020-03-24T16:44:12.000Z
2022-03-11T23:39:32.000Z
GEE/jupyer_notebooks/.ipynb_checkpoints/atcor+geetoolscloudmask-checkpoint.ipynb
mayastn/S2DataProcessing
54bc967430eb26a471df087cf0139034b7fba600
[ "MIT" ]
2
2020-04-18T16:44:40.000Z
2021-03-31T02:11:26.000Z
29.676768
827
0.554507
[ [ [ "# Sentinel 2 Atmospheric Correction in Google Earth Engine", "_____no_output_____" ], [ "### Import modules \nand initialize Earth Engine", "_____no_output_____" ] ], [ [ "import ee\nfrom Py6S import *\nimport datetime\nimport math\nimport os\nimport sys\nimport numpy as np\nsys.path.append(os.path.join(os.path.dirname(os.getcwd()),'bin'))\nfrom atmospheric import Atmospheric\nfrom AtcorFunctions import *\n\nee.Initialize()", "_____no_output_____" ] ], [ [ "### time and place\nDefine the time and place that you are looking for.", "_____no_output_____" ] ], [ [ "# start and end of time series\nstart = ee.Date('2018-08-01')\nfinish =ee.Date('2018-08-31')\n\n#coordinates below need adjusting to actual site coordinates - these are just close\ngeom = ee.Geometry.Point([8.1191,60.0676]) \naoi = geom.buffer(10000).bounds() #buffer is in meters, can adjust (10km buffer here)\n\n# Whole park\n#geom = ee.Geometry.Rectangle(8.3018, 60.3967,6.7596,59.9)", "_____no_output_____" ] ], [ [ "### an image\nThe following code will grab the first scene that occurs on or after date.", "_____no_output_____" ] ], [ [ "collection = ee.ImageCollection(\"COPERNICUS/S2\")\\\n .filterBounds(geom)\\\n .filterDate(start,finish)\\\n .filterMetadata('CLOUDY_PIXEL_PERCENTAGE', 'less_than', 80)\n #.map(s2mask())\n\n# The first image in the collection\nS2 = ee.Image(collection.first())\n\nprint(ee.Date(S2.get('system:time_start')).format('yyyy-M-d').getInfo())\nprint(ee.Date(S2.get('system:time_start')).format('yyyy-mm-dd').getInfo())\n\n# top of atmosphere reflectance\ntoa = S2.divide(10000)\n\n#ui.eprint(collection.size())", "2018-8-4\n2018-50-04\n" ], [ "date=start\n## METADATA\ninfo = S2.getInfo()['properties']\nscene_date = datetime.datetime.utcfromtimestamp(info['system:time_start']/1000)# i.e. Python uses seconds, EE uses milliseconds\nsolar_z = info['MEAN_SOLAR_ZENITH_ANGLE']\n\n## ATMOSPHERIC CONSTITUENTS\nh2o = Atmospheric.water(geom,date).getInfo()\no3 = Atmospheric.ozone(geom,date).getInfo()\naot = Atmospheric.aerosol(geom,date).getInfo()\n\n## TARGET ALTITUDE\nSRTM = ee.Image('USGS/GMTED2010')# Shuttle Radar Topography mission covers *most* of the Earth\nalt = SRTM.reduceRegion(reducer = ee.Reducer.mean(),geometry = geom.centroid()).get('be75').getInfo()\nkm = alt/1000 # i.e. Py6S uses units of kilometers", "_____no_output_____" ] ], [ [ "### 6S object\n\nThe backbone of Py6S is the 6S (i.e. SixS) class. 
It allows you to define the various input parameters, to run the radiative transfer code and to access the outputs which are required to convert radiance to surface reflectance.", "_____no_output_____" ] ], [ [ "# Instantiate\ns = SixS()\n\n# Atmospheric constituents\ns.atmos_profile = AtmosProfile.UserWaterAndOzone(h2o,o3)\ns.aero_profile = AeroProfile.Continental\ns.aot550 = aot\n\n# Earth-Sun-satellite geometry\ns.geometry = Geometry.User()\ns.geometry.view_z = 0 # always NADIR (I think..)\ns.geometry.solar_z = solar_z # solar zenith angle\ns.geometry.month = scene_date.month # month and day used for Earth-Sun distance\ns.geometry.day = scene_date.day # month and day used for Earth-Sun distance\ns.altitudes.set_sensor_satellite_level()\ns.altitudes.set_target_custom_altitude(km)", "_____no_output_____" ] ], [ [ "### Atmospheric Correction", "_____no_output_____" ] ], [ [ "def spectralResponseFunction(bandname):\n \"\"\"\n Extract spectral response function for given band name\n \"\"\"\n bandSelect = {\n 'B1':PredefinedWavelengths.S2A_MSI_01,\n 'B2':PredefinedWavelengths.S2A_MSI_02,\n 'B3':PredefinedWavelengths.S2A_MSI_03,\n 'B4':PredefinedWavelengths.S2A_MSI_04,\n 'B5':PredefinedWavelengths.S2A_MSI_05,\n 'B6':PredefinedWavelengths.S2A_MSI_06,\n 'B7':PredefinedWavelengths.S2A_MSI_07,\n 'B8':PredefinedWavelengths.S2A_MSI_08,\n 'B8A':PredefinedWavelengths.S2A_MSI_09,\n 'B9':PredefinedWavelengths.S2A_MSI_10,\n 'B10':PredefinedWavelengths.S2A_MSI_11,\n 'B11':PredefinedWavelengths.S2A_MSI_12,\n 'B12':PredefinedWavelengths.S2A_MSI_13,\n }\n return Wavelength(bandSelect[bandname])\ndef toa_to_rad(bandname):\n \"\"\"\n Converts top of atmosphere reflectance to at-sensor radiance\n \"\"\"\n # solar exoatmospheric spectral irradiance\n ESUN = info['SOLAR_IRRADIANCE_'+bandname]\n solar_angle_correction = math.cos(math.radians(solar_z))\n # Earth-Sun distance (from day of year)\n doy = scene_date.timetuple().tm_yday\n d = 1 - 0.01672 * math.cos(0.9856 * (doy-4))# http://physics.stackexchange.com/questions/177949/earth-sun-distance-on-a-given-day-of-the-year\n # conversion factor\n multiplier = ESUN*solar_angle_correction/(math.pi*d**2)\n # at-sensor radiance\n rad = toa.select(bandname).multiply(multiplier)\n return rad\ndef surface_reflectance(bandname):\n \"\"\"\n Calculate surface reflectance from at-sensor radiance given waveband name\n \"\"\"\n # run 6S for this waveband\n s.wavelength = spectralResponseFunction(bandname)\n s.run()\n # extract 6S outputs\n Edir = s.outputs.direct_solar_irradiance #direct solar irradiance\n Edif = s.outputs.diffuse_solar_irradiance #diffuse solar irradiance\n Lp = s.outputs.atmospheric_intrinsic_radiance #path radiance\n absorb = s.outputs.trans['global_gas'].upward #absorption transmissivity\n scatter = s.outputs.trans['total_scattering'].upward #scattering transmissivity\n tau2 = absorb*scatter #total transmissivity\n # radiance to surface reflectance\n rad = toa_to_rad(bandname)\n ref = rad.subtract(Lp).multiply(math.pi).divide(tau2*(Edir+Edif))\n return ref\n\n# # all wavebands\ndef imageatcorrector(eeimage):\n corrected = eeimage.select('QA60')\n for band in ['B1','B2','B3','B4','B5','B6','B7','B8','B8A','B9','B10','B11','B12']:\n corrected = corrected.addBands(surface_reflectance(band))\n return corrected", "_____no_output_____" ], [ "# surface reflectance rgb\n#b = surface_reflectance('B2')\n#g = surface_reflectance('B3')\n#r = surface_reflectance('B4')\n#ref = r.addBands(g).addBands(b)\n\n# # all wavebands\noutput = imageatcorrector(S2)", 
"_____no_output_____" ] ], [ [ "### Display results", "_____no_output_____" ] ], [ [ "from IPython.display import display, Image\n\nregion = geom.buffer(10000).bounds().getInfo()['coordinates']\nchannels = ['B4','B3','B2']\n\noriginal = Image(url=toa.select(channels).getThumbUrl({\n 'region':region,'min':0,'max':0.25\n }))\n\ncorrected = Image(url=output.select(channels).getThumbUrl({\n 'region':region,'min':0,'max':0.25\n }))\n\ndisplay(original, corrected)", "_____no_output_____" ] ], [ [ "## Cloud mask functions (from *geetools*)", "_____no_output_____" ] ], [ [ "# Import relevant functions\nsys.path.append(os.path.join(os.path.dirname(os.getcwd()),'geetools'))\nfrom geetools import ui, cloud_mask", "_____no_output_____" ], [ "MapS2 = ui.Map(tabs=('Inspector',))\nMapS2.show()", "_____no_output_____" ], [ "#visS2 = {'bands':['B4','B3','B2'],'min':0, 'max':5000}\nvisS2 = {min: 0.0,max: 0.25,'bands':channels}\nis2=output#S2/output\nis2=is2.clip(aoi)\n\nMapS2.centerObject(is2, zoom=11)\nMapS2.addLayer(is2,visS2, 'Sentinel 2 Original')", "_____no_output_____" ] ], [ [ "### ESA Cloud mask", "_____no_output_____" ] ], [ [ "ESA_mask_all = cloud_mask.sentinel2()\nis2_ESA = ESA_mask_all(is2)\nMapS2.addLayer(is2_ESA, visS2, 'Sentinel 2 ESA maked')", "_____no_output_____" ] ], [ [ "### Composites", "_____no_output_____" ] ], [ [ "from geetools import ui, tools, composite, cloud_mask, indices", "_____no_output_____" ], [ "p = geom\nmission = 'COPERNICUS/S2'\nfirst= ee.Date('2018-06-01')\nlast = ee.Date('2018-08-31')\n\n#atcorrection=imageatcorrector()\n\ncolnoatcor = ee.ImageCollection(mission)\\\n .filterBounds(p).filterDate(first,last)\\\n .map(cloud_mask.sentinel2())\\\n .map(indices.ndvi('B8','B4'))\n", "_____no_output_____" ] ], [ [ "\n### Mapping atmospheric correction over an image collection", "_____no_output_____" ] ], [ [ "def renameBandsETM(image):\n bands = ['B1','B2','B3','B4','B5','B6','B7','B8','B8A','B9','B10','B11','B12']\n for band in bands:\n image.addBands(surface_reflectance(band))\n return image\n\ndef imageatcorrector(eeimage):\n corrected = eeimage.select('QA60')\n for band in ['B1','B2','B3','B4','B5','B6','B7','B8','B8A','B9','B10','B11','B12']:\n corrected = corrected.addBands(surface_reflectance(band))\n return corrected\n\ndef renameBandsETM2(image):\n image=image.select('QA60')\n for band in ['B1','B2','B3','B4','B5','B6','B7','B8','B8A','B9','B10','B11','B12']:\n image.addBands(surface_reflectance(band))\n return image\n\ncolatcor = ee.ImageCollection(mission)\\\n .filterBounds(p).filterDate(first,last)\\\n .map(cloud_mask.sentinel2())\\\n .map(indices.ndvi('B8','B4'))\\\n .map(renameBandsETM)", "_____no_output_____" ], [ "col = colatcor ##define which image collection (atmospherically corrected)\ncol0 = colnoatcor\n\nmax_ndvi = col.qualityMosaic('ndvi')\nmosaic = col.mosaic()\nbands = ['B1','B2','B3','B4','B5','B6','B7','B8','B8A','B9','B10','B11','B12']\nmedoid = composite.medoid(col, bands=bands)\nmedoid0 = composite.medoid(col0, bands=bands)\nMap.addLayer(medoid, vis, 'Medoid AtCorrected')", "_____no_output_____" ], [ "firstatcor = ee.Image(col.first())\nfirstatcor.bandNames()", "_____no_output_____" ] ], [ [ "#### Show on map", "_____no_output_____" ] ], [ [ "Map = ui.Map()\nMap.show()\nvis = {'bands':['B4', 'B3','B2'], 'min':0, 'max':5000}\nMap.addLayer(p)\nMap.centerObject(p)\n#Map.addLayer(max_ndvi, vis, 'max NDVI')\n#Map.addLayer(mosaic, vis, 'simply Mosaic')\nMap.addLayer(medoid, vis, 'Medoid AtCorrected')\nMap.addLayer(medoid0, vis, 'Medoid Not 
AtCorrected')", "_____no_output_____" ] ], [ [ "### Sentinel Hub Cloud mask (*uses machine learning*)", "_____no_output_____" ] ], [ [ "import s2cloudless\n#s2cloudless.test_sentinelhub_cloud_detector() \nfrom s2cloudless import S2PixelCloudDetector, CloudMaskRequest", "_____no_output_____" ], [ "cloud_detector = S2PixelCloudDetector(threshold=0.4, average_over=4, dilation_size=2)\ncloud_probs = cloud_detector.get_cloud_probability_maps(np.array(S2))\n#cloud_masks = cloud_detector.get_cloud_masks(np.array(wcsbands))", "_____no_output_____" ] ], [ [ "### Export to Asset", "_____no_output_____" ] ], [ [ "# # set some properties for export\ndateString = scene_date.strftime(\"%Y-%m-%d\")\nref = ref.set({'satellite':'Sentinel 2',\n 'fileID':info['system:index'],\n 'date':dateString,\n 'aerosol_optical_thickness':aot,\n 'water_vapour':h2o,\n 'ozone':o3})", "_____no_output_____" ], [ "# define YOUR assetID \nassetID = 'users/visithuruvixen/test'", "_____no_output_____" ], [ "# # export\nexport = ee.batch.Export.image.toAsset(\\\n image=output,\n description='sentinel2_atmcorr_export',\n assetId = assetID,\n region = region,\n scale = 30)\n\n# # uncomment to run the export\nexport.start() ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
ec9962e9359a6caee03d3091e85524a354106392
270,268
ipynb
Jupyter Notebook
notebooks/tutorials/strax_demo.ipynb
jmosbacher/straxen
ffcf06ad86471caf11cc831f2ff68d70b59464af
[ "BSD-3-Clause" ]
null
null
null
notebooks/tutorials/strax_demo.ipynb
jmosbacher/straxen
ffcf06ad86471caf11cc831f2ff68d70b59464af
[ "BSD-3-Clause" ]
54
2021-11-04T11:22:27.000Z
2022-03-17T13:15:06.000Z
notebooks/tutorials/strax_demo.ipynb
jmosbacher/straxen
ffcf06ad86471caf11cc831f2ff68d70b59464af
[ "BSD-3-Clause" ]
null
null
null
137.751274
82,428
0.816741
[ [ [ "# Tutorial", "_____no_output_____" ], [ "Jelle, updated May 2020\n\nupdated Feb 2022 by Joran\n\nThis notebook shows how to do basic analysis with straxen, much like `hax.minitrees`.", "_____no_output_____" ], [ "For reference, here are some jargon terms which we will introduce below:\n\n * **Context**: Holds configuration on how to process\n * **Dataframe** or **array**: table of related information produced by a plugin.\n * **Plugin**: an algorithm that produces a dataframe\n * **Data type**: specification of which columns are in a dataframe. \n * **Data kind**: e.g. 'events' or 'peaks'. Dataframes of the same kind have the same number of rows and can be merged.\n", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\n# This just ensures some comments in dataframes below display nicely\npd.options.display.max_colwidth = 100\nimport straxen", "_____no_output_____" ], [ "straxen.print_versions()", "cutax is not installed\n" ] ], [ [ "## Setting up", "_____no_output_____" ], [ "First we load a strax **context**, much like `hax.init()`. A strax context contains all information on *how* to process: where to read what files from, what plugins provide what data, etc. \n\nYou can make a context yourselves using `strax.Context`, but straxen provides standardized contexts as well. Most future analyses will use such standardized contexts defined by analysis coordinators or straxen maintainers.\n\nUnlike `hax.init`, you can have multiple active contexts, e.g. to load analysis and MC data, or compare data processed with different settings (we will see examples of this below).", "_____no_output_____" ] ], [ [ "st = straxen.contexts.xenonnt_online()", "_____no_output_____" ] ], [ [ "## Finding your data", "_____no_output_____" ], [ "Suposse we want to make a cS1/cS2 plot. We have to figure out which type of **dataframes** to load. A specific type of dataframe is also called a **data type**. (in hax these were called minitrees)\n\nWe can find this out automatically if we know (part of) the name of a field to load:", "_____no_output_____" ] ], [ [ "st.search_field('cs1')", "\ncs1 is part of corrected_areas (provided by CorrectedAreas)\ncs1 is part of event_info (provided by EventInfo)\n\ncs1 is used in CorrectedAreas.infer_dtype\ncs1 is used in CorrectedAreas.compute\ncs1 is used in EnergyEstimates.compute\ncs1 is used in EnergyEstimates.cs1_to_e\n" ] ], [ [ "It seems we're after one of the data types called `event_info` or `corrected_areas`. In the current context, these are provided by **plugins** called EventInfo and CorrectedAreas, respectively (but this doesn't concern us yet). \n\nAdditionally, we see the occurrences of `cs1` of a field in `EnergyEstimates` and `CorrectedAreas`. This means that the field is used there directly\n\nLet's see what else is in these data types:", "_____no_output_____" ] ], [ [ "st.data_info('event_info')", "_____no_output_____" ] ], [ [ "As you can see, `event_info` has a lot more information; let's load that one. You can see from the documentation (TODO link) that `event_info`'s job is to merge the info from `corrected_areas` and other things.\n", "_____no_output_____" ], [ "## Loading data", "_____no_output_____" ], [ "Next, you'll want to select a run. The `select_runs` function will return a dataframe with all available runs; there is a separate tutorial on more advanced use of this. 
In this demo context, we only have high-level data for the run `180215_1029` available (and low-level data for another):", "_____no_output_____" ] ], [ [ "st.select_runs()", "_____no_output_____" ] ], [ [ "So let's take 021932.\n\nTo actually load data, you use `get_df` to get a pandas DataFrame, or `get_array` to get a numpy (structured) array. Let's go with pandas for now:", "_____no_output_____" ] ], [ [ "run_id = '021932'\n# The seconds_range=[0,60] is an optional argument to prevent loading too much data at once\ndf = st.get_df(run_id, 'event_info', seconds_range=[0,60])", "_____no_output_____" ] ], [ [ "The first time you run this, it will take a moment: it has to actually process the data somewhat. We didn't ship highest-level demo data with straxen: that would mean we'd have to constantly update the test data when the algorithms change.\n\nYou can also specify a list of runid's instead of one run, and get the concatenated result back. Likewise, you can specify multiple data types (of the same kind) to load, and they will be merged for you.\n\nJust like hax.minitrees.load, we got a dataframe back:", "_____no_output_____" ] ], [ [ "st.show_config('event_info')", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ] ], [ [ "Let's make a quick plot of the events we just loaded:", "_____no_output_____" ] ], [ [ "df.plot.scatter('cs1', 'cs2')\n\nimport matplotlib.pyplot as plt\nplt.xscale('log')\nplt.xlim(1, None)\nplt.yscale('log')", "_____no_output_____" ] ], [ [ "Making a cS1, cS2 plot for a dataset is such a common task that straxen has a built-in method for it. There are other similar mini-analyses, such as waveform plotting, which we will see in action below.", "_____no_output_____" ] ], [ [ "st.event_scatter(run_id, s=20, seconds_range=[0,60])", "_____no_output_____" ] ], [ [ "Can you guess what kind of data this is?", "_____no_output_____" ], [ "## Waveform analysis", "_____no_output_____" ], [ "The *peaks* data type contains the sum waveform information:", "_____no_output_____" ] ], [ [ "st.data_info('peaks')", "_____no_output_____" ] ], [ [ "Notice the compound data types of the `data`, `width` and `saturated_channel` fields. 
Pandas does not support such types (well, it sort of does, but the resulting dataframes are quite inefficient), so we have to load this as a numpy array.", "_____no_output_____" ] ], [ [ "peaks = st.get_array(run_id, 'peaks', seconds_range=[0,60])\ntype(peaks), peaks.dtype.names", "_____no_output_____" ] ], [ [ "Now we can plot peak waveforms:", "_____no_output_____" ] ], [ [ "def plot_peak(p, t0=None, **kwargs):\n n = p['length']\n if t0 is None:\n t0 = p['time']\n plt.plot((p['time'] - t0) + np.arange(n) * p['dt'], \n p['data'][:n] / p['dt'], \n drawstyle='steps-mid',\n **kwargs)\n plt.xlabel(\"Time (ns)\")\n plt.ylabel(\"Sum waveform (PE / ns)\")\n\nplot_peak(peaks[148])\nplt.show()", "_____no_output_____" ], [ "def plot_peaks(main_i, n_before=0, n_after=0, label_threshold=0, legendloc='best'):\n for i in main_i + np.arange(-n_before, n_after + 1):\n p = peaks[mask][i]\n label = None\n if p['area'] > label_threshold:\n label = '%d PE, %d ns dt' % (p['area'], p['dt'], )\n color = None\n else:\n color = 'gray'\n plot_peak(p,\n t0=peaks[mask][main_i]['time'], # main_i indexes the masked array\n label=label,\n color=color)\n plt.ylim(0, None)\n plt.legend(loc=legendloc)\n plt.yscale('symlog')\n\n# Find the largest peak with area below 1e4 PE\nmask = peaks['area'] < 1e4\ni_of_largest_peak = np.argmax(peaks[mask]['area'])\nplot_peaks(i_of_largest_peak,\n n_after=5, \n n_before=2, \n label_threshold=10, \n legendloc=(1.1, 0.0))", "_____no_output_____" ] ], [ [ "The abrupt termination of the S2 above is due to strax's data reduction.", "_____no_output_____" ], [ "If you have access to the raw data (at least the `records` level) you can use straxen's built-in waveform display. For example, try:\n\n```python\nst.waveform_display(run_id, seconds_range=(0, 0.15))\n```\n\n(we didn't evaluate this in the tutorial, as it creates a substantial amount of javascript, which would have made the notebook quite huge).", "_____no_output_____" ], [ "## Configuration changes", "_____no_output_____" ], [ "As you can see in the above plot, we have many events high up in the TPC at low S1. Perhaps you want to get rid of them by increasing the 'S1 coincidence requirement', i.e. the number of PMTs that must see something before a peak is labeled as S1. Then, of course, you want to load the event-level data again to see if it worked.", "_____no_output_____" ], [ "First, we need to see which configuration option we have to change. Strax plugins declare what configuration they take and what other plugins they depend on, so this is not very difficult. We just ask which options with `s1` in their name influence `event_basics`:", "_____no_output_____" ] ], [ [ "st.show_config('event_basics', 's1*')[['option', 'applies_to', 'help', 'current', 'default']]", "_____no_output_____" ] ], [ [ "Looks like we're after the `s1_min_coincidence` option. Note this is not part of the `event_basics` data type, but of a data type called `peak_classification`. 
As you can see from the table, this option is not set in the current context, so the default value (3) is used.\n\nTo try out a different option, just pass it to get_df:", "_____no_output_____" ] ], [ [ "# Let's use a short run for the following exaples\nrun_id='038769'", "_____no_output_____" ], [ "st2 = st.new_context()\nst2.set_config(dict(s1_min_coincidence=50))\ndf_2 = st2.get_df(run_id, 'event_info',\n config=dict(s1_min_coincidence=50),\n )\nst2.event_scatter(run_id, events=df_2)", "_____no_output_____" ] ], [ [ "Notice all the small S1 events are indeed gone now.\n\nBehind the scenes, this figured out which dataframes had to be remade: as it happens this time just `event_basics` and `peak_basics`. You will now have a new `event_basics_<somehash>` folder in `./custom_data` which contains the results, as well as a new `peak_basics_<somehash> folder`.", "_____no_output_____" ], [ "### More on configuration changes", "_____no_output_____" ], [ "Changing configuration can be done in two other ways. We can change it permanently in the current context:\n```python\nst.set_config(dict(s1_min_coincidence=50))\n```\nOr we could make a new context, with this option set:\n```python\nst_2 = st.new_context(config=dict(s1_min_coincidence=50))\n```\n(feeding it to get_df just does the latter behind the scenes).\n\nIf you just want to run a mini-analysis (like `event_scatter`), you can also pass a new `config` option directly to it, as in the example below.", "_____no_output_____" ], [ "Strax protects you from typos in the configuration. Suppose we typed `s1_min_n_channelz` instead:", "_____no_output_____" ] ], [ [ "st.event_scatter(run_id, config=dict(s1_min_n_channelz=10), seconds_range=[0,60])", "Option s1_min_n_channelz not taken by any registered plugin\nOption s1_min_n_channelz not taken by any registered plugin\n" ] ], [ [ "The result of get_df is just the same as if the option wasn't set (just like in pax/hax), but you also get a warning about an unknown configuration option. \n\nBy the way, you can use \n```python\nimport warnings\nwarnings.filterwarnings(\"error\")\n```\nto ensure any warning raises an exception instead.", "_____no_output_____" ], [ "## Customization: new plugins", "_____no_output_____" ], [ "To add or change processing algorithms, or to define new variables to use in cuts, you have to write new strax plugins. These are somewhat similar to hax's treemakers.\n\nSuppose you have a brilliant new idea for peak classification. Strax does this in the peaklet_classification plugin, which produces:", "_____no_output_____" ] ], [ [ "st.data_info('peaklet_classification')", "_____no_output_____" ] ], [ [ " * The first three fields contain time information of the peak. This is duplicated in many datatypes -- unfortunately, this is necessary for strax to be able to track the data and combine it with other data. Instead of (time, length, dt), plugins could also provide (time, endtime). See [here](https://strax.readthedocs.io/en/latest/advanced/plugin_dev.html#special-time-fields) for more information.\n * The 'channel' field is an historical artifact.\n * The 'type' field contains the classification: 0 for unknown, 1 for S1, 2 for S2. (note [#8](https://github.com/XENONnT/straxen/issues/8))\n \nYou can find the original plugin in [peaklet_processing.py](https://github.com/XENONnT/straxen/blob/master/straxen/plugins/peaklet_processing.py.) 
Here's how you would make a different classification plugin:", "_____no_output_____" ] ], [ [ "import strax\nimport numpy as np\n\nclass AdvancedExpertClassification(strax.Plugin):\n \"\"\"Everything is an S1!\"\"\"\n \n # Name of the data type this plugin provides\n provides = 'peaklet_classification'\n \n # Data types this plugin requires. Note we don't specify\n # what plugins should produce them: maybe the default PeakBasics\n # has been replaced by another AdvancedExpertBlabla plugin?\n depends_on = ('peaklets',)\n \n # Numpy datatype of the output \n dtype = straxen.PeakletClassification.dtype\n \n # Version of the plugin. Increment this if you change the algorithm.\n __version__ = '0.0.2'\n\n def compute(self, peaklets):\n # Your code here.\n # This function will be called several times with \n # 'peaks' a numpy array of the datatype 'peaks'.\n # Each time you'll see a small part of the run.\n \n # You have to return a numpy array of the dtype you declared above\n result = np.zeros(len(peaklets), self.dtype)\n \n # Copy the basic time fields over from peaklets\n for (_, field), _ in strax.time_dt_fields:\n result[field] = peaklets[field]\n \n # Store the classification results\n # You might want to do real work here\n result['type'] = 1\n \n return result\n \n # Instead of an array, you are also allowed to return a dictionary \n # we can transform into an array.\n # That is, (dict keys -> field names, values -> field values)", "_____no_output_____" ] ], [ [ "To use it in place of PeakClassification, we only have to register it. Again, we can do so permanently using \n```python\nst.register(AdvancedExpertClassification)\n```\nor temporarily, by feeding the registration as an extra argument to `get_df`:", "_____no_output_____" ] ], [ [ "df = st.get_df(run_id, 'event_info',\n register=AdvancedExpertClassification,\n )", "_____no_output_____" ], [ "df['s2_area'].max()", "_____no_output_____" ] ], [ [ "As you can see, all events are now S1-only events, as expected. Maybe this is not the best alternative classification :-)\n\nThis plugin was a rather basic plugin. You'll also want to learn about `LoopPlugin`s and `OverlapWindowPlugin`s, but that's beyond the scope of this tutorial.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
ec996c09688ea49668f2d3a1c039e73fe266be71
5,831
ipynb
Jupyter Notebook
15XW62 - Machine Learning/examples/numpy/knn/knn.ipynb
aakashhemadri/courses
73bd0d0708f61435df36f17078ef98c279be35c6
[ "MIT" ]
2
2021-01-20T14:08:26.000Z
2021-10-20T07:56:46.000Z
15XW62 - Machine Learning/examples/numpy/knn/knn.ipynb
aakashhemadri/courses
73bd0d0708f61435df36f17078ef98c279be35c6
[ "MIT" ]
null
null
null
15XW62 - Machine Learning/examples/numpy/knn/knn.ipynb
aakashhemadri/courses
73bd0d0708f61435df36f17078ef98c279be35c6
[ "MIT" ]
null
null
null
26.99537
88
0.402675
[ [ [ "#Import Modules Pandas and Numpy\nimport pandas as pd\nimport numpy as np\nimport operator", "_____no_output_____" ], [ "#Read CSV File\ndata = pd.read_csv(\"../../../datasets/iris.csv\")\ndata.head()", "_____no_output_____" ], [ "#Function definitions\n#To find Euc Distance\ndef ED(x1, x2, length): \n distance = 0\n for x in range(length):\n distance += np.square(x1[x] - x2[x])\n\n return np.sqrt(distance)\n\n#KNN Model Definition\ndef knn(trainingSet, testInstance, k): \n \n distances = {}\n\n #To find number of columns \n length = testInstance.shape[1]\n\n for x in range(len(trainingSet)):\n dist = ED(testInstance, trainingSet.iloc[x], length)\n distances[x] = dist[0]\n\n sortdist = sorted(distances.items(), key=operator.itemgetter(1))\n #Put the index of col you wanna sort with \n neighbors = []\n for x in range(k):\n neighbors.append(sortdist[x][0])\n\n Votes = {} #to get most frequent class of rows\n for x in range(len(neighbors)):\n response = trainingSet.iloc[neighbors[x]][-1]\n #To get the last column for corresponding index \n if response in Votes:\n Votes[response] += 1\n else:\n Votes[response] = 1\n #Appending the Variety to dict along with count\n sortvotes = sorted(Votes.items(), key=operator.itemgetter(1), reverse=True)\n return(sortvotes[0][0], neighbors)", "_____no_output_____" ], [ "#Input TestSet\ntest = pd.DataFrame([[6.8, 3.4, 4.8, 2.4]])\n\n#Different k Values\nk = 6\nk1 = 3\n\n#Function Call\nresult,neigh = knn(data, test, k)\n#result1,neigh1 = knn(data, test, k1)\nprint(result, neigh)", "Virginica [141, 145, 110, 115, 139, 147]\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
ec996d11a7bbb652f7ad534ed609c1fb4dec2190
35,727
ipynb
Jupyter Notebook
Data.ipynb
thaisratis/KDD-BR-2018
e0ed63feca42e5aeda6e9748bca0bc3cc493f330
[ "MIT" ]
null
null
null
Data.ipynb
thaisratis/KDD-BR-2018
e0ed63feca42e5aeda6e9748bca0bc3cc493f330
[ "MIT" ]
null
null
null
Data.ipynb
thaisratis/KDD-BR-2018
e0ed63feca42e5aeda6e9748bca0bc3cc493f330
[ "MIT" ]
null
null
null
31.449824
146
0.314776
[ [ [ "# Dependências", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nfrom glob import glob", "_____no_output_____" ] ], [ [ "# Dados", "_____no_output_____" ], [ "### Dados de Treinamento ", "_____no_output_____" ] ], [ [ "data_train = pd.read_csv(\"./data/train.csv\", sep=',')\ndata_train.head(5)", "_____no_output_____" ] ], [ [ "\n### Treinamento + Fields", "_____no_output_____" ] ], [ [ "fields = [ field for field in glob('./data/field**.csv')]", "_____no_output_____" ], [ "data_fields = []\nfor field in fields:\n field_ = pd.read_csv(field, sep=',')\n field_['field'] = int(field.split('-')[1].split('.')[0])\n data_fields.append(field_)\ndata_fields = pd.concat(data_fields)", "_____no_output_____" ], [ "data_train_fields = data_train.merge(data_fields, left_on=['field','harvest_year', 'harvest_month'], right_on=['field', 'year', 'month'])\ndata_train_fields.to_csv('data/data_train_fields.csv', sep=',', encoding='utf-8', index=False)\ndata_train_fields.head(5)", "_____no_output_____" ] ], [ [ "### Treinamento + Fields + Soil", "_____no_output_____" ] ], [ [ "data_soil = pd.read_csv(\"data/soil_data.csv\", sep=',')", "_____no_output_____" ], [ "data_train_fields_soil = data_train_fields.merge(data_soil, on='field')\ndata_train_fields_soil.to_csv('data/data_train_fields_soil.csv', sep=',', encoding='utf-8', index=False)\ndata_train_fields_soil.head(5)", "_____no_output_____" ] ], [ [ "### Dados de testes ", "_____no_output_____" ] ], [ [ "data_test = pd.read_csv('data/test.csv', sep=',')\ndata_test.head(5)", "_____no_output_____" ] ], [ [ "### Testes + Fields", "_____no_output_____" ] ], [ [ "data_test_fields = data_test.merge(data_fields, left_on=['field','harvest_year', 'harvest_month'], right_on=['field', 'year', 'month'])\ndata_test_fields.to_csv('data/data_test_fields.csv', sep=',', encoding='utf-8', index=False)\ndata_test_fields.head(5)", "_____no_output_____" ] ], [ [ "### Teste + Fields + Soil", "_____no_output_____" ] ], [ [ "data_test_fields_soil = data_test_fields.merge(data_soil, on='field')\ndata_test_fields_soil.to_csv('data/data_test_fields_soil.csv', sep=',', encoding='utf-8', index=False)\ndata_test_fields_soil.head(5)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
ec996ff1b1972309f143029dd9cbaa981469e2ee
102,994
ipynb
Jupyter Notebook
TEMA-1/Clase6_NumerosPseudoaleatorios.ipynb
anarodriguezrod/SPF-2021-verano
50686c5894882cfe0addc4c1f7ab2f2bc7f81a99
[ "MIT" ]
null
null
null
TEMA-1/Clase6_NumerosPseudoaleatorios.ipynb
anarodriguezrod/SPF-2021-verano
50686c5894882cfe0addc4c1f7ab2f2bc7f81a99
[ "MIT" ]
null
null
null
TEMA-1/Clase6_NumerosPseudoaleatorios.ipynb
anarodriguezrod/SPF-2021-verano
50686c5894882cfe0addc4c1f7ab2f2bc7f81a99
[ "MIT" ]
null
null
null
88.179795
10,492
0.827466
[ [ [ "# Generación de números pseudoaleatorios\n\n<img style=\"float: center; margin: 0px 0px 15px 15px;\" src=\"https://upload.wikimedia.org/wikipedia/commons/6/6a/Dice.jpg\" width=\"300px\" height=\"100px\" />\n\n**Referencias de la clase:**\n- https://webs.um.es/mpulido/miwiki/lib/exe/fetch.php?id=amio&cache=cache&media=wiki:simt1b.pdf\n- http://www.lmpt.univ-tours.fr/~nicolis/Licence_NEW/08-09/boxmuller.pdf\n\n**Referencias de las librerías que usaremos:**\n- http://www.numpy.org/\n- https://matplotlib.org/", "_____no_output_____" ], [ "___\n## 0. Introducción\n\n- Los números aleatorios son la base esencial de la simulación de escenarios.\n- Toda la aleatoriedad involucrada en el modelo se obtiene a partir de un generador de números aleatorios que produce una sucesión de valores que supuestamente son realizaciones de una secuencia de variables aleatorias independientes e idénticamente distribuidas.\n\n", "_____no_output_____" ], [ "### 0.1 ¿Qué es un número pseudoaleatorio?\n\n<img style=\"float: right; margin: 0px 0px 15px 15px;\" src=\"http://www.publicdomainpictures.net/pictures/50000/velka/random-numbers.jpg\" width=\"300px\" height=\"100px\" />\n\n- Es un número generado en un proceso que parece producir números al azar, pero no lo hace realmente.\n- Las secuencias de números pseudoaleatorios no muestran ningún patrón o regularidad aparente desde un punto de vista estadístico, a pesar de haber sido generadas por un algoritmo completamente determinista, en el que las mismas condiciones iniciales producen siempre el mismo resultado.\n- Por lo general, el interés no radica en generar un solo número aleatorio, sino muchos, reunidos en lo que se conoce como secuencia aleatoria.\n\n### 0.2 ¿En qué se aplican?\n\n- Modelado y simulación por computadora, estadística, diseño experimental. Normalmente, la entropía (aletoriedad) de los números que se generan actualmente basta para estas aplicaciones.\n- Criptografía. Este campo sigue estando en constante investigación, y por tanto la generación de números aleatorios también.\n- Asimismo, también destacan su uso en el llamado método de Montecarlo, con múltiples utilidades.\n- Entre otros...\n\n### 0.3 Funcionamiento básico\n\n- Elegir una semilla inicial (condición inicial) $x_0$.\n- Generar una sucesión de valores $x_n$ mediante la relación de recurrencia $x_n=T(x_{n-1})$.\n\n> Generalmente, esta secuencia es de números pseudoaleatorios $\\mathcal{U}(0,1)$.\n\n- Finalmente, se genera un número pseudoaleatorio con distribución deseada, definido a través de alguna relación $u_n=g(x_n)$.\n- Estas sucesiones son periódicas. Es decir, en algún momento ocurrirá que $x_j = x_i$ para algún $j > i$.\n\n### 0.4 ¿Cuándo un generador de números pseudoaleatorios es bueno?\n\n- La sucesión de valores que proporcione deberı́a asemejarse a una sucesión de realizaciones independientes de una variable aleatoria $\\mathcal{U}(0, 1)$.\n- Los resultados deben ser reproducibles, en el sentido de que comenzando con la misma semilla inicial, debe ser capaz de reproducir la misma sucesión. Esto para poder probar diferentes alternativas bajo las mismas condiciones y/o poder depurar fallos en el modelo.\n- La sucesión de valores generados debe tener un periodo no repetitivo tan largo como sea posible.", "_____no_output_____" ], [ "___\n## 1. 
Métodos congruenciales para generación de números pseudoaleatorios $\\mathcal{U}(0,1)$\n\n- Introducidos por Lehmer en 1951.\n- Son los principales generadores de números pseudoaleatorios utilizados hoy en día.\n\n### 1.1 Descripción general del método\n\n- Comienza con un valor inicial (semilla) $x_0$, y los valores subsiguientes, $x_n$ para $n \\geq 1$, se obtienen recursivamente con la siguiente fórmula:\n$$x_n = (ax_{n−1} + b) \\mod m.$$\n- En la fórmula de arriba $\\text{mod}$ representa la operación residuo.\n- Los enteros positivos $m$, $a$ y $b$ en la fórmula se denominan:\n - $0<m$ es el módulo,\n - $0<a<m$ es el multiplicador, y\n - $0\\leq b <m$ es el incremento.\n - La semilla debe satisfacer $0\\leq x_0<m$.\n- Si $b = 0$, el generador se denomina multiplicativo.\n- En caso contrario se llama mixto.", "_____no_output_____" ], [ "**Ejemplo**\n\nPara tomar intuición con este método, probar a mano con los siguientes conjuntos de parámetros:\n1. $m=9$, $a=5$, $b=1$, $x_0=1$.\n2. $m=16$, $a=5$, $b=3$, $x_0=7$.", "_____no_output_____" ] ], [ [ "print((5*1+1)%9)\nprint((5*7+3)%16)", "6\n6\n" ] ], [ [ "De acuerdo a lo anterior, ¿cómo son los números $x_i$?, ¿representa esto algún problema?, ¿cómo se podría solucionar?\n\n<font color=red> Enunciar problemas con sus respectivas soluciones... </font>", "_____no_output_____" ], [ "En efecto, un generador congruencial queda completamente determinado por los parámetros $m$, $a$, $b$ y $x_0$.\n\n**Proposición.** Los valores generados por un método congruencial verifican:\n\n$$x_n = \\left(a^n x_0+b\\frac{a^n-1}{a-1}\\right) \\mod m.$$\n\n<font color=blue> Verificar esto en el pizarrón. </font>", "_____no_output_____" ] ], [ [ "def congruencial_vector(a:'multiplicador',b:'incremento',m:'módulo',x0:'CI',N:'cantidad de términos'):\n '''\n Esta es la programación del método congruencial\n Parámetros\n ----------\n a: multiplicador\n b:'incremento'\n m:'módulo'\n x0:'CI'\n N:'cantidad de términos'\n '''\n n = np.arange(N)\n return ((a**n * x0 + (b * ((a**n-1) / (a-1)))) % m) / m\n\na, b, m, x0, N = 2, 5, 9, 1, 10\n\ncong1 = congruencial_vector(a,b,m,x0,N)\ncong1", "_____no_output_____" ] ], [ [ "### 1.2 Programemos este método\n\nDe acuerdo a lo descrito arriba, quisiéramos programar una función que reciba:\n- la semilla $x_0$,\n- el multiplicador $a$,\n- el incremento $b$,\n- el módulo $m$, y\n- la cantidad de elementos de la secuencia pseudoaleatoria requeridos $n$,\n\ny que retorne la secuencia pseudoaleatoria de longitud $n$.", "_____no_output_____" ] ], [ [ "#### Importar la librería numpy... 
útil para el manejo de datos n-dimensionales (vectores)\nimport numpy as np", "_____no_output_____" ], [ "# Elevar una constante a un vector\nb=np.array([1, 2, 3])\n2**b", "_____no_output_____" ], [ "#### Escribir la función acá\ndef cong_method_for(x0:\"Semilla inicial\",\n a:\"Multiplicador\",\n b:\"Incremento\", \n m:\"Módulo\",\n n:\"Número de elementos\"):\n '''\n Esta función contiene la programación del método congruencial para la \n sucesión x_n = (ax_{n−1} + b) mod m\n '''\n\n return result/m # Para que regrese números entre 0 y 1", "_____no_output_____" ], [ "#### Escribir la función sin ciclo for tradicional\ndef cong_method1(x0:\"Semilla inicial\",a:\"Multiplicador\",b:\"Incremento\", m:\"Módulo\",\n n:\"Número de elementos\"):\n '''\n Esta función contiene la programación del método congruencial para \n la sucesión x_n = (ax_{n−1} + b) mod m\n UTILIZANDO FUNCIONES ANIDADAS\n '''\n\n return x/m", "_____no_output_____" ] ], [ [ "> [Link](https://www.programiz.com/python-programming/global-local-nonlocal-variables) enlace con la explicación de las variables **globales, locales y no locales**", "_____no_output_____" ] ], [ [ "help(cong_method_for)", "Help on function cong_method_for in module __main__:\n\ncong_method_for(x0: 'Semilla inicial', a: 'Multiplicador', b: 'Incremento', m: 'Módulo', n: 'Número de elementos')\n Esta función contiene la programación del método congruencial para la sucesión x_n = (ax_{n−1} + b) mod m\n\n" ] ], [ [ "**Ejemplo**\n\nProbar con los conjuntos de parámetros anteriores:\n1. $m=9$, $a=5$, $b=1$, $x_0=1$.\n2. $m=16$, $a=5$, $b=3$, $x_0=7$.\n\nAdemás,\n- Para el conjunto de parámetros 1, probar con las semillas $x_0=5,8$.\n- Para el conjunto de parámetros 2, probar con diferentes semillas.", "_____no_output_____" ] ], [ [ "#### Probar acá\nx = cong_method_for(1, 5, 1, 9, 15)\nx", "_____no_output_____" ], [ "x = cong_method1(7, 5, 3, 16, 15)\nx", "_____no_output_____" ] ], [ [ "Probemos con otro conjunto de parámetros", "_____no_output_____" ] ], [ [ "x = cong_method1(1, 5, 3, 17, 20)\nx", "_____no_output_____" ] ], [ [ "**Ejemplo**\n\nLos ciclos *for* o *while* son un atentado contra la computación eficiente. Programar de forma vectorizada usando la fórmula:\n$$x_n = \\left(a^n x_0+b\\frac{a^n-1}{a-1}\\right) \\mod m.$$", "_____no_output_____" ] ], [ [ "#### Escribir la función acá\ndef cong_method2(x0, a, b, m, n):\n N = np.arange(n) \n return ((a**N * x0 + b * ((a**N-1)/(a-1))) % m)/m", "_____no_output_____" ], [ "Me2 = cong_method2(2, 5, 3, 16, 20)\nprint(Me2)\n", "[0.125 0.8125 0.25 0.4375 0.375 0.0625 0.5 0.6875 0.625 0.3125\n 0.75 0.9375 0.875 0.5625 0. 0.1875 0.125 0.8125 0.25 0.4375]\n" ], [ "Me1 = cong_method1(2, 5, 3, 16, 20)\nprint(Me1)", "[0.125 0.8125 0.25 0.4375 0.375 0.0625 0.5 0.6875 0.625 0.3125\n 0.75 0.9375 0.875 0.5625 0. 0.1875 0.125 0.8125 0.25 0.4375]\n" ] ], [ [ "Entonces vemos que la calidad de nuestro generador congruencial depende fuertemente de la elección de los parámetros, pues quisiéramos que los periodos sean lo más grandes posible ($m$).\n\nCuando el periodo de un generador congruencial coincide con el módulo $m$, lo llamaremos *generador de ciclo completo*. El periodo de este tipo de generadores es independiente de la semilla que utilicemos.\n\nEl siguiente Teorema nos da condiciones para crear generadores de ciclo completo:", "_____no_output_____" ], [ "**Teorema.** Un generador congruencial tiene periodo completo si y sólo si se cumplen las siguientes condiciones:\n1. $m$ y $b$ son primos entre sı́.\n2. 
Si $q$ es un número primo que divide a $m$, entonces $q$ divide a $a − 1$.\n3. Si $4$ divide a m, entonces 4 divide a $a − 1$.", "_____no_output_____" ], [ "**Ejercicio**\n\nComprobar el teorema en el conjunto de parámetros 2.", "_____no_output_____" ], [ "### 1.3 Comentarios adicionales sobre el generador congruencial\n\nHasta ahora solo nos basamos en aspectos teóricos para ver si un generador es bueno. También hay aspectos computacionales...\n\nEn ese sentido los generadores multiplicativos son más eficientes que los mixtos porque se ahorran la operación de suma. Sin embargo, por el **Teorema** <font color=red>¿qué pasa con los generadores multiplicativos?</font>\n\nDe igual forma, una elección computacionalmente adecuada es $m=2^k$ (se elige m grande para tener periodos grandes). Con esta elección, y $k\\geq2$, el generador tendrá periodo completo si y sólo si $b$ es impar y $1 = a \\mod 4$.\n\nSi se combina lo anterior (generador multiplicativo con $m=2^k$), obtenemos que el periodo máximo que se puede obtener es una cuarta parte de $m$, $\\frac{2^k}{4}=2^{k-2}$ y se alcanza únicamente para $x_0$ impar y, $3 = a \\mod 8$ o $5 = a \\mod 8$.\n\nUn generador multiplicativo muy utilizado, conocido como *RANDU*, tomaba $m = 2^{31}$ y $a = 2^{16} + 3$. Sin embargo, se ha demostrado que tiene propiedades estadı́sticas bastante malas.\n\nLos generadores multiplicativos más famosos utilizados por IBM tomaban $m = 2^{31}$ y $b = 12345$ o $a= 1103515245$.\n\nPueden encontrar más información en este [enlace](https://en.wikipedia.org/wiki/Linear_congruential_generator).\n\n- Se pueden hacer combinaciones de generadores y otros generadores más complicados...", "_____no_output_____" ], [ "**Ejemplo**\n\nTomar los parámetros $m=2^{31} − 1$, $a=1103515245$ y $b=12345$, y generar una secuencia pseudoaleatoria uniforme estándar de $n=10^4$ elementos.\n\nLuego, dibujar el histograma (diagrama de frecuencias). ¿Corresponde lo obtenido con lo que se imaginaban?", "_____no_output_____" ] ], [ [ "import time\nimport numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "# Tiempo de cálculo usando el método congruencial \"FOR\" convencional\nt1=time.time()\nx = cong_method_for(3, 1103515245, 12345, 2**31-1, 10**6)\nprint('Tiempo de cálculo:',time.time()-t1)", "Tiempo de cálculo: 0.5894374847412109\n" ], [ "# Tiempo de cálculo usando el método congruencial \"FUNCIONES\" convencional\nt1=time.time()\nx = cong_method2(3, 1103515245, 12345, 2**31-1, 10**6)\nprint('Tiempo de cálculo:',time.time()-t1)", "Tiempo de cálculo: 0.1242365837097168\n" ], [ "(0.591420-0.1611)/0.591420", "_____no_output_____" ], [ "t1=time.time()\nx = cong_method2(3, 1103515245, 12345, 2**31-1, 10**6)\nprint('Tiempo de cálculo:',time.time()-t1)", "Tiempo de cálculo: 0.08669424057006836\n" ], [ "%matplotlib inline", "_____no_output_____" ], [ "plt.hist(x,50,density=True)\nplt.xlabel('valores aleatorios')\nplt.ylabel('probabilidad')\nplt.title('histograma')\nplt.show()", "_____no_output_____" ] ], [ [ "**Ejemplo**\n\n¿Cómo hacer para obtener secuencias pseudoaleatorias en $\\mathcal{U}(a,b)$?\n\nRealizar un código para esto. 
Hacer una prueba con los parámetros anteriormente tomados y dibujar el histograma para contrastar.", "_____no_output_____" ] ], [ [ "np.random.seed(10203)\nnp.random.rand()", "_____no_output_____" ], [ "#### Resolver acá\na, b = 7, 10\nxab = (b-a)*x+a", "_____no_output_____" ], [ "plt.hist(xab,100,density=True)\nplt.xlabel('valores aleatorios')\nplt.ylabel('probabilidad')\nplt.title('histograma')\nplt.show()", "_____no_output_____" ] ], [ [ "**Ejemplo**\n\nEscribir una función que devuelva secuencias de números aleatorios $\\mathcal{U}(0,1)$ usando los parámetros dados anteriormente y que use como semilla `time.time()`.", "_____no_output_____" ] ], [ [ "time.time()", "_____no_output_____" ], [ "#### Resolver acá\nimport time\ndef randuni(n):\n return cong_method2(round(time.time()*10**7), 1103515245, 12345, 2**31-1, n+1)[1:]", "_____no_output_____" ], [ "randuni(10)", "_____no_output_____" ] ], [ [ "___\n## 2. Método Box–Muller para generación de números pseudoaleatorios $\\mathcal{N}(0,1)$\n\nTeniendo dos secuencias de números pseudoaleatorios independientes e uniformemente distribuidos en el intervalo $\\left[0,1\\right]$ ($\\mathcal{U}(0,1)$) es posible generar dos secuencias de números pseudoaleatorios independientes y normalmente distribuidos con media cero y varianza unitaria ($\\mathcal{N}(0,1)$).\n\nEste método se conoce como el método Box–Muller.", "_____no_output_____" ], [ "Supongamos que $U_1$ y $U_2$ son variables aleatorias independientes que están uniformemente distribuidas en el intervalo $\\left[0,1\\right]$. Sean entonces:\n\n$$X=R\\cos(\\theta)=\\sqrt{-2\\ln(U_1)}\\cos(2\\pi U_2),$$\n\ny\n\n$$Y=R\\sin(\\theta)=\\sqrt{-2\\ln(U_1)}\\sin(2\\pi U_2).$$\n\nEntonces, $X$ y $Y$ son variables aleatorias independientes con una distribución normal estándar ($\\mathcal{N}(0,1)$).", "_____no_output_____" ], [ "La derivación de esto se basa en la transformación del sistema cartesiano al sistema polar.\n\n<font color=blue> Mostrar intuitivamente en el tablero [link](http://www.lmpt.univ-tours.fr/~nicolis/Licence_NEW/08-09/boxmuller.pdf). </font>", "_____no_output_____" ], [ "**Ejemplo**\n\nEscribir una función que devuelva secuencias de números aleatorios $\\mathcal{N}(0,1)$.\n\n*Usar la función escrita anteriormente*", "_____no_output_____" ] ], [ [ "#### Resolver acá\ndef randnorm(n):\n u1,u2 = randuni(n), randuni(n)\n theta = 2*np.pi*u2\n x = np.sqrt(-2*np.log(u1))*np.cos(theta)\n y = np.sqrt(-2*np.log(u1))*np.sin(theta)\n return x,y", "_____no_output_____" ] ], [ [ "**Ejemplo**\n\nGenerar una secuencia pseudoaleatoria normal estándar de $n=10^4$ elementos.\n\nLuego, dibujar el histograma (diagrama de frecuencias). ¿Corresponde lo obtenido con lo que se imaginaban?", "_____no_output_____" ] ], [ [ "#### Resolver acá\nx,y = randnorm(10**5)\nplt.hist(y,50,density=False)\nplt.xlabel('valores aleatorios')\nplt.ylabel('probabilidad')\nplt.title('histograma')\nplt.show()\n", "_____no_output_____" ], [ "plt.hist(x,50,density=False)\nplt.xlabel('valores aleatorios')\nplt.ylabel('probabilidad')\nplt.title('histograma')\nplt.show()", "_____no_output_____" ] ], [ [ "**Ejemplo**\n\n¿Cómo hacer para obtener secuencias pseudoaleatorias en $\\mathcal{N}(\\mu,\\sigma)$?\n\nRealizar un código para esto. 
Hacer una prueba y dibujar el histograma para contrastar.", "_____no_output_____" ] ], [ [ "#### Resolver acá\nmu = 5\nsigma =3\nX = sigma*y+mu", "_____no_output_____" ], [ "plt.hist(X,200,density=True)\nplt.xlabel('valores aleatorios')\nplt.ylabel('frecuencia')\nplt.title('histograma')\nplt.show()", "_____no_output_____" ] ], [ [ "Finalmente, mostrar que funciones de este tipo ya están en `numpy`. Ya sabemos como se obtienen.", "_____no_output_____" ] ], [ [ "x=np.random.uniform(10,20,10**6)\nplt.hist(x,100,density=True)\nplt.xlabel('valores aleatorios')\nplt.ylabel('frecuencia')\nplt.title('histograma')\nplt.show()", "_____no_output_____" ], [ "help(np.random.normal)", "Help on built-in function normal:\n\nnormal(...) method of mtrand.RandomState instance\n normal(loc=0.0, scale=1.0, size=None)\n \n Draw random samples from a normal (Gaussian) distribution.\n \n The probability density function of the normal distribution, first\n derived by De Moivre and 200 years later by both Gauss and Laplace\n independently [2]_, is often called the bell curve because of\n its characteristic shape (see the example below).\n \n The normal distributions occurs often in nature. For example, it\n describes the commonly occurring distribution of samples influenced\n by a large number of tiny, random disturbances, each with its own\n unique distribution [2]_.\n \n Parameters\n ----------\n loc : float or array_like of floats\n Mean (\"centre\") of the distribution.\n scale : float or array_like of floats\n Standard deviation (spread or \"width\") of the distribution.\n size : int or tuple of ints, optional\n Output shape. If the given shape is, e.g., ``(m, n, k)``, then\n ``m * n * k`` samples are drawn. If size is ``None`` (default),\n a single value is returned if ``loc`` and ``scale`` are both scalars.\n Otherwise, ``np.broadcast(loc, scale).size`` samples are drawn.\n \n Returns\n -------\n out : ndarray or scalar\n Drawn samples from the parameterized normal distribution.\n \n See Also\n --------\n scipy.stats.norm : probability density function, distribution or\n cumulative density function, etc.\n \n Notes\n -----\n The probability density for the Gaussian distribution is\n \n .. math:: p(x) = \\frac{1}{\\sqrt{ 2 \\pi \\sigma^2 }}\n e^{ - \\frac{ (x - \\mu)^2 } {2 \\sigma^2} },\n \n where :math:`\\mu` is the mean and :math:`\\sigma` the standard\n deviation. The square of the standard deviation, :math:`\\sigma^2`,\n is called the variance.\n \n The function has its peak at the mean, and its \"spread\" increases with\n the standard deviation (the function reaches 0.607 times its maximum at\n :math:`x + \\sigma` and :math:`x - \\sigma` [2]_). This implies that\n `numpy.random.normal` is more likely to return samples lying close to\n the mean, rather than those far away.\n \n References\n ----------\n .. [1] Wikipedia, \"Normal distribution\",\n http://en.wikipedia.org/wiki/Normal_distribution\n .. [2] P. R. Peebles Jr., \"Central Limit Theorem\" in \"Probability,\n Random Variables and Random Signal Principles\", 4th ed., 2001,\n pp. 
51, 51, 125.\n \n Examples\n --------\n Draw samples from the distribution:\n \n >>> mu, sigma = 0, 0.1 # mean and standard deviation\n >>> s = np.random.normal(mu, sigma, 1000)\n \n Verify the mean and the variance:\n \n >>> abs(mu - np.mean(s)) < 0.01\n True\n \n >>> abs(sigma - np.std(s, ddof=1)) < 0.01\n True\n \n Display the histogram of the samples, along with\n the probability density function:\n \n >>> import matplotlib.pyplot as plt\n >>> count, bins, ignored = plt.hist(s, 30, normed=True)\n >>> plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *\n ... np.exp( - (bins - mu)**2 / (2 * sigma**2) ),\n ... linewidth=2, color='r')\n >>> plt.show()\n\n" ], [ "x = np.random.normal(5,3,10**6)\nplt.hist(x,100,density=True)\nplt.xlabel('valores aleatorios')\nplt.ylabel('frecuencia')\nplt.title('histograma')\nplt.show()", "_____no_output_____" ] ], [ [ "> ## Tarea 3: (Usando notebook de jupyter)** \n\n> Usando compresión de listas o funciones map(sino recuerda como funciona observar el siguiente enlace https://www.pythonforbeginners.com/lists/list-comprehensions-in-python/), resolver los siguientes ejercicios:\n\n>1. Resolver la siguiente ecuación recursiva usando funciones como se vió en clase\n$$ D_{n}=(n-1) D_{n-1}+(n-1) D_{n-2} \\quad n\\ge 3$$\ncon $D_1=0$ y $D_2 = 1$\n>3. Count the number of spaces in the following string `variable = relaciónn requiere, para obtener el valor de un cierto término, el conocimiento de los dos anteriores`.\n>4. Remove all of the vowels in a string [make a list of the non-vowels].\n>5. Find all of the words in a string that are less than 4 letters.\n>6. Use a dictionary comprehension to count the length of each word in a sentence.\n>7. Use a nested list comprehension to find all of the numbers from 1-1000 that are divisible by any single digit besides 1 (2-9). ", "_____no_output_____" ], [ "<script>\n $(document).ready(function(){\n $('div.prompt').hide();\n $('div.back-to-top').hide();\n $('nav#menubar').hide();\n $('.breadcrumb').hide();\n $('.hidden-print').hide();\n });\n</script>\n\n<footer id=\"attribution\" style=\"float:right; color:#808080; background:#fff;\">\nCreated with Jupyter by Esteban Jiménez Rodríguez and edited by Oscar David Jaramillo Z.\n</footer>", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ] ]
ec997012f8910c830605922c1e5f165b8fca7d03
14,043
ipynb
Jupyter Notebook
research/train_income_classifier.ipynb
lovedeepkaursaini/ml_api_lt
4af98b6d229fdfcbd2eec56526b6faedd86bf1e3
[ "MIT" ]
null
null
null
research/train_income_classifier.ipynb
lovedeepkaursaini/ml_api_lt
4af98b6d229fdfcbd2eec56526b6faedd86bf1e3
[ "MIT" ]
null
null
null
research/train_income_classifier.ipynb
lovedeepkaursaini/ml_api_lt
4af98b6d229fdfcbd2eec56526b6faedd86bf1e3
[ "MIT" ]
null
null
null
32.582367
343
0.427758
[ [ [ "import json # will be needimport json # will be needed for saving preprocessing details\nimport numpy as np # for data manipulation\nimport pandas as pd # for data manipulation\nfrom sklearn.model_selection import train_test_split # will be used for data split\nfrom sklearn.preprocessing import LabelEncoder # for preprocessing\nfrom sklearn.ensemble import RandomForestClassifier # for training the algorithm\nfrom sklearn.ensemble import ExtraTreesClassifier # for training the algorithm\nimport joblib # for saving algorithm and preprocessing objectsed for saving preprocessing details\nimport numpy as np # for data manipulation\nimport pandas as pd # for data manipulation\nfrom sklearn.model_selection import train_test_split # will be used for data split\nfrom sklearn.preprocessing import LabelEncoder # for preprocessing\nfrom sklearn.ensemble import RandomForestClassifier # for training the algorithm\nfrom sklearn.ensemble import ExtraTreesClassifier # for training the algorithm\nimport joblib # for saving algorithm and preprocessing objects", "_____no_output_____" ], [ "# load dataset\ndf = pd.read_csv('https://raw.githubusercontent.com/pplonski/datasets-for-start/master/adult/data.csv', skipinitialspace=True)\nx_cols = [c for c in df.columns if c != 'income']\n# set input matrix and target column\nX = df[x_cols]\ny = df['income']\n# show first rows of data\ndf.head()", "_____no_output_____" ], [ "# data split train / test\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state=1234)", "_____no_output_____" ], [ "# fill missing values\ntrain_mode = dict(X_train.mode().iloc[0])\nX_train = X_train.fillna(train_mode)\nprint(train_mode)", "{'age': 31.0, 'workclass': 'Private', 'fnlwgt': 121124, 'education': 'HS-grad', 'education-num': 9.0, 'marital-status': 'Married-civ-spouse', 'occupation': 'Prof-specialty', 'relationship': 'Husband', 'race': 'White', 'sex': 'Male', 'capital-gain': 0.0, 'capital-loss': 0.0, 'hours-per-week': 40.0, 'native-country': 'United-States'}\n" ], [ "# convert categoricals\nencoders = {}\nfor column in ['workclass', 'education', 'marital-status',\n 'occupation', 'relationship', 'race',\n 'sex','native-country']:\n categorical_convert = LabelEncoder()\n X_train[column] = categorical_convert.fit_transform(X_train[column])\n encoders[column] = categorical_convert", "_____no_output_____" ], [ "X_train.mode()", "_____no_output_____" ], [ "# train the Random Forest algorithm\nrf = RandomForestClassifier(n_estimators = 100)\nrf = rf.fit(X_train, y_train)", "_____no_output_____" ], [ "# train the Extra Trees algorithm\net = ExtraTreesClassifier(n_estimators = 100)\net = et.fit(X_train, y_train)", "_____no_output_____" ], [ "# save preprocessing objects and RF algorithm\njoblib.dump(train_mode, \"./train_mode.joblib\", compress=True)\njoblib.dump(encoders, \"./encoders.joblib\", compress=True)\njoblib.dump(rf, \"./random_forest.joblib\", compress=True)\njoblib.dump(et, \"./extra_trees.joblib\", compress=True)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ec9987ca859dc69295102bb752bff6ebfcab0b3d
103,957
ipynb
Jupyter Notebook
.ipynb_checkpoints/mnist_dcgan + gan-checkpoint.ipynb
mshaikh2/GANs_Comparision
123d94b241b041ea3a9c636653cae3265217d306
[ "MIT" ]
1
2018-02-20T13:05:37.000Z
2018-02-20T13:05:37.000Z
.ipynb_checkpoints/mnist_dcgan + gan-checkpoint.ipynb
mshaikh2/GANs_Comparision
123d94b241b041ea3a9c636653cae3265217d306
[ "MIT" ]
null
null
null
.ipynb_checkpoints/mnist_dcgan + gan-checkpoint.ipynb
mshaikh2/GANs_Comparision
123d94b241b041ea3a9c636653cae3265217d306
[ "MIT" ]
null
null
null
26.758559
174
0.309638
[ [ [ "# %load mnist_dcgan.py\n# import os\n# os.environ[\"KERAS_BACKEND\"] = \"tensorflow\"\nimport numpy as np\nfrom tqdm import tqdm\nimport matplotlib.pyplot as plt\n\nfrom keras.layers import Input\nfrom keras.models import Model, Sequential\nfrom keras.layers.core import Reshape, Dense, Dropout, Flatten\nfrom keras.layers.advanced_activations import LeakyReLU\nfrom keras.layers.convolutional import Conv2D, UpSampling2D\nfrom keras.datasets import mnist\nfrom keras.optimizers import Adam\nfrom keras import backend as K\nfrom keras import initializers\n\nK.set_image_dim_ordering('th')\n\n# Deterministic output.\n# Tired of seeing the same results every time? Remove the line below.\nnp.random.seed(1000)\n\n# The results are a little better when the dimensionality of the random vector is only 10.\n# The dimensionality has been left at 100 for consistency with other GAN implementations.\nrandomDim = 100\n\n# Load MNIST data\n(X_train, y_train), (X_test, y_test) = mnist.load_data()\nX_train = (X_train.astype(np.float32) - 127.5)/127.5\nX_train = X_train[:, np.newaxis, :, :]\n\n# Optimizer\nadam = Adam(lr=0.0002, beta_1=0.5)\n\n# Generator\ngenerator = Sequential()\ngenerator.add(Dense(128*7*7, input_dim=randomDim, kernel_initializer=initializers.RandomNormal(stddev=0.02)))\ngenerator.add(LeakyReLU(0.2))\ngenerator.add(Reshape((128, 7, 7)))\ngenerator.add(UpSampling2D(size=(2, 2)))\ngenerator.add(Conv2D(64, kernel_size=(5, 5), padding='same'))\ngenerator.add(LeakyReLU(0.2))\ngenerator.add(UpSampling2D(size=(2, 2)))\ngenerator.add(Conv2D(1, kernel_size=(5, 5), padding='same', activation='tanh'))\ngenerator.compile(loss='binary_crossentropy', optimizer=adam)\n\n# Discriminator\ndiscriminator = Sequential()\ndiscriminator.add(Conv2D(64, kernel_size=(5, 5), strides=(2, 2), padding='same', input_shape=(1, 28, 28), kernel_initializer=initializers.RandomNormal(stddev=0.02)))\ndiscriminator.add(LeakyReLU(0.2))\ndiscriminator.add(Dropout(0.3))\ndiscriminator.add(Conv2D(128, kernel_size=(5, 5), strides=(2, 2), padding='same'))\ndiscriminator.add(LeakyReLU(0.2))\ndiscriminator.add(Dropout(0.3))\ndiscriminator.add(Flatten())\ndiscriminator.add(Dense(1, activation='sigmoid'))\ndiscriminator.compile(loss='binary_crossentropy', optimizer=adam)\n\n# Combined network\ndiscriminator.trainable = False\nganInput = Input(shape=(randomDim,))\nx = generator(ganInput)\nganOutput = discriminator(x)\ngan = Model(inputs=ganInput, outputs=ganOutput)\ngan.compile(loss='binary_crossentropy', optimizer=adam)\n\ndLosses = []\ngLosses = []\n\n# Plot the loss from each batch\ndef plotLoss(epoch):\n plt.figure(figsize=(10, 8))\n plt.plot(dLosses, label='Discriminitive loss')\n plt.plot(gLosses, label='Generative loss')\n plt.xlabel('Epoch')\n plt.ylabel('Loss')\n plt.legend()\n plt.savefig('images/dcgan_%d_loss_epoch.png' % epoch)\n\n# Create a wall of generated MNIST images\ndef plotGeneratedImages(epoch, examples=100, dim=(10, 10), figsize=(10, 10)):\n noise = np.random.normal(0, 1, size=[examples, randomDim])\n generatedImages = generator.predict(noise)\n\n plt.figure(figsize=figsize)\n for i in range(generatedImages.shape[0]):\n plt.subplot(dim[0], dim[1], i+1)\n plt.imshow(generatedImages[i, 0], interpolation='nearest', cmap='gray_r')\n plt.axis('off')\n plt.tight_layout()\n plt.savefig('images/dcgan_%d_generated_image_epoch.png' % epoch)\n\n# Save the generator and discriminator networks (and weights) for later use\ndef saveModels(epoch):\n generator.save('models/dcgan_generator_epoch_%d.h5' % epoch)\n 
discriminator.save('models/dcgan_discriminator_epoch_%d.h5' % epoch)\n\ndef train(epochs=1, batchSize=128):\n batchCount = X_train.shape[0] / batchSize\n print ('Epochs:', epochs)\n print ('Batch size:', batchSize)\n print ('Batches per epoch:', int(batchCount))\n\n for e in range(1, epochs+1):\n print ('-'*15, 'Epoch %d' % e, '-'*15)\n for _ in tqdm(range(int(batchCount))):\n # Get a random set of input noise and images\n noise = np.random.normal(0, 1, size=[batchSize, randomDim])\n imageBatch = X_train[np.random.randint(0, X_train.shape[0], size=batchSize)]\n\n # Generate fake MNIST images\n generatedImages = generator.predict(noise)\n X = np.concatenate([imageBatch, generatedImages])\n\n # Labels for generated and real data\n yDis = np.zeros(2*batchSize)\n # One-sided label smoothing\n yDis[:batchSize] = 0.9\n\n # Train discriminator\n discriminator.trainable = True\n dloss = discriminator.train_on_batch(X, yDis)\n\n # Train generator\n noise = np.random.normal(0, 1, size=[batchSize, randomDim])\n yGen = np.ones(batchSize)\n discriminator.trainable = False\n gloss = gan.train_on_batch(noise, yGen)\n\n # Store loss of most recent batch from this epoch\n dLosses.append(dloss)\n gLosses.append(gloss)\n\n if e == 1 or e % 5 == 0:\n plotGeneratedImages(e)\n saveModels(e)\n\n # Plot losses from every epoch\n plotLoss(e)\n\nif __name__ == '__main__':\n train(50, 128)\n\n", "Epochs: 50\nBatch size: 128\nBatches per epoch: 468\n--------------- Epoch 1 ---------------\n" ], [ "# %load mnist_gan.py\nimport os\nos.environ[\"KERAS_BACKEND\"] = \"tensorflow\"\nimport numpy as np\nfrom tqdm import tqdm\nimport matplotlib.pyplot as plt\n\nfrom keras.layers import Input\nfrom keras.models import Model, Sequential\nfrom keras.layers.core import Reshape, Dense, Dropout, Flatten\nfrom keras.layers.advanced_activations import LeakyReLU\nfrom keras.layers.convolutional import Convolution2D, UpSampling2D\nfrom keras.layers.normalization import BatchNormalization\nfrom keras.datasets import mnist\nfrom keras.optimizers import Adam\nfrom keras import backend as K\nfrom keras import initializers\n\nK.set_image_dim_ordering('th')\n\n# Deterministic output.\n# Tired of seeing the same results every time? 
Remove the line below.\nnp.random.seed(1000)\n\n# The results are a little better when the dimensionality of the random vector is only 10.\n# The dimensionality has been left at 100 for consistency with other GAN implementations.\nrandomDim = 100\n\n# Load MNIST data\n(X_train, y_train), (X_test, y_test) = mnist.load_data()\nX_train = (X_train.astype(np.float32) - 127.5)/127.5\nX_train = X_train.reshape(60000, 784)\n\n# Optimizer\nadam = Adam(lr=0.0002, beta_1=0.5)\n\ngenerator = Sequential()\ngenerator.add(Dense(256, input_dim=randomDim, kernel_initializer=initializers.RandomNormal(stddev=0.02)))\ngenerator.add(LeakyReLU(0.2))\ngenerator.add(Dense(512))\ngenerator.add(LeakyReLU(0.2))\ngenerator.add(Dense(1024))\ngenerator.add(LeakyReLU(0.2))\ngenerator.add(Dense(784, activation='tanh'))\ngenerator.compile(loss='binary_crossentropy', optimizer=adam)\n\ndiscriminator = Sequential()\ndiscriminator.add(Dense(1024, input_dim=784, kernel_initializer=initializers.RandomNormal(stddev=0.02)))\ndiscriminator.add(LeakyReLU(0.2))\ndiscriminator.add(Dropout(0.3))\ndiscriminator.add(Dense(512))\ndiscriminator.add(LeakyReLU(0.2))\ndiscriminator.add(Dropout(0.3))\ndiscriminator.add(Dense(256))\ndiscriminator.add(LeakyReLU(0.2))\ndiscriminator.add(Dropout(0.3))\ndiscriminator.add(Dense(1, activation='sigmoid'))\ndiscriminator.compile(loss='binary_crossentropy', optimizer=adam)\n\n# Combined network\ndiscriminator.trainable = False\nganInput = Input(shape=(randomDim,))\nx = generator(ganInput)\nganOutput = discriminator(x)\ngan = Model(inputs=ganInput, outputs=ganOutput)\ngan.compile(loss='binary_crossentropy', optimizer=adam)\n\ndLosses = []\ngLosses = []\n\n# Plot the loss from each batch\ndef plotLoss(epoch):\n plt.figure(figsize=(10, 8))\n plt.plot(dLosses, label='Discriminitive loss')\n plt.plot(gLosses, label='Generative loss')\n plt.xlabel('Epoch')\n plt.ylabel('Loss')\n plt.legend()\n plt.savefig('images/gan_neuralnet/gan_%d_loss_epoch_.png' % epoch)\n\n# Create a wall of generated MNIST images\ndef plotGeneratedImages(epoch, examples=100, dim=(10, 10), figsize=(10, 10)):\n noise = np.random.normal(0, 1, size=[examples, randomDim])\n generatedImages = generator.predict(noise)\n generatedImages = generatedImages.reshape(examples, 28, 28)\n\n plt.figure(figsize=figsize)\n for i in range(generatedImages.shape[0]):\n plt.subplot(dim[0], dim[1], i+1)\n plt.imshow(generatedImages[i], interpolation='nearest', cmap='gray_r')\n plt.axis('off')\n plt.tight_layout()\n plt.savefig('images/gan_neuralnet/gan_%d_generated_image_epoch.png' % epoch)\n\n# Save the generator and discriminator networks (and weights) for later use\ndef saveModels(epoch):\n generator.save('models/gan_neuralnet/gan_generator_epoch_%d.h5' % epoch)\n discriminator.save('models/gan_neuralnet/gan_discriminator_epoch_%d.h5' % epoch)\n\ndef train(epochs=1, batchSize=128):\n batchCount = int(X_train.shape[0] / batchSize)\n print( 'Epochs:', epochs)\n print( 'Batch size:', batchSize)\n print( 'Batches per epoch:', batchCount)\n\n for e in range(1, epochs+1):\n print ('-'*15, 'Epoch %d' % e, '-'*15)\n for _ in tqdm(range(batchCount)):\n # Get a random set of input noise and images\n noise = np.random.normal(0, 1, size=[batchSize, randomDim])\n imageBatch = X_train[np.random.randint(0, X_train.shape[0], size=batchSize)]\n\n # Generate fake MNIST images\n generatedImages = generator.predict(noise)\n # print np.shape(imageBatch), np.shape(generatedImages)\n X = np.concatenate([imageBatch, generatedImages])\n\n # Labels for generated and real data\n 
yDis = np.zeros(2*batchSize)\n # One-sided label smoothing\n yDis[:batchSize] = 0.9\n\n # Train discriminator\n discriminator.trainable = True\n dloss = discriminator.train_on_batch(X, yDis)\n\n # Train generator\n noise = np.random.normal(0, 1, size=[batchSize, randomDim])\n yGen = np.ones(batchSize)\n discriminator.trainable = False\n gloss = gan.train_on_batch(noise, yGen)\n\n # Store loss of most recent batch from this epoch\n dLosses.append(dloss)\n gLosses.append(gloss)\n\n if e == 1 or e % 20 == 0:\n plotGeneratedImages(e)\n saveModels(e)\n\n # Plot losses from every epoch\n plotLoss(e)\n\nif __name__ == '__main__':\n train(200, 128)\n\n", "Epochs: 200\nBatch size: 128\nBatches per epoch: 468\n--------------- Epoch 1 ---------------\n" ], [ "randomDim = 100\nexamples=1\nnoise = np.random.normal(0, 1, size=[examples, randomDim])\ngeneratedImages = generator.predict(noise)\ngeneratedImages = generatedImages.reshape(examples, 28, 28)\ngeneratedImages.shape\nimport cv2", "_____no_output_____" ], [ "cv2.imshow('IMG',generatedImages[0])\ncv2.waitKey(0)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
ec99abe2c3d939a9e0d1f3987e9c7cc996d38f4d
11,714
ipynb
Jupyter Notebook
core-py/Python3 - Keywords.ipynb
Nahid-Hassan/software-development
c2157bc6a93fa9fdf2e86de2e6f51876ff46b83f
[ "MIT" ]
null
null
null
core-py/Python3 - Keywords.ipynb
Nahid-Hassan/software-development
c2157bc6a93fa9fdf2e86de2e6f51876ff46b83f
[ "MIT" ]
null
null
null
core-py/Python3 - Keywords.ipynb
Nahid-Hassan/software-development
c2157bc6a93fa9fdf2e86de2e6f51876ff46b83f
[ "MIT" ]
null
null
null
19.047154
153
0.445023
[ [ [ "# Python Keywords\n\nPython has a set of `keywords` that are `reserved` words that cannot be used as `variable` names, `function` names, or any other `identifiers`.\n\nKeywords are `case-sensitive`. All the words except `True`, `False` & `None` are lowercase.", "_____no_output_____" ], [ "### `and`: A logical Operator \n\n`and` keywords combines conditional operators. It returns `True` if both statements are `True` else returns `False`. ", "_____no_output_____" ] ], [ [ "# example - 1\nflag = 'Age' < 'Year' and 23 < 27\nflag", "_____no_output_____" ], [ "# example - 2\nif 5 % 2 == 1 and 5 >= 3 and 4 <= 10:\n print(\"All the three statements are True.\")\nelse:\n print(\"At least one of the statements are False.\")", "All the three statements are True.\n" ] ], [ [ "## `as`: Create an `alias`\n\n`as` is used to create an alias while importing a module.", "_____no_output_____" ] ], [ [ "# example - 1\nimport calendar as cal\n\n# display the month of year 2021, feburary\nyear = 2021\nmonth = 2\n\n# now we can use calendar module using `cal`\nprint(cal.month(theyear=year, themonth=month))", " February 2021\nMo Tu We Th Fr Sa Su\n 1 2 3 4 5 6 7\n 8 9 10 11 12 13 14\n15 16 17 18 19 20 21\n22 23 24 25 26 27 28\n\n" ] ], [ [ "### `assert`: Using for `Debbuing`\n\n`assert` is use for debugging purposes. While programming, sometimes we want to check the internal state or check if our `assumptions` are true.\n\n<br>\n\n```py\nassert condition, message \n```\n\nIf condition are true then nothing happens else raised an `AssertionError`.", "_____no_output_____" ] ], [ [ "assert 5 > 2 # Nothing happened ", "_____no_output_____" ] ], [ [ "```py\nassert 5 < 2, '5 is greater than 2' # Raise an assertion error\n```\n\nBoth Equivalent...\n\n```py\nif not codition:\n raise AssertionError(message)\n```", "_____no_output_____" ], [ "### `break`: Break out from a loop.\n\n`break` is used in the `for` and `while` loops to alter their normal behavior.", "_____no_output_____" ] ], [ [ "# example - 1 \nfor i in range(5):\n if i == 3:\n break\n print(i)", "0\n1\n2\n" ], [ "# example - 2\nfor i in range(3):\n print('i = ', i)\n for j in range(3):\n if j == 1:\n print(\"Break\")\n break\n print('j = ', j)", "i = 0\nj = 0\nBreak\ni = 1\nj = 0\nBreak\ni = 2\nj = 0\nBreak\n" ], [ "# example - 3: In while loop\ni = 1\nwhile i < 5:\n print(i)\n if i == 3:\n break\n i = i + 1", "1\n2\n3\n" ] ], [ [ "### `class`: Define a `class`\n\n`class` keyword is used to defined a user-defined class in Python.", "_____no_output_____" ] ], [ [ "class Student:\n def __init__(self, name, age):\n self.name = name\n self.age = age\n \n def display(self):\n print(f\"Name: {self.name}, Age: {self.age}\")", "_____no_output_____" ], [ "s = Student('Nahid', 23)\ns.display()", "Name: Nahid, Age: 23\n" ] ], [ [ "### `continue`: Continue the next iteration of a loop\n\n`continue` is used in the `for` and `while` loops to alter their normal behavior.", "_____no_output_____" ] ], [ [ "for i in range(7):\n if i == 3:\n print(\"Three is not printed.\")\n continue\n print(i)", "0\n1\n2\nThree is not printed.\n4\n5\n6\n" ] ], [ [ "### `def`: Define a `function`\n\n`def` is used to create, or define a function and `function` is \na block of related statements.", "_____no_output_____" ] ], [ [ "def powertwo(x):\n return x ** 2\n\npowertwo(4)", "_____no_output_____" ] ], [ [ "### `del`: To delete an `object`\n\n`del` is used to `delete` the reference to an `object`.", "_____no_output_____" ] ], [ [ "num = 10\nnum", "_____no_output_____" ], [ "del num", 
"_____no_output_____" ], [ "#num # num is not defined.", "_____no_output_____" ], [ "def yell(text):\n return text.title()\nyell('my name is nahid hassan.')", "_____no_output_____" ], [ "bark = yell", "_____no_output_____" ], [ "del yell", "_____no_output_____" ], [ "bark('my name is nahid hassan') # del is delete the reference object.", "_____no_output_____" ], [ "a = ['x', 'y', 'z']\na", "_____no_output_____" ], [ "del a[1]", "_____no_output_____" ], [ "a", "_____no_output_____" ] ], [ [ "### `if`, `elif`, `else`\n\n```py\nif condition:\n # if condition is True\n # this block of code is run\nelse:\n # this block of code is run\n```", "_____no_output_____" ] ], [ [ "if 3 > 2:\n print(\"3 is greater than 2\")\nelse:\n print(\"condition is false\")", "3 is greater than 2\n" ] ], [ [ "```py\nif condition:\n # if condition is true\n # this block of code is run\nelif condition:\n # if above condition is not run and this\n # condition is true than this block of code \n # is run\nelse:\n # if none of above block of code is not run \n # this block definately run\n```", "_____no_output_____" ] ], [ [ "mark = 70\n\nif mark >= 80:\n print(\"A+\")\nelif mark >= 70:\n print(\"A\")\nelif mark >= 60:\n print(\"Pass\")\nelse:\n print(\"Bad Grade\")", "A\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
ec99c75099f53be0974f2b1fa7dbde538e33ee1b
14,039
ipynb
Jupyter Notebook
paper/GNN.ipynb
varnerjwpu/exmol
6458c00272a4ae0b58c627c3fcb4f529eb8e1fdf
[ "MIT" ]
null
null
null
paper/GNN.ipynb
varnerjwpu/exmol
6458c00272a4ae0b58c627c3fcb4f529eb8e1fdf
[ "MIT" ]
null
null
null
paper/GNN.ipynb
varnerjwpu/exmol
6458c00272a4ae0b58c627c3fcb4f529eb8e1fdf
[ "MIT" ]
null
null
null
30.387446
99
0.517273
[ [ [ "# import os\n# os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"2\"\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport matplotlib as mpl\nimport numpy as np\nimport tensorflow as tf\nimport selfies as sf\nimport exmol\nimport skunk\nimport warnings\nfrom rdkit import Chem\nfrom rdkit.Chem.Draw import rdDepictor\n\nrdDepictor.SetPreferCoordGen(True)\nfrom rdkit.Chem.Draw import IPythonConsole\n\nIPythonConsole.ipython_useSVG = True\nsns.set_context(\"notebook\")\nsns.set_style(\n \"dark\",\n {\n \"xtick.bottom\": True,\n \"ytick.left\": True,\n \"xtick.color\": \"#666666\",\n \"ytick.color\": \"#666666\",\n \"axes.edgecolor\": \"#666666\",\n \"axes.linewidth\": 0.8,\n \"figure.dpi\": 300,\n },\n)\ncolor_cycle = [\"#1BBC9B\", \"#F06060\", \"#F3B562\", \"#6e5687\", \"#5C4B51\"]\nmpl.rcParams[\"axes.prop_cycle\"] = mpl.cycler(color=color_cycle)\nnp.random.seed(0)\n\nhivdata = pd.read_csv(\"HIV.csv\")", "_____no_output_____" ], [ "# shuffle rows and sample fom HIV dataset\n# REDUCED FOR CI, make frac = 1 for paper results\nhivdata = hivdata.sample(frac=0.01).reset_index(drop=True)\nhivdata.head()", "_____no_output_____" ], [ "def gen_smiles2graph(sml):\n \"\"\"Argument for the RD2NX function should be a valid SMILES sequence\n returns: the graph\n \"\"\"\n m, smi_canon, status = exmol.stoned.sanitize_smiles(sml)\n # m = Chem.MolFromSmiles(smi_canon)\n m = Chem.AddHs(m)\n order_string = {\n Chem.rdchem.BondType.SINGLE: 1,\n Chem.rdchem.BondType.DOUBLE: 2,\n Chem.rdchem.BondType.TRIPLE: 3,\n Chem.rdchem.BondType.AROMATIC: 4,\n }\n N = len(list(m.GetAtoms()))\n # nodes = np.zeros((N,100))\n nodes = np.zeros((440, 100))\n for i in m.GetAtoms():\n nodes[i.GetIdx(), i.GetAtomicNum()] = 1\n\n # adj = np.zeros((N,N))\n adj = np.zeros((440, 440))\n for j in m.GetBonds():\n u = min(j.GetBeginAtomIdx(), j.GetEndAtomIdx())\n v = max(j.GetBeginAtomIdx(), j.GetEndAtomIdx())\n order = j.GetBondType()\n if order in order_string:\n order = order_string[order]\n else:\n raise Warning(\"Ignoring bond order\" + order)\n adj[u, v] = 1\n adj[v, u] = 1\n adj += np.eye(440)\n return nodes, adj", "_____no_output_____" ], [ "class GCNLayer(tf.keras.layers.Layer):\n \"\"\"Implementation of GCN as layer\"\"\"\n\n def __init__(self, activation=None, **kwargs):\n # constructor, which just calls super constructor\n # and turns requested activation into a callable function\n super(GCNLayer, self).__init__(**kwargs)\n self.activation = tf.keras.activations.get(activation)\n\n def build(self, input_shape):\n # create trainable weights\n node_shape, adj_shape = input_shape\n self.w = self.add_weight(shape=(node_shape[2], node_shape[2]), name=\"w\")\n\n def call(self, inputs):\n # split input into nodes, adj\n nodes, adj = inputs\n # compute degree\n degree = tf.reduce_sum(adj, axis=-1)\n # GCN equation\n new_nodes = tf.einsum(\"bi,bij,bjk,kl->bil\", 1 / degree, adj, nodes, self.w)\n out = self.activation(new_nodes)\n return out, adj", "_____no_output_____" ], [ "class GRLayer(tf.keras.layers.Layer):\n \"\"\"Reduction layer: A GNN layer that computes average over all node features\"\"\"\n\n def __init__(self, name=\"GRLayer\", **kwargs):\n super(GRLayer, self).__init__(name=name, **kwargs)\n\n def call(self, inputs):\n nodes, adj = inputs\n reduction = tf.reduce_mean(nodes, axis=1)\n return reduction", "_____no_output_____" ], [ "ninput = tf.keras.Input(\n (\n None,\n 100,\n )\n)\nainput = tf.keras.Input(\n (\n None,\n None,\n )\n)\n# GCN block\nx = GCNLayer(\"relu\")([ninput, ainput])\nx = 
GCNLayer(\"relu\")(x)\nx = GCNLayer(\"relu\")(x)\nx = GCNLayer(\"relu\")(x)\n# reduce to graph features\nx = GRLayer()(x)\n# standard layers\nx = tf.keras.layers.Dense(256)(x)\nx = tf.keras.layers.Dense(1, activation=\"sigmoid\")(x)\ngcnmodel = tf.keras.Model(inputs=(ninput, ainput), outputs=x)\ngcnmodel.compile(\n \"adam\",\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=False),\n metrics=[\"accuracy\"],\n)\ngcnmodel.summary()", "_____no_output_____" ], [ "def gen_data():\n for i in range(len(hivdata)):\n graph = gen_smiles2graph(hivdata.smiles[i])\n activity = hivdata.HIV_active[i]\n yield graph, activity\n\n\ndata = tf.data.Dataset.from_generator(\n gen_data,\n output_types=((tf.float32, tf.float32), tf.float32),\n output_shapes=(\n (tf.TensorShape([None, 100]), tf.TensorShape([None, None])),\n tf.TensorShape([]),\n ),\n)", "_____no_output_____" ], [ "N = len(hivdata)\nsplit = int(0.1 * N)\ntest_data = data.take(split)\nnontest = data.skip(split)\nval_data, train_data = nontest.take(split), nontest.skip(split).shuffle(1000)", "_____no_output_____" ], [ "%%time\nclass_weight = {0: 1.0, 1: 30.0} # to account for class imbalance\nresult = gcnmodel.fit(\n train_data.batch(128),\n validation_data=val_data.batch(128),\n epochs=30,\n verbose=2,\n class_weight=class_weight,\n)", "_____no_output_____" ], [ "fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))\nax1.plot(result.history[\"loss\"], label=\"training\")\nax1.plot(result.history[\"val_loss\"], label=\"validation\")\nax1.legend()\nax1.set_xlabel(\"Epoch\")\nax1.set_ylabel(\"Loss\")\n\nax2.plot(result.history[\"accuracy\"], label=\"training\")\nax2.plot(result.history[\"val_accuracy\"], label=\"validation\")\nax2.legend()\nax2.set_xlabel(\"Epoch\")\nax2.set_ylabel(\"Accuracy\")\nfig.tight_layout()\nfig.savefig(\"gnn-loss-acc.png\", dpi=180)\nfig.show()", "_____no_output_____" ], [ "from sklearn.metrics import roc_curve\nfrom sklearn.metrics import auc\n\nprediction = []\ntest_y = []\n\nfor x, y in test_data.as_numpy_iterator():\n yhat = gcnmodel((x[0][np.newaxis, ...], x[1][np.newaxis, ...]))\n prediction.append(yhat.numpy())\n test_y.append(y)\n\nprediction = np.array(prediction).flatten()\ntest_y = np.array(test_y)\n\nfpr_keras, tpr_keras, thresholds_keras = roc_curve(test_y, prediction)\nauc_keras = auc(fpr_keras, tpr_keras)", "_____no_output_____" ], [ "plt.figure(figsize=(6, 4), dpi=100)\nplt.plot(fpr_keras, tpr_keras, label=\"AUC = {:.3f}\".format(auc_keras))\nplt.plot([0, 1], [0, 1], linestyle=\"--\")\nplt.xlabel(\"True Positive Rate\")\nplt.ylabel(\"False Positive Rate\")\nplt.legend()\nplt.savefig(\"gnn-roc.png\", dpi=300)\nplt.show()", "_____no_output_____" ] ], [ [ "## CF explanation\nThe following example find CFs for a given molecule where the HIV activity is zero.", "_____no_output_____" ] ], [ [ "def predictor_function(smiles, selfies):\n # print('inut:',smiles)\n labels = []\n for sml in smiles:\n nodes, adj_mat = gen_smiles2graph(sml)\n pred = gcnmodel((nodes[np.newaxis, ...], adj_mat[np.newaxis, ...])).numpy()\n labels.append(pred)\n\n labels = np.array(labels).flatten()\n bin_labels = np.where(labels > 0.5, np.ones(len(labels)), np.zeros(len(labels)))\n target_act = np.zeros(len(labels))\n return abs(bin_labels - target_act).astype(bool)", "_____no_output_____" ], [ "basic = exmol.get_basic_alphabet()\nstoned_kwargs = {\"num_samples\": 1500, \"alphabet\": basic, \"max_mutations\": 2}", "_____no_output_____" ], [ "example_base = \"C=CCN(CC=C)C(=O)Nc1ccc(C(=O)NN=Cc2cccc(OC)c2OC)cc1\"\nspace = exmol.sample_space(\n 
example_base,\n predictor_function,\n stoned_kwargs={\"num_samples\": 1500, \"alphabet\": basic, \"max_mutations\": 2},\n)", "_____no_output_____" ], [ "exps = exmol.cf_explain(space)\nfkw = {\"figsize\": (8, 6)}\nmpl.rc(\"axes\", titlesize=12)\nexmol.plot_cf(exps, figure_kwargs=fkw, mol_size=(450, 400), nrows=1)\nplt.savefig(\"gnn-simple.png\", dpi=180)\nsvg = exmol.insert_svg(exps, mol_fontsize=16)\n# with open(\"gnn-simple.svg\", \"w\") as f:\n# f.write(svg)", "_____no_output_____" ], [ "font = {\"family\": \"normal\", \"weight\": \"normal\", \"size\": 22}\nexmol.plot_space(\n space,\n exps,\n figure_kwargs=fkw,\n mol_size=(300, 200),\n offset=0,\n cartoon=True,\n rasterized=True,\n)\nplt.scatter([], [], label=\"Counterfactual\", s=150, color=plt.get_cmap(\"viridis\")(1.0))\nplt.scatter([], [], label=\"Same Class\", s=150, color=plt.get_cmap(\"viridis\")(0.0))\nplt.legend(fontsize=22)\nplt.tight_layout()\nsvg = exmol.insert_svg(exps, mol_fontsize=16)\n# with open(\"gnn-space.svg\", \"w\") as f:\n# f.write(svg)\n# skunk.display(svg)", "_____no_output_____" ], [ "exps = exmol.cf_explain(space, nmols=19)", "_____no_output_____" ], [ "fkw = {\"figsize\": (12, 10)}\nmpl.rc(\"axes\", titlesize=10)\nexmol.plot_cf(\n exps, figure_kwargs=fkw, mol_size=(450, 400), mol_fontsize=26, nrows=4, ncols=5\n)\nplt.savefig(\"gnn-simple-20.png\", bbox_inches=\"tight\", dpi=300)\nsvg = exmol.insert_svg(exps, mol_fontsize=14)\n# with open(\"gnn-simple-20.svg\", \"w\") as f:\n# f.write(svg)", "_____no_output_____" ], [ "fkw = {\"figsize\": (8, 6)}\nfont = {\"family\": \"normal\", \"weight\": \"normal\", \"size\": 22}\n\nexmol.plot_space(space, exps, figure_kwargs=fkw, mol_size=(350, 300), mol_fontsize=22)\nplt.scatter([], [], label=\"Same Label\", s=150, color=plt.get_cmap(\"viridis\")(1.0))\nplt.scatter([], [], label=\"Counterfactual\", s=150, color=plt.get_cmap(\"viridis\")(0.0))\nplt.legend(fontsize=22)\nplt.savefig(\"gnn-space.png\", bbox_inches=\"tight\", dpi=180)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ec99c76a402e16f3d466805e332c9c69f9e9860a
1,030,292
ipynb
Jupyter Notebook
script/.ipynb_checkpoints/analise-satisfcao-cliente-checkpoint.ipynb
helenacypreste/satisfacao-clientes-santander
680c3214971dae61729aad5958a13763a5525bfa
[ "MIT" ]
null
null
null
script/.ipynb_checkpoints/analise-satisfcao-cliente-checkpoint.ipynb
helenacypreste/satisfacao-clientes-santander
680c3214971dae61729aad5958a13763a5525bfa
[ "MIT" ]
null
null
null
script/.ipynb_checkpoints/analise-satisfcao-cliente-checkpoint.ipynb
helenacypreste/satisfacao-clientes-santander
680c3214971dae61729aad5958a13763a5525bfa
[ "MIT" ]
null
null
null
454.674316
267,440
0.927683
[ [ [ "# Helen Cristina de Acypreste Rocha\n### Formação Cientista de Dados - Data Science Academy\n### Big Dara Real-Time Analytics com Python e Spark\n### Projeto 3 - Prevendo o Nível de Satisfação dos Clientes do Santander", "_____no_output_____" ], [ "O objetivo do projeto é identificar clientes insatisfeitos com o banco, a fim de tomar medidas proativas para aumentar a felicidade do indivíduo e melhorar sua experiência bancária no Santander. Afinal, um cliente satisfeito é de fundamental importância para o sucesso do negócio.", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nimport seaborn as sns\nfrom sklearn.preprocessing import normalize\nimport matplotlib.pyplot as plt \nfrom sklearn.model_selection import KFold\nfrom sklearn.model_selection import cross_val_score\n\nimport warnings\nwarnings.filterwarnings(\"ignore\")", "_____no_output_____" ], [ "# Carregando os dados de teste\narq_test = '../data/test.csv'\ndf_test = pd.read_csv(arq_test)\ndf_test.head()", "_____no_output_____" ], [ "# Carregando os dados de treino\narq_train = '../data/train.csv'\ndf_train = pd.read_csv(arq_train)\ndf_train.head()", "_____no_output_____" ], [ "df_train.shape", "_____no_output_____" ], [ "# Verificando valores nulos\ndf_train.isnull().values.any()", "_____no_output_____" ], [ "# Descrição das variaveis\ndisplay(df_train.describe())", "_____no_output_____" ], [ "# Tipo das variáveis\ndf_train.dtypes", "_____no_output_____" ], [ "# Variáveis do tipo int\nvar_int = df_train.select_dtypes(include='int64')\nvar_int = var_int.drop(columns=['ID'])\n# Variáveis do tipo float\nvar_float = df_train.select_dtypes(include='float64')", "_____no_output_____" ], [ "len(var_int.columns)", "_____no_output_____" ], [ "df_train[var_int.columns[0:120]].hist(figsize=(24,30));", "_____no_output_____" ], [ "plt.hist(df_train['var15'])\nplt.ylim(0, 80000)\nplt.xlabel('var15')\nplt.ylabel('Quantidade')\nplt.show()\n", "_____no_output_____" ], [ "sns.countplot(df_train['var36'])\nplt.ylim(0, 80000);", "_____no_output_____" ], [ "sns.countplot(df_train['num_var5_0'])\nplt.ylim(0, 80000);", "_____no_output_____" ], [ "sns.countplot(df_train['num_var5'])\nplt.ylim(0, 80000);", "_____no_output_____" ], [ "sns.countplot(df_train['num_var30_0'])\nplt.ylim(0, 80000);", "_____no_output_____" ], [ "sns.countplot(df_train['num_var42_0'])\nplt.ylim(0, 80000);", "_____no_output_____" ], [ "sns.countplot(df_train['num_var42'])\nplt.ylim(0, 80000);", "_____no_output_____" ], [ "sns.countplot(df_train['num_meses_var5_ult3'])\nplt.ylim(0, 80000);", "_____no_output_____" ], [ "sns.countplot(df_train['ind_var30'])\nplt.ylim(0, 80000);", "_____no_output_____" ], [ "sns.countplot(df_train['num_meses_var39_vig_ult3'])\nplt.ylim(0, 80000);", "_____no_output_____" ], [ "sns.countplot(df_train['num_var4'])\nplt.ylim(0, 80000);", "_____no_output_____" ], [ "df_train[var_int.columns[121:259]].hist(figsize=(24,30));", "_____no_output_____" ], [ "plt.hist(df_train['num_op_var39_ult3'])\nplt.ylim(0, 80000)\nplt.xlabel('num_op_var39_ult3')\nplt.ylabel('Quantidade')\nplt.show()", "_____no_output_____" ], [ "sns.countplot(df_train['ind_var5'])\nplt.ylim(0, 80000);", "_____no_output_____" ], [ "sns.countplot(df_train['num_var35'])\nplt.ylim(0, 80000);", "_____no_output_____" ], [ "sns.countplot(df_train['num_var30'])\nplt.ylim(0, 80000);", "_____no_output_____" ], [ "plt.figure(figsize=(7,4))\ng = sns.countplot(df_train['var21'])\ng.set_xticklabels(g.get_xticklabels(), rotation=45)\nplt.ylim(0, 80000);", "_____no_output_____" 
], [ "df_train[var_float.columns].hist(figsize=(20,22));", "_____no_output_____" ], [ "var_resp_count = df_train.groupby('TARGET').size()\nprint(f\"Cliente Satisfeito: {var_resp_count[0]}\\nCliente Insatisfeito: {var_resp_count[1]}\")", "Cliente Satisfeito: 73012\nCliente Insatisfeito: 3008\n" ], [ "sns.countplot(x='TARGET', data=df_train);", "_____no_output_____" ], [ "# Correlação entre variavel TARGET e as outras features\ntarget_corr = df_train.corr()['TARGET']\ntarget_corr.sort_values(ascending=False)", "_____no_output_____" ], [ "sns.scatterplot(\n x=df_train[\"TARGET\"], \n y=df_train[\"var36\"]);", "_____no_output_____" ], [ "sns.scatterplot(\n x=df_train[\"var15\"], \n y=df_train[\"TARGET\"]);", "_____no_output_____" ] ], [ [ "#### Balanceamento dos dados", "_____no_output_____" ] ], [ [ "# Transformar variavel 'TARGET' em variavel categorica\ndf_train['TARGET'] = df_train['TARGET'].astype('category')", "_____no_output_____" ], [ "# Balanceando os dados \nfrom imblearn.over_sampling import RandomOverSampler\nx = df_train.iloc[:,1:370]\ny = df_train['TARGET']\n\nros = RandomOverSampler()\nx_over, y_over = ros.fit_sample(x, y)", "_____no_output_____" ], [ "sns.countplot(y_over);\nprint(y_over.value_counts());", "1 73012\n0 73012\nName: TARGET, dtype: int64\n" ] ], [ [ "## Pre-processamento para aplicar modelo", "_____no_output_____" ], [ "### Seleção de variáveis", "_____no_output_____" ] ], [ [ "def f_importances(coef, names):\n imp = coef\n imp,names = zip(*sorted(zip(imp,names)))\n plt.figure(figsize=(16,12))\n plt.barh(range(len(names)), imp, align='center')\n plt.yticks(range(len(names)), names)\n plt.grid()\n plt.show()", "_____no_output_____" ] ], [ [ "#### Método Ensemble para Seleção de Variáveis", "_____no_output_____" ] ], [ [ "from sklearn.ensemble import ExtraTreesClassifier\n\n# Criando o modelo Feature Selection \nmodel = ExtraTreesClassifier()\nmodel.fit(normalize(x_over), y_over)", "_____no_output_____" ], [ "# Cria um dataframe com as features da base junto com score obtido pelo modelo ExtraTreesClassifier\ndf_atr = pd.DataFrame({'Features': x_over.columns,\n 'Score_Features': model.feature_importances_})\n\n# Trazer as 30 variaveis com maior Score\nrotulos = df_atr.sort_values(by=['Score_Features'], ascending=False)[0:20]\n# rotulos", "_____no_output_____" ], [ "f_importances(rotulos['Score_Features'], rotulos['Features'])", "_____no_output_____" ], [ "# Seleciona os 20 atributos mais importantes. \ncolumns_import = rotulos['Features']\nx_over_select = x_over[columns_import]", "_____no_output_____" ], [ "x_over_select.hist(figsize=(12,14));", "_____no_output_____" ] ], [ [ "#### SVM para seleção de atributos", "_____no_output_____" ] ], [ [ "from sklearn.svm import LinearSVC\nlinearsvm = LinearSVC(random_state=0)\n\nlinearsvm = linearsvm.fit(normalize(x_over), y_over)\n#svm_y_pred = linearsvm.predict(scaled_x_test)", "_____no_output_____" ], [ "df_atr_svm = pd.DataFrame({'Features': x_over.columns,\n 'Score_Features': linearsvm.coef_[0]})\n\n# Trazer as 20 variaveis com maior Score\nrot_svm = df_atr_svm.sort_values(by=['Score_Features'], ascending=False)[0:20]\n#rot_svm", "_____no_output_____" ], [ "f_importances(rot_svm['Score_Features'], rot_svm['Features'])", "_____no_output_____" ], [ "# Seleciona os 20 atributos mais importantes. 
\ncol_import_svm = rot_svm['Features']\nx_over_select_svm = x_over[col_import_svm]", "_____no_output_____" ] ], [ [ "### Criação do modelo", "_____no_output_____" ] ], [ [ "# Tabela de teste somente com as variaveis selecionadas\ndf_test_select = df_test[columns_import]", "_____no_output_____" ], [ "# Parametros para KFold\nfolds = 10\nseed = 2\n\n# Separando os dados em folds\nkfold = KFold(folds, True, random_state = seed)", "_____no_output_____" ] ], [ [ "##### Regressão Logística", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import LogisticRegression\n\nmodelo_lr = LogisticRegression()\n\n# Cross Validation\nresult = cross_val_score(modelo_lr, normalize(x_over_select), y_over, cv = kfold)\n\n# Resultado da acurácia\nprint(\"Acurácia: %.3f\" % (result.mean() * 100))", "Acurácia: 63.844\n" ] ], [ [ "##### Classification and Regression Trees", "_____no_output_____" ] ], [ [ "from sklearn.tree import DecisionTreeClassifier\n\nmodelo_trees = DecisionTreeClassifier()\n\n# Cross Validation\nresult_cart = cross_val_score(modelo_trees, normalize(x_over_select), y_over, cv = kfold)\n\n# Resultado da acurácia\nprint(\"Acurácia: %.3f\" % (result_cart.mean() * 100))", "Acurácia: 96.339\n" ] ], [ [ "##### XGBoost", "_____no_output_____" ] ], [ [ "from xgboost import XGBClassifier\n\nmodelo_xgb = XGBClassifier()\n\n# Cross Validation\nresult_xgboost = cross_val_score(modelo_xgb, normalize(x_over_select), y_over, cv = kfold)\n\n# Resultado da acurácia\nprint(\"Acurácia: %.3f\" % (result_xgboost.mean() * 100.0))", "Acurácia: 87.995\n" ] ], [ [ "### Principal Component Analysis (PCA)", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import MinMaxScaler\nfrom sklearn.decomposition import PCA\n\n# Normalizando os dados\nscaler = MinMaxScaler(feature_range = (0, 1))\nrescaledX = scaler.fit_transform(x_over)\n\n# Seleção de atributos\npca = PCA(n_components = 15)\nfit = pca.fit(rescaledX)\n\n# Sumarizando os componentes\nprint(\"Variância: %s\" % fit.explained_variance_ratio_)", "Variância: [0.32722169 0.21021199 0.09572461 0.07533987 0.0591028 0.04221676\n 0.02992184 0.02258673 0.01672075 0.01573952 0.01389869 0.00756876\n 0.00675595 0.0062182 0.00604612]\n" ], [ "# Aplique o PCA aos datasets\nfit_treino = pca.fit_transform(x_over)\nfit_teste = pca.fit_transform(df_test.iloc[:,1:370])", "_____no_output_____" ], [ "# DataFrames com as componentes geradas\ndf_treino_pca = pd.DataFrame(fit_treino)\ndf_teste_pca = pd.DataFrame(fit_teste)", "_____no_output_____" ] ], [ [ "##### Classification and Regression Trees com PCA", "_____no_output_____" ] ], [ [ "from sklearn.tree import DecisionTreeClassifier\n\nmodelo_trees_pca = DecisionTreeClassifier()\n\n# Cross Validation\nresult_trees_pca = cross_val_score(modelo_trees_pca, df_treino_pca, y_over, cv = kfold)\n\n# Resultado da acurácia\nprint(\"Acurácia: %.3f\" % (result_trees_pca.mean() * 100))", "Acurácia: 94.867\n" ] ], [ [ "##### XGBoost com PCA", "_____no_output_____" ] ], [ [ "from xgboost import XGBClassifier\n\nmodelo_xgb_pca = XGBClassifier()\n\n# Cross Validation\nresult_xgboost_pca = cross_val_score(modelo_xgb_pca, df_treino_pca, y_over, cv = kfold)\n\n# Resultado da acurácia\nprint(\"Acurácia: %.3f\" % (result_xgboost_pca.mean() * 100.0))", "Acurácia: 76.144\n" ] ], [ [ "##### Logistic Regression com PCA", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import LogisticRegression\n\nmodelo_lr_pca = LogisticRegression()\n\n# Cross Validation\nresult_lr_pca = cross_val_score(modelo_lr_pca, df_treino_pca, y_over, cv = 
kfold)\n\n# Resultado da acurácia\nprint(\"Acurácia: %.3f\" % (result_lr_pca.mean() * 100))", "Acurácia: 50.405\n" ] ], [ [ "##### Modelo com melhor acurácia: Classification and Regression Trees", "_____no_output_____" ] ], [ [ "# Fazendo previsões\nfit = modelo_trees.fit(normalize(x_over_select), y_over)\n\ny_pred = fit.predict(df_test_select)", "_____no_output_____" ], [ "result = pd.DataFrame({\"ID\": df_test['ID'], \"TARGET\": y_pred})\nresult[\"TARGET\"]= result[\"TARGET\"].astype(str)\nresult.shape", "_____no_output_____" ], [ "result['TARGET'].value_counts()", "_____no_output_____" ], [ "# Salvando o arquivo para submissão\nresult.to_csv(\"../Dados/submission.csv\", index=False)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
ec99e134b74e03d4aff6b50f2d20e0d7f25fe1c9
62,201
ipynb
Jupyter Notebook
Bayesian_Inference/Bayesian_Inference.ipynb
echen805/AIND
969b4cad97ddbf7841c79dad862d4d1142825565
[ "MIT" ]
null
null
null
Bayesian_Inference/Bayesian_Inference.ipynb
echen805/AIND
969b4cad97ddbf7841c79dad862d4d1142825565
[ "MIT" ]
null
null
null
Bayesian_Inference/Bayesian_Inference.ipynb
echen805/AIND
969b4cad97ddbf7841c79dad862d4d1142825565
[ "MIT" ]
null
null
null
49.483691
1,450
0.651903
[ [ [ "## Our Mission ##\n\nSpam detection is one of the major applications of Machine Learning in the interwebs today. Pretty much all of the major email service providers have spam detection systems built in and automatically classify such mail as 'Junk Mail'. \n\nIn this mission we will be using the Naive Bayes algorithm to create a model that can classify [dataset](https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection) SMS messages as spam or not spam, based on the training we give to the model. It is important to have some level of intuition as to what a spammy text message might look like. Usually they have words like 'free', 'win', 'winner', 'cash', 'prize' and the like in them as these texts are designed to catch your eye and in some sense tempt you to open them. Also, spam messages tend to have words written in all capitals and also tend to use a lot of exclamation marks. To the recipient, it is usually pretty straightforward to identify a spam text and our objective here is to train a model to do that for us!\n\nBeing able to identify spam messages is a binary classification problem as messages are classified as either 'Spam' or 'Not Spam' and nothing else. Also, this is a supervised learning problem, as we will be feeding a labelled dataset into the model, that it can learn from, to make future predictions. \n\n# Overview\n\nThis project has been broken down in to the following steps: \n\n- Step 0: Introduction to the Naive Bayes Theorem\n- Step 1.1: Understanding our dataset\n- Step 1.2: Data Preprocessing\n- Step 2.1: Bag of Words(BoW)\n- Step 2.2: Implementing BoW from scratch\n- Step 2.3: Implementing Bag of Words in scikit-learn\n- Step 3.1: Training and testing sets\n- Step 3.2: Applying Bag of Words processing to our dataset.\n- Step 4.1: Bayes Theorem implementation from scratch\n- Step 4.2: Naive Bayes implementation from scratch\n- Step 5: Naive Bayes implementation using scikit-learn\n- Step 6: Evaluating our model\n- Step 7: Conclusion\n", "_____no_output_____" ], [ "### Step 0: Introduction to the Naive Bayes Theorem ###\n\nBayes theorem is one of the earliest probabilistic inference algorithms developed by Reverend Bayes (which he used to try and infer the existence of God no less) and still performs extremely well for certain use cases. \n\nIt's best to understand this theorem using an example. Let's say you are a member of the Secret Service and you have been deployed to protect the Democratic presidential nominee during one of his/her campaign speeches. Being a public event that is open to all, your job is not easy and you have to be on the constant lookout for threats. So one place to start is to put a certain threat-factor for each person. So based on the features of an individual, like the age, sex, and other smaller factors like is the person carrying a bag?, does the person look nervous? etc. you can make a judgement call as to if that person is viable threat. \n\nIf an individual ticks all the boxes up to a level where it crosses a threshold of doubt in your mind, you can take action and remove that person from the vicinity. The Bayes theorem works in the same way as we are computing the probability of an event(a person being a threat) based on the probabilities of certain related events(age, sex, presence of bag or not, nervousness etc. of the person). \n\nOne thing to consider is the independence of these features amongst each other. 
For example if a child looks nervous at the event then the likelihood of that person being a threat is not as much as say if it was a grown man who was nervous. To break this down a bit further, here there are two features we are considering, age AND nervousness. Say we look at these features individually, we could design a model that flags ALL persons that are nervous as potential threats. However, it is likely that we will have a lot of false positives as there is a strong chance that minors present at the event will be nervous. Hence by considering the age of a person along with the 'nervousness' feature we would definitely get a more accurate result as to who are potential threats and who aren't. \n\nThis is the 'Naive' bit of the theorem where it considers each feature to be independent of each other which may not always be the case and hence that can affect the final judgement.\n\nIn short, the Bayes theorem calculates the probability of a certain event happening(in our case, a message being spam) based on the joint probabilistic distributions of certain other events(in our case, the appearance of certain words in a message). We will dive into the workings of the Bayes theorem later in the mission, but first, let us understand the data we are going to work with.", "_____no_output_____" ], [ "### Step 1.1: Understanding our dataset ### \n\n\nWe will be using a [dataset](https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection) from the UCI Machine Learning repository which has a very good collection of datasets for experimental research purposes. The direct data link is [here](https://archive.ics.uci.edu/ml/machine-learning-databases/00228/).\n\n\n ** Here's a preview of the data: ** \n\n<img src=\"images/dqnb.png\" height=\"1242\" width=\"1242\">\n\nThe columns in the data set are currently not named and as you can see, there are 2 columns. \n\nThe first column takes two values, 'ham' which signifies that the message is not spam, and 'spam' which signifies that the message is spam. \n\nThe second column is the text content of the SMS message that is being classified.", "_____no_output_____" ], [ ">** Instructions: **\n* Import the dataset into a pandas dataframe using the read_table method. Because this is a tab separated dataset we will be using '\\t' as the value for the 'sep' argument which specifies this format. \n* Also, rename the column names by specifying a list ['label, 'sms_message'] to the 'names' argument of read_table().\n* Print the first five values of the dataframe with the new column names.", "_____no_output_____" ] ], [ [ "'''\nSolution\n'''\nimport pandas as pd\n\n# Dataset from - https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection\nname = ['label','sms_message']\ndf = pd.read_table('.\\smsspamcollection\\SMSSpamCollection', sep='\\t', header= None, names = name)\n\n# Output printing out first 5 rows\ndf.head()", "_____no_output_____" ] ], [ [ "### Step 1.2: Data Preprocessing ###\n\nNow that we have a basic understanding of what our dataset looks like, lets convert our labels to binary variables, 0 to represent 'ham'(i.e. not spam) and 1 to represent 'spam' for ease of computation. \n\nYou might be wondering why do we need to do this step? The answer to this lies in how scikit-learn handles inputs. Scikit-learn only deals with numerical values and hence if we were to leave our label values as strings, scikit-learn would do the conversion internally(more specifically, the string labels will be cast to unknown float values). 
\n\nOur model would still be able to make predictions if we left our labels as strings but we could have issues later when calculating performance metrics, for example when calculating our precision and recall scores. Hence, to avoid unexpected 'gotchas' later, it is good practice to have our categorical values be fed into our model as integers. ", "_____no_output_____" ], [ ">**Instructions: **\n* Convert the values in the 'label' column to numerical values using the map method as follows:\n{'ham':0, 'spam':1} This maps the 'ham' value to 0 and the 'spam' value to 1.\n* Also, to get an idea of the size of the dataset we are dealing with, print out the number of rows and columns using \n'shape'.", "_____no_output_____" ] ], [ [ "'''\nSolution\n'''\ndf['label'] = df.label.map({'ham': 0, 'spam':1})", "_____no_output_____" ] ], [ [ "### Step 2.1: Bag of words ###\n\nWhat we have here in our data set is a large collection of text data (5,572 rows of data). Most ML algorithms rely on numerical data to be fed into them as input, and email/sms messages are usually text heavy. \n\nHere we'd like to introduce the Bag of Words(BoW) concept which is a term used to specify the problems that have a 'bag of words' or a collection of text data that needs to be worked with. The basic idea of BoW is to take a piece of text and count the frequency of the words in that text. It is important to note that the BoW concept treats each word individually and the order in which the words occur does not matter. \n\nUsing a process which we will go through now, we can convert a collection of documents to a matrix, with each document being a row and each word(token) being the column, and the corresponding (row,column) values being the frequency of occurrence of each word or token in that document.\n\nFor example: \n\nLet's say we have 4 documents as follows:\n\n`['Hello, how are you!',\n'Win money, win from home.',\n'Call me now',\n'Hello, Call you tomorrow?']`\n\nOur objective here is to convert this set of text to a frequency distribution matrix, as follows:\n\n<img src=\"images/countvectorizer.png\" height=\"542\" width=\"542\">\n\nHere as we can see, the documents are numbered in the rows, and each word is a column name, with the corresponding value being the frequency of that word in the document.\n\nLet's break this down and see how we can do this conversion using a small set of documents.\n\nTo handle this, we will be using sklearn's \n[count vectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer) method which does the following:\n\n* It tokenizes the string(separates the string into individual words) and gives an integer ID to each token.\n* It counts the occurrence of each of those tokens.\n\n** Please Note: ** \n\n* The CountVectorizer method automatically converts all tokenized words to their lower case form so that it does not treat words like 'He' and 'he' differently. It does this using the `lowercase` parameter which is by default set to `True`.\n\n* It also ignores all punctuation so that words followed by a punctuation mark (for example: 'hello!') are not treated differently than the same words not prefixed or suffixed by a punctuation mark (for example: 'hello'). It does this using the `token_pattern` parameter which has a default regular expression which selects tokens of 2 or more alphanumeric characters.\n\n* The third parameter to take note of is the `stop_words` parameter. Stop words refer to the most commonly used words in a language. They include words like 'am', 'an', 'and', 'the' etc. By setting this parameter value to `english`, CountVectorizer will automatically ignore all words(from our input text) that are found in the built in list of english stop words in scikit-learn. This is extremely helpful as stop words can skew our calculations when we are trying to find certain key words that are indicative of spam.\n\nWe will dive into the application of each of these to our model in a later step, but for now it is important to be aware of such preprocessing techniques available to us when dealing with textual data.
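\n\nAs a quick standalone sketch of these defaults in action (the toy sentence below is made up purely for illustration, it is not part of our dataset):\n\n```python\nfrom sklearn.feature_extraction.text import CountVectorizer\n\ncv = CountVectorizer(stop_words='english')\ncv.fit(['The cat and the hat!'])\n# Lowercased, punctuation treated as a delimiter, stop words ('the', 'and') dropped\nprint(cv.get_feature_names())  # ['cat', 'hat']\n```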
", "_____no_output_____" ], [ "### Step 2.2: Implementing Bag of Words from scratch ###\n\nBefore we dive into scikit-learn's Bag of Words(BoW) library to do the dirty work for us, let's implement it ourselves first so that we can understand what's happening behind the scenes. \n\n** Step 1: Convert all strings to their lower case form. **\n\nLet's say we have a document set:\n\n```\ndocuments = ['Hello, how are you!',\n 'Win money, win from home.',\n 'Call me now.',\n 'Hello, Call hello you tomorrow?']\n```\n>>** Instructions: **\n* Convert all the strings in the documents set to their lower case. Save them into a list called 'lower_case_documents'. You can convert strings to their lower case in python by using the lower() method.\n", "_____no_output_____" ] ], [ [ "'''\nSolution:\n'''\ndocuments = ['Hello, how are you!',\n 'Win money, win from home.',\n 'Call me now.',\n 'Hello, Call hello you tomorrow?']\n\nlower_case_documents = []\nfor i in documents:\n    lower_case_documents.append(i.lower())\nprint(lower_case_documents)", "_____no_output_____" ] ], [ [ "** Step 2: Removing all punctuation **\n\n>>**Instructions: **\nRemove all punctuation from the strings in the document set. Save them into a list called \n'sans_punctuation_documents'. ", "_____no_output_____" ] ], [ [ "'''\nSolution:\n'''\nsans_punctuation_documents = []\nimport string\n\nfor i in lower_case_documents:\n    # str.maketrans maps every punctuation character to None, i.e. removes it\n    sans_punctuation_documents.append(i.translate(str.maketrans('', '', string.punctuation)))\n    \nprint(sans_punctuation_documents)", "_____no_output_____" ] ], [ [ "** Step 3: Tokenization **\n\nTokenizing a sentence in a document set means splitting up a sentence into individual words using a delimiter. The delimiter specifies what character we will use to identify the beginning and the end of a word(for example we could use a single space as the delimiter for identifying words in our document set.)", "_____no_output_____" ], [ ">>**Instructions:**\nTokenize the strings stored in 'sans_punctuation_documents' using the split() method, and store the final document set \nin a list called 'preprocessed_documents'.\n", "_____no_output_____" ] ], [ [ "'''\nSolution:\n'''\npreprocessed_documents = []\nfor i in sans_punctuation_documents:\n    preprocessed_documents.append(i.split(' '))\nprint(preprocessed_documents)", "_____no_output_____" ] ], [ [ "** Step 4: Count frequencies **\n\nNow that we have our document set in the required format, we can proceed to counting the occurrence of each word in each document of the document set. We will use the `Counter` method from the Python `collections` library for this purpose. \n\n`Counter` counts the occurrence of each item in the list and returns a dictionary with the key as the item being counted and the corresponding value being the count of that item in the list.
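\n\nFor example, a quick standalone check of this behaviour:\n\n```python\nfrom collections import Counter\n\n# Counter returns a dict-like mapping of item -> count\nprint(Counter(['win', 'money', 'win']))  # Counter({'win': 2, 'money': 1})\n```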
", "_____no_output_____" ], [ ">>**Instructions:**\nUsing the Counter() method and preprocessed_documents as the input, create a dictionary with the keys being each word in each document and the corresponding values being the frequency of occurrence of that word. Save each Counter dictionary as an item in a list called 'frequency_list'.\n", "_____no_output_____" ] ], [ [ "'''\nSolution\n'''\nfrequency_list = []\nimport pprint\nfrom collections import Counter\n\nfor i in preprocessed_documents:\n #TODO\n \npprint.pprint(frequency_list)", "_____no_output_____" ] ], [ [ "Congratulations! You have implemented the Bag of Words process from scratch! As we can see in our previous output, we have a frequency distribution dictionary which gives a clear view of the text that we are dealing with.\n\nWe should now have a solid understanding of what is happening behind the scenes in the `sklearn.feature_extraction.text.CountVectorizer` method of scikit-learn. \n\nWe will now implement `sklearn.feature_extraction.text.CountVectorizer` method in the next step.", "_____no_output_____" ], [ "### Step 2.3: Implementing Bag of Words in scikit-learn ###\n\nNow that we have implemented the BoW concept from scratch, let's go ahead and use scikit-learn to do this process in a clean and succinct way. We will use the same document set as we used in the previous step. ", "_____no_output_____" ] ], [ [ "'''\nHere we will look to create a frequency matrix on a smaller document set to make sure we understand how the \ndocument-term matrix generation happens. We have created a sample document set 'documents'.\n'''\ndocuments = ['Hello, how are you!',\n 'Win money, win from home.',\n 'Call me now.',\n 'Hello, Call hello you tomorrow?']", "_____no_output_____" ] ], [ [ ">>**Instructions:**\nImport the sklearn.feature_extraction.text.CountVectorizer method and create an instance of it called 'count_vector'. ", "_____no_output_____" ] ], [ [ "'''\nSolution\n'''\nfrom sklearn.feature_extraction.text import CountVectorizer\ncount_vector = # TODO", "_____no_output_____" ] ], [ [ "** Data preprocessing with CountVectorizer() **\n\nIn Step 2.2, we implemented a version of the CountVectorizer() method from scratch that entailed cleaning our data first. This cleaning involved converting all of our data to lower case and removing all punctuation marks. CountVectorizer() has certain parameters which take care of these steps for us. They are:\n\n* `lowercase = True`\n \n The `lowercase` parameter has a default value of `True` which converts all of our text to its lower case form.\n\n\n* `token_pattern = (?u)\\\\b\\\\w\\\\w+\\\\b`\n \n The `token_pattern` parameter has a default regular expression value of `(?u)\\\\b\\\\w\\\\w+\\\\b` which ignores all punctuation marks and treats them as delimiters, while accepting alphanumeric strings of length greater than or equal to 2, as individual tokens or words.\n\n\n* `stop_words`\n\n The `stop_words` parameter, if set to `english` will remove all words from our document set that match a list of English stop words which is defined in scikit-learn. 
Considering the size of our dataset and the fact that we are dealing with SMS messages and not larger text sources like e-mail, we will not be setting this parameter value.\n\nYou can take a look at all the parameter values of your `count_vector` object by simply printing out the object as follows:", "_____no_output_____" ] ], [ [ "'''\nPractice node:\nPrint the 'count_vector' object which is an instance of 'CountVectorizer()'\n'''\nprint(count_vector)", "_____no_output_____" ] ], [ [ ">>**Instructions:**\nFit your document dataset to the CountVectorizer object you have created using fit(), and get the list of words \nwhich have been categorized as features using the get_feature_names() method.", "_____no_output_____" ] ], [ [ "'''\nSolution:\n'''\ncount_vector.fit(documents)\ncount_vector.get_feature_names()", "_____no_output_____" ] ], [ [ "The `get_feature_names()` method returns our feature names for this dataset, which is the set of words that make up our vocabulary for 'documents'.", "_____no_output_____" ], [ ">>**\nInstructions:**\nCreate a matrix with the rows being each of the 4 documents, and the columns being each word. \nThe corresponding (row, column) value is the frequency of occurrence of that word(in the column) in a particular\ndocument(in the row). You can do this using the transform() method and passing in the document data set as the \nargument. The transform() method returns a matrix of numpy integers, you can convert this to an array using\ntoarray(). Call the array 'doc_array'\n", "_____no_output_____" ] ], [ [ "'''\nSolution\n'''\ndoc_array = # TODO\ndoc_array", "_____no_output_____" ] ], [ [ "Now we have a clean representation of the documents in terms of the frequency distribution of the words in them. To make it easier to understand our next step is to convert this array into a dataframe and name the columns appropriately.", "_____no_output_____" ], [ ">>**Instructions:**\nConvert the array we obtained, loaded into 'doc_array', into a dataframe and set the column names to \nthe word names(which you computed earlier using get_feature_names(). Call the dataframe 'frequency_matrix'.\n", "_____no_output_____" ] ], [ [ "'''\nSolution\n'''\nfrequency_matrix = pd.DataFrame(# TODO))\nfrequency_matrix", "_____no_output_____" ] ], [ [ "Congratulations! You have successfully implemented a Bag of Words problem for a document dataset that we created. \n\nOne potential issue that can arise from using this method out of the box is the fact that if our dataset of text is extremely large(say if we have a large collection of news articles or email data), there will be certain values that are more common that others simply due to the structure of the language itself. So for example words like 'is', 'the', 'an', pronouns, grammatical constructs etc could skew our matrix and affect our analyis. \n\nThere are a couple of ways to mitigate this. One way is to use the `stop_words` parameter and set its value to `english`. This will automatically ignore all words(from our input text) that are found in a built in list of English stop words in scikit-learn.\n\nAnother way of mitigating this is by using the [tfidf](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html#sklearn.feature_extraction.text.TfidfVectorizer) method. 
This method is out of scope for the context of this lesson.", "_____no_output_____" ], [ "### Step 3.1: Training and testing sets ###\n\nNow that we have understood how to deal with the Bag of Words problem we can get back to our dataset and proceed with our analysis. Our first step in this regard would be to split our dataset into a training and testing set so we can test our model later. ", "_____no_output_____" ], [ "\n>>**Instructions:**\nSplit the dataset into a training and testing set by using the train_test_split method in sklearn. Split the data\nusing the following variables:\n* `X_train` is our training data for the 'sms_message' column.\n* `y_train` is our training data for the 'label' column\n* `X_test` is our testing data for the 'sms_message' column.\n* `y_test` is our testing data for the 'label' column\nPrint out the number of rows we have in each our training and testing data.\n", "_____no_output_____" ] ], [ [ "'''\nSolution\n\nNOTE: sklearn.cross_validation will be deprecated soon to sklearn.model_selection \n'''\n# split into training and testing sets\n# USE from sklearn.model_selection import train_test_split to avoid seeing deprecation warning.\nfrom sklearn.cross_validation import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(df['sms_message'], \n df['label'], \n random_state=1)\n\nprint('Number of rows in the total set: {}'.format(df.shape[0]))\nprint('Number of rows in the training set: {}'.format(X_train.shape[0]))\nprint('Number of rows in the test set: {}'.format(X_test.shape[0]))", "_____no_output_____" ] ], [ [ "### Step 3.2: Applying Bag of Words processing to our dataset. ###\n\nNow that we have split the data, our next objective is to follow the steps from Step 2: Bag of words and convert our data into the desired matrix format. To do this we will be using CountVectorizer() as we did before. There are two steps to consider here:\n\n* Firstly, we have to fit our training data (`X_train`) into `CountVectorizer()` and return the matrix.\n* Secondly, we have to transform our testing data (`X_test`) to return the matrix. \n\nNote that `X_train` is our training data for the 'sms_message' column in our dataset and we will be using this to train our model. \n\n`X_test` is our testing data for the 'sms_message' column and this is the data we will be using(after transformation to a matrix) to make predictions on. We will then compare those predictions with `y_test` in a later step. \n\nFor now, we have provided the code that does the matrix transformations for you!", "_____no_output_____" ] ], [ [ "'''\n[Practice Node]\n\nThe code for this segment is in 2 parts. Firstly, we are learning a vocabulary dictionary for the training data \nand then transforming the data into a document-term matrix; secondly, for the testing data we are only \ntransforming the data into a document-term matrix.\n\nThis is similar to the process we followed in Step 2.3\n\nWe will provide the transformed data to students in the variables 'training_data' and 'testing_data'.\n'''", "_____no_output_____" ], [ "'''\nSolution\n'''\n# Instantiate the CountVectorizer method\ncount_vector = CountVectorizer()\n\n# Fit the training data and then return the matrix\ntraining_data = count_vector.fit_transform(X_train)\n\n# Transform testing data and return the matrix. 
Note we are not fitting the testing data into the CountVectorizer()\ntesting_data = count_vector.transform(X_test)", "_____no_output_____" ] ], [ [ "### Step 4.1: Bayes Theorem implementation from scratch ###\n\nNow that we have our dataset in the format that we need, we can move onto the next portion of our mission which is the algorithm we will use to make our predictions to classify a message as spam or not spam. Remember that at the start of the mission we briefly discussed the Bayes theorem but now we shall go into a little more detail. In layman's terms, the Bayes theorem calculates the probability of an event occurring, based on certain other probabilities that are related to the event in question. It is composed of a prior(the probabilities that we are aware of or that is given to us) and the posterior(the probabilities we are looking to compute using the priors). \n\nLet us implement the Bayes Theorem from scratch using a simple example. Let's say we are trying to find the odds of an individual having diabetes, given that he or she was tested for it and got a positive result. \nIn the medical field, such probabilies play a very important role as it usually deals with life and death situations. \n\nWe assume the following:\n\n`P(D)` is the probability of a person having Diabetes. It's value is `0.01` or in other words, 1% of the general population has diabetes(Disclaimer: these values are assumptions and are not reflective of any medical study).\n\n`P(Pos)` is the probability of getting a positive test result.\n\n`P(Neg)` is the probability of getting a negative test result.\n\n`P(Pos|D)` is the probability of getting a positive result on a test done for detecting diabetes, given that you have diabetes. This has a value `0.9`. In other words the test is correct 90% of the time. This is also called the Sensitivity or True Positive Rate.\n\n`P(Neg|~D)` is the probability of getting a negative result on a test done for detecting diabetes, given that you do not have diabetes. This also has a value of `0.9` and is therefore correct, 90% of the time. This is also called the Specificity or True Negative Rate.\n\nThe Bayes formula is as follows:\n\n<img src=\"images/bayes_formula.png\" height=\"242\" width=\"242\">\n\n* `P(A)` is the prior probability of A occurring independently. In our example this is `P(D)`. This value is given to us.\n\n* `P(B)` is the prior probability of B occurring independently. In our example this is `P(Pos)`.\n\n* `P(A|B)` is the posterior probability that A occurs given B. In our example this is `P(D|Pos)`. That is, **the probability of an individual having diabetes, given that, that individual got a positive test result. This is the value that we are looking to calculate.**\n\n* `P(B|A)` is the likelihood probability of B occurring, given A. In our example this is `P(Pos|D)`. 
This value is given to us.", "_____no_output_____" ], [ "Putting our values into the formula for Bayes theorem we get:\n\n`P(D|Pos) = P(D) * P(Pos|D) / P(Pos)`\n\nThe probability of getting a positive test result `P(Pos)` can be calculated using the Sensitivity and Specificity as follows:\n\n`P(Pos) = [P(D) * Sensitivity] + [P(~D) * (1-Specificity))]`", "_____no_output_____" ] ], [ [ "'''\nInstructions:\nCalculate probability of getting a positive test result, P(Pos)\n'''", "_____no_output_____" ], [ "'''\nSolution (skeleton code will be provided)\n'''\n# P(D)\np_diabetes = 0.01\n\n# P(~D)\np_no_diabetes = 0.99\n\n# Sensitivity or P(Pos|D)\np_pos_diabetes = 0.9\n\n# Specificity or P(Neg|~D)\np_neg_no_diabetes = 0.9\n\n# P(Pos)\np_pos = # TODO\nprint('The probability of getting a positive test result P(Pos) is: {}',format(p_pos))", "_____no_output_____" ] ], [ [ "** Using all of this information we can calculate our posteriors as follows: **\n \nThe probability of an individual having diabetes, given that, that individual got a positive test result:\n\n`P(D|Pos) = (P(D) * Sensitivity)) / P(Pos)`\n\nThe probability of an individual not having diabetes, given that, that individual got a positive test result:\n\n`P(~D|Pos) = (P(~D) * (1-Specificity)) / P(Pos)`\n\nThe sum of our posteriors will always equal `1`. ", "_____no_output_____" ] ], [ [ "'''\nInstructions:\nCompute the probability of an individual having diabetes, given that, that individual got a positive test result.\nIn other words, compute P(D|Pos).\n\nThe formula is: P(D|Pos) = (P(D) * P(Pos|D) / P(Pos)\n'''", "_____no_output_____" ], [ "'''\nSolution\n'''\n# P(D|Pos)\np_diabetes_pos = # TODO\nprint('Probability of an individual having diabetes, given that that individual got a positive test result is:\\\n',format(p_diabetes_pos)) ", "_____no_output_____" ], [ "'''\nInstructions:\nCompute the probability of an individual not having diabetes, given that, that individual got a positive test result.\nIn other words, compute P(~D|Pos).\n\nThe formula is: P(~D|Pos) = P(~D) * P(Pos|~D) / P(Pos)\n\nNote that P(Pos|~D) can be computed as 1 - P(Neg|~D). \n\nTherefore:\nP(Pos|~D) = p_pos_no_diabetes = 1 - 0.9 = 0.1\n'''", "_____no_output_____" ], [ "'''\nSolution\n'''\n# P(Pos|~D)\np_pos_no_diabetes = 0.1\n\n# P(~D|Pos)\np_no_diabetes_pos = # TODO\nprint 'Probability of an individual not having diabetes, given that that individual got a positive test result is:'\\\n,p_no_diabetes_pos", "_____no_output_____" ] ], [ [ "Congratulations! You have implemented Bayes theorem from scratch. Your analysis shows that even if you get a positive test result, there is only a 8.3% chance that you actually have diabetes and a 91.67% chance that you do not have diabetes. This is of course assuming that only 1% of the entire population has diabetes which of course is only an assumption.", "_____no_output_____" ], [ "** What does the term 'Naive' in 'Naive Bayes' mean ? ** \n\nThe term 'Naive' in Naive Bayes comes from the fact that the algorithm considers the features that it is using to make the predictions to be independent of each other, which may not always be the case. So in our Diabetes example, we are considering only one feature, that is the test result. Say we added another feature, 'exercise'. Let's say this feature has a binary value of `0` and `1`, where the former signifies that the individual exercises less than or equal to 2 days a week and the latter signifies that the individual exercises greater than or equal to 3 days a week. 
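\n\nA compact sanity check of the arithmetic, reusing the same assumed numbers from above:\n\n```python\np_d, sens, spec = 0.01, 0.9, 0.9\np_pos = p_d * sens + (1 - p_d) * (1 - spec)   # 0.108\nprint(p_d * sens / p_pos)                     # ~0.0833 -> P(D|Pos)\nprint((1 - p_d) * (1 - spec) / p_pos)         # ~0.9167 -> P(~D|Pos)\n```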
", "_____no_output_____" ], [ "** What does the term 'Naive' in 'Naive Bayes' mean ? ** \n\nThe term 'Naive' in Naive Bayes comes from the fact that the algorithm considers the features that it is using to make the predictions to be independent of each other, which may not always be the case. So in our Diabetes example, we are considering only one feature, that is the test result. Say we added another feature, 'exercise'. Let's say this feature has a binary value of `0` and `1`, where the former signifies that the individual exercises less than or equal to 2 days a week and the latter signifies that the individual exercises greater than or equal to 3 days a week. If we had to use both of these features, namely the test result and the value of the 'exercise' feature, to compute our final probabilities, Bayes' theorem would fail. Naive Bayes' is an extension of Bayes' theorem that assumes that all the features are independent of each other.", "_____no_output_____" ] ], [ [ "### Step 4.2: Naive Bayes implementation from scratch ###\n\n", "_____no_output_____" ], [ "Now that you have understood the ins and outs of Bayes Theorem, we will extend it to consider cases where we have more than one feature. \n\nLet's say that we have two political parties' candidates, 'Jill Stein' of the Green Party and 'Gary Johnson' of the Libertarian Party and we have the probabilities of each of these candidates saying the words 'freedom', 'immigration' and 'environment' when they give a speech:\n\n* Probability that Jill Stein says 'freedom': 0.1 ---------> `P(F|J)`\n* Probability that Jill Stein says 'immigration': 0.1 -----> `P(I|J)`\n* Probability that Jill Stein says 'environment': 0.8 -----> `P(E|J)`\n\n\n* Probability that Gary Johnson says 'freedom': 0.7 -------> `P(F|G)`\n* Probability that Gary Johnson says 'immigration': 0.2 ---> `P(I|G)`\n* Probability that Gary Johnson says 'environment': 0.1 ---> `P(E|G)`\n\n\nAnd let us also assume that the probability of Jill Stein giving a speech, `P(J)` is `0.5` and the same for Gary Johnson, `P(G) = 0.5`. \n\n\nGiven this, what if we had to find the probabilities of Jill Stein saying the words 'freedom' and 'immigration'? This is where the Naive Bayes' theorem comes into play as we are considering two features, 'freedom' and 'immigration'.\n\nNow we are at a place where we can define the formula for the Naive Bayes' theorem:\n\n<img src=\"images/naivebayes.png\" height=\"342\" width=\"342\">\n\nHere, `y` is the class variable or in our case the name of the candidate and `x1` through `xn` are the feature vectors or in our case the individual words. The theorem makes the assumption that each of the feature vectors or words (`xi`) are independent of each other.", "_____no_output_____" ], [ "To break this down, we have to compute the following posterior probabilities:\n\n* `P(J|F,I)`: Probability of Jill Stein saying the words Freedom and Immigration. \n\n Using the formula and our knowledge of Bayes' theorem, we can compute this as follows: `P(J|F,I)` = `(P(J) * P(F|J) * P(I|J)) / P(F,I)`. Here `P(F,I)` is the probability of the words 'freedom' and 'immigration' being said in a speech.\n \n\n* `P(G|F,I)`: Probability of Gary Johnson saying the words Freedom and Immigration. \n \n Using the formula, we can compute this as follows: `P(G|F,I)` = `(P(G) * P(F|G) * P(I|G)) / P(F,I)`
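\n\nPlugging in the numbers given above, here is a quick preview of what the code in the next few cells should produce (same values as stated later in the text):\n\n```python\np_f_i = 0.5 * 0.1 * 0.1 + 0.5 * 0.7 * 0.2  # = 0.075\nprint(0.5 * 0.1 * 0.1 / p_f_i)             # P(J|F,I) ~= 0.0667\nprint(0.5 * 0.7 * 0.2 / p_f_i)             # P(G|F,I) ~= 0.9333\n```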
", "_____no_output_____" ] ], [ [ "'''\nInstructions: Compute the probability of the words 'freedom' and 'immigration' being said in a speech, or\nP(F,I).\n\nThe first step is multiplying the probabilities of Jill Stein giving a speech with her individual \nprobabilities of saying the words 'freedom' and 'immigration'. Store this in a variable called p_j_text\n\nThe second step is multiplying the probabilities of Gary Johnson giving a speech with his individual \nprobabilities of saying the words 'freedom' and 'immigration'. Store this in a variable called p_g_text\n\nThe third step is to add both of these probabilities and you will get P(F,I).\n'''", "_____no_output_____" ], [ "'''\nSolution: Step 1\n'''\n# P(J)\np_j = 0.5\n\n# P(F/J)\np_j_f = 0.1\n\n# P(I/J)\np_j_i = 0.1\n\np_j_text = p_j * p_j_f * p_j_i\nprint(p_j_text)", "_____no_output_____" ], [ "'''\nSolution: Step 2\n'''\n# P(G)\np_g = 0.5\n\n# P(F/G)\np_g_f = 0.7\n\n# P(I/G)\np_g_i = 0.2\n\np_g_text = p_g * p_g_f * p_g_i\nprint(p_g_text)", "_____no_output_____" ], [ "'''\nSolution: Step 3: Compute P(F,I) and store in p_f_i\n'''\np_f_i = p_j_text + p_g_text\nprint('Probability of words freedom and immigration being said is: ', format(p_f_i))", "_____no_output_____" ] ], [ [ "Now we can compute the probability of `P(J|F,I)`, that is the probability of Jill Stein saying the words Freedom and Immigration and `P(G|F,I)`, that is the probability of Gary Johnson saying the words Freedom and Immigration.", "_____no_output_____" ] ], [ [ "'''\nInstructions:\nCompute P(J|F,I) using the formula P(J|F,I) = (P(J) * P(F|J) * P(I|J)) / P(F,I) and store it in a variable p_j_fi\n'''", "_____no_output_____" ], [ "'''\nSolution\n'''\np_j_fi = p_j_text / p_f_i\nprint('The probability of Jill Stein saying the words Freedom and Immigration: ', format(p_j_fi))", "_____no_output_____" ], [ "'''\nInstructions:\nCompute P(G|F,I) using the formula P(G|F,I) = (P(G) * P(F|G) * P(I|G)) / P(F,I) and store it in a variable p_g_fi\n'''", "_____no_output_____" ], [ "'''\nSolution\n'''\np_g_fi = p_g_text / p_f_i\nprint('The probability of Gary Johnson saying the words Freedom and Immigration: ', format(p_g_fi))", "_____no_output_____" ] ], [ [ "And as we can see, just like in the Bayes' theorem case, the sum of our posteriors is equal to 1. Congratulations! You have implemented the Naive Bayes' theorem from scratch. Our analysis shows that there is only a 6.6% chance that Jill Stein of the Green Party uses the words 'freedom' and 'immigration' in her speech as compared to the 93.3% chance for Gary Johnson of the Libertarian party.", "_____no_output_____" ], [ "Another more generic example of Naive Bayes' in action is when we search for the term 'Sacramento Kings' in a search engine. In order for us to get the results pertaining to the Sacramento Kings NBA basketball team, the search engine needs to be able to associate the two words together and not treat them individually, in which case we would get results of images tagged with 'Sacramento' like pictures of city landscapes and images of 'Kings' which could be pictures of crowns or kings from history when what we are looking to get are images of the basketball team. This is a classic case of the search engine treating the words as independent entities and hence being 'naive' in its approach. \n\n\nApplying this to our problem of classifying messages as spam, the Naive Bayes algorithm *looks at each word individually and not as associated entities* with any kind of link between them. In the case of spam detectors, this usually works as there are certain red flag words which can almost guarantee its classification as spam, for example emails with words like 'viagra' are usually classified as spam.", "_____no_output_____" ], [ "### Step 5: Naive Bayes implementation using scikit-learn ###\n\nThankfully, sklearn has several Naive Bayes implementations that we can use and so we do not have to do the math from scratch. We will be using sklearn's `sklearn.naive_bayes` method to make predictions on our dataset. \n\nSpecifically, we will be using the multinomial Naive Bayes implementation. This particular classifier is suitable for classification with discrete features (such as in our case, word counts for text classification). It takes in integer word counts as its input. On the other hand Gaussian Naive Bayes is better suited for continuous data as it assumes that the input data has a Gaussian(normal) distribution.
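\n\nA tiny illustrative contrast between the two (toy arrays made up for this sketch, not our SMS data):\n\n```python\nimport numpy as np\nfrom sklearn.naive_bayes import MultinomialNB, GaussianNB\n\nX_counts = np.array([[2, 0, 1], [0, 3, 0]])    # discrete word counts -> MultinomialNB\nX_cont = np.array([[0.2, -1.3], [1.7, 0.4]])   # continuous features -> GaussianNB\ny = [0, 1]\nMultinomialNB().fit(X_counts, y)\nGaussianNB().fit(X_cont, y)\n```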
", "_____no_output_____" ] ], [ [ "'''\nInstructions:\n\nWe have loaded the training data into the variable 'training_data' and the testing data into the \nvariable 'testing_data'.\n\nImport the MultinomialNB classifier and fit the training data into the classifier using fit(). Name your classifier\n'naive_bayes'. You will be training the classifier using 'training_data' and 'y_train' from our split earlier. \n'''", "_____no_output_____" ], [ "'''\nSolution\n'''\nfrom sklearn.naive_bayes import MultinomialNB\nnaive_bayes = MultinomialNB()\nnaive_bayes.fit(training_data, y_train)", "_____no_output_____" ], [ "'''\nInstructions:\nNow that our algorithm has been trained using the training data set we can now make some predictions on the test data\nstored in 'testing_data' using predict(). Save your predictions into the 'predictions' variable.\n'''", "_____no_output_____" ], [ "'''\nSolution\n'''\npredictions = naive_bayes.predict(testing_data)", "_____no_output_____" ] ], [ [ "Now that predictions have been made on our test set, we need to check the accuracy of our predictions.", "_____no_output_____" ], [ "### Step 6: Evaluating our model ###\n\nNow that we have made predictions on our test set, our next goal is to evaluate how well our model is doing. There are various mechanisms for doing so, but first let's do a quick recap of them.\n\n** Accuracy ** measures how often the classifier makes the correct prediction. It’s the ratio of the number of correct predictions to the total number of predictions (the number of test data points).\n\n** Precision ** tells us what proportion of messages we classified as spam, actually were spam.\nIt is a ratio of true positives(words classified as spam, and which are actually spam) to all positives(all words classified as spam, irrespective of whether that was the correct classification), in other words it is the ratio of\n\n`[True Positives/(True Positives + False Positives)]`\n\n** Recall(sensitivity)** tells us what proportion of messages that actually were spam were classified by us as spam.\nIt is a ratio of true positives(words classified as spam, and which are actually spam) to all the words that were actually spam, in other words it is the ratio of\n\n`[True Positives/(True Positives + False Negatives)]`\n\nFor classification problems that are skewed in their classification distributions like in our case, for example if we had 100 text messages and only 2 were spam and the rest 98 weren't, accuracy by itself is not a very good metric. We could classify 90 messages as not spam(including the 2 that were spam but we classify them as not spam, hence they would be false negatives) and 10 as spam(all 10 false positives) and still get a reasonably good accuracy score. For such cases, precision and recall come in very handy. These two metrics can be combined to get the F1 score, which is the weighted average of the precision and recall scores. This score can range from 0 to 1, with 1 being the best possible F1 score.
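\n\nA small made-up example showing all four metrics side by side:\n\n```python\nfrom sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score\n\ny_true = [1, 0, 1, 1, 0, 0]\ny_hat = [1, 0, 0, 1, 0, 1]\nprint(accuracy_score(y_true, y_hat))   # 0.667 -> 4 of 6 correct\nprint(precision_score(y_true, y_hat))  # 0.667 -> 2 TP / (2 TP + 1 FP)\nprint(recall_score(y_true, y_hat))     # 0.667 -> 2 TP / (2 TP + 1 FN)\nprint(f1_score(y_true, y_hat))         # 0.667 -> harmonic mean of the two\n```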
", "_____no_output_____" ], [ "We will be using all 4 metrics to make sure our model does well. \nFor all 4 metrics whose values can range from 0 to 1, having a score as close to 1 as possible is a good indicator of how well our model is doing.", "_____no_output_____" ] ], [ [ "'''\nInstructions:\nCompute the accuracy, precision, recall and F1 scores of your model using your test data 'y_test' and the predictions\nyou made earlier stored in the 'predictions' variable.\n'''", "_____no_output_____" ], [ "'''\nSolution\n'''\nfrom sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score\nprint('Accuracy score: ', format(accuracy_score(y_test, predictions)))\nprint('Precision score: ', format(precision_score(y_test, predictions)))\nprint('Recall score: ', format(recall_score(y_test, predictions)))\nprint('F1 score: ', format(f1_score(y_test, predictions)))", "_____no_output_____" ] ], [ [ "### Step 7: Conclusion ###\n\nOne of the major advantages that Naive Bayes has over other classification algorithms is its ability to handle an extremely large number of features. In our case, each word is treated as a feature and there are thousands of different words. Also, it performs well even with the presence of irrelevant features and is relatively unaffected by them. The other major advantage it has is its relative simplicity. Naive Bayes' works well right out of the box and tuning its parameters is rarely ever necessary, except usually in cases where the distribution of the data is known. \nIt rarely ever overfits the data. Another important advantage is that its model training and prediction times are very fast for the amount of data it can handle. All in all, Naive Bayes' really is a gem of an algorithm!\n\nCongratulations! You have successfully designed a model that can efficiently predict if an SMS message is spam or not!\n\nThank you for learning with us!", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ] ]
ec99eaaac340837c726bb84362356f1499b8616a
7,139
ipynb
Jupyter Notebook
tutorials/wowScala.ipynb
cloudant-labs/spark-cloudant
07f772eb21ec268f2e013a65dc2f21e01232f82b
[ "Apache-2.0" ]
25
2015-11-02T16:50:47.000Z
2020-11-07T13:41:36.000Z
tutorials/wowScala.ipynb
cloudant-labs/spark-cloudant
07f772eb21ec268f2e013a65dc2f21e01232f82b
[ "Apache-2.0" ]
80
2015-10-14T11:08:00.000Z
2018-03-26T01:30:20.000Z
tutorials/wowScala.ipynb
cloudant-labs/spark-cloudant
07f772eb21ec268f2e013a65dc2f21e01232f82b
[ "Apache-2.0" ]
30
2015-10-13T19:14:13.000Z
2020-11-08T07:23:30.000Z
33.674528
313
0.587757
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
ec99eb63c4b04af200cb823142443d4bcbcdadb2
3,934
ipynb
Jupyter Notebook
4_2_Backpropagation.ipynb
Sung-Kyu/Pytorch-sample
454aec5a71e0c443c0a941fccb6364960590d509
[ "MIT" ]
null
null
null
4_2_Backpropagation.ipynb
Sung-Kyu/Pytorch-sample
454aec5a71e0c443c0a941fccb6364960590d509
[ "MIT" ]
null
null
null
4_2_Backpropagation.ipynb
Sung-Kyu/Pytorch-sample
454aec5a71e0c443c0a941fccb6364960590d509
[ "MIT" ]
null
null
null
24.5875
95
0.402389
[ [ [ "# 4. 역전파\n\n## 4.2 그레디언트 텐서", "_____no_output_____" ] ], [ [ "# 라이브러리 불러오기\nimport torch", "_____no_output_____" ], [ "# requires_grad=True는 해당 텐서를 기준으로 모든 연산들을 추적할 수 있게 하는 옵션이다.\n# 즉, x에 대해서 연쇄 법칙을 이용한 미분이 가능하다는 것이다.\nx = torch.ones(2,2, requires_grad=True)\nprint(x)", "tensor([[1., 1.],\n [1., 1.]], requires_grad=True)\n" ], [ "# y는 x에 대한 식, z는 y에 대한 식, res는 z에 대한 식이다. 따라서 이는 합성함수의 개념으로써 x에 대해서 표현 및 미분이 가능하다.\ny = x+1\nz = 2*y**2\nr = z.mean()\nprint(\"y: \", y)\nprint(\"z: \", z)\nprint(\"Result: \", r)\n# grad_fn=..은 추적이 잘 되고 있다는 의미다.", "y: tensor([[2., 2.],\n [2., 2.]], grad_fn=<AddBackward0>)\nz: tensor([[8., 8.],\n [8., 8.]], grad_fn=<MulBackward0>)\nResult: tensor(8., grad_fn=<MeanBackward0>)\n" ], [ "r.backward() # res를 기준으로 역전파를 진행하겠다는 의미다.\n\n# 역으로 식을 써내려 가보자.\n# res = (z_1 + z_2 + z_3 +z_4)/4\n# z_i = 2 y_i **2\n# z_i = 2(x_i+1)**2\n# d(res)/dx_i = x_i + 1\n", "_____no_output_____" ], [ "print(x)\nprint(x.grad) \n# x.grad는 backward()가 선언 된 변수를 기준으로 미분을 한다. 즉 d(res)/dx를 계산한다.\n# #d(res)/dx_i = x_i + 1", "tensor([[1., 1.],\n [1., 1.]], requires_grad=True)\ntensor([[2., 2.],\n [2., 2.]])\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
ec99ec0da45df11966a4be9cf016bca549bf384e
456,081
ipynb
Jupyter Notebook
examples/training_simclr_iwang.ipynb
gsganden/self_supervised
b3a536a3af55eeb639f52b280cfa6e7aed82c158
[ "Apache-2.0" ]
null
null
null
examples/training_simclr_iwang.ipynb
gsganden/self_supervised
b3a536a3af55eeb639f52b280cfa6e7aed82c158
[ "Apache-2.0" ]
null
null
null
examples/training_simclr_iwang.ipynb
gsganden/self_supervised
b3a536a3af55eeb639f52b280cfa6e7aed82c158
[ "Apache-2.0" ]
null
null
null
811.532028
424,184
0.954302
[ [ [ "## SimCLR ImageWang Tutorial", "_____no_output_____" ], [ "**Note:** This notebook demonstrates how to use `SimCLR` callback with a single GPU. For distributed version, `DistributedSimCLR` checkout documentation.", "_____no_output_____" ], [ "First import **fastai** for training and other helpers, you can choose not to use **wandb** by setting `WANDB=False`.", "_____no_output_____" ] ], [ [ "from fastai.vision.all import *\nfrom fastai.callback.wandb import WandbCallback\nimport wandb\n\ntorch.backends.cudnn.benchmark = True\nWANDB = False", "_____no_output_____" ] ], [ [ "Then import **self_supervised** `augmentations` module for creating augmentations pipeline, `layers` module for creating encoder and model, and finally `simclr` for self-supervised training.", "_____no_output_____" ] ], [ [ "from self_supervised.augmentations import *\nfrom self_supervised.layers import *\nfrom self_supervised.vision.simclr import *", "_____no_output_____" ] ], [ [ "In this notebook we will take a look at [ImageWang](https://github.com/fastai/imagenette#image%E7%BD%91) benchmark, how to train a self-supervised model using MoCo algorithm and then how to use this pretrained model for finetuning on the given downstream task. ", "_____no_output_____" ], [ "## Pretraining", "_____no_output_____" ] ], [ [ "def get_dls(size, bs, workers=None):\n path = URLs.IMAGEWANG_160 if size <= 160 else URLs.IMAGEWANG\n source = untar_data(path)\n \n files = get_image_files(source)\n tfms = [[PILImage.create, ToTensor, RandomResizedCrop(size, min_scale=1.)], \n [parent_label, Categorize()]]\n \n dsets = Datasets(files, tfms=tfms, splits=RandomSplitter(valid_pct=0.1)(files))\n \n batch_tfms = [IntToFloatTensor]\n dls = dsets.dataloaders(bs=bs, num_workers=workers, after_batch=batch_tfms)\n return dls", "_____no_output_____" ] ], [ [ "ImageWang has several benchmarks for different image sizes, in this tutorial we will go for `size=224` and also demonstrate how effectively you can utilize GPU memory.", "_____no_output_____" ], [ "Define batch size, resize resolution before batching and size for random cropping during self-supervised training. It's always good to use a batch size as high as it can fit the GPU memory.", "_____no_output_____" ] ], [ [ "bs, resize, size = 128, 256, 224", "_____no_output_____" ] ], [ [ "Select architecture to train on, remember all **timm** and **fastai** models are available! We need to set `pretrained=False` here because using imagenet weights for ImageWang data would be cheating.", "_____no_output_____" ] ], [ [ "arch = \"xresnet34\"\nencoder = create_encoder(arch, pretrained=False, n_in=3)", "_____no_output_____" ], [ "if WANDB:\n xtra_config = {\"Arch\":arch, \"Resize\":resize, \"Size\":size, \"Algorithm\":\"SWAV\"}\n wandb.init(project=\"self-supervised-imagewang\", config=xtra_config);", "_____no_output_____" ] ], [ [ "Initialize the Dataloaders using the function above.", "_____no_output_____" ] ], [ [ "dls = get_dls(resize, bs)", "_____no_output_____" ] ], [ [ "Create SimCLR model. You can change values of `hidden_size`, `projection_size`, and `n_layers`. For this problem, defaults work just fine so we don't do any changes.", "_____no_output_____" ] ], [ [ "model = create_simclr_model(encoder)", "_____no_output_____" ] ], [ [ "Next step is perhaps the most critical step for achieving good results on a custom problem - data augmentation. 
For this, we will use the utility function `self_supervised.vision.simclr.get_simclr_aug_pipelines`, but you can also use your own list of Pipeline augmentations. `get_simclr_aug_pipelines` should be enough for most cases since under the hood it uses `self_supervised.augmentations.get_multi_aug_pipelines` and `self_supervised.augmentations.get_batch_augs`. You can do shift+tab and see all the arguments that can be passed to `get_simclr_aug_pipelines`. You can simply pass anything that you could pass to `get_batch_augs`, including custom `xtra_tfms`.", "_____no_output_____" ], [ "`get_simclr_aug_pipelines` expects a size for random resized cropping of the 2 views of a given image, and the rest of the arguments come from `get_batch_augs()`", "_____no_output_____" ] ], [ [ "aug_pipelines = get_simclr_aug_pipelines(size, rotate=True, rotate_deg=10, jitter=True, bw=True, blur=False) ", "_____no_output_____" ] ], [ [ "Here, we will feed in the augmentation pipelines and leave the temperature parameter at its default.", "_____no_output_____" ] ], [ [ "cbs=[SimCLR(aug_pipelines)]\nif WANDB: cbs += [WandbCallback(log_preds=False,log_model=False)]", "_____no_output_____" ], [ "learn = Learner(dls, model, cbs=cbs)", "_____no_output_____" ] ], [ [ "Before starting training, let's check whether our augmentations make sense. We can see the 2 views of the same image side by side, and indeed the augmentations look pretty good. Since this step consumes GPU memory, restart the notebook and skip it once you are done with the inspection.", "_____no_output_____" ] ], [ [ "b = dls.one_batch()\nlearn._split(b)\nlearn('before_batch')\nlearn.sim_clr.show(n=5);", "_____no_output_____" ] ], [ [ "Use mixed precision with `to_fp16()` to free up GPU memory, allow a larger batch size, and speed up training. We could also use gradient checkpointing wrapper models from `self_supervised.layers` to save even more memory, e.g. `CheckpointSequential()`.", "_____no_output_____" ] ], [ [ "learn.to_fp16();", "_____no_output_____" ] ], [ [ "Learning good representations via contrastive learning usually takes a lot of epochs, so here the number of epochs is set to 100. 
This might change depending on your data distribution and dataset size.", "_____no_output_____" ] ], [ [ "lr,wd,epochs=1e-2,1e-2,100", "_____no_output_____" ], [ "learn.unfreeze()\nlearn.fit_flat_cos(epochs, lr, wd=wd, pct_start=0.5)", "_____no_output_____" ], [ "if WANDB: wandb.finish()", "_____no_output_____" ], [ "save_name = f'simclr_iwang_sz{size}_epc{epochs}'\nlearn.save(save_name)\ntorch.save(learn.model.encoder.state_dict(), learn.path/learn.model_dir/f'{save_name}_encoder.pth')", "_____no_output_____" ], [ "learn.recorder.plot_loss()", "_____no_output_____" ] ], [ [ "## Downstream Task", "_____no_output_____" ] ], [ [ "optdict = dict(sqrmom=0.99,mom=0.95,beta=0.,eps=1e-4)\nopt_func = partial(ranger, **optdict)", "_____no_output_____" ], [ "bs, size", "_____no_output_____" ], [ "def get_dls(size, bs, workers=None):\n    path = URLs.IMAGEWANG_160 if size <= 160 else URLs.IMAGEWANG\n    source = untar_data(path)\n    files = get_image_files(source, folders=['train', 'val'])\n    splits = GrandparentSplitter(valid_name='val')(files)\n    \n    item_aug = [RandomResizedCrop(size, min_scale=0.35), FlipItem(0.5)]\n    tfms = [[PILImage.create, ToTensor, *item_aug], \n            [parent_label, Categorize()]]\n    \n    dsets = Datasets(files, tfms=tfms, splits=splits)\n    \n    batch_tfms = [IntToFloatTensor, Normalize.from_stats(*imagenet_stats)]\n    dls = dsets.dataloaders(bs=bs, num_workers=workers, after_batch=batch_tfms)\n    return dls", "_____no_output_____" ], [ "def split_func(m): return L(m[0], m[1]).map(params)\n\ndef create_learner(size=size, arch='xresnet34', encoder_path=\"models/simclr_iwang_sz224_epc100_encoder.pth\"):\n    \n    dls = get_dls(size, bs=bs//2)\n    pretrained_encoder = torch.load(encoder_path)\n    encoder = create_encoder(arch, pretrained=False, n_in=3)\n    encoder.load_state_dict(pretrained_encoder)\n    nf = encoder(torch.randn(2,3,224,224)).size(-1)\n    classifier = create_cls_module(nf, dls.c)\n    model = nn.Sequential(encoder, classifier)\n    learn = Learner(dls, model, opt_func=opt_func, splitter=split_func,\n                    metrics=[accuracy,top_k_accuracy], loss_func=LabelSmoothingCrossEntropy())\n    return learn", "_____no_output_____" ], [ "def finetune(size, epochs, arch, encoder_path, lr=1e-2, wd=1e-2):\n    learn = create_learner(size, arch, encoder_path)\n    learn.unfreeze()\n    learn.fit_flat_cos(epochs, lr, wd=wd)\n    final_acc = learn.recorder.values[-1][-2]\n    return final_acc", "_____no_output_____" ] ], [ [ "### 5 epochs", "_____no_output_____" ] ], [ [ "acc = []\nruns = 5\nfor i in range(runs): acc += [finetune(size, epochs=5, arch='xresnet34', encoder_path=f'models/simclr_iwang_sz{size}_epc100_encoder.pth')]", "_____no_output_____" ], [ "np.mean(acc)", "_____no_output_____" ] ], [ [ "### 20 epochs", "_____no_output_____" ] ], [ [ "acc = []\nruns = 3\nfor i in range(runs): acc += [finetune(size, epochs=20, arch='xresnet34', encoder_path=f'models/simclr_iwang_sz{size}_epc100_encoder.pth')]", "_____no_output_____" ], [ "np.mean(acc)", "_____no_output_____" ] ], [ [ "### 80 epochs", "_____no_output_____" ] ], [ [ "acc = []\nruns = 1\nfor i in range(runs): acc += [finetune(size, epochs=80, arch='xresnet34', encoder_path=f'models/simclr_iwang_sz{size}_epc100_encoder.pth')]", "_____no_output_____" ], [ "np.mean(acc)", "_____no_output_____" ] ], [ [ "### 200 epochs", "_____no_output_____" ] ], [ [ "acc = []\nruns = 1\nfor i in range(runs): acc += [finetune(size, epochs=200, arch='xresnet34', encoder_path=f'models/simclr_iwang_sz{size}_epc100_encoder.pth')]", "_____no_output_____" ], [ "np.mean(acc)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
ec99efbfeb41fb988a81320ffa1011fe56db779c
9,511
ipynb
Jupyter Notebook
lectures/lecture-00-intro.ipynb
SABS-R3/2021-essential-maths
8a81449928e602b51a4a4172afbcd70a02e468b8
[ "Apache-2.0" ]
1
2021-11-27T12:07:13.000Z
2021-11-27T12:07:13.000Z
lectures/lecture-00-intro.ipynb
SABS-R3/2021-essential-maths
8a81449928e602b51a4a4172afbcd70a02e468b8
[ "Apache-2.0" ]
null
null
null
lectures/lecture-00-intro.ipynb
SABS-R3/2021-essential-maths
8a81449928e602b51a4a4172afbcd70a02e468b8
[ "Apache-2.0" ]
null
null
null
26.201102
185
0.560088
[ [ [ "# Lecture 0\n\n# Essential Maths 2021\n\n### Fergus Cooper, Beth Dingley, Elliot Howard-Spink\n#### and many before us", "_____no_output_____" ] ], [ [ "import numpy as np\n\n##################################################\n##### Matplotlib boilerplate for consistency #####\n##################################################\nfrom ipywidgets import interact\nfrom ipywidgets import FloatSlider\nfrom matplotlib import pyplot as plt\n\n%matplotlib inline\n\nfrom IPython.display import set_matplotlib_formats\nset_matplotlib_formats('svg')\n\nglobal_fig_width = 10\nglobal_fig_height = global_fig_width / 1.61803399\nfont_size = 12\n\nplt.rcParams['axes.axisbelow'] = True\nplt.rcParams['axes.edgecolor'] = '0.8'\nplt.rcParams['axes.grid'] = True\nplt.rcParams['axes.labelpad'] = 8\nplt.rcParams['axes.linewidth'] = 2\nplt.rcParams['axes.titlepad'] = 16.0\nplt.rcParams['axes.titlesize'] = font_size * 1.4\nplt.rcParams['figure.figsize'] = (global_fig_width, global_fig_height)\nplt.rcParams['font.sans-serif'] = ['Computer Modern Sans Serif', 'DejaVu Sans', 'sans-serif']\nplt.rcParams['font.size'] = font_size\nplt.rcParams['grid.color'] = '0.8'\nplt.rcParams['grid.linestyle'] = 'dashed'\nplt.rcParams['grid.linewidth'] = 2\nplt.rcParams['lines.dash_capstyle'] = 'round'\nplt.rcParams['lines.dashed_pattern'] = [1, 4]\nplt.rcParams['xtick.labelsize'] = font_size\nplt.rcParams['xtick.major.pad'] = 4\nplt.rcParams['xtick.major.size'] = 0\nplt.rcParams['ytick.labelsize'] = font_size\nplt.rcParams['ytick.major.pad'] = 4\nplt.rcParams['ytick.major.size'] = 0\n##################################################", "_____no_output_____" ] ], [ [ "# Why Maths?\n\nMaths is the language we use to quantitatively describe the world.\n\nYou are going into research, and you\n\n- **may** need to be proficient in maths as a tool for describing what you work on\n- **will** have to read and understand papers that use maths\n\nSome of you **may** feel you don't need to know any maths, but this course will be useful to you even if you never have to write down a system of differential equations yourself. 
", "_____no_output_____" ], [ "# Why Maths?\n\nData analysis\n\n- Interpretation and inference\n- Identify patterns, trends, relationships\n- Deal robustly with uncertainty and variation", "_____no_output_____" ], [ "# Why Maths?\n\nDescribe the behaviour of systems\n\n- Remove ambiguity: explicit assumptions\n- Quantitative hypotheses\n- Make predictions, through simulation and analysis\n- \"If I make this intervention, I expect to see that change\"\n- Explain **why** something is observed", "_____no_output_____" ], [ "# Why Maths?\n\nVital for **dynamic** and **nonlinear** systems\n\n- Simple intuition breaks down\n- Most of biology is dynamic and nonlinear!", "_____no_output_____" ], [ "# Course aims\n\n- Develop confidence in your mathematical abilities\n - Extensive practice\n- Become able to communicate effectively with mathematical collaborators\n- Ensure you can read and understand mathematical papers in your field\n- Build on your ability to apply computational tools from **Python** to solve problems\n", "_____no_output_____" ], [ "# Topics covered\n\n- Graphs, and basic tools such as logs\n- Calculus: differentiation & integration\n- Complex numbers\n- Ordinary differential equations\n- Linear algebra (matrices)\n- Coupled systems of ordinary differential equations\n- Use of Python", "_____no_output_____" ], [ "# Logistics\n\nCanvas page:\n- https://canvas.ox.ac.uk/courses/112902\n\nZoom call all day every day:\n- Link and passcode on Canvas", "_____no_output_____" ], [ "# Logistics\n\nQ&A sessions (https://www.sli.do/ ; event code on Canvas)\n\n- Mornings: 09:30\n- Afternoons: 14:00\n\nLectures (YouTube; links on Canvas)\n\n- Mornings: 9:45-10:30\n- Afternoons: 14:15-15:00\n\nPractical sessions in Zoom Breakout Rooms\n\n- Mornings: 10:30-13:00\n- Afternoons: 15:00-17:30", "_____no_output_____" ], [ "# Lecture slides\n\nLecture slides are Jupyter notebooks turned into presentations using the RISE plugin.\n\nAvailable on GitHub:\n- https://github.com/SABS-R3/2021-essential-maths\n\nAnd viewable via MyBinder:\n- https://mybinder.org/v2/gh/SABS-R3/2021-essential-maths/HEAD\n\n(Demo) ", "_____no_output_____" ], [ "# Problem sheets\n\nOne per half-day, and available on GitHub:\n- https://github.com/SABS-R3/2021-essential-maths\n\nand linked to from Canvas.\n\nMost exercises can be solved using pen & paper, but Python is expected for some problems, and for checking your solutions.", "_____no_output_____" ], [ "# Problem sheets\n\nThere are three sections of problems on each sheet:\n\n- Introductory\n- Main\n- Extension\n\nIf you are finding the material tough, focus on the introductory problems first and then work your way onto the main problems.\n\nIf you are finding the material straightforward, work through the main problems and on to the extension problems.", "_____no_output_____" ], [ "# Exercise strategy\n\n- Don't panic if you don't complete every question!\n- Focus on understanding the concepts\n- You can use Wednesdays to consolidate the material\n\nDo help your peers, as well as using the demonstrators and lecturers who are here to help!\n\nPlease let me know if you are either finding the material far too difficult, or far too easy. 
Both of those problems can be solved!", "_____no_output_____" ], [ "# Module assessment\n\nYou will be continuously assessed through the course, based on the following criteria:\n\n- Level of engagement with the material.\n- Quality of written solutions to problem sheets.\n- Ability to confidently manipulate mathematical expressions.\n- Ability to use computational tools.\n- Course attendance.\n\nPlease interact regularly with course demonstrators.\n\nAt least one question from Week 2 will be submitted and assessed, but more on that next week.", "_____no_output_____" ], [ "# Any questions?", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
ec9a34c9b728eda9b7d896f7f657be02a407ce53
19,591
ipynb
Jupyter Notebook
notebooks/2b_spacy_processing.ipynb
emtom2019/em_topicmodeling_project
de5c94e52ec8702777978c44ad5ad976f1a1e01b
[ "MIT" ]
1
2020-09-28T01:38:20.000Z
2020-09-28T01:38:20.000Z
notebooks/2b_spacy_processing.ipynb
emtom2019/em_topicmodeling_project
de5c94e52ec8702777978c44ad5ad976f1a1e01b
[ "MIT" ]
6
2021-02-02T22:46:41.000Z
2021-09-08T02:04:43.000Z
notebooks/2b_spacy_processing.ipynb
emtom2019/em_topicmodeling_project
de5c94e52ec8702777978c44ad5ad976f1a1e01b
[ "MIT" ]
null
null
null
35.75
247
0.484508
[ [ [ "#### Cleaning and Processing Abstract for Topic Modeling", "_____no_output_____" ] ], [ [ "import sys\nimport os\nimport pandas as pd\nimport pickle\n\nimport warnings\nwarnings.filterwarnings('ignore')\n\nfrom gensim.corpora import Dictionary\nfrom gensim.models import Phrases\n\nimport gensim\nimport nltk\n\nfrom nltk import word_tokenize\nfrom nltk.corpus import stopwords\nimport re\n\nimport spacy\n", "_____no_output_____" ], [ "df = pd.read_csv(\"../Data/data_cleaned.csv\")\ndf.head()", "_____no_output_____" ], [ "# function to clean text\n\ndef cleanText(text):\n \n text = re.split(\"(methods:)|(methodology:)\", text, flags=re.IGNORECASE)[0]\n text = text.lower()\n return text\n\ndf['title_abstract'] = df['title_abstract'].astype('str').apply(cleanText)\ndf.head()\n", "_____no_output_____" ], [ "import scispacy\nnlp = spacy.load(\"en_core_sci_md\") # loading the language model ", "_____no_output_____" ], [ "def clean_nlp(text): # clean up your text and generate list of words for each document. \n removal=['ADV','PRON','CCONJ','PUNCT','PART','DET','ADP','SPACE']\n text_out = []\n doc= nlp(text)\n for token in doc:\n if token.is_stop == False and token.is_alpha and len(token)>2 and token.pos_ not in removal:\n lemma = token.lemma_\n text_out.append(lemma)\n return text_out\ndocuments = df.title_abstract.apply(lambda x:clean_nlp(x))\n", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "print(documents[57])", "['clinical', 'management', 'decision', 'adult', 'prolonged', 'acute', 'cough', 'frequency', 'associate', 'uncomplicated', 'episode', 'prolonged', 'acute', 'cough', 'viral', 'evidence', 'recommendation', 'contrary', 'treat', 'antibiotic']\n" ], [ "# Compute bigrams.\n\n\n# Add bigrams to docs (only ones that appear 20 times or more).\nbigram = Phrases(documents, min_count=20)\nfor idx in range(len(documents)):\n for token in bigram[documents[idx]]:\n if '_' in token:\n # Token is a bigram, add to document.\n documents[idx].append(token)\n \ntrigram = Phrases(documents, min_count=20)\nfor idx in range(len(documents)):\n for token in trigram[documents[idx]]:\n if '_' in token:\n # Token is a bigram, add to document.\n documents[idx].append(token)", "_____no_output_____" ], [ "# Remove rare and common tokens.\n\n# Create a dictionary representation of the documents.\ndictionary = Dictionary(documents)\n\nprint('Number of unique tokens prior to filter: %d' % len(dictionary))\n\n# Filter out words that occur less than 25 documents, or more than 10% of the documents.\ndictionary.filter_extremes(no_below=25, no_above=0.10)\n\nprint('Number of unique tokens after to filter: %d' % len(dictionary))\n", "Number of unique tokens prior to filter: 22791\nNumber of unique tokens after to filter: 4181\n" ], [ "corpus = [dictionary.doc2bow(doc) for doc in documents]", "_____no_output_____" ], [ "\ndata_list = [df, documents, dictionary, corpus]\n\nwith open('../data/data_list_spacy', 'wb') as fp:\n pickle.dump(data_list, fp)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ec9a47d616a18ab7c4813b1750796ce4709e1fd4
124,271
ipynb
Jupyter Notebook
etk_stock_symbol_rules.ipynb
linqyd/etk
dcf0cae4076619f5261573d47b4f5f26baaf15b7
[ "MIT" ]
null
null
null
etk_stock_symbol_rules.ipynb
linqyd/etk
dcf0cae4076619f5261573d47b4f5f26baaf15b7
[ "MIT" ]
null
null
null
etk_stock_symbol_rules.ipynb
linqyd/etk
dcf0cae4076619f5261573d47b4f5f26baaf15b7
[ "MIT" ]
null
null
null
38.798314
2,930
0.453549
[ [ [ "# Custom Spacy Rules", "_____no_output_____" ] ], [ [ "def generic_token(type=\"word\", token=[], shape=[], number =[], capitalization=[], part_of_speech=[], length=[], minimum=\"\", maximum=\"\", prefix=\"\", suffix=\"\", is_followed_by_space=\"\", is_required=\"true\", is_in_output=\"true\", is_out_of_vocabulary=\"\", is_in_vocabulary=\"\", contain_digit=\"\"):\n return {\n \"type\": type,\n \"token\": token,\n \"shapes\": shape,\n \"numbers\": number,\n \"capitalization\": capitalization,\n \"part_of_speech\": part_of_speech,\n \"length\": length,\n \"minimum\": minimum,\n \"maximum\": maximum,\n \"prefix\": prefix,\n \"suffix\": suffix,\n \"is_required\": is_required,\n \"is_in_output\": is_in_output,\n \"is_out_of_vocabulary\": is_out_of_vocabulary,\n \"is_in_vocabulary\": is_in_vocabulary,\n \"contain_digit\": contain_digit\n }\ndef word_token(token=[], capitalization=[], part_of_speech=[], length=[], minimum=\"\", maximum=\"\", prefix=\"\", suffix=\"\", is_required=\"true\", is_in_output=\"false\", is_out_of_vocabulary=\"\", is_in_vocabulary=\"\", contain_digit=\"\"):\n return generic_token(type=\"word\", token=token, capitalization=capitalization, part_of_speech=part_of_speech, length=length, minimum=minimum, maximum=maximum,prefix=prefix, suffix=suffix, is_required=is_required, is_in_output=is_in_output, is_out_of_vocabulary=is_out_of_vocabulary, is_in_vocabulary=is_in_vocabulary, contain_digit=contain_digit)\n \ndef punctuation_token(token=[], capitalization=[], part_of_speech=[], length=[], minimum=\"\", maximum=\"\", prefix=\"\", suffix=\"\", is_required=\"true\", is_in_output=\"false\", is_out_of_vocabulary=\"\", is_in_vocabulary=\"\", contain_digit=\"\"):\n return generic_token(type=\"punctuation\", token=token, capitalization=capitalization, part_of_speech=part_of_speech, length=length, minimum=minimum, maximum=maximum,prefix=prefix, suffix=suffix, is_required=is_required, is_in_output=is_in_output, is_out_of_vocabulary=is_out_of_vocabulary, is_in_vocabulary=is_in_vocabulary, contain_digit=contain_digit)\n\ndef shape_token(shape=[], capitalization=[], part_of_speech=[], length=[], minimum=\"\", maximum=\"\", prefix=\"\", suffix=\"\",is_required=\"true\", is_in_output=\"false\", is_out_of_vocabulary=\"\", is_in_vocabulary=\"\", contain_digit=\"\"):\n return generic_token(type=\"shape\", shape=shape, capitalization=capitalization, part_of_speech=part_of_speech, length=length, minimum=minimum, maximum=maximum,prefix=prefix, suffix=suffix, is_required=is_required, is_in_output=is_in_output, is_out_of_vocabulary=is_out_of_vocabulary, is_in_vocabulary=is_in_vocabulary, contain_digit=contain_digit)\n\ndef number_token(number =[], capitalization=[], part_of_speech=[], length=[], minimum=\"\", maximum=\"\", prefix=\"\", suffix=\"\",is_required=\"true\", is_in_output=\"false\", is_out_of_vocabulary=\"\", is_in_vocabulary=\"\", contain_digit=\"\"):\n return generic_token(type=\"number\", number=number, capitalization=capitalization, part_of_speech=part_of_speech, length=length, minimum=minimum, maximum=maximum,prefix=prefix, suffix=suffix, is_required=is_required, is_in_output=is_in_output, is_out_of_vocabulary=is_out_of_vocabulary, is_in_vocabulary=is_in_vocabulary, contain_digit=contain_digit)\n\n\n \n \n", "_____no_output_____" ], [ "import json\nfrom etk.core import Core\nc = Core()", "_____no_output_____" ], [ "# Text to test the rules\nt = []\nt.append(u\" A, BA, \")\nt.append(u\" C^J, BAC, C-C, JW.B, \")\nt.append(u\" BK^C, ABRN, NS-A, \")\nt.append(u\" ADK^A, ABEOW , 
ABC-A, BAC.A, HK.WS, \") #5\nt.append(u\" MITT^A, HCAC.U, BTX.WS, , C.WS.A,\") #6\nt.append(u\" IMUC.WS, \") #7\nt.append(u\" BAC.WS.A \") #8\nt.append(u\" CHSP^A.CL \") #9\n\n\nt.append(u\" Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \")\nt.append(u\" $USCR, $TSLA \")\nt.append(u\" common Stock (AAPL) , Apple Inc. (AAPL). \")\n\n\nt.append(u\"AAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \")\n\nt.append(u\"AAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\")\n \n \n \nt.append(u\"GOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\")\n \n\nd = dict()\nd['text'] = \"\\n\".join(t)\nd['simple_tokens_original_case'] = c.extract_tokens_from_crf(c.extract_crftokens(d['text'], lowercase=False))\n\nprint d\n\nconfig = dict()\nconfig['field_name'] = 'field02'\n", "{'text': u\" A, BA, \\n C^J, BAC, C-C, JW.B, \\n BK^C, ABRN, NS-A, \\n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \\n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\\n IMUC.WS, \\n BAC.WS.A \\n CHSP^A.CL \\n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \\n $USCR, $TSLA \\n common Stock (AAPL) , Apple Inc. (AAPL). \\nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \\nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\", 'simple_tokens_original_case': [u'A', u',', u'BA', u',', u'\\n', u'C', u'^', u'J', u',', u'BAC', u',', u'C', u'-', u'C', u',', u'JW', u'.', u'B', u',', u'\\n', u'BK', u'^', u'C', u',', u'ABRN', u',', u'NS', u'-', u'A', u',', u'\\n', u'ADK', u'^', u'A', u',', u'ABEOW', u',', u'ABC', u'-', u'A', u',', u'BAC', u'.', u'A', u',', u'HK', u'.', u'WS', u',', u'\\n', u'MITT', u'^', u'A', u',', u'HCAC', u'.', u'U', u',', u'BTX', u'.', u'WS', u',', u',', u'C', u'.', u'WS', u'.', u'A', u',', u'\\n', u'IMUC', u'.', u'WS', u',', u'\\n', u'BAC', u'.', u'WS', u'.', u'A', u'\\n', u'CHSP', u'^', u'A', u'.', u'CL', u'\\n', u'Alibaba', u'Group', u'Holding', u'Ltd', u'(', u'NYSE', u':', u'BABA', u')', u'dealt', u'another', u',', u'(', u'NASDAQ', u':', u'AMZN', u')', u'this', u'week', u'\\n', u'$', u'USCR', u',', u'$', u'TSLA', u'\\n', u'common', u'Stock', u'(', u'AAPL', u')', u',', u'Apple', u'Inc', u'.', u'(', u'AAPL', u')', u'.', u'\\n', u'AAPL', u'is', u'looking', u'to', u',', u'|', u'for', u'AAPQ', u'was', u'8', u'.', u'31', u'For', u'the', u'fiscal', u'y', u'|', u',', u'AAPW', u'has', u'efficiently', u'invested', u',', u'|', u'AAPE', u'comes', u'one', u'wee', u',', u',', u'AAPR', u'may', u'refer', u'to', u':', u'|', u',', u'AAPl', u'closed', u'at', u'ab', u'|', u'including', u'AAPT', u'news', u',', u'historical', u'|', u'The', u'bank', u'lowered', u'its', u'AAPY', u'price', u'target', u'to', u'$', u'150', u',', u'|', u'Earnings', u'estimates', u'for', u'AAPU', u'from', u'thousands', u'of', u'|', u'View', u'the', u'basic', u'AAPO', u'stock', u'chart', u'\\n', u'AAPA', u':', u'Get', u'the', u'latest', u'Apple', u',', u'AMZA', u'-', u'Free', u'Report', u',', 
u'unveiled', u',', u'AAPD', u\"'\", u's', u'stock', u'sold', u'off', u'\\n', u'GOOGL', u'919', u'.', u'46', u'-', u'10', u'.', u'22', u'-', u'1', u'.', u'10', u'%', u',', u'AAPL', u'146', u'.', u'28', u'0', u'.', u'65', u'0', u'.', u'45', u'%', u':']}\n" ], [ "# C^J, BK^C, ADK^A, MITT^A,\n\nrule_01 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"false\",\n \"output_format\": \"\",\n \"pattern\": [\n shape_token(shape =[\"X\",\"XX\",\"XXX\",\"XXXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\"^\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\")\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_01\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). \nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# CHSP^A.CL \nrule_02 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"false\",\n \"output_format\": \"\",\n \"pattern\": [\n shape_token(shape =[\"X\",\"XX\",\"XXX\",\"XXXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\"^\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"XX\"], is_in_output=\"true\"),\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_02\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). 
\nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# BAC, ABRN, ABEOW ,\n\nrule_03 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"false\",\n \"output_format\": \"\",\n \"pattern\": [\n shape_token(shape =[\"X\",\"XX\",\"XXX\",\"XXXX\",\"XXXXX\"], is_in_output=\"true\")\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_03\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). \nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# C-C, NS-A, ABC-A,\nrule_04 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"false\",\n \"output_format\": \"\",\n \"pattern\": [\n shape_token(shape =[\"X\",\"XX\",\"XXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\"-\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\")\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_04\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). 
\nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# JW.B, BAC.A, HCAC.U,\n\nrule_05 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"false\",\n \"output_format\": \"\",\n \"pattern\": [\n shape_token(shape =[\"XX\",\"XXX\",\"XXXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\")\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_05\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). \nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# HK.WS, BTX.WS, IMUC.WS, \n\nrule_06 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"false\",\n \"output_format\": \"\",\n \"pattern\": [\n shape_token(shape =[\"XX\",\"XXX\",\"XXXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"XX\"], is_in_output=\"true\")\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_06\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). 
\nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# C.WS.A, BAC.WS.A\n\nrule_07 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"false\",\n \"output_format\": \"\",\n \"pattern\": [\n shape_token(shape =[\"X\",\"XXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"XX\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\")\n\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_07\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). \nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# C^J, BK^C, ADK^A, MITT^A, (NYSE:BABA) dealt another , (NASDAQ:AMZN) this \n\nrule_08 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}{2}{3}\",\n \"pattern\": [\n word_token(token=[\"NYSE\",\"NASDAQ\",\"OTCQB\"],is_in_output=\"false\"),\n punctuation_token(token=[\":\"], is_in_output=\"false\"),\n\n shape_token(shape =[\"X\",\"XX\",\"XXX\",\"XXXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\"^\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\")\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_08\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). 
\nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# CHSP^A.CL \nrule_09 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}{2}{3}{4}{5}\",\n \"pattern\": [\n word_token(token=[\"NYSE\",\"NASDAQ\",\"OTCQB\"],is_in_output=\"false\"),\n punctuation_token(token=[\":\"], is_in_output=\"false\"),\n\n shape_token(shape =[\"X\",\"XX\",\"XXX\",\"XXXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\"^\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"XX\"], is_in_output=\"true\"),\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_09\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). \nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# BAC, ABRN, ABEOW ,\n\nrule_10 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}\",\n \"pattern\": [\n word_token(token=[\"NYSE\",\"NASDAQ\",\"OTCQB\"],is_in_output=\"false\"),\n punctuation_token(token=[\":\"], is_in_output=\"false\"),\n\n shape_token(shape =[\"X\",\"XX\",\"XXX\",\"XXXX\",\"XXXXX\"], is_in_output=\"true\")\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_10\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). 
\nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# C-C, NS-A, ABC-A,\nrule_11 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}{2}{3}\",\n \"pattern\": [\n word_token(token=[\"NYSE\",\"NASDAQ\",\"OTCQB\"],is_in_output=\"false\"),\n punctuation_token(token=[\":\"], is_in_output=\"false\"),\n\n shape_token(shape =[\"X\",\"XX\",\"XXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\"-\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\")\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_11\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). \nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# JW.B, BAC.A, HCAC.U,\n\nrule_12 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}{2}{3}\",\n \"pattern\": [\n word_token(token=[\"NYSE\",\"NASDAQ\",\"OTCQB\"],is_in_output=\"false\"),\n punctuation_token(token=[\":\"], is_in_output=\"false\"),\n\n shape_token(shape =[\"XX\",\"XXX\",\"XXXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\")\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_12\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). 
\nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# HK.WS, BTX.WS, IMUC.WS, \n\nrule_13 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}{2}{3}\",\n \"pattern\": [\n word_token(token=[\"NYSE\",\"NASDAQ\",\"OTCQB\"],is_in_output=\"false\"),\n punctuation_token(token=[\":\"], is_in_output=\"false\"),\n\n shape_token(shape =[\"XX\",\"XXX\",\"XXXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"XX\"], is_in_output=\"true\")\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_13\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). \nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# C.WS.A, BAC.WS.A\n\nrule_14 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}{2}{3}{4}{5}\",\n \"pattern\": [\n word_token(token=[\"NYSE\",\"NASDAQ\",\"OTCQB\"],is_in_output=\"false\"),\n punctuation_token(token=[\":\"], is_in_output=\"false\"),\n\n shape_token(shape =[\"X\",\"XXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"XX\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\")\n\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_14\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). 
\nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# C^J, BK^C, ADK^A, MITT^A, $USCR, $TSLA\n\nrule_15 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}{2}{3}\",\n \"pattern\": [\n punctuation_token(token=[\"$\"], is_in_output=\"false\"),\n\n shape_token(shape =[\"X\",\"XX\",\"XXX\",\"XXXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\"^\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\")\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_15\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). \nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# CHSP^A.CL \nrule_16 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}{2}{3}{4}{5}\",\n \"pattern\": [\n punctuation_token(token=[\"$\"], is_in_output=\"false\"),\n\n shape_token(shape =[\"X\",\"XX\",\"XXX\",\"XXXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\"^\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"XX\"], is_in_output=\"true\"),\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_16\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). 
\nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# BAC, ABRN, ABEOW ,\n\nrule_17 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}\",\n \"pattern\": [\n punctuation_token(token=[\"$\"], is_in_output=\"false\"),\n\n shape_token(shape =[\"X\",\"XX\",\"XXX\",\"XXXX\",\"XXXXX\"], is_in_output=\"true\")\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_17\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). \nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# C-C, NS-A, ABC-A,\nrule_18 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}{2}{3}\",\n \"pattern\": [\n punctuation_token(token=[\"$\"], is_in_output=\"false\"),\n\n shape_token(shape =[\"X\",\"XX\",\"XXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\"-\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\")\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_18\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). 
\nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# JW.B, BAC.A, HCAC.U,\n\nrule_19 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}{2}{3}\",\n \"pattern\": [\n punctuation_token(token=[\"$\"], is_in_output=\"false\"),\n\n shape_token(shape =[\"XX\",\"XXX\",\"XXXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\")\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_19\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). \nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# HK.WS, BTX.WS, IMUC.WS, \n\nrule_20 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}{2}{3}\",\n \"pattern\": [\n punctuation_token(token=[\"$\"], is_in_output=\"false\"),\n\n shape_token(shape =[\"XX\",\"XXX\",\"XXXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"XX\"], is_in_output=\"true\")\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_20\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). 
\nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# C.WS.A, BAC.WS.A\n\nrule_21 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}{2}{3}{4}{5}\",\n \"pattern\": [\n punctuation_token(token=[\"$\"], is_in_output=\"false\"),\n\n shape_token(shape =[\"X\",\"XXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"XX\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\")\n\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_21\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). \nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# C^J, BK^C, ADK^A, MITT^A, Stock (AAPL) , Apple Inc. (AAPL).\n\nrule_22 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}{2}{3}\",\n \"pattern\": [\n punctuation_token(token=[\"(\"], is_in_output=\"false\"),\n \n shape_token(shape =[\"X\",\"XX\",\"XXX\",\"XXXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\"^\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\"),\n \n punctuation_token(token=[\")\"], is_in_output=\"false\")\n\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_22\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). 
\nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# CHSP^A.CL \nrule_23 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}{2}{3}{4}{5}\",\n \"pattern\": [\n punctuation_token(token=[\"(\"], is_in_output=\"false\"),\n\n shape_token(shape =[\"X\",\"XX\",\"XXX\",\"XXXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\"^\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"XX\"], is_in_output=\"true\"),\n \n punctuation_token(token=[\")\"], is_in_output=\"false\")\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_23\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). \nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# BAC, ABRN, ABEOW ,\n\nrule_24 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}\",\n \"pattern\": [\n punctuation_token(token=[\"(\"], is_in_output=\"false\"),\n\n shape_token(shape =[\"X\",\"XX\",\"XXX\",\"XXXX\",\"XXXXX\"], is_in_output=\"true\"),\n \n punctuation_token(token=[\")\"], is_in_output=\"false\")\n\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_24\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). 
\nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# C-C, NS-A, ABC-A,\nrule_25 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}{2}{3}\",\n \"pattern\": [\n punctuation_token(token=[\"(\"], is_in_output=\"false\"),\n\n shape_token(shape =[\"X\",\"XX\",\"XXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\"-\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\"),\n\n punctuation_token(token=[\")\"], is_in_output=\"false\")\n\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_25\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). \nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# JW.B, BAC.A, HCAC.U,\n\nrule_26 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}{2}{3}\",\n \"pattern\": [\n punctuation_token(token=[\"(\"], is_in_output=\"false\"),\n\n shape_token(shape =[\"XX\",\"XXX\",\"XXXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\"),\n\n punctuation_token(token=[\")\"], is_in_output=\"false\")\n\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_26\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). 
\nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# HK.WS, BTX.WS, IMUC.WS, \n\nrule_27 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}{2}{3}\",\n \"pattern\": [\n punctuation_token(token=[\"(\"], is_in_output=\"false\"),\n\n shape_token(shape =[\"XX\",\"XXX\",\"XXXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"XX\"], is_in_output=\"true\"),\n \n punctuation_token(token=[\")\"], is_in_output=\"false\")\n\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_27\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). \nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# C.WS.A, BAC.WS.A\n\nrule_28 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}{2}{3}{4}{5}\",\n \"pattern\": [\n punctuation_token(token=[\"(\"], is_in_output=\"false\"),\n\n shape_token(shape =[\"X\",\"XXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"XX\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\"),\n\n punctuation_token(token=[\")\"], is_in_output=\"false\")\n\n\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_28\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). 
\nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# C^J, BK^C, ADK^A, MITT^A,\n# \"AAPL is looking to, | for AAPL was 8.31 For the fiscal y | ,AAPL has efficiently invested ,| AAPL comes one wee,,AAPL may refer to: |,AAPl closed at ab|including AAPL news, historical|The bank lowered its AAPL price target to $150, |Earnings estimates for AAPL from thousands of | View the basic AAPL stock chart \")\n\nrule_29 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"false\",\n \"output_format\": \"{1}{2}{3}\",\n \"pattern\": [\n shape_token(shape =[\"X\",\"XX\",\"XXX\",\"XXXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\"^\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\"),\n word_token(token=[\"is\",\"was\",\"has\",\"comes\",\"closed\",\"news\",\"price\",\"from\",\"stock\",\"may\",\":\",\"-\", \"'\"],is_in_output=\"false\")\n\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_29\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). 
\nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# CHSP^A.CL \nrule_30 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"false\",\n \"output_format\": \"{1}{2}{3}{4}{5}\",\n \"pattern\": [\n shape_token(shape =[\"X\",\"XX\",\"XXX\",\"XXXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\"^\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"XX\"], is_in_output=\"true\"),\n\n word_token(token=[\"is\",\"was\",\"has\",\"comes\",\"closed\",\"news\",\"price\",\"from\",\"stock\",\"may\",\":\",\"-\", \"'\"],is_in_output=\"false\")\n\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_30\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). \nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# BAC, ABRN, ABEOW ,\n#\"AAPL: Get the latest Apple, AMZN - Free Report, unveiled, AAPL's \nrule_31 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"false\",\n \"output_format\": \"{1}\",\n \"pattern\": [\n shape_token(shape =[\"X\",\"XX\",\"XXX\",\"XXXX\",\"XXXXX\"], is_in_output=\"true\"),\n word_token(token=[\"is\",\"was\",\"has\",\"comes\",\"closed\",\"news\",\"price\",\"from\",\"stock\",\"may\",\":\",\"-\", \"'\"],is_in_output=\"false\")\n\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_31\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). 
\nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# C-C, NS-A, ABC-A,\nrule_32 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"false\",\n \"output_format\": \"{1}{2}{3}\",\n \"pattern\": [\n shape_token(shape =[\"X\",\"XX\",\"XXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\"-\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\"),\n word_token(token=[\"is\",\"was\",\"has\",\"comes\",\"closed\",\"news\",\"price\",\"from\",\"stock\",\"may\",\":\",\"-\", \"'\"],is_in_output=\"false\")\n\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_32\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). \nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# JW.B, BAC.A, HCAC.U,\n\nrule_33 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"false\",\n \"output_format\": \"{1}{2}{3}\",\n \"pattern\": [\n shape_token(shape =[\"XX\",\"XXX\",\"XXXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\"),\n word_token(token=[\"is\",\"was\",\"has\",\"comes\",\"closed\",\"news\",\"price\",\"from\",\"stock\",\"may\",\":\",\"-\", \"'\"],is_in_output=\"false\")\n\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_33\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). 
\nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# HK.WS, BTX.WS, IMUC.WS, \n\nrule_34 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"false\",\n \"output_format\": \"{1}{2}{3}\",\n \"pattern\": [\n shape_token(shape =[\"XX\",\"XXX\",\"XXXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"XX\"], is_in_output=\"true\"),\n word_token(token=[\"is\",\"was\",\"has\",\"comes\",\"closed\",\"news\",\"price\",\"from\",\"stock\",\"may\",\":\",\"-\", \"'\"],is_in_output=\"false\")\n\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_34\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). \nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# C.WS.A, BAC.WS.A\n\nrule_35 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"false\",\n \"output_format\": \"{1}{2}{3}{4}{5}\",\n \"pattern\": [\n shape_token(shape =[\"X\",\"XXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"XX\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\"),\n word_token(token=[\"is\",\"was\",\"has\",\"comes\",\"closed\",\"news\",\"price\",\"from\",\"stock\",\"may\",\":\",\"-\", \"'\"],is_in_output=\"false\")\n\n\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_35\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). 
\nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# C^J, BK^C, ADK^A, MITT^A,\n# GOOGL 919.46 -10.22 -1.10%,AAPL 146.28 \nrule_36 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}{2}{3}\",\n \"pattern\": [\n shape_token(shape =[\"X\",\"XX\",\"XXX\",\"XXXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\"^\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\"),\n \n shape_token(shape =[\"ddd\"],is_in_output=\"false\"),\n punctuation_token(token=[\".\"], is_in_output=\"false\"),\n shape_token(shape =[\"dd\"],is_in_output=\"false\")\n\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_36\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). \nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# CHSP^A.CL \nrule_37 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}{2}{3}{4}{5}{6}\",\n \"pattern\": [\n shape_token(shape =[\"X\",\"XX\",\"XXX\",\"XXXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\"^\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"XX\"], is_in_output=\"true\"),\n shape_token(shape =[\"ddd\"],is_in_output=\"false\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"dd\"],is_in_output=\"false\"),\n \n shape_token(shape =[\"ddd\"],is_in_output=\"false\"),\n punctuation_token(token=[\".\"], is_in_output=\"false\"),\n shape_token(shape =[\"dd\"],is_in_output=\"false\")\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_37\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). 
\nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# BAC, ABRN, ABEOW ,\n\nrule_38 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}\",\n \"pattern\": [\n shape_token(shape =[\"X\",\"XX\",\"XXX\",\"XXXX\",\"XXXXX\"], is_in_output=\"true\"),\n \n shape_token(shape =[\"ddd\"],is_in_output=\"false\"),\n punctuation_token(token=[\".\"], is_in_output=\"false\"),\n shape_token(shape =[\"dd\"],is_in_output=\"false\")\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_38\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). \nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# C-C, NS-A, ABC-A,\nrule_39 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}{2}{3}\",\n \"pattern\": [\n shape_token(shape =[\"X\",\"XX\",\"XXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\"-\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\"),\n \n shape_token(shape =[\"ddd\"],is_in_output=\"false\"),\n punctuation_token(token=[\".\"], is_in_output=\"false\"),\n shape_token(shape =[\"dd\"],is_in_output=\"false\")\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_39\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). 
\nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# JW.B, BAC.A, HCAC.U,\n\nrule_40 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}{2}{3}\",\n \"pattern\": [\n shape_token(shape =[\"XX\",\"XXX\",\"XXXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\"),\n \n shape_token(shape =[\"ddd\"],is_in_output=\"false\"),\n punctuation_token(token=[\".\"], is_in_output=\"false\"),\n shape_token(shape =[\"dd\"],is_in_output=\"false\")\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_40\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). \nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# HK.WS, BTX.WS, IMUC.WS, \n\nrule_41 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}{2}{3}\",\n \"pattern\": [\n shape_token(shape =[\"XX\",\"XXX\",\"XXXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"XX\"], is_in_output=\"true\"),\n \n shape_token(shape =[\"ddd\"],is_in_output=\"false\"),\n punctuation_token(token=[\".\"], is_in_output=\"false\"),\n shape_token(shape =[\"dd\"],is_in_output=\"false\")\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_41\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). 
\nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ], [ "# C.WS.A, BAC.WS.A\n\nrule_42 = {\n \"identifier\": \"stock_symbol_rule_us\",\n \"description\": \"a description\",\n \"is_active\": \"true\",\n \"output_format\": \"{1}{2}{3}{4}{5}\",\n \"pattern\": [\n shape_token(shape =[\"X\",\"XXX\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"XX\"], is_in_output=\"true\"),\n punctuation_token(token=[\".\"], is_in_output=\"true\"),\n shape_token(shape =[\"X\"], is_in_output=\"true\"),\n\n shape_token(shape =[\"ddd\"],is_in_output=\"false\"),\n punctuation_token(token=[\".\"], is_in_output=\"false\"),\n shape_token(shape =[\"dd\"],is_in_output=\"false\")\n\n ]\n }\n\nfield_rules = {\n \"rules\": [\n rule_42\n ]\n}\n\nprint \"text:\", d['text']\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\ntele_lst = []\nfor i in results:\n tele_lst.append(''.join((i.values()[1]).split()))\ntele_lst", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , Apple Inc. (AAPL). \nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n" ] ], [ [ "## Test Several Rules", "_____no_output_____" ] ], [ [ "field_rules = {\n \"rules\": [\n rule_01, \n rule_02,\n rule_03,\n rule_04,\n rule_05,\n rule_06,\n rule_07,\n rule_08,\n rule_09,\n rule_10,\n rule_11,\n rule_12,\n rule_13,\n rule_14,\n rule_15,\n rule_16,\n rule_17,\n rule_18,\n rule_19,\n rule_20,\n rule_21,\n rule_22,\n rule_23,\n rule_24,\n rule_25,\n rule_26,\n rule_27,\n rule_28,\n rule_29,\n rule_30,\n rule_31, \n rule_32,\n rule_33,\n rule_34,\n rule_35,\n rule_36,\n rule_37,\n rule_38,\n rule_39,\n rule_40,\n rule_41, \n rule_42\n \n ],\n \"test_text\": d['text'],\n \"test_tokens\": c.extract_tokens_from_crf(c.extract_crftokens(d['text'], lowercase=False))\n}\n\n\n\nresults = c.extract_using_custom_spacy(d, config, field_rules=field_rules)\n\nprint \"text:\", d['text']\n\nprice_lst = []\nfor i in results:\n price_lst.append(''.join((i.values()[1]).split()))\nresults.append(price_lst)\nfield_rules['results']=results\n\ns = json.dumps(field_rules, indent=2)\nprint price_lst\n\nimport codecs\no = codecs.open('stock_ticker.json', 'w')\no.write(s)\no.close()\n\n\n\n", "text: A, BA, \n C^J, BAC, C-C, JW.B, \n BK^C, ABRN, NS-A, \n ADK^A, ABEOW , ABC-A, BAC.A, HK.WS, \n MITT^A, HCAC.U, BTX.WS, , C.WS.A,\n IMUC.WS, \n BAC.WS.A \n CHSP^A.CL \n Alibaba Group Holding Ltd (NYSE:BABA) dealt another , (NASDAQ:AMZN) this week \n $USCR, $TSLA \n common Stock (AAPL) , 
Apple Inc. (AAPL). \nAAPL is looking to, | for AAPQ was 8.31 For the fiscal y | ,AAPW has efficiently invested ,| AAPE comes one wee,,AAPR may refer to: |,AAPl closed at ab|including AAPT news, historical|The bank lowered its AAPY price target to $150, |Earnings estimates for AAPU from thousands of | View the basic AAPO stock chart \nAAPA: Get the latest Apple, AMZA - Free Report, unveiled, AAPD's stock sold off\nGOOGL 919.46 -10.22 -1.10%,AAPL 146.28 0.65 0.45% :\n['BABA', 'AMZN', 'USCR', 'TSLA', 'AAPL', 'AAPL', 'GOOGL', 'AAPL']\n" ], [ "\nprint word_token(token=[\"hello\"])", "{'prefix': '', 'suffix': '', 'capitalization': [], 'part_of_speech': [], 'length': [], 'maximum': '', 'shapes': [], 'token': ['hello'], 'minimum': '', 'numbers': [], 'contain_digit': '', 'is_in_vocabulary': '', 'is_out_of_vocabulary': '', 'is_required': 'true', 'type': 'word', 'is_in_output': 'false'}\n" ] ] ]
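The forty-two shape-token rules above enumerate the ticker layouts one suffix family at a time. For readers who want to see the same search space in one place, here is a minimal Python 3 regex sketch. The `TICKER` pattern and `extract_tickers` helper are illustrative assumptions, not part of the notebook's `extract_using_custom_spacy` API, and like the bare-shape rules (24, 31, 38) it will over-match ordinary all-caps words.

```python
import re

# Hypothetical consolidated pattern: a 1-5 capital root, optionally followed by
# one ^X / -X / .X / .WS style suffix and an optional chained .A / .CL segment.
TICKER = re.compile(
    r"(?:(?<=\$)|(?<=\()|(?<=:)|\b)"             # after $, (, : or a word boundary
    r"([A-Z]{1,5}"                               # root symbol
    r"(?:[.^\-][A-Z]{1,2}(?:\.[A-Z]{1,2})?)?)"   # optional class/warrant suffix
    r"(?=[\s,.)]|$)"                             # must end at a delimiter
)

def extract_tickers(text):
    return [m.group(1) for m in TICKER.finditer(text)]
```

Because the suffix group is greedy, `BAC.WS.A` wins over `BAC` at the same start position, which is roughly what ordering the more specific rules first achieves in the rule cascade above.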
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
ec9a4b5a0e97ff6ea42458a86c934523d1be7983
151,389
ipynb
Jupyter Notebook
AAAI/Learnability/CIN/older/ds4/synthetic_type4_Linear_m_5.ipynb
lnpandey/DL_explore_synth_data
0a5d8b417091897f4c7f358377d5198a155f3f24
[ "MIT" ]
2
2019-08-24T07:20:35.000Z
2020-03-27T08:16:59.000Z
AAAI/Learnability/CIN/older/ds4/synthetic_type4_Linear_m_5.ipynb
lnpandey/DL_explore_synth_data
0a5d8b417091897f4c7f358377d5198a155f3f24
[ "MIT" ]
null
null
null
AAAI/Learnability/CIN/older/ds4/synthetic_type4_Linear_m_5.ipynb
lnpandey/DL_explore_synth_data
0a5d8b417091897f4c7f358377d5198a155f3f24
[ "MIT" ]
3
2019-06-21T09:34:32.000Z
2019-09-19T10:43:07.000Z
117.446858
44,662
0.826196
[ [ [ "import numpy as np\nimport pandas as pd\nfrom matplotlib import pyplot as plt\nfrom tqdm import tqdm\n%matplotlib inline\nfrom torch.utils.data import Dataset, DataLoader\nimport torch\nimport torchvision\n\nimport torch.nn as nn\nimport torch.optim as optim\nfrom torch.nn import functional as F\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\nprint(device)", "cuda\n" ], [ "m = 5", "_____no_output_____" ] ], [ [ "# Generate dataset", "_____no_output_____" ] ], [ [ "np.random.seed(12)\ny = np.random.randint(0,10,5000)\nidx= []\nfor i in range(10):\n print(i,sum(y==i))\n idx.append(y==i)", "0 530\n1 463\n2 494\n3 517\n4 488\n5 497\n6 493\n7 507\n8 492\n9 519\n" ], [ "x = np.zeros((5000,2))", "_____no_output_____" ], [ "np.random.seed(12)\nx[idx[0],:] = np.random.multivariate_normal(mean = [4,6.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[0]))\nx[idx[1],:] = np.random.multivariate_normal(mean = [5.5,6],cov=[[0.01,0],[0,0.01]],size=sum(idx[1]))\nx[idx[2],:] = np.random.multivariate_normal(mean = [4.5,4.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[2]))\nx[idx[3],:] = np.random.multivariate_normal(mean = [3,3.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[3]))\nx[idx[4],:] = np.random.multivariate_normal(mean = [2.5,5.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[4]))\nx[idx[5],:] = np.random.multivariate_normal(mean = [3.5,8],cov=[[0.01,0],[0,0.01]],size=sum(idx[5]))\nx[idx[6],:] = np.random.multivariate_normal(mean = [5.5,8],cov=[[0.01,0],[0,0.01]],size=sum(idx[6]))\nx[idx[7],:] = np.random.multivariate_normal(mean = [7,6.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[7]))\nx[idx[8],:] = np.random.multivariate_normal(mean = [6.5,4.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[8]))\nx[idx[9],:] = np.random.multivariate_normal(mean = [5,3],cov=[[0.01,0],[0,0.01]],size=sum(idx[9]))", "_____no_output_____" ], [ "x[idx[0]][0], x[idx[5]][5] ", "_____no_output_____" ], [ "for i in range(10):\n plt.scatter(x[idx[i],0],x[idx[i],1],label=\"class_\"+str(i))\nplt.legend(loc='center left', bbox_to_anchor=(1, 0.5))", "_____no_output_____" ], [ "bg_idx = [ np.where(idx[3] == True)[0], \n np.where(idx[4] == True)[0], \n np.where(idx[5] == True)[0],\n np.where(idx[6] == True)[0], \n np.where(idx[7] == True)[0], \n np.where(idx[8] == True)[0],\n np.where(idx[9] == True)[0]]\n\nbg_idx = np.concatenate(bg_idx, axis = 0)\nbg_idx.shape", "_____no_output_____" ], [ "np.unique(bg_idx).shape", "_____no_output_____" ], [ "x = x - np.mean(x[bg_idx], axis = 0, keepdims = True)\n", "_____no_output_____" ], [ "np.mean(x[bg_idx], axis = 0, keepdims = True), np.mean(x, axis = 0, keepdims = True)", "_____no_output_____" ], [ "x = x/np.std(x[bg_idx], axis = 0, keepdims = True)", "_____no_output_____" ], [ "np.std(x[bg_idx], axis = 0, keepdims = True), np.std(x, axis = 0, keepdims = True)", "_____no_output_____" ], [ "for i in range(10):\n plt.scatter(x[idx[i],0],x[idx[i],1],label=\"class_\"+str(i))\nplt.legend(loc='center left', bbox_to_anchor=(1, 0.5))", "_____no_output_____" ], [ "foreground_classes = {'class_0','class_1', 'class_2'}\n\nbackground_classes = {'class_3','class_4', 'class_5', 'class_6','class_7', 'class_8', 'class_9'}", "_____no_output_____" ], [ "fg_class = np.random.randint(0,3)\nfg_idx = np.random.randint(0,m)\n\na = []\nfor i in range(m):\n if i == fg_idx:\n b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)\n a.append(x[b])\n print(\"foreground \"+str(fg_class)+\" present at \" + str(fg_idx))\n else:\n bg_class = np.random.randint(3,10)\n b = 
np.random.choice(np.where(idx[bg_class]==True)[0],size=1)\n            a.append(x[b])\n            print(\"background \"+str(bg_class)+\" present at \" + str(i))\na = np.concatenate(a,axis=0)\nprint(a.shape)\n\nprint(fg_class , fg_idx)", "background 5 present at 0\nbackground 6 present at 1\nforeground 2 present at 2\nbackground 5 present at 3\nbackground 3 present at 4\n(5, 2)\n2 2\n" ], [ "np.reshape(a,(2*m,1))", "_____no_output_____" ], [ "desired_num = 2000\nmosaic_list_of_images =[]\nmosaic_label = []\nfore_idx=[]\nfor j in range(desired_num):\n    np.random.seed(j)\n    fg_class = np.random.randint(0,3)\n    fg_idx = np.random.randint(0,m)\n    a = []\n    for i in range(m):\n        if i == fg_idx:\n            b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)\n            a.append(x[b])\n#             print(\"foreground \"+str(fg_class)+\" present at \" + str(fg_idx))\n        else:\n            bg_class = np.random.randint(3,10)\n            b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)\n            a.append(x[b])\n#             print(\"background \"+str(bg_class)+\" present at \" + str(i))\n    a = np.concatenate(a,axis=0)\n    mosaic_list_of_images.append(np.reshape(a,(2*m,1)))\n    mosaic_label.append(fg_class)\n    fore_idx.append(fg_idx)", "_____no_output_____" ], [ "mosaic_list_of_images = np.concatenate(mosaic_list_of_images,axis=1).T\nmosaic_list_of_images.shape", "_____no_output_____" ], [ "mosaic_list_of_images.shape, mosaic_list_of_images[0]", "_____no_output_____" ], [ "for j in range(m):\n    print(mosaic_list_of_images[0][2*j:2*j+2])\n    ", "[-0.47703607  0.6121796 ]\n[-1.30547943 -0.083791  ]\n[ 1.10227457 -0.5565904 ]\n[1.44996512 0.52630897]\n[ 0.18635473 -1.38666417]\n" ], [ "def create_avg_image_from_mosaic_dataset(mosaic_dataset,labels,foreground_index,dataset_number, m):\n    \"\"\"\n    mosaic_dataset : each data point is m 2-d points (one foreground, m-1 background) concatenated into a 2m-vector\n    labels : mosaic_dataset labels\n    foreground_index : list of indexes telling which of the m points is the foreground, so that using this we can take the weighted average\n    dataset_number : will help us to tell what ratio of foreground to be taken. for eg: if it is \"j\" then fg_weight = j/m , bg_weight = (m-j)/((m-1)*m)\n    \"\"\"\n    avg_image_dataset = []\n    cnt = 0\n    counter = np.zeros(m) #np.array([0,0,0,0,0,0,0,0,0])\n    for i in range(len(mosaic_dataset)):\n        img = torch.zeros([2], dtype=torch.float64)\n        np.random.seed(int(dataset_number*10000 + i))\n        give_pref = foreground_index[i] #np.random.randint(0,9)\n        # print(\"outside\", give_pref,foreground_index[i])\n        for j in range(m):\n            if j == give_pref:\n                img = img + mosaic_dataset[i][2*j:2*j+2]*dataset_number/m #2 is data dim\n            else :\n                img = img + mosaic_dataset[i][2*j:2*j+2]*(m-dataset_number)/((m-1)*m)\n\n        if give_pref == foreground_index[i] :\n            # print(\"equal are\", give_pref,foreground_index[i])\n            cnt += 1\n            counter[give_pref] += 1\n        else :\n            counter[give_pref] += 1\n\n        avg_image_dataset.append(img)\n\n    print(\"number of correct averaging happened for dataset \"+str(dataset_number)+\" is \"+str(cnt))    \n    print(\"the averaging are done as \", counter)    \n    return avg_image_dataset , labels  , foreground_index\n    \n    ", "_____no_output_____" ], [ "avg_image_dataset_1 , labels_1, fg_index_1 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 1, m)\n\n\ntest_dataset , labels , fg_index = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[1000:2000], mosaic_label[1000:2000], fore_idx[1000:2000] , m, m)", "number of correct averaging happened for dataset 1 is 1000\nthe averaging are done as  [203. 191. 218. 183. 
205.]\nnumber of correct averaging happened for dataset 5 is 1000\nthe averaging are done as [187. 218. 203. 209. 183.]\n" ], [ "avg_image_dataset_1 = torch.stack(avg_image_dataset_1, axis = 0)\n# avg_image_dataset_1 = (avg - torch.mean(avg, keepdims= True, axis = 0)) / torch.std(avg, keepdims= True, axis = 0)\n# print(torch.mean(avg_image_dataset_1, keepdims= True, axis = 0))\n# print(torch.std(avg_image_dataset_1, keepdims= True, axis = 0))\nprint(\"==\"*40)\n\n\ntest_dataset = torch.stack(test_dataset, axis = 0)\n# test_dataset = (avg - torch.mean(avg, keepdims= True, axis = 0)) / torch.std(avg, keepdims= True, axis = 0)\n# print(torch.mean(test_dataset, keepdims= True, axis = 0))\n# print(torch.std(test_dataset, keepdims= True, axis = 0))\nprint(\"==\"*40)\n", "================================================================================\n================================================================================\n" ], [ "x1 = (avg_image_dataset_1).numpy()\ny1 = np.array(labels_1)\n\nplt.scatter(x1[y1==0,0], x1[y1==0,1], label='class 0')\nplt.scatter(x1[y1==1,0], x1[y1==1,1], label='class 1')\nplt.scatter(x1[y1==2,0], x1[y1==2,1], label='class 2')\nplt.legend()\nplt.title(\"dataset4 CIN with alpha = 1/\"+str(m))", "_____no_output_____" ], [ "x1 = (test_dataset).numpy() / m\ny1 = np.array(labels)\n\nplt.scatter(x1[y1==0,0], x1[y1==0,1], label='class 0')\nplt.scatter(x1[y1==1,0], x1[y1==1,1], label='class 1')\nplt.scatter(x1[y1==2,0], x1[y1==2,1], label='class 2')\nplt.legend()\nplt.title(\"test dataset4\")", "_____no_output_____" ], [ "test_dataset[0:10]/m", "_____no_output_____" ], [ "test_dataset = test_dataset/m\ntest_dataset[0:10]", "_____no_output_____" ], [ "class MosaicDataset(Dataset):\n \"\"\"MosaicDataset dataset.\"\"\"\n\n def __init__(self, mosaic_list_of_images, mosaic_label):\n \"\"\"\n Args:\n csv_file (string): Path to the csv file with annotations.\n root_dir (string): Directory with all the images.\n transform (callable, optional): Optional transform to be applied\n on a sample.\n \"\"\"\n self.mosaic = mosaic_list_of_images\n self.label = mosaic_label\n #self.fore_idx = fore_idx\n \n def __len__(self):\n return len(self.label)\n\n def __getitem__(self, idx):\n return self.mosaic[idx] , self.label[idx] #, self.fore_idx[idx]\n\n", "_____no_output_____" ], [ "avg_image_dataset_1[0].shape\navg_image_dataset_1[0]", "_____no_output_____" ], [ "batch = 200\n\ntraindata_1 = MosaicDataset(avg_image_dataset_1, labels_1 )\ntrainloader_1 = DataLoader( traindata_1 , batch_size= batch ,shuffle=True)\n", "_____no_output_____" ], [ "testdata_1 = MosaicDataset(avg_image_dataset_1, labels_1 )\ntestloader_1 = DataLoader( testdata_1 , batch_size= batch ,shuffle=False)\n", "_____no_output_____" ], [ "testdata_11 = MosaicDataset(test_dataset, labels )\ntestloader_11 = DataLoader( testdata_11 , batch_size= batch ,shuffle=False)", "_____no_output_____" ], [ "class Whatnet(nn.Module):\n def __init__(self):\n super(Whatnet,self).__init__()\n self.linear1 = nn.Linear(2,3)\n # self.linear2 = nn.Linear(50,10)\n # self.linear3 = nn.Linear(10,3)\n\n torch.nn.init.xavier_normal_(self.linear1.weight)\n torch.nn.init.zeros_(self.linear1.bias)\n\n def forward(self,x):\n # x = F.relu(self.linear1(x))\n # x = F.relu(self.linear2(x))\n x = (self.linear1(x))\n\n return x", "_____no_output_____" ], [ "# class Whatnet(nn.Module):\n# def __init__(self):\n# super(Whatnet,self).__init__()\n# self.linear1 = nn.Linear(2,50)\n# self.linear2 = nn.Linear(50,10)\n# self.linear3 = nn.Linear(10,3)\n\n# 
torch.nn.init.xavier_normal_(self.linear1.weight)\n#     torch.nn.init.zeros_(self.linear1.bias)\n#     torch.nn.init.xavier_normal_(self.linear2.weight)\n#     torch.nn.init.zeros_(self.linear2.bias)\n#     torch.nn.init.xavier_normal_(self.linear3.weight)\n#     torch.nn.init.zeros_(self.linear3.bias)\n\n#   def forward(self,x):\n#     x = F.relu(self.linear1(x))\n#     x = F.relu(self.linear2(x))\n#     x = (self.linear3(x))\n\n#     return x", "_____no_output_____" ], [ "def calculate_loss(dataloader,model,criter):\n    model.eval()\n    r_loss = 0\n    with torch.no_grad():\n        for i, data in enumerate(dataloader, 0):\n            inputs, labels = data\n            inputs, labels = inputs.to(\"cuda\"),labels.to(\"cuda\")\n            outputs = model(inputs)\n            loss = criter(outputs, labels)\n            r_loss += loss.item()\n    return r_loss/(i+1)  # i is the last batch index, so average over i+1 batches", "_____no_output_____" ], [ "def test_all(number, testloader,net):\n    correct = 0\n    total = 0\n    out = []\n    pred = []\n    with torch.no_grad():\n        for data in testloader:\n            images, labels = data\n            images, labels = images.to(\"cuda\"),labels.to(\"cuda\")\n            out.append(labels.cpu().numpy())\n            outputs= net(images)\n            _, predicted = torch.max(outputs.data, 1)\n            pred.append(predicted.cpu().numpy())\n            total += labels.size(0)\n            correct += (predicted == labels).sum().item()\n  \n    pred = np.concatenate(pred, axis = 0)\n    out = np.concatenate(out, axis = 0)\n    print(\"unique out: \", np.unique(out), \"unique pred: \", np.unique(pred) )\n    print(\"correct: \", correct, \"total \", total)\n    print('Accuracy of the network on the 1000 test dataset %d: %.2f %%' % (number , 100 * correct / total))", "_____no_output_____" ], [ "def train_all(trainloader, ds_number, testloader_list):\n  \n    print(\"--\"*40)\n    print(\"training on data set \", ds_number)\n  \n    torch.manual_seed(12)\n    net = Whatnet().double()\n    net = net.to(\"cuda\")\n  \n    criterion_net = nn.CrossEntropyLoss()\n    optimizer_net = optim.Adam(net.parameters(), lr=0.001 ) #, momentum=0.9)\n  \n    acti = []\n    loss_curi = []\n    epochs = 1500\n    running_loss = calculate_loss(trainloader,net,criterion_net)\n    loss_curi.append(running_loss)\n    print('epoch: [%d ] loss: %.3f' %(0,running_loss)) \n    for epoch in range(epochs): # loop over the dataset multiple times\n        ep_lossi = []\n\n        running_loss = 0.0\n        net.train()\n        for i, data in enumerate(trainloader, 0):\n            # get the inputs\n            inputs, labels = data\n            inputs, labels = inputs.to(\"cuda\"),labels.to(\"cuda\")\n\n            # zero the parameter gradients\n            optimizer_net.zero_grad()\n\n            # forward + backward + optimize\n            outputs = net(inputs)\n            loss = criterion_net(outputs, labels)\n            # print statistics\n            running_loss += loss.item()\n            loss.backward()\n            optimizer_net.step()\n\n        running_loss = calculate_loss(trainloader,net,criterion_net)\n        if(epoch%200 == 0):\n            print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss)) \n        loss_curi.append(running_loss)   #loss per epoch\n        if running_loss<=0.05:\n            print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))\n            break\n\n    print('Finished Training')\n  \n    correct = 0\n    total = 0\n    with torch.no_grad():\n        for data in trainloader:\n            images, labels = data\n            images, labels = images.to(\"cuda\"), labels.to(\"cuda\")\n            outputs = net(images)\n            _, predicted = torch.max(outputs.data, 1)\n            total += labels.size(0)\n            correct += (predicted == labels).sum().item()\n\n    print('Accuracy of the network on the 1000 train images: %.2f %%' % ( 100 * correct / total))\n  \n    for i, j in enumerate(testloader_list):\n        test_all(i+1, j,net)\n  \n    print(\"--\"*40)\n  \n    return loss_curi\n    ", "_____no_output_____" ], [ "train_loss_all=[]\n\ntestloader_list= [ testloader_1, 
testloader_11]", "_____no_output_____" ], [ "train_loss_all.append(train_all(trainloader_1, 1, testloader_list))", "--------------------------------------------------------------------------------\ntraining on data set 1\nepoch: [0 ] loss: 1.369\nepoch: [1] loss: 1.368\nepoch: [201] loss: 1.310\nepoch: [401] loss: 1.308\nepoch: [601] loss: 1.308\nepoch: [801] loss: 1.308\nepoch: [1001] loss: 1.308\nepoch: [1201] loss: 1.308\nepoch: [1401] loss: 1.308\nFinished Training\nAccuracy of the network on the 1000 train images: 44.10 %\nunique out: [0 1 2] unique pred: [0 1 2]\ncorrect: 441 total 1000\nAccuracy of the network on the 1000 test dataset 1: 44.10 %\nunique out: [0 1 2] unique pred: [0 1 2]\ncorrect: 712 total 1000\nAccuracy of the network on the 1000 test dataset 2: 71.20 %\n--------------------------------------------------------------------------------\n" ], [ "%matplotlib inline", "_____no_output_____" ], [ "for i,j in enumerate(train_loss_all):\n plt.plot(j,label =\"dataset \"+str(i+1))\n \n\nplt.xlabel(\"Epochs\")\nplt.ylabel(\"Training_loss\")\n\nplt.legend(loc='center left', bbox_to_anchor=(1, 0.5))", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
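`create_avg_image_from_mosaic_dataset` mixes each 2m-vector point by point: the foreground block is scaled by alpha = j/m and each of the m-1 background blocks by (m-j)/((m-1)*m), so the weights always sum to 1. The same computation can be written as one vectorized step; this is a sketch for clarity, with `mix_dataset` an illustrative name and `mosaic_list_of_images`, `fore_idx`, `m` assumed as defined above.

```python
import numpy as np

def mix_dataset(mosaic, fg_idx, j, m):
    # mosaic: (n, 2m) array; fg_idx: (n,) int array of foreground positions
    n = mosaic.shape[0]
    blocks = mosaic.reshape(n, m, 2)              # m blocks of 2-d points per row
    w = np.full((n, m), (m - j) / ((m - 1) * m))  # background weight per block
    w[np.arange(n), fg_idx] = j / m               # foreground weight alpha = j/m
    return (blocks * w[:, :, None]).sum(axis=1)   # weighted sum -> (n, 2)
```

Called as `mix_dataset(mosaic_list_of_images[:1000], np.array(fore_idx[:1000]), 1, m)` it should reproduce `avg_image_dataset_1` up to floating-point noise.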
[ "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ec9a51c5e2265b13b7bdf567ccad52f2aaf789b8
62,215
ipynb
Jupyter Notebook
NMT/nmt_Final.ipynb
ranadigvijay991/MachineLearning2
c0f96cdac342ee059dd1990491a836bc2820062c
[ "MIT" ]
null
null
null
NMT/nmt_Final.ipynb
ranadigvijay991/MachineLearning2
c0f96cdac342ee059dd1990491a836bc2820062c
[ "MIT" ]
null
null
null
NMT/nmt_Final.ipynb
ranadigvijay991/MachineLearning2
c0f96cdac342ee059dd1990491a836bc2820062c
[ "MIT" ]
null
null
null
36.004051
236
0.510118
[ [ [ "# This Python 3 environment comes with many helpful analytics libraries installed\n# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python\n# For example, here's several helpful packages to load in \n\nimport numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\n\n# Input data files are available in the \"../input/\" directory.\n# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory\n\nimport os\nimport string\nfrom string import digits\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport re\n\nfrom sklearn.utils import shuffle\nfrom sklearn.model_selection import train_test_split\nfrom keras.layers import Input, LSTM, Embedding, Dense\nfrom keras.models import Model\n\n", "_____no_output_____" ], [ "lines=pd.read_csv(\"Hindi_English_Truncated_Corpus.csv\",encoding='utf-8')", "_____no_output_____" ], [ "lines['source'].value_counts()", "_____no_output_____" ], [ "lines=lines[lines['source']=='indic2012']", "_____no_output_____" ], [ "lines.head(20)", "_____no_output_____" ], [ "pd.isnull(lines).sum()", "_____no_output_____" ], [ "lines = lines.dropna()", "_____no_output_____" ], [ "lines=lines[~pd.isnull(lines['english_sentence'])]", "_____no_output_____" ], [ "lines.drop_duplicates(inplace=True)", "_____no_output_____" ], [ "lines=lines.sample(n=25000,random_state=42)\nlines.shape", "_____no_output_____" ], [ "# Lowercase all characters\nlines['english_sentence']=lines['english_sentence'].apply(lambda x: x.lower())\nlines['hindi_sentence']=lines['hindi_sentence'].apply(lambda x: x.lower())", "_____no_output_____" ], [ "# Remove quotes\nlines['english_sentence']=lines['english_sentence'].apply(lambda x: re.sub(\"'\", '', x))\nlines['hindi_sentence']=lines['hindi_sentence'].apply(lambda x: re.sub(\"'\", '', x))", "_____no_output_____" ], [ "exclude = set(string.punctuation) # Set of all special characters\n# Remove all the special characters\nlines['english_sentence']=lines['english_sentence'].apply(lambda x: ''.join(ch for ch in x if ch not in exclude))\nlines['hindi_sentence']=lines['hindi_sentence'].apply(lambda x: ''.join(ch for ch in x if ch not in exclude))", "_____no_output_____" ], [ "# Remove all numbers from text\nremove_digits = str.maketrans('', '', digits)\nlines['english_sentence']=lines['english_sentence'].apply(lambda x: x.translate(remove_digits))\nlines['hindi_sentence']=lines['hindi_sentence'].apply(lambda x: x.translate(remove_digits))\n\nlines['hindi_sentence'] = lines['hindi_sentence'].apply(lambda x: re.sub(\"[२३०८१५७९४६]\", \"\", x))\n\n# Remove extra spaces\nlines['english_sentence']=lines['english_sentence'].apply(lambda x: x.strip())\nlines['hindi_sentence']=lines['hindi_sentence'].apply(lambda x: x.strip())\nlines['english_sentence']=lines['english_sentence'].apply(lambda x: re.sub(\" +\", \" \", x))\nlines['hindi_sentence']=lines['hindi_sentence'].apply(lambda x: re.sub(\" +\", \" \", x))\n", "_____no_output_____" ], [ "# Add start and end tokens to target sequences\nlines['hindi_sentence'] = lines['hindi_sentence'].apply(lambda x : 'START_ '+ x + ' _END')", "_____no_output_____" ], [ "lines.head()", "_____no_output_____" ], [ "### Get English and Hindi Vocabulary\nall_eng_words=set()\nfor eng in lines['english_sentence']:\n for word in eng.split():\n if word not in all_eng_words:\n all_eng_words.add(word)\n\nall_hindi_words=set()\nfor hin in lines['hindi_sentence']:\n for word in hin.split():\n if word not 
in all_hindi_words:\n all_hindi_words.add(word)", "_____no_output_____" ], [ "len(all_eng_words)", "_____no_output_____" ], [ "len(all_hindi_words)", "_____no_output_____" ], [ "lines['length_eng_sentence']=lines['english_sentence'].apply(lambda x:len(x.split(\" \")))\nlines['length_hin_sentence']=lines['hindi_sentence'].apply(lambda x:len(x.split(\" \")))", "_____no_output_____" ], [ "lines.head()", "_____no_output_____" ], [ "lines[lines['length_eng_sentence']>30].shape", "_____no_output_____" ], [ "lines=lines[lines['length_eng_sentence']<=20]\nlines=lines[lines['length_hin_sentence']<=20]", "_____no_output_____" ], [ "lines.shape", "_____no_output_____" ], [ "print(\"maximum length of Hindi Sentence \",max(lines['length_hin_sentence']))\nprint(\"maximum length of English Sentence \",max(lines['length_eng_sentence']))", "maximum length of Hindi Sentence 20\nmaximum length of English Sentence 20\n" ], [ "max_length_src=max(lines['length_hin_sentence'])\nmax_length_tar=max(lines['length_eng_sentence'])", "_____no_output_____" ], [ "input_words = sorted(list(all_eng_words))\ntarget_words = sorted(list(all_hindi_words))\nnum_encoder_tokens = len(all_eng_words)\nnum_decoder_tokens = len(all_hindi_words)\nnum_encoder_tokens, num_decoder_tokens", "_____no_output_____" ], [ "num_decoder_tokens += 1 #for zero padding\n", "_____no_output_____" ], [ "input_token_index = dict([(word, i+1) for i, word in enumerate(input_words)])\ntarget_token_index = dict([(word, i+1) for i, word in enumerate(target_words)])", "_____no_output_____" ], [ "reverse_input_char_index = dict((i, word) for word, i in input_token_index.items())\nreverse_target_char_index = dict((i, word) for word, i in target_token_index.items())", "_____no_output_____" ], [ "lines = shuffle(lines)\nlines.head(10)", "_____no_output_____" ], [ "X, y = lines['english_sentence'], lines['hindi_sentence']\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2,random_state=42)\nX_train.shape, X_test.shape", "_____no_output_____" ], [ "X_train.to_pickle('X_train.pkl')\nX_test.to_pickle('X_test.pkl')\n", "_____no_output_____" ], [ "def generate_batch(X = X_train, y = y_train, batch_size = 128):\n ''' Generate a batch of data '''\n while True:\n for j in range(0, len(X), batch_size):\n encoder_input_data = np.zeros((batch_size, max_length_src),dtype='float32')\n decoder_input_data = np.zeros((batch_size, max_length_tar),dtype='float32')\n decoder_target_data = np.zeros((batch_size, max_length_tar, num_decoder_tokens),dtype='float32')\n for i, (input_text, target_text) in enumerate(zip(X[j:j+batch_size], y[j:j+batch_size])):\n for t, word in enumerate(input_text.split()):\n encoder_input_data[i, t] = input_token_index[word] # encoder input seq\n for t, word in enumerate(target_text.split()):\n if t<len(target_text.split())-1:\n decoder_input_data[i, t] = target_token_index[word] # decoder input seq\n if t>0:\n # decoder target sequence (one hot encoded)\n # does not include the START_ token\n # Offset by one timestep\n decoder_target_data[i, t - 1, target_token_index[word]] = 1.\n yield([encoder_input_data, decoder_input_data], decoder_target_data)", "_____no_output_____" ] ], [ [ "### Encoder-Decoder Architecture", "_____no_output_____" ] ], [ [ "latent_dim=300", "_____no_output_____" ], [ "# Encoder\nencoder_inputs = Input(shape=(None,))\nenc_emb = Embedding(num_encoder_tokens, latent_dim, mask_zero = True)(encoder_inputs)\nencoder_lstm = LSTM(latent_dim, return_state=True)\nencoder_outputs, state_h, state_c = 
encoder_lstm(enc_emb)\n# We discard `encoder_outputs` and only keep the states.\nencoder_states = [state_h, state_c]", "_____no_output_____" ], [ "# Set up the decoder, using `encoder_states` as initial state.\ndecoder_inputs = Input(shape=(None,))\ndec_emb_layer = Embedding(num_decoder_tokens, latent_dim, mask_zero = True)\ndec_emb = dec_emb_layer(decoder_inputs)\n# We set up our decoder to return full output sequences,\n# and to return internal states as well. We don't use the\n# return states in the training model, but we will use them in inference.\ndecoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)\ndecoder_outputs, _, _ = decoder_lstm(dec_emb,\n initial_state=encoder_states)\ndecoder_dense = Dense(num_decoder_tokens, activation='softmax')\ndecoder_outputs = decoder_dense(decoder_outputs)\n\n# Define the model that will turn\n# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`\nmodel = Model([encoder_inputs, decoder_inputs], decoder_outputs)", "_____no_output_____" ], [ "model.compile(optimizer='rmsprop', loss='categorical_crossentropy')", "_____no_output_____" ], [ "model.summary()", "Model: \"model\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) [(None, None)] 0 \n__________________________________________________________________________________________________\ninput_2 (InputLayer) [(None, None)] 0 \n__________________________________________________________________________________________________\nembedding (Embedding) (None, None, 300) 9134100 input_1[0][0] \n__________________________________________________________________________________________________\nembedding_1 (Embedding) (None, None, 300) 6152100 input_2[0][0] \n__________________________________________________________________________________________________\nlstm (LSTM) [(None, 300), (None, 721200 embedding[0][0] \n__________________________________________________________________________________________________\nlstm_1 (LSTM) [(None, None, 300), 721200 embedding_1[0][0] \n lstm[0][1] \n lstm[0][2] \n__________________________________________________________________________________________________\ndense (Dense) (None, None, 20507) 6172607 lstm_1[0][0] \n==================================================================================================\nTotal params: 22,901,207\nTrainable params: 22,901,207\nNon-trainable params: 0\n__________________________________________________________________________________________________\n" ], [ "train_samples = len(X_train)\nval_samples = len(X_test)\nbatch_size = 128\nepochs = 50", "_____no_output_____" ], [ "model.fit_generator(generator = generate_batch(X_train, y_train, batch_size = batch_size),\n steps_per_epoch = train_samples//batch_size,\n epochs=epochs,\n validation_data = generate_batch(X_test, y_test, batch_size = batch_size),\n validation_steps = val_samples//batch_size)\n\n", "C:\\Users\\Digvijay\\anaconda3\\lib\\site-packages\\keras\\engine\\training.py:1972: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. 
Please use `Model.fit`, which supports generators.\n warnings.warn('`Model.fit_generator` is deprecated and '\n" ], [ "model.save_weights('nmt_weights.h5')", "_____no_output_____" ], [ "# Encode the input sequence to get the \"thought vectors\"\nencoder_model = Model(encoder_inputs, encoder_states)\n\n# Decoder setup\n# Below tensors will hold the states of the previous time step\ndecoder_state_input_h = Input(shape=(latent_dim,))\ndecoder_state_input_c = Input(shape=(latent_dim,))\ndecoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]\n\ndec_emb2= dec_emb_layer(decoder_inputs) # Get the embeddings of the decoder sequence\n\n# To predict the next word in the sequence, set the initial states to the states from the previous time step\ndecoder_outputs2, state_h2, state_c2 = decoder_lstm(dec_emb2, initial_state=decoder_states_inputs)\ndecoder_states2 = [state_h2, state_c2]\ndecoder_outputs2 = decoder_dense(decoder_outputs2) # A dense softmax layer to generate prob dist. over the target vocabulary\n\n# Final decoder model\ndecoder_model = Model(\n [decoder_inputs] + decoder_states_inputs,\n [decoder_outputs2] + decoder_states2)\n", "_____no_output_____" ], [ "def decode_sequence(input_seq):\n # Encode the input as state vectors.\n states_value = encoder_model.predict(input_seq)\n # Generate empty target sequence of length 1.\n target_seq = np.zeros((1,1))\n # Populate the first character of target sequence with the start character.\n target_seq[0, 0] = target_token_index['START_']\n\n # Sampling loop for a batch of sequences\n # (to simplify, here we assume a batch of size 1).\n stop_condition = False\n decoded_sentence = ''\n while not stop_condition:\n output_tokens, h, c = decoder_model.predict([target_seq] + states_value)\n\n # Sample a token\n sampled_token_index = np.argmax(output_tokens[0, -1, :])\n sampled_char = reverse_target_char_index[sampled_token_index]\n decoded_sentence += ' '+sampled_char\n\n # Exit condition: either hit max length\n # or find stop character.\n if (sampled_char == '_END' or\n len(decoded_sentence) > 50):\n stop_condition = True\n\n # Update the target sequence (of length 1).\n target_seq = np.zeros((1,1))\n target_seq[0, 0] = sampled_token_index\n\n # Update states\n states_value = [h, c]\n\n return decoded_sentence", "_____no_output_____" ], [ "train_gen = generate_batch(X_train, y_train, batch_size = 1)\nk=-1", "_____no_output_____" ], [ "k+=1\n(input_seq, actual_output), _ = next(train_gen)\ndecoded_sentence = decode_sequence(input_seq)\nprint('Input English sentence:', X_train[k:k+1].values[0])\nprint('Actual Hindi Translation:', y_train[k:k+1].values[0][6:-4])\nprint('Predicted Hindi Translation:', decoded_sentence[:-4])", "_____no_output_____" ], [ "a = y_train[k:k+1].values[0][6:-4]\nb = decoded_sentence[:-4]\nfrom nltk.translate.bleu_score import sentence_bleu\n# sentence_bleu expects a list of tokenized reference sentences and a tokenized hypothesis\nscore = sentence_bleu([a.split()], b.split())\nprint('Bleu score:', '%.3f'%score)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
ec9a6d69b99afa2110753554e1d575b1aedc99d9
6,189
ipynb
Jupyter Notebook
labs/Lab05S.ipynb
taotangtt/sta-663-2018
67dac909477f81d83ebe61e0753de2328af1be9c
[ "BSD-3-Clause" ]
72
2018-01-20T20:50:22.000Z
2022-02-27T23:24:21.000Z
labs/Lab05S.ipynb
taotangtt/sta-663-2018
67dac909477f81d83ebe61e0753de2328af1be9c
[ "BSD-3-Clause" ]
1
2020-02-03T13:43:46.000Z
2020-02-03T13:43:46.000Z
labs/Lab05S.ipynb
taotangtt/sta-663-2018
67dac909477f81d83ebe61e0753de2328af1be9c
[ "BSD-3-Clause" ]
64
2018-01-12T17:13:14.000Z
2022-03-14T20:22:46.000Z
26.792208
288
0.490225
[ [ [ "# LabX1: Supplementary Practice Problems\n\nThese are similar to programming problems you may encounter in the mid-terms. They are not graded but we will review them in lab sessions.", "_____no_output_____" ], [ "**1**. (10 points) The logistic map is defined by the following simple function\n\n$$\nf(x) = rx(1-x)\n$$\n\nFor $x_0 = 0.1$ and $r = 4.0$, store the first 10 values of the iterated logistic map $x_{i+1} = rx_i(1-x_i)$ in a list. The first value in the list should be $x_0$.", "_____no_output_____" ], [ "**2**. (10 points) Write a function to find the greatest common divisor (GCD) of 2 numbers using Euclid's algorithm.:\n\n\\begin{align}\n\\gcd(a,0) &= a \\\\\n\\gcd(a, b) &= \\gcd(b, a \\mod b)\n\\end{align}\n\nFind the GCD of 5797 and 190978. \n\nNow write a function to find the GCD given a collection of numbers.\n\nFind the GCD of (24, 48, 60, 120, 8).", "_____no_output_____" ], [ "**3**. (10 points) Find the least squares linear solution to the following data\n\n```\ny = [1,2,3,4]\nx1 = [1,2,3,4]\nx2 = [2,3,4,5]\n```\n\nThat is, find the \"best\" intercept and slope for the variables `x1` and `x2`.", "_____no_output_____" ], [ "**4**. (10 points) Read the `mtcars` data frame from R to a `pandas` DataFrame. Find the mean `wt` and `mpg` for all cars grouped by the number of `gear`s.", "_____no_output_____" ], [ "**5**. (10 points) Read the `iris` data frame from R to a `pandas` DataFrame. Make a `seaborn` plot showing a linear regression of `Petal.Length` (y) against `Sepal.Length` (x). Make a separate regression line for each `Species`.", "_____no_output_____" ], [ "**6**. (10 points) Write a function that can flatten a nested list of arbitrary depth. Check that\n\n```python\nflatten([1,[2,3],[4,[5,[6,7],8],9],10,[11,12]])\n```\n\nreturns\n\n```python\n[1,2,3,4,5,6,7,8,9,10,11,12]\n```\n\nFor simplicity, assume that the only data structure you will encounter is a list. You can check if an item is a list by using \n\n```python\nisinstance(item, list)\n```", "_____no_output_____" ], [ "**7**. (10 points) Create the following table\n\n```python\narray([[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [ 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [ 1, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0],\n [ 1, 3, 3, 1, 0, 0, 0, 0, 0, 0, 0],\n [ 1, 4, 6, 4, 1, 0, 0, 0, 0, 0, 0],\n [ 1, 5, 10, 10, 5, 1, 0, 0, 0, 0, 0],\n [ 1, 6, 15, 20, 15, 6, 1, 0, 0, 0, 0],\n [ 1, 7, 21, 35, 35, 21, 7, 1, 0, 0, 0],\n [ 1, 8, 28, 56, 70, 56, 28, 8, 1, 0, 0],\n [ 1, 9, 36, 84, 126, 126, 84, 36, 9, 1, 0],\n [ 1, 10, 45, 120, 210, 252, 210, 120, 45, 10, 1]])\n```\n\nStart with the first row\n\n```\n[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n```\n\nand build the subsequent rows using a simple rule that only depends on the previous row.", "_____no_output_____" ], [ "**8**. (10 points) Read the following data sets into DataFrames. \n\n- url1 = \"https://raw.github.com/vincentarelbundock/Rdatasets/master/csv/DAAG/hills.csv\"\n- url2 = \"https://raw.github.com/vincentarelbundock/Rdatasets/master/csv/DAAG/hills2000.csv\"\n\nCreate a new DataFraem only containing the names present in both DataFrames. Drop the `timef` column and have a single column for `dist` , `climb` and `time` that shows the average value of the two DataFrames. The final DtataFrame will thus have 4 columns (name, dist, climb, time).", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
ec9a872f0a308a289aeb8e6f2fcae57c0eaf10fc
172,334
ipynb
Jupyter Notebook
examples/example-01.ipynb
eng-tools/pysra
aa4317eed85d7829fd862f85ec0f23fcc7723408
[ "MIT" ]
2
2019-01-11T08:13:05.000Z
2019-10-31T23:23:55.000Z
examples/example-01.ipynb
eng-tools/pysra
aa4317eed85d7829fd862f85ec0f23fcc7723408
[ "MIT" ]
null
null
null
examples/example-01.ipynb
eng-tools/pysra
aa4317eed85d7829fd862f85ec0f23fcc7723408
[ "MIT" ]
1
2019-01-11T08:13:06.000Z
2019-01-11T08:13:06.000Z
476.060773
46,324
0.947404
[ [ [ "# Example 1 : Time series SRA\n\nTime series analysis to compute surface response spectrum and site \namplification functions.", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nimport numpy as np\n\nimport pysra\n\n%matplotlib inline", "_____no_output_____" ], [ "# Increased figure sizes\nplt.rcParams['figure.dpi'] = 120", "_____no_output_____" ] ], [ [ "## Load time series data", "_____no_output_____" ] ], [ [ "fname = 'data/NIS090.AT2'\nwith open(fname) as fp:\n next(fp)\n description = next(fp).strip()\n next(fp)\n parts = next(fp).split()\n time_step = float(parts[1])\n\n accels = [float(p) for l in fp for p in l.split()]\n\n ts = pysra.motion.TimeSeriesMotion(\n fname, description, time_step, accels)", "_____no_output_____" ], [ "ts.accels", "_____no_output_____" ] ], [ [ "There are a few supported file formats. AT2 files can be loaded as follows:", "_____no_output_____" ] ], [ [ "ts = pysra.motion.TimeSeriesMotion.load_at2_file(fname)\nts.accels", "_____no_output_____" ], [ "fig, ax = plt.subplots()\nax.plot(ts.times, ts.accels)\nax.set(xlabel='Time (sec)', ylabel='Accel (g)')\nfig.tight_layout();", "_____no_output_____" ] ], [ [ "## Create site profile\n\nThis is about the simplest profile that we can create. Linear-elastic soil and rock.", "_____no_output_____" ] ], [ [ "profile = pysra.site.Profile([\n pysra.site.Layer(\n pysra.site.SoilType(\n 'Soil', 18., None, 0.05\n ),\n 30, 400\n ),\n pysra.site.Layer(\n pysra.site.SoilType(\n 'Rock', 24., None, 0.01\n ),\n 0, 1200\n ),\n])", "_____no_output_____" ] ], [ [ "## Create the site response calculator", "_____no_output_____" ] ], [ [ "calc = pysra.propagation.LinearElasticCalculator()", "_____no_output_____" ] ], [ [ "## Specify the output", "_____no_output_____" ] ], [ [ "freqs = np.logspace(-1, 2, num=500)\n\noutputs = pysra.output.OutputCollection([\n pysra.output.ResponseSpectrumOutput(\n # Frequency\n freqs,\n # Location of the output\n pysra.output.OutputLocation('outcrop', index=0),\n # Damping\n 0.05),\n pysra.output.ResponseSpectrumRatioOutput(\n # Frequency\n freqs,\n # Location in (denominator),\n pysra.output.OutputLocation('outcrop', index=-1),\n # Location out (numerator)\n pysra.output.OutputLocation('outcrop', index=0),\n # Damping\n 0.05),\n pysra.output.FourierAmplitudeSpectrumOutput(\n # Frequency\n freqs,\n # Location of the output\n pysra.output.OutputLocation('outcrop', index=0),\n # Bandwidth for Konno-Omachi smoothing window\n ko_bandwidth=30)\n])", "_____no_output_____" ] ], [ [ "## Perform the calculation", "_____no_output_____" ], [ "Compute the response of the site, and store the state within the calculation object. Nothing is provided.", "_____no_output_____" ] ], [ [ "calc(ts, profile, profile.location('outcrop', index=-1))", "_____no_output_____" ] ], [ [ "Calculate all of the outputs from the calculation object.", "_____no_output_____" ] ], [ [ "outputs(calc)", "_____no_output_____" ] ], [ [ "## Plot the outputs\n\nCreate a few plots of the output.", "_____no_output_____" ] ], [ [ "for o in outputs:\n fig, ax = plt.subplots()\n ax.plot(o.refs, o.values)\n ax.set(xlabel=o.xlabel, xscale='log', ylabel=o.ylabel)\n fig.tight_layout();", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
ec9a90fa157d69e035cb6506f372419aa104f8b7
12,903
ipynb
Jupyter Notebook
examples_and_tutorials/tutorials/semantic_parser_onboarding.ipynb
CowherdChris/droidlet
8d965c1ebc38eceb6f8083c52b1146c1bc17d5e1
[ "MIT" ]
null
null
null
examples_and_tutorials/tutorials/semantic_parser_onboarding.ipynb
CowherdChris/droidlet
8d965c1ebc38eceb6f8083c52b1146c1bc17d5e1
[ "MIT" ]
null
null
null
examples_and_tutorials/tutorials/semantic_parser_onboarding.ipynb
CowherdChris/droidlet
8d965c1ebc38eceb6f8083c52b1146c1bc17d5e1
[ "MIT" ]
null
null
null
32.097015
591
0.627296
[ [ [ "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/facebookresearch/droidlet/blob/master/tutorials/semantic_parser_onboarding.ipynb)", "_____no_output_____" ], [ "# Semantic Parser Onboarding\n\nThe **semantic parser** is a seq-to-seq model built on the Huggingface Transformers library. The input to the parser is a chat command, eg. \"build a red cube\". The output is a linearized parse tree (see [Action Dictionary Spec Doc](https://github.com/facebookresearch/droidlet/blob/main/base_agent/documents/Action_Dictionary_Spec.md) for the grammar specification).\n\nThe encoder uses a pretrained DistilBERT model, followed by a highway transformation. For the default model, encoder parameters are frozen during training. The decoder consists of a 6-layer Transformer, and has a **Language Modeling** head, **span** beginning and span end heads, and **text span** beginning and end heads. The Language Modeling head predicts the next node in the linearized tree. The span heads predict the span range, which provides the value for the span node. For more details, see the [Craftassist Paper](https://www.aclweb.org/anthology/2020.acl-main.427.pdf).\n\nThis tutorial covers the end-to-end process of how to train a semantic parser model and use it in the CraftAssist agent:\n\n* Generating and preparing datasets\n* Training models\n* Evaluating models\n* Using models in the agent\n", "_____no_output_____" ], [ "## Set Up\n\n### Downloading Pre-Trained Models and Datasets\n\nWhen you run the CraftAssist agent for the first time, the pre-trained models and data files required by the project are downloaded automatically from S3.\n\n```\ncd ~/minecraft/craftassist\npython ./agent/craftassist_agent.py\n```\n\nYou can also do this manually:\n\n```\ncd ~/minecraft\n./tools/data_scripts/try_download.sh\n```\n\nThis script checks your local paths `craftassist/agent/models` and `craftassist/agent/datasets` for updates, and downloads the files from S3 if your local files are missing or outdated (optional).\n\n### Conda Env\n\nYou may need to upgrade/downgrade your pytorch and CUDA versions based on your GPU driver.\n\nFor a list of pytorch and CUDA compatible versions, see: https://pytorch.org/get-started/previous-versions/", "_____no_output_____" ], [ "## Datasets\n\nThe datasets we use to train the semantic parsing models consist of:\n* **Templated**: This file has 800K dialogue, action dictionary pairs generated using our generation script.\n * **Templated Modify**: This file has 100K dialogue, action dictionary pairs generated in the same way as templated.txt, except covering modify type commands, eg. \"make this hole larger\".\n* **Annotated**: This file contains 7k dialogue, action dictionary pairs. These are human labelled examples obtained from crowd sourced tasks and in game interactions.\n\nSee the CraftAssist paper for more information on how datasets are collected.", "_____no_output_____" ], [ "We provide all the dialogue datasets we use in the CraftAssist project in a public S3 folder: \nhttps://craftassist.s3-us-west-2.amazonaws.com/pubr/dialogue_data.tar.gz\n\nIn addition to the datasets used to train the model, this folder also contains greetings and short commands that the agent queries during gameplay.", "_____no_output_____" ], [ "### Generating Datasets\n\nThis section describes how to use our tools to generate and process training data.\n\nTo generate some templated data to train the model on, run ``generate_dialogue.py``. 
This script generates language commands and their corresponding logical forms using heuristic rules and publicly available dialogue datasets. \n\nProvide the number of examples you want to generate, eg. for 500K examples:", "_____no_output_____" ] ], [ [ "! cd ~/minecraft/base_agent/ttad/generation_dialogues\n! python generate_dialogue.py -n 500000 > generated_dialogues.txt", "_____no_output_____" ] ], [ [ "This creates a text file. We next pre-process the data into the format required by the training script:", "_____no_output_____" ] ], [ [ "! cd ../ttad_transformer_model/\n! python ~/droidlet/tools/nsp_scripts/data_processing_scripts/preprocess_templated.py \\\n--raw_data_path ../generation_dialogues/generated_dialogues.txt \\\n--output_path [OUTPUT_PATH (file must be named templated.txt)]", "_____no_output_____" ] ], [ [ "The format of each row is \n```\n[TEXT]|[ACTION DICTIONARY]\n```\n\nTo create train/test/valid splits of the data, run\n", "_____no_output_____" ] ], [ [ "! python ~/droidlet/tools/nsp_scripts/data_processing_scripts/create_annotated_split.py \\\n--raw_data_path [PATH_TO_DATA_DIR] \\\n--output_path [PATH_TO_SPLIT_FOLDERS] \\\n--filename \"templated.txt\" \\\n--split_ratio \"0.7:0.2:0.1\"\n", "_____no_output_____" ] ], [ [ "To create a split of annotated data too, simply run the above, but with filename \"annotated.txt\".", "_____no_output_____" ], [ "## Training Models", "_____no_output_____" ], [ "We are now ready to train the model with", "_____no_output_____" ] ], [ [ "! cd ~/minecraft\n! python base_agent/ttad/ttad_transformer_model/train_model.py \\\n--data_dir craftassist/agent/models/ttad_bert_updated/annotated_data/ \\\n--dtype_samples '[[\"templated\", 0.35], [\"templated_modify\", 0.05], [\"annotated\", 0.6]]' \\\n--tree_voc_file craftassist/agent/models/ttad_bert_updated/models/caip_test_model_tree.json \\\n--output_dir $CHECKPOINT_PATH", "_____no_output_____" ] ], [ [ "Feel free to experiment with the model parameters. Note that ``dtype_samples`` is the sampling proportions of the different data types. ``templated`` is generated using the ``generate_dialogue`` script as described above, whereas ``annotated`` is obtained from human labellers.\n\nWith a single NVIDIA Quadro GP100 GPU, one training epoch typically takes 30 minutes.", "_____no_output_____" ], [ "The models and tree vocabulary files are saved under ``$CHECKPOINT_PATH``, along with a log that contains training and validation accuracies after every epoch. Once you're done, you can choose which epoch you want the parameters for, and use that model.", "_____no_output_____" ], [ "You can take the params of the best model", "_____no_output_____" ] ], [ [ "! cp $PATH_TO_BEST_CHECKPOINT_MODEL craftassist/agent/models/caip_test_model.pth", "_____no_output_____" ] ], [ [ "## Testing Models\n\nDuring training, validation accuracy after every epoch is calculated and logged. You can access the log file in the output directory, where the checkpointed models are also saved.", "_____no_output_____" ], [ "You can test the model using our inference script:\n\n", "_____no_output_____" ] ], [ [ "! 
python3 -i ~/droidlet/tools/nsp_scripts/data_processing_scripts/test_model_script.py\n>>> get_beam_tree(\"build a house\")", "_____no_output_____" ] ], [ [ "This will output the logical form for this command, i.e.", "_____no_output_____" ] ], [ [ "from pprint import pprint\n\npprint({'dialogue_type': 'HUMAN_GIVE_COMMAND', 'action_sequence': [{'action_type': 'BUILD', 'schematic': {'has_name': [0, [2, 2]], 'text_span': [0, [2, 2]]}}]})", "_____no_output_____" ] ], [ [ "To calculate accuracy on a test dataset, eg. annotated", "_____no_output_____" ] ], [ [ ">>> model_trainer = ModelTrainer(args)\n>>> full_tree_voc = (full_tree, tree_i2w)\n>>> model_trainer.eval_model_on_dataset(encoder_decoder, \"annotated\", full_tree_voc, tokenizer)", "_____no_output_____" ] ], [ [ "You can now use this model to run the agents. Some command line params to note:\n\n`--dev`: Disables automatic model/dataset downloads.\n\n`--ground_truth_data_dir`: Path to folder of ground truth short commands and templated commands. When given a command, the agent first queries this set for an exact match. If it exists, the agent returns the action dictionary from ground truth. Otherwise, the agent queries the semantic parsing model. Defaults to `~/minecraft/craftassist/agent/datasets/ground_truth/`. You can write your own templated examples and add them to `~/minecraft/craftassist/agent/datasets/ground_truth/datasets/`.\n\n`--nsp_models_dir`: Path to binarized models and vocabulary files. Defaults to `~/minecraft/craftassist/agent/models/semantic_parser/`.\n\n`--nsp_data_dir`: Path to semantic parser datasets. Defaults to `~/minecraft/craftassist/agent/datasets/annotated_data/`.", "_____no_output_____" ], [ "You can now plug your own parsing models into the craftassist or locobot agents.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
ec9a9e0553733247947aca257809bd7ac8625065
3,918
ipynb
Jupyter Notebook
materials/functions-using.ipynb
ethanwhite/dcsem-import-test
f55d10981016f9ff5282502db94e8cbcb4a576a6
[ "CC-BY-4.0" ]
60
2015-08-21T12:00:46.000Z
2022-02-22T01:26:41.000Z
materials/functions-using.ipynb
ethanwhite/dcsem-import-test
f55d10981016f9ff5282502db94e8cbcb4a576a6
[ "CC-BY-4.0" ]
560
2015-08-17T20:39:52.000Z
2022-02-04T15:45:04.000Z
materials/functions-using.ipynb
tychen742/dataScienceR
30d5fb4123b1ef6c0f9acc5739258713acd22ee1
[ "CC-BY-4.0" ]
134
2015-08-17T13:19:08.000Z
2022-03-11T11:56:25.000Z
26.653061
295
0.513783
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]