Dataset schema (column name, dtype, observed min/max):

Column                                      Type            Min        Max
hexsha                                      stringlengths   40         40
size                                        int64           6          14.9M
ext                                         stringclasses   1 value
lang                                        stringclasses   1 value
max_stars_repo_path                         stringlengths   6          260
max_stars_repo_name                         stringlengths   6          119
max_stars_repo_head_hexsha                  stringlengths   40         41
max_stars_repo_licenses                     list
max_stars_count                             int64           1          191k
max_stars_repo_stars_event_min_datetime     stringlengths   24         24
max_stars_repo_stars_event_max_datetime     stringlengths   24         24
max_issues_repo_path                        stringlengths   6          260
max_issues_repo_name                        stringlengths   6          119
max_issues_repo_head_hexsha                 stringlengths   40         41
max_issues_repo_licenses                    list
max_issues_count                            int64           1          67k
max_issues_repo_issues_event_min_datetime   stringlengths   24         24
max_issues_repo_issues_event_max_datetime   stringlengths   24         24
max_forks_repo_path                         stringlengths   6          260
max_forks_repo_name                         stringlengths   6          119
max_forks_repo_head_hexsha                  stringlengths   40         41
max_forks_repo_licenses                     list
max_forks_count                             int64           1          105k
max_forks_repo_forks_event_min_datetime     stringlengths   24         24
max_forks_repo_forks_event_max_datetime     stringlengths   24         24
avg_line_length                             float64         2          1.04M
max_line_length                             int64           2          11.2M
alphanum_fraction                           float64         0          1
cells                                       list
cell_types                                  list
cell_type_groups                            list
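
Each record below follows this schema, one field per line in the order listed above. As a quick orientation aid, here is a minimal sketch of how rows with these columns could be inspected once loaded into pandas — the file name `notebooks_sample.parquet` is a hypothetical local export of these rows, not a file that ships with this dump:

```python
# Minimal sketch, assuming the rows below have been exported to a local
# parquet file; "notebooks_sample.parquet" is a hypothetical name.
import pandas as pd

df = pd.read_parquet("notebooks_sample.parquet")

# Each row describes one Jupyter notebook: `cells` holds the cell
# sources and outputs, `cell_types` the per-cell kind, and
# `cell_type_groups` the consecutive markdown/code runs.
print(df[["max_stars_repo_name", "size", "avg_line_length"]].head())

# The star/issue/fork columns are null when no such event was observed,
# so filters should account for missing values.
starred = df[df["max_stars_count"].notna() & (df["max_stars_count"] >= 1)]
print(f"{len(starred)} of {len(df)} notebooks come from starred repos")
```
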
cb12585664cf46aa10b5aad5555ceb4b1a6fb80f
17,991
ipynb
Jupyter Notebook
50_ode/30_Runge_Kutta.ipynb
kangwonlee/2109eca-nmisp-template
2e078870757fa06222df62d0ff8f4f4f288af51a
[ "BSD-3-Clause" ]
null
null
null
50_ode/30_Runge_Kutta.ipynb
kangwonlee/2109eca-nmisp-template
2e078870757fa06222df62d0ff8f4f4f288af51a
[ "BSD-3-Clause" ]
null
null
null
50_ode/30_Runge_Kutta.ipynb
kangwonlee/2109eca-nmisp-template
2e078870757fa06222df62d0ff8f4f4f288af51a
[ "BSD-3-Clause" ]
null
null
null
22.687264
251
0.492524
[ [ [ "# This cell is for the Google Colaboratory\n# https://stackoverflow.com/a/63519730\nif 'google.colab' in str(get_ipython()):\n # https://colab.research.google.com/notebooks/io.ipynb\n import google.colab.drive as gcdrive\n # may need to visit a link for the Google Colab authorization code\n gcdrive.mount(\"/content/drive/\")\n import sys\n sys.path.insert(0,\"/content/drive/My Drive/Colab Notebooks/nmisp/50_ode\")\n", "_____no_output_____" ], [ "# 그래프, 수학 기능 추가\n# Add graph and math features\nimport pylab as py\nimport numpy as np\nimport numpy.linalg as nl\n# 기호 연산 기능 추가\n# Add symbolic operation capability\nimport sympy as sy\n\n", "_____no_output_____" ], [ "sy.init_printing()\n\n", "_____no_output_____" ] ], [ [ "# 룽게-쿠타법 (RK4)<br>Runge-Kutta Method (RK4)\n\n", "_____no_output_____" ], [ "오일러법과 훈의 방법은 $t=t_0$, $t=t_1$, ... 에서의 기울기만 사용하였다.<br>Euler Method and Heun's Method used slopes at $t=t_0$, $t=t_1$, ... only.\n\n", "_____no_output_____" ], [ "룽게-쿠타 법은 1900년대에 독일 수학자 칼 룽게와 마틴 쿠타가 공동 개발했던 상미분 방정식의 근사 해법의 모음이다.<br>Runge-Kutta methods are a group of numerical methods to solve ordinary differential equations that two German mathematicians Carl Runge and Martin Kutta developed in 1900s.\n\n", "_____no_output_____" ], [ "실은 오일러법이나 훈의 방법도 룽게-쿠타법에 포함된다.<br>In fact, Euler method or Heun's Method are included in the Runge-Kutta method.\n\n", "_____no_output_____" ], [ "룽게-쿠타법 가운데 **RK4**가 가장 널리 사용된다.<br>Among the Runge-Kutta methods, **RK4** is used the most frequently.\n\n", "_____no_output_____" ], [ "**RK4** 는 $t=t_1$ 에서의 해를 구하기 위해 $t=t_0$, $t=t_\\frac{1}{2}=\\frac{1}{2}\\left(t_0+t_1\\right)$, $t=t_1$ 에서의 기울기를 사용한다.<br>\nTo find a solution at $t=t_1$, **RK4** uses slopes at $t=t_0$, $t=t_\\frac{1}{2}=\\frac{1}{2}\\left(t_0+t_1\\right)$, and $t=t_1$.\n\n", "_____no_output_____" ], [ "$$\n\\begin{cases}\n \\frac{d}{dt}\\mathbf{x}=f(t, \\mathbf{x}) \\\\\n \\mathbf{x}(t_0)=\\mathbf{x}_0\n\\end{cases}\n$$\n\n", "_____no_output_____" ], [ "위 미분방정식의 경우, 시간 간격 $\\Delta t=t_1 - t_0$ 일 때 절차는 다음과 같다.<br>\nFor the differential equation above, when time step $\\Delta t=t_1 - t_0$, the procedure is as follows.\n\n", "_____no_output_____" ], [ "1. $t=t_0$ 에서의 기울기 $s_1=f(t_0, \\mathbf{x}_0)$을 구한다.<br>\nAt $t=t_0$, find the slope $s_1=f(t_0, \\mathbf{x}_0)$.\n1. $t_0$ 로 부터 ${\\frac{1}{2}} \\Delta t = {\\frac{1}{2}}(t_1 - t_0)$ 만큼 기울기 $s_1$을 따라 전진하여 $t=t_\\frac{1}{2}$ 에서의 기울기 $s_2=f(t_{\\frac{1}{2}}, \\mathbf{x}_0 + s_1 {\\frac{1}{2}} \\Delta t)$을 구한다.<br>\nAdvancing from $t_0$ by ${\\frac{1}{2}} \\Delta t = {\\frac{1}{2}}(t_1 - t_0)$ in time, along the slope $s_1$, find the slope at $t=t_\\frac{1}{2}$, $s_2=f(t_{\\frac{1}{2}}, \\mathbf{x}_0 + s_1 {\\frac{1}{2}} \\Delta t)$.\n1. 다시 한번, $t_0$에서 $t_{\\frac{1}{2}}$ 까지 $s_2$를 따라 전진하여 $t=t_\\frac{1}{2}$ 에서의 기울기 $s_3=f(t_{\\frac{1}{2}}, \\mathbf{x}_0 + s_2 {\\frac{1}{2}} \\Delta t)$을 구한다.<br>\nOnce again, by time-advancing from $t_0$ to $t=t_\\frac{1}{2}$, along the slope $s_2$, find the slope $s_3=f(t_{\\frac{1}{2}}, \\mathbf{x}_0 + s_2 {\\frac{1}{2}} \\Delta t)$.\n1. 이번에는 $t_0$에서 $t_1$ 까지 $s_3$를 따라 전진하여 $t=t_1$ 에서의 기울기 $s_4=f(t_1, \\mathbf{x}_0 + s_3 \\Delta t)$을 구한다.<br>\nThis time, by going forward from $t_0$ to $t_1$, find the slope at $t=t_1$, $s_4=f(t_1, \\mathbf{x}_0 + s_3 \\Delta t)$.\n1. $t_0 \\le t \\le t_1$ 구간을 대표하는 기울기 $s=\\frac{\\Delta t}{6} \\left( s_1 + 2s_2 + 2s_3 + s_4 \\right)$을 구한다.<br>\n Find the slope representing interval $t_0 \\le t \\le t_1$, $s=\\frac{\\Delta t}{6} \\left( s_1 + 2s_2 + 2s_3 + s_4 \\right)$.\n1. 
$t=t_1$ 에서 $\\mathbf{x}(t_1) = \\mathbf{x}_0 + s \\Delta t$ 을 구한다.<br>\nAt $t=t_1$ 에서, find $\\mathbf{x}(t_1) = \\mathbf{x}_0 + s \\Delta t$.\n\n", "_____no_output_____" ], [ "python 으로 써 보자.<br>Let's write in python.\n\n", "_____no_output_____" ] ], [ [ "def rk4_step(f, x0, t0, t1):\n \"\"\"\n One time step of Runge-Kutta method\n\n f: dx_dt function\n x0 : initial condition\n t0 : this step time\n t1 : next step time\n \"\"\"\n delta_t = (t1 - t0)\n delta_t_half = delta_t * 0.5\n t_half = t0 + delta_t_half\n \n # Step 1\n s1 = f(t0, x0)\n\n # Step 2\n s2 = f(t_half, x0 + s1 * delta_t_half)\n\n # Step 3\n s3 = f(t_half, x0 + s2 * delta_t_half)\n\n # Step 4\n s4 = f(t1, x0 + s3 * delta_t)\n\n # Step 5\n s = (1.0 / 6.0) * (s1 + (s2 + s3) * 2 + s4)\n\n # Step 6\n x1 = x0 + s * delta_t\n\n return x1\n\n", "_____no_output_____" ], [ "def rk4(f, t_array, x_0):\n time_list = [t_array[0]]\n result_list = [x_0]\n\n x_i = x_0\n\n for k, t_i in enumerate(t_array[:-1]):\n # time step\n x_i_plus_1 = rk4_step(f, x_i, t_i, t_array[k+1])\n\n time_list.append(t_array[k+1])\n result_list.append(x_i_plus_1)\n \n x_i = x_i_plus_1\n\n return time_list, result_list\n\n", "_____no_output_____" ] ], [ [ "다시 1계 선형 상미분 방정식의 예를 살펴 보자.<br>Let's reconsider the 1st order linear ODE.\n\n", "_____no_output_____" ], [ "$$\n\\left\\{\n \\begin{align}\n a_0 \\frac{d}{dt}x(t)+a_1 x(t)&=0 \\\\\n x(0)&=x_0 \\\\\n \\end{align}\n\\right.\n$$\n\n", "_____no_output_____" ], [ "룽게-쿠타법 결과를 엄밀해, 오일러법, 훈법과 비교해보자.<br>Let's compare the result from Runge-Kutta method with the exact solution, Euler method, and Heun's method.\n\n", "_____no_output_____" ] ], [ [ "a_0, a_1 = 2.0, 1.0\n\ndef dx_dt(t, x):\n return - a_1 * x / a_0\n\n", "_____no_output_____" ], [ "def exact(t):\n return x_0 * py.exp((-a_1 / a_0) * t)\n\n", "_____no_output_____" ], [ "import ode_solver\n\n", "_____no_output_____" ] ], [ [ "$\\Delta t$\n\n", "_____no_output_____" ] ], [ [ "delta_t = 1.0\nt_sec_array = np.arange(0, 6 + delta_t*0.5, delta_t)\n\n", "_____no_output_____" ] ], [ [ "초기값<br>Initial value<br>\n$x(t_0)$\n\n", "_____no_output_____" ] ], [ [ "x_0 = 4.5\n\n", "_____no_output_____" ] ], [ [ "오일러법<br>Euler method\n\n", "_____no_output_____" ] ], [ [ "t_euler_out, x_euler_out = ode_solver.euler(dx_dt, t_sec_array, x_0)\n\n", "_____no_output_____" ] ], [ [ "훈의 방법<br>\nHeun's method\n\n", "_____no_output_____" ] ], [ [ "t_heun__out, x_heun__out = ode_solver.heun(dx_dt, t_sec_array, x_0)\n\n", "_____no_output_____" ] ], [ [ "RK4\n\n", "_____no_output_____" ] ], [ [ "t_rk4___out, x_rk4___out = rk4(dx_dt, t_sec_array, x_0)\n\n", "_____no_output_____" ] ], [ [ "근사해 그림<br>\nPlots of approximate solutions\n\n", "_____no_output_____" ] ], [ [ "import ode_plot\n\n", "_____no_output_____" ], [ "# Approximate solutions\npy.plot(t_euler_out, x_euler_out, '.-', label='Euler')\npy.plot(t_heun__out, x_heun__out, '.-', label='Heun')\npy.plot(t_rk4___out, x_rk4___out, '.-', label='RK4')\n\n# *** Exact Solution\nt_exact_array = np.linspace(0, 6)\nexact = ode_plot.ExactPlotterFirstOrderODE(t_exact_array)\nexact.plot()\n\npy.xlabel('t(sec)')\npy.ylabel('x(m)')\n\npy.legend(loc=0)\npy.grid(True)\n\n", "_____no_output_____" ] ], [ [ "룽게-쿠타법의 해가 엄밀해에 더 가까움을 알 수 있다.<br>\nWe can see that Runge-Kutta method is closer to the exact solution.\n\n", "_____no_output_____" ], [ "## Scipy\n\n", "_____no_output_____" ] ], [ [ "import scipy.integrate as si\n\n", "_____no_output_____" ], [ "sol = si.solve_ivp(dx_dt, (t_heun__out[0], t_heun__out[-1]), [x_0], t_eval=t_heun__out)\n\n", 
"_____no_output_____" ], [ "py.plot(sol.t, sol.y[0, :], 'o', label='solve_ivp')\npy.plot(t_euler_out, x_euler_out, '.-', label='Euler')\npy.plot(t_heun__out, x_heun__out, '*-', label='Heun')\npy.plot(t_rk4___out, x_rk4___out, '.-', label='RK4')\n\n# plot exact solution\nexact = ode_plot.ExactPlotterFirstOrderODE(py.array(t_rk4___out))\nexact.plot()\npy.grid(True)\npy.xlabel('t(sec)')\npy.ylabel('y(t)')\npy.legend(loc=0);\n\n", "_____no_output_____" ], [ "import pandas as pd\n\n\ndf = pd.DataFrame(\n data={\n 'euler':x_euler_out,\n 'heun' :x_heun__out,\n 'rk4' :x_rk4___out,\n 'solve_ivp':sol.y[0, :],\n 'exact':exact.exact(py.array(t_heun__out))\n },\n index=pd.Series(t_heun__out, name='t(sec)'),\n columns=['exact', 'euler', 'heun', 'rk4', 'solve_ivp']\n)\n\n", "_____no_output_____" ], [ "df['euler_error'] = df.euler - df.exact\ndf['heun_error'] = df.heun - df.exact\ndf['rk4_error'] = df.rk4 - df.exact\ndf['solve_ivp_error'] = df.solve_ivp - df.exact\n\n", "_____no_output_____" ] ], [ [ "표 형태<br>Table form\n\n", "_____no_output_____" ] ], [ [ "pd.set_option('display.max_rows', 10)\ndf\n\n", "_____no_output_____" ] ], [ [ "각종 통계<br>Statistics\n\n", "_____no_output_____" ] ], [ [ "df.describe()\n\n", "_____no_output_____" ] ], [ [ "이 경우, RK4 오차에 대한 의견은?<br>\nIn this case, what do you think about the error of the RK4?\n\n", "_____no_output_____" ] ], [ [ "import numpy.linalg as nl\n\n\nnl.norm(df.euler_error), nl.norm(df.heun_error), nl.norm(df.rk4_error), nl.norm(df.solve_ivp_error), \n\n", "_____no_output_____" ] ], [ [ "### 계산시간<br>Computation time\n\n", "_____no_output_____" ], [ "$\\Delta t$ 을 더 작은 값으로 바꾸어 보자.<br>Let's try a smaller $\\Delta t$.\n\n", "_____no_output_____" ] ], [ [ "delta_t = 1e-3\nt_sec_array = np.arange(0, 6 + delta_t*0.5, delta_t)\n\n", "_____no_output_____" ] ], [ [ "오일러법<br>Euler method\n\n", "_____no_output_____" ] ], [ [ "%%timeit -n100\n\nt_euler_out, x_euler_out = ode_solver.euler(dx_dt, t_sec_array, x_0)\n\n", "_____no_output_____" ] ], [ [ "훈의 방법<br>\nHeun's method\n\n", "_____no_output_____" ] ], [ [ "%%timeit -n100\n\nt_heun__out, x_heun__out = ode_solver.heun(dx_dt, t_sec_array, x_0)\n\n", "_____no_output_____" ] ], [ [ "RK4\n\n", "_____no_output_____" ] ], [ [ "%%timeit -n100\n\nt_rk4___out, x_rk4___out = rk4(dx_dt, t_sec_array, x_0)\n\n", "_____no_output_____" ] ], [ [ "계산의 단계 수가 더 많은 해법일 수록 계산 시간이 많이 필요한 것으로 보인다.<br>\nWith more steps, computation takes more time.\n\n", "_____no_output_____" ], [ "## 연습 문제<br>Exercises\n\n", "_____no_output_____" ], [ "다음 미분방정식의 엄밀해를 구하시오:<br>\nFind exact solution of the following differential equation:\n\n$$\n\\begin{align}\n10 \\frac{d}{dt}x(t) + 50 x(t) &= 0 \\\\\nx(0) &= 2\n\\end{align}\n$$\n\n", "_____no_output_____" ], [ "위 미분방정식의 수치해를 오일러법으로 구하시오.<br>\nFind numerical solution of the above differential equation using Euler Method.\n\n", "_____no_output_____" ], [ "위 미분방정식의 수치해를 훈의 방법으로 구하고 엄밀해, 오일러법과 비교하시오.<br>\nFind numerical solution of the above differential equation using Heun's method and compare with exact solution and Euler Method.\n\n", "_____no_output_____" ], [ "위 미분방정식의 수치해를 RK4법으로 구하고 엄밀해, 오일러법, 훈의 방법과 비교하시오.<br>\nFind numerical solution of the above differential equation using RK$ and compare with exact solution, Euler Method, and Heun's Method.\n\n", "_____no_output_____" ], [ "## Final Bell<br>마지막 종\n\n", "_____no_output_____" ] ], [ [ "# stackoverfow.com/a/24634221\nimport os\nos.system(\"printf '\\a'\");\n\n", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ] ]
cb12781272668d1d77f73f191b3166457a90fbec
76,052
ipynb
Jupyter Notebook
main_notebooks_old/main_rgb_autoencoder_DEC.ipynb
ustundag/2D-3D-Semantics
6f79be0082e2bfd6b7940c2314972a603e55f201
[ "Apache-2.0" ]
null
null
null
main_notebooks_old/main_rgb_autoencoder_DEC.ipynb
ustundag/2D-3D-Semantics
6f79be0082e2bfd6b7940c2314972a603e55f201
[ "Apache-2.0" ]
null
null
null
main_notebooks_old/main_rgb_autoencoder_DEC.ipynb
ustundag/2D-3D-Semantics
6f79be0082e2bfd6b7940c2314972a603e55f201
[ "Apache-2.0" ]
null
null
null
55.350801
28,196
0.673973
[ [ [ "%pylab inline\nimport os\nimport keras\nimport metrics\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n\nimport keras.backend as K\nimport glob\nfrom scipy.io import loadmat \nfrom IPython.display import display, clear_output\n\nfrom time import time\n\nfrom keras import callbacks\nfrom keras.models import Model, Sequential\nfrom keras.optimizers import SGD\nfrom keras.layers import Input, Dense, Dropout, Conv2D, MaxPool2D, UpSampling2D, Activation\nfrom keras.initializers import VarianceScaling\nfrom keras.engine.topology import Layer, InputSpec\n\nfrom PIL import Image\nfrom sklearn.cluster import KMeans\nfrom sklearn.metrics import normalized_mutual_info_score, confusion_matrix", "Populating the interactive namespace from numpy and matplotlib\n" ], [ "images = loadmat(\"C:\\\\Users\\\\ustundag\\\\GitHub\\\\2D-3D-Semantics\\\\noXYZ_area_3_no_xyz_data_rgb_90x90.mat\")\nimages = images[\"rgb\"]\nlabels = loadmat(\"C:\\\\Users\\\\ustundag\\\\GitHub\\\\2D-3D-Semantics\\\\noXYZ_area_3_no_xyz_data_rgb_90x90_labels.mat\")\nlabels = labels[\"labels\"]", "_____no_output_____" ], [ "images.shape", "_____no_output_____" ], [ "# Assign ground truth labels\nlabels_gt = labels[0]\n# Split dataset into tarin and test\nx_train = images[:3000] / 255.0\nx_test = images[-704:] / 255.0\ny_train = labels_gt[:3000]\ny_test = labels_gt[-704:]", "_____no_output_____" ], [ "set(labels_gt)", "_____no_output_____" ], [ "def get_room_type(label):\n if label == 0: return 'WC'\n if label == 1: return 'conferenceRoom'\n if label == 2: return 'hallway'\n if label == 3: return 'lounge'\n if label == 4: return 'office'\n if label == 5: return 'storage'", "_____no_output_____" ], [ "i = 1234\npylab.imshow(x_train[i].reshape(90, 90), cmap='gray')\npylab.show()\nprint('Room type: ' + get_room_type(y_train[i]))", "_____no_output_____" ] ], [ [ "### KMeans Beasic Implementation", "_____no_output_____" ] ], [ [ "km = KMeans(n_jobs=-1, n_clusters = 6, n_init=20)", "_____no_output_____" ], [ "km.fit(x_train)", "_____no_output_____" ], [ "pred = km.predict(x_test)", "_____no_output_____" ], [ "set(pred)", "_____no_output_____" ], [ "import warnings\nwarnings.simplefilter(action='ignore', category=FutureWarning)\n\nnormalized_mutual_info_score(y_test, pred)", "_____no_output_____" ] ], [ [ "### Autoencoder + KMeans", "_____no_output_____" ] ], [ [ "# this is our input placeholder\ninput_img = Input(shape=(8100,))\n\n# \"encoded\" is the encoded representation of the input\nencoded = Dense(500, activation='relu')(input_img)\nencoded = Dense(500, activation='relu')(encoded)\nencoded = Dense(2000, activation='relu')(encoded)\nencoded = Dense(30, activation='sigmoid')(encoded)\n\n# \"decoded\" is the lossy reconstruction of the input\ndecoded = Dense(2000, activation='relu')(encoded)\ndecoded = Dense(500, activation='relu')(decoded)\ndecoded = Dense(500, activation='relu')(decoded)\ndecoded = Dense(8100)(decoded)\n\n# this model maps an input to its reconstruction\nautoencoder = Model(input_img, decoded)\nautoencoder.summary()", "WARNING:tensorflow:From c:\\python368-64\\lib\\site-packages\\tensorflow\\python\\framework\\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nColocations handled automatically by placer.\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_1 (InputLayer) 
(None, 8100) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 500) 4050500 \n_________________________________________________________________\ndense_2 (Dense) (None, 500) 250500 \n_________________________________________________________________\ndense_3 (Dense) (None, 2000) 1002000 \n_________________________________________________________________\ndense_4 (Dense) (None, 30) 60030 \n_________________________________________________________________\ndense_5 (Dense) (None, 2000) 62000 \n_________________________________________________________________\ndense_6 (Dense) (None, 500) 1000500 \n_________________________________________________________________\ndense_7 (Dense) (None, 500) 250500 \n_________________________________________________________________\ndense_8 (Dense) (None, 8100) 4058100 \n=================================================================\nTotal params: 10,734,130\nTrainable params: 10,734,130\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "# this model maps an input to its encoded representation\nencoder = Model(input_img, encoded)\nautoencoder.compile(optimizer='adam', loss='mse')", "_____no_output_____" ], [ "train_history = autoencoder.fit(x_train, x_train,\n epochs=10,\n batch_size=32,\n shuffle=True,\n validation_data=(x_test, x_test))", "WARNING:tensorflow:From c:\\python368-64\\lib\\site-packages\\tensorflow\\python\\ops\\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.cast instead.\nTrain on 3000 samples, validate on 704 samples\nEpoch 1/10\n3000/3000 [==============================] - 15s 5ms/step - loss: 3.9231e-06 - val_loss: 5.1463e-07\nEpoch 2/10\n3000/3000 [==============================] - 7s 2ms/step - loss: 5.1580e-07 - val_loss: 5.1462e-07\nEpoch 3/10\n3000/3000 [==============================] - 7s 2ms/step - loss: 5.1628e-07 - val_loss: 5.1735e-07\nEpoch 4/10\n3000/3000 [==============================] - 7s 2ms/step - loss: 5.1700e-07 - val_loss: 5.1825e-07\nEpoch 5/10\n3000/3000 [==============================] - 7s 2ms/step - loss: 5.1884e-07 - val_loss: 5.1941e-07\nEpoch 6/10\n3000/3000 [==============================] - 7s 2ms/step - loss: 5.1956e-07 - val_loss: 5.1749e-07\nEpoch 7/10\n3000/3000 [==============================] - 7s 2ms/step - loss: 5.2032e-07 - val_loss: 5.1957e-07\nEpoch 8/10\n3000/3000 [==============================] - 7s 2ms/step - loss: 5.2055e-07 - val_loss: 5.1963e-07\nEpoch 9/10\n3000/3000 [==============================] - 7s 2ms/step - loss: 5.2115e-07 - val_loss: 5.1733e-07\nEpoch 10/10\n3000/3000 [==============================] - 7s 2ms/step - loss: 5.2118e-07 - val_loss: 5.1948e-07\n" ], [ "pred_auto_train = encoder.predict(x_train)\npred_auto = encoder.predict(x_test)", "_____no_output_____" ], [ "km.fit(pred_auto_train)\npred = km.predict(pred_auto)", "_____no_output_____" ], [ "set(pred)", "_____no_output_____" ], [ "normalized_mutual_info_score(y_test, pred)", "_____no_output_____" ] ], [ [ "### ConvAutoencoder + KMeans (currently not working, in progress...)", "_____no_output_____" ] ], [ [ "# Reshape the images\nx_train_s = x_train.reshape(-1,90,90,1)\nx_test_s = x_test.reshape(-1,90,90,1)", "_____no_output_____" ], [ "x_test_s[0].shape", "_____no_output_____" ], [ "# Build the autoencoder\nmodel = Sequential()\nmodel.add(Conv2D(45, kernel_size=3, padding='same', activation='relu', 
input_shape=(90,90,1)))\nmodel.add(MaxPool2D((3,3), padding='same'))\nmodel.add(Dropout(0.2))\nmodel.add(Conv2D(15, kernel_size=3, padding='same', activation='relu'))\nmodel.add(MaxPool2D((3,3), padding='same'))\nmodel.add(Dropout(0.2))\nmodel.add(Conv2D(15, kernel_size=3, padding='same', activation='relu'))\nmodel.add(UpSampling2D((3,3)))\nmodel.add(Dropout(0.2))\nmodel.add(Conv2D(45, kernel_size=3, padding='same', activation='relu'))\nmodel.add(UpSampling2D((3,3)))\nmodel.add(Dropout(0.2))\nmodel.add(Conv2D(1, kernel_size=3, padding='same', activation='relu'))\n\nmodel.compile(optimizer='adam', loss=\"mse\")\nmodel.summary()", "_____no_output_____" ], [ "# Train the model\nmodel.fit(x_train_s, x_train_s, epochs=10, batch_size=64, validation_data=(x_test_s, x_test_s), verbose=1)", "_____no_output_____" ], [ "# Fitting testing dataset\nrestored_testing_dataset = model.predict(x_test_s)", "_____no_output_____" ], [ "# Observe the reconstructed image quality\nplt.figure(figsize=(20,5))\nfor i in range(5):\n index = y_test.tolist().index(i)\n plt.subplot(2, 6, i+1)\n plt.imshow(x_test_s[index].reshape((90,90)), cmap='gray')\n plt.gray()\n plt.subplot(2, 6, i+7)\n plt.imshow(restored_testing_dataset[index].reshape((90,90)), cmap='gray')\n plt.gray()", "_____no_output_____" ], [ "# Extract the encoder\nencoder = K.function([model.layers[0].input], [model.layers[4].output])", "_____no_output_____" ], [ "# Encode the training set\nencoded_images = encoder([x_test_s])[0].reshape(-1, 10*10*15)", "_____no_output_____" ], [ "encoded_images.shape", "_____no_output_____" ], [ "# Cluster the training set\nkmeans = KMeans(n_clusters = 6)\nclustered_training_set = kmeans.fit_predict(encoded_images)", "_____no_output_____" ], [ "# Observe and compare clustering result with actual label using confusion matrix\ncm = confusion_matrix(y_test, clustered_training_set)\nplt.figure(figsize=(8, 8))\nsns.heatmap(cm, annot=True, fmt=\"d\")\nplt.title(\"Confusion matrix\", fontsize=20)\nplt.ylabel('True label', fontsize=15)\nplt.xlabel('Clustering label', fontsize=15)\nplt.show()", "_____no_output_____" ], [ "# Plot the actual pictures grouped by clustering\nfig = plt.figure(figsize=(20,20))\nfor r in range(6):\n cluster = cm[r].argmax()\n for c, val in enumerate(x_test_s[clustered_training_set == cluster][0:6]):\n fig.add_subplot(6, 6, 6*r+c+1)\n plt.imshow(val.reshape((90,90)))\n plt.gray()\n plt.xticks([])\n plt.yticks([])\n plt.xlabel('cluster: '+str(cluster))\n plt.ylabel('digit: '+str(r))", "_____no_output_____" ], [ "normalized_mutual_info_score(y_test, clustered_training_set)", "_____no_output_____" ] ], [ [ "### Deep Embedded Clustering (DEC) implementation", "_____no_output_____" ] ], [ [ "from time import time\nimport numpy as np\nimport keras.backend as K\nfrom keras.engine.topology import Layer, InputSpec\nfrom keras.layers import Dense, Input\nfrom keras.models import Model\nfrom keras.optimizers import SGD\nfrom keras import callbacks\nfrom keras.initializers import VarianceScaling\nfrom sklearn.cluster import KMeans", "_____no_output_____" ], [ "\"\"\"\nKeras implementation for Deep Embedded Clustering (DEC) algorithm:\n\nOriginal Author:\n Xifeng Guo. 2017.1.30\n\"\"\"\n\ndef autoencoder(dims, act='relu', init='glorot_uniform'):\n \"\"\"\n Fully connected auto-encoder model, symmetric.\n Arguments:\n dims: list of number of units in each layer of encoder. dims[0] is input dim, dims[-1] is units in hidden layer.\n The decoder is symmetric with encoder. 
So number of layers of the auto-encoder is 2*len(dims)-1\n act: activation, not applied to Input, Hidden and Output layers\n return:\n (ae_model, encoder_model), Model of autoencoder and model of encoder\n \"\"\"\n n_stacks = len(dims) - 1\n # input\n x = Input(shape=(dims[0],), name='input')\n h = x\n\n # internal layers in encoder\n for i in range(n_stacks-1):\n h = Dense(dims[i + 1], activation=act, kernel_initializer=init, name='encoder_%d' % i)(h)\n\n # hidden layer\n h = Dense(dims[-1], kernel_initializer=init, name='encoder_%d' % (n_stacks - 1))(h) # hidden layer, features are extracted from here\n\n y = h\n # internal layers in decoder\n for i in range(n_stacks-1, 0, -1):\n y = Dense(dims[i], activation=act, kernel_initializer=init, name='decoder_%d' % i)(y)\n\n # output\n y = Dense(dims[0], kernel_initializer=init, name='decoder_0')(y)\n\n return Model(inputs=x, outputs=y, name='AE'), Model(inputs=x, outputs=h, name='encoder')\n\n\nclass ClusteringLayer(Layer):\n \"\"\"\n Clustering layer converts input sample (feature) to soft label, i.e. a vector that represents the probability of the\n sample belonging to each cluster. The probability is calculated with student's t-distribution.\n\n # Example\n ```\n model.add(ClusteringLayer(n_clusters=6))\n ```\n # Arguments\n n_clusters: number of clusters.\n weights: list of Numpy array with shape `(n_clusters, n_features)` witch represents the initial cluster centers.\n alpha: parameter in Student's t-distribution. Default to 1.0.\n # Input shape\n 2D tensor with shape: `(n_samples, n_features)`.\n # Output shape\n 2D tensor with shape: `(n_samples, n_clusters)`.\n \"\"\"\n\n def __init__(self, n_clusters, weights=None, alpha=1.0, **kwargs):\n if 'input_shape' not in kwargs and 'input_dim' in kwargs:\n kwargs['input_shape'] = (kwargs.pop('input_dim'),)\n super(ClusteringLayer, self).__init__(**kwargs)\n self.n_clusters = n_clusters\n self.alpha = alpha\n self.initial_weights = weights\n self.input_spec = InputSpec(ndim=2)\n\n def build(self, input_shape):\n assert len(input_shape) == 2\n input_dim = input_shape[1]\n self.input_spec = InputSpec(dtype=K.floatx(), shape=(None, input_dim))\n self.clusters = self.add_weight((self.n_clusters, input_dim), initializer='glorot_uniform', name='clusters')\n if self.initial_weights is not None:\n self.set_weights(self.initial_weights)\n del self.initial_weights\n self.built = True\n\n def call(self, inputs, **kwargs):\n \"\"\" student t-distribution, as same as used in t-SNE algorithm.\n q_ij = 1/(1+dist(x_i, u_j)^2), then normalize it.\n Arguments:\n inputs: the variable containing data, shape=(n_samples, n_features)\n Return:\n q: student's t-distribution, or soft labels for each sample. 
shape=(n_samples, n_clusters)\n \"\"\"\n q = 1.0 / (1.0 + (K.sum(K.square(K.expand_dims(inputs, axis=1) - self.clusters), axis=2) / self.alpha))\n q **= (self.alpha + 1.0) / 2.0\n q = K.transpose(K.transpose(q) / K.sum(q, axis=1))\n return q\n\n def compute_output_shape(self, input_shape):\n assert input_shape and len(input_shape) == 2\n return input_shape[0], self.n_clusters\n\n def get_config(self):\n config = {'n_clusters': self.n_clusters}\n base_config = super(ClusteringLayer, self).get_config()\n return dict(list(base_config.items()) + list(config.items()))\n\n\nclass DEC(object):\n def __init__(self,\n dims,\n n_clusters=6,\n alpha=1.0,\n init='glorot_uniform'):\n\n super(DEC, self).__init__()\n\n self.dims = dims\n self.input_dim = dims[0]\n self.n_stacks = len(self.dims) - 1\n\n self.n_clusters = n_clusters\n self.alpha = alpha\n self.autoencoder, self.encoder = autoencoder(self.dims, init=init)\n\n # prepare DEC model\n clustering_layer = ClusteringLayer(self.n_clusters, name='clustering')(self.encoder.output)\n self.model = Model(inputs=self.encoder.input, outputs=clustering_layer)\n\n def pretrain(self, x, y=None, optimizer='adam', epochs=200, batch_size=256, save_dir='results/temp'):\n print('...Pretraining...')\n self.autoencoder.compile(optimizer=optimizer, loss='mse')\n\n csv_logger = callbacks.CSVLogger(save_dir + '/pretrain_log.csv')\n cb = [csv_logger]\n if y is not None:\n class PrintACC(callbacks.Callback):\n def __init__(self, x, y):\n self.x = x\n self.y = y\n super(PrintACC, self).__init__()\n\n def on_epoch_end(self, epoch, logs=None):\n if epoch % int(epochs/10) != 0:\n return\n feature_model = Model(self.model.input,\n self.model.get_layer(\n 'encoder_%d' % (int(len(self.model.layers) / 2) - 1)).output)\n features = feature_model.predict(self.x)\n km = KMeans(n_clusters=len(np.unique(self.y)), n_init=20, n_jobs=4)\n y_pred = km.fit_predict(features)\n # print()\n print(' '*8 + '|==> acc: %.4f, nmi: %.4f <==|'\n % (metrics.acc(self.y, y_pred), metrics.nmi(self.y, y_pred)))\n\n cb.append(PrintACC(x, y))\n\n # begin pretraining\n t0 = time()\n self.autoencoder.fit(x, x, batch_size=batch_size, epochs=epochs, callbacks=cb)\n print('Pretraining time: ', time() - t0)\n self.autoencoder.save_weights(save_dir + '/ae_weights.h5')\n print('Pretrained weights are saved to %s/ae_weights.h5' % save_dir)\n self.pretrained = True\n\n def load_weights(self, weights): # load weights of DEC model\n self.model.load_weights(weights)\n\n def extract_features(self, x):\n return self.encoder.predict(x)\n\n def predict(self, x): # predict cluster labels using the output of clustering layer\n q = self.model.predict(x, verbose=0)\n return q.argmax(1)\n\n @staticmethod\n def target_distribution(q):\n weight = q ** 2 / q.sum(0)\n return (weight.T / weight.sum(1)).T\n\n def compile(self, optimizer='sgd', loss='kld'):\n self.model.compile(optimizer=optimizer, loss=loss)\n\n def fit(self, x, y=None, maxiter=2e4, batch_size=256, tol=1e-3,\n update_interval=140, save_dir='./results/temp'):\n\n print('Update interval', update_interval)\n save_interval = x.shape[0] / batch_size * 5 # 5 epochs\n print('Save interval', save_interval)\n\n # Step 1: initialize cluster centers using k-means\n t1 = time()\n print('Initializing cluster centers with k-means.')\n kmeans = KMeans(n_clusters=self.n_clusters, n_init=20)\n y_pred = kmeans.fit_predict(self.encoder.predict(x))\n y_pred_last = np.copy(y_pred)\n self.model.get_layer(name='clustering').set_weights([kmeans.cluster_centers_])\n\n # Step 2: deep 
clustering\n # logging file\n import csv\n logfile = open(save_dir + '/dec_log.csv', 'w')\n logwriter = csv.DictWriter(logfile, fieldnames=['iter', 'acc', 'nmi', 'ari', 'loss'])\n logwriter.writeheader()\n\n loss = 0\n index = 0\n index_array = np.arange(x.shape[0])\n for ite in range(int(maxiter)):\n if ite % update_interval == 0:\n q = self.model.predict(x, verbose=0)\n p = self.target_distribution(q) # update the auxiliary target distribution p\n\n # evaluate the clustering performance\n y_pred = q.argmax(1)\n if y is not None:\n acc = np.round(metrics.acc(y, y_pred), 5)\n nmi = np.round(metrics.nmi(y, y_pred), 5)\n ari = np.round(metrics.ari(y, y_pred), 5)\n loss = np.round(loss, 5)\n logdict = dict(iter=ite, acc=acc, nmi=nmi, ari=ari, loss=loss)\n logwriter.writerow(logdict)\n print('Iter %d: acc = %.5f, nmi = %.5f, ari = %.5f' % (ite, acc, nmi, ari), ' ; loss=', loss)\n\n # check stop criterion\n delta_label = np.sum(y_pred != y_pred_last).astype(np.float32) / y_pred.shape[0]\n y_pred_last = np.copy(y_pred)\n if ite > 0 and delta_label < tol:\n print('delta_label ', delta_label, '< tol ', tol)\n print('Reached tolerance threshold. Stopping training.')\n logfile.close()\n break\n\n # train on batch\n # if index == 0:\n # np.random.shuffle(index_array)\n idx = index_array[index * batch_size: min((index+1) * batch_size, x.shape[0])]\n self.model.train_on_batch(x=x[idx], y=p[idx])\n index = index + 1 if (index + 1) * batch_size <= x.shape[0] else 0\n\n # save intermediate model\n if ite % save_interval == 0:\n print('saving model to:', save_dir + '/DEC_model_' + str(ite) + '.h5')\n self.model.save_weights(save_dir + '/DEC_model_' + str(ite) + '.h5')\n\n ite += 1\n\n # save the trained model\n logfile.close()\n print('saving model to:', save_dir + '/DEC_model_final.h5')\n self.model.save_weights(save_dir + '/DEC_model_final.h5')\n\n return y_pred", "_____no_output_____" ], [ "import sys\nsys.path.insert(0, 'Deep_Embedding_Clustering')\nfrom Deep_Embedding_Clustering import metrics\n\n# setting the hyper parameters\ninit = 'glorot_uniform'\npretrain_optimizer = 'adam'\ndataset = 'mnist'\nbatch_size = 32\nmaxiter = 2e4\ntol = 0.001\nsave_dir = 'results'\n\nimport os\nif not os.path.exists(save_dir):\n os.makedirs(save_dir)\n\nupdate_interval = 200\npretrain_epochs = 50\ninit = VarianceScaling(scale=1. / 3., mode='fan_in',\n distribution='uniform') # [-limit, limit], limit=sqrt(1./fan_in)\n#pretrain_optimizer = SGD(lr=1, momentum=0.9)\n\n\n# prepare the DEC model\ndec = DEC(dims=[x_train.shape[-1], 500, 500, 2000, 10], n_clusters=6, init=init)\n\ndec.pretrain(x=x_train, y=y_train, optimizer=pretrain_optimizer,\n epochs=pretrain_epochs, batch_size=batch_size,\n save_dir=save_dir)", "...Pretraining...\nEpoch 1/50\n3000/3000 [==============================] - 10s 3ms/step - loss: 7.6436e-07\n |==> acc: 0.2347, nmi: 0.0464 <==|\nEpoch 2/50\n 64/3000 [..............................] 
- ETA: 6s - loss: 5.2607e-07" ], [ "dec.model.summary()", "_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput (InputLayer) (None, 8100) 0 \n_________________________________________________________________\nencoder_0 (Dense) (None, 500) 4050500 \n_________________________________________________________________\nencoder_1 (Dense) (None, 500) 250500 \n_________________________________________________________________\nencoder_2 (Dense) (None, 2000) 1002000 \n_________________________________________________________________\nencoder_3 (Dense) (None, 10) 20010 \n_________________________________________________________________\nclustering (ClusteringLayer) (None, 6) 60 \n=================================================================\nTotal params: 5,323,070\nTrainable params: 5,323,070\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "dec.compile(optimizer=SGD(0.01, 0.9), loss='kld')", "_____no_output_____" ], [ "y_pred = dec.fit(x_train, y=y_train, tol=tol, maxiter=maxiter, batch_size=batch_size,\n update_interval=update_interval, save_dir=save_dir)", "Update interval 200\nSave interval 468.75\nInitializing cluster centers with k-means.\n" ], [ "pred_val = dec.predict(x_test)", "_____no_output_____" ], [ "set(pred_val)", "_____no_output_____" ], [ "normalized_mutual_info_score(y_test, pred_val)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb12a3e7c6122b3c329a040a243aa6b0008ffa4b
103,457
ipynb
Jupyter Notebook
Coursera/Data Visualization with Python-IBM/Week-2/Excercise/Pie-Chart.ipynb
manipiradi/Online-Courses-Learning
2a4ce7590d1f6d1dfa5cfde632660b562fcff596
[ "MIT" ]
331
2019-10-22T09:06:28.000Z
2022-03-27T13:36:03.000Z
Coursera/Data Visualization with Python-IBM/Week-2/Excercise/Pie-Chart.ipynb
manipiradi/Online-Courses-Learning
2a4ce7590d1f6d1dfa5cfde632660b562fcff596
[ "MIT" ]
8
2020-04-10T07:59:06.000Z
2022-02-06T11:36:47.000Z
Coursera/Data Visualization with Python-IBM/Week-2/Excercise/Pie-Chart.ipynb
manipiradi/Online-Courses-Learning
2a4ce7590d1f6d1dfa5cfde632660b562fcff596
[ "MIT" ]
572
2019-07-28T23:43:35.000Z
2022-03-27T22:40:08.000Z
153.26963
47,440
0.839054
[ [ [ "import numpy as np\nimport pandas as pd", "_____no_output_____" ], [ "df_can = pd.read_excel('https://ibm.box.com/shared/static/lw190pt9zpy5bd1ptyg2aw15awomz9pu.xlsx',\n sheet_name='Canada by Citizenship',\n skiprows=range(20),\n skip_footer=2\n )\n\nprint('Data downloaded and read into a dataframe!')", "Data downloaded and read into a dataframe!\n" ], [ "df_can.head()", "_____no_output_____" ], [ "print(df_can.shape)", "(195, 43)\n" ], [ "# clean up the dataset to remove unnecessary columns (eg. REG) \ndf_can.drop(['AREA', 'REG', 'DEV', 'Type', 'Coverage'], axis=1, inplace=True)\n\n# let's rename the columns so that they make sense\ndf_can.rename(columns={'OdName':'Country', 'AreaName':'Continent','RegName':'Region'}, inplace=True)\n\n# for sake of consistency, let's also make all column labels of type string\ndf_can.columns = list(map(str, df_can.columns))\n\n# set the country name as index - useful for quickly looking up countries using .loc method\ndf_can.set_index('Country', inplace=True)\n\n# add total column\ndf_can['Total'] = df_can.sum(axis=1)\n\n# years that we will be using in this lesson - useful for plotting later on\nyears = list(map(str, range(1980, 2014)))\nprint('data dimensions:', df_can.shape)", "data dimensions: (195, 38)\n" ], [ "%matplotlib inline\n\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\n\nmpl.style.use('ggplot') # optional: for ggplot-like style\n\n# check for latest version of Matplotlib\nprint('Matplotlib version: ', mpl.__version__) # >= 2.0.0", "Matplotlib version: 2.1.2\n" ], [ "# group countries by continents and apply sum() function \ndf_continents = df_can.groupby('Continent', axis=0).sum()\n\n# note: the output of the groupby method is a `groupby' object. \n# we can not use it further until we apply a function (eg .sum())\nprint(type(df_can.groupby('Continent', axis=0)))\n\ndf_continents.head()", "<class 'pandas.core.groupby.DataFrameGroupBy'>\n" ], [ "# autopct create %, start angle represent starting point\ndf_continents['Total'].plot(kind='pie',\n figsize=(5, 6),\n autopct='%1.1f%%', # add in percentages\n startangle=90, # start angle 90° (Africa)\n shadow=True, # add shadow \n )\n\nplt.title('Immigration to Canada by Continent [1980 - 2013]')\nplt.axis('equal') # Sets the pie chart to look like a circle.\n\nplt.show()", "_____no_output_____" ], [ "colors_list = ['gold', 'yellowgreen', 'lightcoral', 'lightskyblue', 'lightgreen', 'pink']\nexplode_list = [0.1, 0, 0, 0, 0.1, 0.1] # ratio for each continent with which to offset each wedge.\n\ndf_continents['Total'].plot(kind='pie',\n figsize=(15, 6),\n autopct='%1.1f%%', \n startangle=90, \n shadow=True, \n labels=None, # turn off labels on pie chart\n pctdistance=1.12, # the ratio between the center of each pie slice and the start of the text generated by autopct \n colors=colors_list, # add custom colors\n explode=explode_list # 'explode' lowest 3 continents\n )\n\n# scale the title up by 12% to match pctdistance\nplt.title('Immigration to Canada by Continent [1980 - 2013]', y=1.12) \n\nplt.axis('equal') \n\n# add legend\nplt.legend(labels=df_continents.index, loc='upper left') \n\nplt.show()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb12aa748bce2d86bcf0b6213c5b250490194014
61,887
ipynb
Jupyter Notebook
mem_mem/tests/2cke_transOvlp/2cke_transOvlp.ipynb
3upperm2n/trans_kernel_model
72a9156fa35b5b5407173f6dbde685feb0a6a3f5
[ "MIT" ]
null
null
null
mem_mem/tests/2cke_transOvlp/2cke_transOvlp.ipynb
3upperm2n/trans_kernel_model
72a9156fa35b5b5407173f6dbde685feb0a6a3f5
[ "MIT" ]
null
null
null
mem_mem/tests/2cke_transOvlp/2cke_transOvlp.ipynb
3upperm2n/trans_kernel_model
72a9156fa35b5b5407173f6dbde685feb0a6a3f5
[ "MIT" ]
null
null
null
33.855033
6,970
0.434033
[ [ [ "%load_ext autoreload\n%autoreload 2", "_____no_output_____" ], [ "import warnings\nimport pandas as pd\nimport numpy as np\nimport os\nimport sys # error msg, add the modules\nimport operator # sorting\nfrom math import *\nimport matplotlib.pyplot as plt\n\nsys.path.append('../../')\n\nimport read_trace\nimport cuda_timeline\n# from avgblkmodel import *\nfrom ModelParam import *\nimport cke\nfrom df_util import *\n#from model_cke import *\n\nwarnings.filterwarnings(\"ignore\", category=np.VisibleDeprecationWarning)", "_____no_output_____" ] ], [ [ "# gpu info", "_____no_output_____" ] ], [ [ "gtx950 = DeviceInfo()\ngtx950.sm_num = 6\ngtx950.sharedmem_per_sm = 49152\ngtx950.reg_per_sm = 65536\ngtx950.maxthreads_per_sm = 2048", "_____no_output_____" ] ], [ [ "# 2 stream info", "_____no_output_____" ] ], [ [ "# 10M for mem_mem : where the h2d between streams are overlapped\ntrace_file = 'trace_10M_s1.csv'\ntrace_file_2cke = 'trace_h2d_h2d_ovlp.csv'\n\ndf_trace = read_trace.trace2dataframe(trace_file) # read the trace to the dataframe\ndf_trace_2cke = read_trace.trace2dataframe(trace_file_2cke)", "_____no_output_____" ], [ "#df_trace", "_____no_output_____" ], [ "#cuda_timeline.plot_trace(df_trace)", "_____no_output_____" ], [ "df_trace_2cke", "_____no_output_____" ], [ "cuda_timeline.plot_trace(df_trace_2cke)", "/home/leiming/anaconda2/lib/python2.7/site-packages/matplotlib/axes/_base.py:1292: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal\n if aspect == 'normal':\n/home/leiming/anaconda2/lib/python2.7/site-packages/matplotlib/axes/_base.py:1297: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal\n elif aspect in ('equal', 'auto'):\n" ] ], [ [ "# 1cke - read trace and reset the timeline", "_____no_output_____" ] ], [ [ "df_single_stream = read_trace.get_timing(df_trace)", "_____no_output_____" ], [ "df_single_stream", "_____no_output_____" ], [ "df_s1 = read_trace.reset_starting(df_single_stream)", "_____no_output_____" ], [ "df_s1", "_____no_output_____" ] ], [ [ "### 2cke case", "_____no_output_____" ] ], [ [ "df_2stream = read_trace.get_timing(df_trace_2cke)", "_____no_output_____" ], [ "df_2stream", "_____no_output_____" ], [ "tot_runtime = read_trace.getTotalRuntime(df_2stream)\nprint tot_runtime", "37.784463\n" ] ], [ [ "# 2 cke", "_____no_output_____" ] ], [ [ "stream_num = 2\n\n# find when to start the stream and update the starting pos for the trace\nH2D_H2D_OVLP_TH = 3.158431\n\ndf_cke_list = cke.init_trace_list(df_s1, stream_num = stream_num, h2d_ovlp_th = H2D_H2D_OVLP_TH)", "_____no_output_____" ], [ "df_cke_list[0]", "_____no_output_____" ], [ "df_cke_list[1]", "_____no_output_____" ] ], [ [ "### sort", "_____no_output_____" ] ], [ [ "df_all_api = cke.init_sort_api_with_extra_cols(df_cke_list)", "_____no_output_____" ], [ "df_all_api", "_____no_output_____" ] ], [ [ "### start algo", "_____no_output_____" ] ], [ [ "count = 1\n# break_count = 7\n\nwhile not cke.AllDone(df_all_api):\n # pick two api to learn \n df_all_api, r1, r2 = cke.PickTwo(df_all_api)\n \n if r1 == None and r2 == None: # go directly updating the last wake api\n df_all_api = cke.UpdateStream_lastapi(df_all_api)\n else:\n df_all_api = cke.StartNext_byType(df_all_api, [r1, r2])\n\n whichType = cke.CheckType(df_all_api, r1, r2) # check whether the same api\n# print whichType\n\n if whichType == None:\n df_all_api = cke.Predict_noConflict(df_all_api, r1, r2)\n elif 
whichType in ['h2d', 'd2h']: # data transfer in the same direction\n df_all_api = cke.Predict_transferOvlp(df_all_api, r1, r2, ways = 2.0)\n else: # concurrent kernel: todo\n pass\n\n # if count == break_count:\n # break\n\n rangeT = cke.Get_pred_range(df_all_api)\n# print rangeT\n\n # if count == break_count:\n # break\n\n extra_conc = cke.Check_cc_by_time(df_all_api, rangeT) # check whether there is conc during the rangeT\n\n if extra_conc == 0:\n if whichType in ['h2d', 'd2h']:\n df_all_api = cke.Update_wake_transferOvlp(df_all_api, rangeT, ways = 2.0)\n elif whichType == 'kern':\n pass\n else: # no overlapping\n df_all_api = cke.Update_wake_noConflict(df_all_api, rangeT)\n\n # check if any api is done, and update the timing for the other apis in that stream\n df_all_api = cke.UpdateStreamTime(df_all_api)\n\n else: # todo : when there is additional overlapping\n pass\n\n# if count == break_count:\n# break\n \n # next call\n count = count + 1", "_____no_output_____" ], [ "df_all_api", "_____no_output_____" ], [ "df_all_api.loc[df_all_api.stream_id == 0]", "_____no_output_____" ], [ "df_all_api.loc[df_all_api.stream_id == 1]", "_____no_output_____" ], [ "#\n# run above\n#", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
cb12c29589a1fe27b978f377afa3fc4352a8c46d
85,500
ipynb
Jupyter Notebook
data_analysis/analyse_site_outputs_graph.ipynb
casperg92/MaSIF_colab
f030061276cc21b812bb3be652124b75dcdf7e5b
[ "MIT" ]
8
2022-02-21T12:54:25.000Z
2022-03-22T00:35:26.000Z
data_analysis/analyse_site_outputs_graph.ipynb
casperg92/MaSIF_colab
f030061276cc21b812bb3be652124b75dcdf7e5b
[ "MIT" ]
1
2022-03-19T02:44:08.000Z
2022-03-21T12:20:59.000Z
data_analysis/analyse_site_outputs_graph.ipynb
casperg92/MaSIF_colab
f030061276cc21b812bb3be652124b75dcdf7e5b
[ "MIT" ]
2
2022-03-18T08:59:38.000Z
2022-03-26T11:48:59.000Z
33.529412
13,100
0.380444
[ [ [ "import numpy as np\nimport pandas as pd\nfrom pathlib import Path\nimport matplotlib.pyplot as plt\nimport plotly.graph_objects as go\nfrom tqdm import tqdm\nfrom scipy.spatial.distance import cdist\nfrom sklearn.metrics import roc_curve, roc_auc_score", "_____no_output_____" ], [ "timings = Path('timings/')\nraw_data = Path('surface_data/raw/protein_surfaces/01-benchmark_surfaces_npy')\n\nexperiment_names = ['TangentConv_site_1layer_5A_epoch49', 'TangentConv_site_1layer_9A_epoch49', 'TangentConv_site_1layer_15A_epoch49','TangentConv_site_3layer_5A_epoch49','TangentConv_site_3layer_9A_epoch46', 'TangentConv_site_3layer_15A_epoch17','PointNet_site_1layer_5A_epoch30','PointNet_site_1layer_9A_epoch30','PointNet_site_3layer_5A_epoch46', 'PointNet_site_3layer_9A_epoch37', 'DGCNN_site_1layer_k40_epoch46','DGCNN_site_1layer_k100_epoch32','DGCNN_site_3layer_k40_epoch33']\n\nexperiment_names_short = ['Ours 1L 5A', 'Ours 1L 9A', 'Ours 1L 15A','Ours 3L 5A','Ours 3L 9A', 'Ours 3L 15A','PN++ 1L 5A','PN++ 1L 9A','PN++ 3L 5A', 'PN++ 3L 9A', 'DGCNN 1L K40','DGCNN 1L K100','DGCNN 3L K40']", "_____no_output_____" ], [ "performance = []\ntimes = []\ntime_errors = []\nmemory = []\nmemory_errors = []\nfor experiment_name in experiment_names:\n\n predpoints_preds = np.load(timings/f'{experiment_name}_predpoints_preds.npy')\n predpoints_labels = np.load(timings/f'{experiment_name}_predpoints_labels.npy')\n\n rocauc = roc_auc_score(predpoints_labels,predpoints_preds)\n conv_times = np.load(timings/f'{experiment_name}_convtime.npy')\n memoryusage = np.load(timings/f'{experiment_name}_memoryusage.npy')\n memoryusage = memoryusage\n conv_times = conv_times\n\n performance.append(rocauc)\n times.append(conv_times.mean())\n time_errors.append(conv_times.std())\n memory.append(memoryusage.mean())\n memory_errors.append(memoryusage.std())\n", "_____no_output_____" ], [ "performance += [0.849]\ntimes += [0.16402676922934395]\ntime_errors += [0.04377787154914341]\nmemory += [1491945956.9371428]\nmemory_errors += [125881554.73354617]\nexperiment_names_short += ['MaSIF 3L 9A']\ncolors += [40]", "_____no_output_____" ], [ "experiment_names_short = [f'{i+1}) {experiment_names_short[i]}' for i in range(len(experiment_names_short))]", "_____no_output_____" ], [ "times = np.array(times)*1e3\ntime_errors = np.array(time_errors)*1e3\nmemory = np.array(memory)*1e-6\nmemory_errors = np.array(memory_errors)*1e-6", "_____no_output_____" ], [ "colors = [f'hsl(240,100,{25+i*10.83})' for i in range(6)]+[f'hsl(116,100,{25+i*16.25})' for i in range(4)] + [f'hsl(300,100,{25+i*21.66})' for i in range(3)] + [f'hsl(0,100,50)']", "_____no_output_____" ], [ "fig = go.Figure()\nfor i in range(len(times)):\n fig.add_trace(go.Scatter(\n x=[times[i]],\n y=[performance[i]],\n mode='markers',\n name=experiment_names_short[i],\n marker = dict(color=colors[i]),\n error_x=dict(\n type='data',\n symmetric=True,\n array=[time_errors[i]])))\n\n\nfig.update_layout(\n xaxis_title='Forward pass time per protein [ms] (log)',\n yaxis_title='Site identification ROC-AUC',\n legend_title=\"Models\",\n)\nfig.update_xaxes(type=\"log\")\nfig.update_layout(\n xaxis = dict(\n tickvals = [1e1,2e1,4e1,6e1,8e1,1e2,2e2,4e2,6e2],\n #tickvals = [10, 20, 50, 100, 200, 500],\n )\n)\n\nfig.show()\nfig.write_image('figures/time_vs_perf.pdf')", "_____no_output_____" ], [ "fig = go.Figure()\nfor i in range(len(times)):\n fig.add_trace(go.Scatter(\n x=[memory[i]],\n y=[performance[i]],\n mode='markers',\n marker = dict(color=colors[i]),\n name=experiment_names_short[i],\n 
error_x=dict(\n type='data',\n symmetric=True,\n array=[memory_errors[i]])))\n\n\nfig.update_layout(\n xaxis_title='Memory usage per protein [MB] (log)',\n yaxis_title='Site identification ROC-AUC',\n legend_title=\"Models\",\n)\nfig.update_xaxes(type=\"log\")\nfig.update_layout(\n xaxis = dict(\n tickvals = [100,200,400,600,800,1000,2000,4000],\n )\n)\nfig.show()\nfig.write_image('figures/mem_vs_perf.pdf')", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb12cd6b7ffeed7ae694ab5b2985cd7c576580b0
4,409
ipynb
Jupyter Notebook
Lesson02/Activity03/.ipynb_checkpoints/Student Activity-checkpoint.ipynb
RileyAfencl/DSC-540
dffdabf11116aef2c25ac0ae1570594833734eca
[ "MIT" ]
77
2018-09-26T05:50:24.000Z
2022-03-21T08:25:30.000Z
Lesson02/Activity03/.ipynb_checkpoints/Student Activity-checkpoint.ipynb
RileyAfencl/DSC-540
dffdabf11116aef2c25ac0ae1570594833734eca
[ "MIT" ]
2
2020-03-22T21:15:43.000Z
2021-05-07T06:52:36.000Z
Lesson02/Activity03/.ipynb_checkpoints/Student Activity-checkpoint.ipynb
RileyAfencl/DSC-540
dffdabf11116aef2c25ac0ae1570594833734eca
[ "MIT" ]
182
2018-10-16T05:55:06.000Z
2022-03-26T21:47:26.000Z
31.719424
525
0.618961
[ [ [ "## Student Activity on Advanced Data Structure", "_____no_output_____" ], [ "In this activity we will have to do the following tasks\n\n- Look up the definition of permutations, and dropwhile from [itertools documentation](https://docs.python.org/3/library/itertools.html) in Python\n- Using permutations generate all possible three digit numbers that can be generated using 0, 1, and 2\n- Loop over this iterator and print them and also use `type` and `assert` to make sure that the return types are tuples\n- Use a single line code involving `dropwhile` and an lambda expression to convert all the tuples to lists while dropping any leading zeros (example - `(0, 1, 2)` becomes `[1, 2]`)\n- Write a function which takes a list like above and returns the actual number contained in it. Example - if you pass `[1, 2]` to the function it will return you `12`. Make sure it is indeed a number and not just a concatenated string. (Hint - You will need to treat the incoming list as a stack in the function to achieve this)", "_____no_output_____" ], [ "### Task 1\n\nLook up the definition of `permutations` and `dropwhile` from itertools. \n\nThere is a way to look up the definition of a function inside Jupyter itself. just type the function name followed by a `?` and press `Shift+Enter`. We encourage you to also try this way", "_____no_output_____" ] ], [ [ "### Write your code bellow this comment.", "_____no_output_____" ] ], [ [ "### Task 2\n\nWrite an expression to generate all the possible three digit numbers using 0, 1, and 2", "_____no_output_____" ] ], [ [ "### Write your code bellow this comment", "_____no_output_____" ] ], [ [ "### Task 3\nLoop over the iterator expression you generated before. Use print to print each element returned by the iterator. Use `assert` and `type` to make sure that the elements are of type tuple", "_____no_output_____" ] ], [ [ "### Write your code bellow this comment", "_____no_output_____" ] ], [ [ "### Task 4\n\nWrite the loop again. But this time use `dropwhile` with a lambda expression to drop any leading zeros from the tuples. As an example `(0, 1, 2)` will become `[0, 2]`. Also cast the output of the dropwhile to a list.\n\n_Extra task can be to check the actual type that dropwhile returns without the casting asked above_", "_____no_output_____" ] ], [ [ "### Write your code bellow this comment", "_____no_output_____" ] ], [ [ "### Task 5\n\nWrite all the logic you had written above, but this time write a separate function where you will be passing the list generated from dropwhile and the function will return the whole number contained in the list. As an example if you pass `[1, 2]` to the fucntion it will return 12 to you. Make sure that the return type is indeed a number and not a string. Although this task can be achieved using some other tricks, we require that you treat the incoming list as a stack in the function and generate the number there. ", "_____no_output_____" ] ], [ [ "### Write your code bellow this comment", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb12d47678fd71684f81bf2d22b8144db5274172
49,977
ipynb
Jupyter Notebook
notebooks/kubeflow_pipelines/walkthrough/labs/kfp_walkthrough_vertex.ipynb
p-s-vishnu/asl-ml-immersion
6964da87b946214068f32c8a776e80ab5eef620c
[ "Apache-2.0" ]
null
null
null
notebooks/kubeflow_pipelines/walkthrough/labs/kfp_walkthrough_vertex.ipynb
p-s-vishnu/asl-ml-immersion
6964da87b946214068f32c8a776e80ab5eef620c
[ "Apache-2.0" ]
null
null
null
notebooks/kubeflow_pipelines/walkthrough/labs/kfp_walkthrough_vertex.ipynb
p-s-vishnu/asl-ml-immersion
6964da87b946214068f32c8a776e80ab5eef620c
[ "Apache-2.0" ]
null
null
null
34.562241
1,494
0.517998
[ [ [ "# Using custom containers with Vertex AI Training\n\n**Learning Objectives:**\n1. Learn how to create a train and a validation split with BigQuery\n1. Learn how to wrap a machine learning model into a Docker container and train in on Vertex AI\n1. Learn how to use the hyperparameter tuning engine on Vertex AI to find the best hyperparameters\n1. Learn how to deploy a trained machine learning model on Vertex AI as a REST API and query it\n\nIn this lab, you develop, package as a docker image, and run on **Vertex AI Training** a training application that trains a multi-class classification model that **predicts the type of forest cover from cartographic data**. The [dataset](../../../datasets/covertype/README.md) used in the lab is based on **Covertype Data Set** from UCI Machine Learning Repository.\n\nThe training code uses `scikit-learn` for data pre-processing and modeling. The code has been instrumented using the `hypertune` package so it can be used with **Vertex AI** hyperparameter tuning.\n", "_____no_output_____" ] ], [ [ "import os\nimport time\n\nimport pandas as pd\nfrom google.cloud import aiplatform, bigquery\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.linear_model import SGDClassifier\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.preprocessing import OneHotEncoder, StandardScaler", "_____no_output_____" ] ], [ [ "## Configure environment settings", "_____no_output_____" ], [ "Set location paths, connections strings, and other environment settings. Make sure to update `REGION`, and `ARTIFACT_STORE` with the settings reflecting your lab environment. \n\n- `REGION` - the compute region for Vertex AI Training and Prediction\n- `ARTIFACT_STORE` - A GCS bucket in the created in the same region.", "_____no_output_____" ] ], [ [ "REGION = \"us-central1\"\n\nPROJECT_ID = !(gcloud config get-value core/project)\nPROJECT_ID = PROJECT_ID[0]\n\nARTIFACT_STORE = f\"gs://{PROJECT_ID}-kfp-artifact-store\"\n\nDATA_ROOT = f\"{ARTIFACT_STORE}/data\"\nJOB_DIR_ROOT = f\"{ARTIFACT_STORE}/jobs\"\nTRAINING_FILE_PATH = f\"{DATA_ROOT}/training/dataset.csv\"\nVALIDATION_FILE_PATH = f\"{DATA_ROOT}/validation/dataset.csv\"\nAPI_ENDPOINT = f\"{REGION}-aiplatform.googleapis.com\"", "_____no_output_____" ], [ "os.environ[\"JOB_DIR_ROOT\"] = JOB_DIR_ROOT\nos.environ[\"TRAINING_FILE_PATH\"] = TRAINING_FILE_PATH\nos.environ[\"VALIDATION_FILE_PATH\"] = VALIDATION_FILE_PATH\nos.environ[\"PROJECT_ID\"] = PROJECT_ID\nos.environ[\"REGION\"] = REGION", "_____no_output_____" ] ], [ [ "We now create the `ARTIFACT_STORE` bucket if it's not there. 
Note that this bucket should be created in the region specified in the variable `REGION` (if you have already a bucket with this name in a different region than `REGION`, you may want to change the `ARTIFACT_STORE` name so that you can recreate a bucket in `REGION` with the command in the cell below).", "_____no_output_____" ] ], [ [ "!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}", "gs://qwiklabs-gcp-04-853e5675f5e8-kfp-artifact-store/\n" ] ], [ [ "## Importing the dataset into BigQuery", "_____no_output_____" ] ], [ [ "%%bash\n\nDATASET_LOCATION=US\nDATASET_ID=covertype_dataset\nTABLE_ID=covertype\nDATA_SOURCE=gs://workshop-datasets/covertype/small/dataset.csv\nSCHEMA=Elevation:INTEGER,\\\nAspect:INTEGER,\\\nSlope:INTEGER,\\\nHorizontal_Distance_To_Hydrology:INTEGER,\\\nVertical_Distance_To_Hydrology:INTEGER,\\\nHorizontal_Distance_To_Roadways:INTEGER,\\\nHillshade_9am:INTEGER,\\\nHillshade_Noon:INTEGER,\\\nHillshade_3pm:INTEGER,\\\nHorizontal_Distance_To_Fire_Points:INTEGER,\\\nWilderness_Area:STRING,\\\nSoil_Type:STRING,\\\nCover_Type:INTEGER\n\nbq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID\n\nbq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \\\n--source_format=CSV \\\n--skip_leading_rows=1 \\\n--replace \\\n$TABLE_ID \\\n$DATA_SOURCE \\\n$SCHEMA", "BigQuery error in mk operation: Dataset 'qwiklabs-\ngcp-04-853e5675f5e8:covertype_dataset' already exists.\n" ] ], [ [ "## Explore the Covertype dataset ", "_____no_output_____" ] ], [ [ "%%bigquery\nSELECT *\nFROM `covertype_dataset.covertype`", "Query complete after 0.00s: 100%|██████████| 2/2 [00:00<00:00, 1127.05query/s] \nDownloading: 100%|██████████| 100000/100000 [00:02<00:00, 43373.34rows/s]\n" ] ], [ [ "## Create training and validation splits\n\nUse BigQuery to sample training and validation splits and save them to GCS storage\n### Create a training split", "_____no_output_____" ] ], [ [ "!bq query \\\n-n 0 \\\n--destination_table covertype_dataset.training \\\n--replace \\\n--use_legacy_sql=false \\\n'SELECT * \\\nFROM `covertype_dataset.covertype` AS cover \\\nWHERE \\\nMOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)' ", "Waiting on bqjob_r7d554294c3e4a30c_0000017f73333420_1 ... (1s) Current status: DONE \n" ], [ "!bq extract \\\n--destination_format CSV \\\ncovertype_dataset.training \\\n$TRAINING_FILE_PATH", "Waiting on bqjob_r4574ab68f2451d92_0000017f73335f9b_1 ... (0s) Current status: DONE \n" ] ], [ [ "### Create a validation split", "_____no_output_____" ], [ "## Exercise", "_____no_output_____" ] ], [ [ "# TODO: You code to create the BQ table validation split\n!bq query \\\n-n 0 \\\n--destination_table covertype_dataset.validation \\\n--replace \\\n--use_legacy_sql=false \\\n'SELECT * \\\nFROM `covertype_dataset.covertype` AS cover \\\nWHERE \\\nMOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (8)' ", "Waiting on bqjob_r419bcc6e8f99b193_0000017f7334427f_1 ... (1s) Current status: DONE \n" ], [ "# TODO: Your code to export the validation table to GCS\n!bq extract \\\n--destination_format CSV \\\ncovertype_dataset.validation \\\n$VALIDATION_FILE_PATH", "Waiting on bqjob_r4441384921644dce_0000017f73345183_1 ... 
(0s) Current status: DONE \n" ], [ "df_train = pd.read_csv(TRAINING_FILE_PATH)\ndf_validation = pd.read_csv(VALIDATION_FILE_PATH)\nprint(df_train.shape)\nprint(df_validation.shape)", "(40009, 13)\n(9836, 13)\n" ] ], [ [ "## Develop a training application", "_____no_output_____" ], [ "### Configure the `sklearn` training pipeline.\n\nThe training pipeline preprocesses data by standardizing all numeric features using `sklearn.preprocessing.StandardScaler` and encoding all categorical features using `sklearn.preprocessing.OneHotEncoder`. It uses stochastic gradient descent linear classifier (`SGDClassifier`) for modeling.", "_____no_output_____" ] ], [ [ "numeric_feature_indexes = slice(0, 10)\ncategorical_feature_indexes = slice(10, 12)\n\npreprocessor = ColumnTransformer(\n transformers=[\n (\"num\", StandardScaler(), numeric_feature_indexes),\n (\"cat\", OneHotEncoder(), categorical_feature_indexes),\n ]\n)\n\npipeline = Pipeline(\n [\n (\"preprocessor\", preprocessor),\n (\"classifier\", SGDClassifier(loss=\"log\", tol=1e-3)),\n ]\n)", "_____no_output_____" ] ], [ [ "### Convert all numeric features to `float64`\n\nTo avoid warning messages from `StandardScaler` all numeric features are converted to `float64`.", "_____no_output_____" ] ], [ [ "num_features_type_map = {\n feature: \"float64\" for feature in df_train.columns[numeric_feature_indexes]\n}\n\ndf_train = df_train.astype(num_features_type_map)\ndf_validation = df_validation.astype(num_features_type_map)", "_____no_output_____" ] ], [ [ "### Run the pipeline locally.", "_____no_output_____" ] ], [ [ "X_train = df_train.drop(\"Cover_Type\", axis=1)\ny_train = df_train[\"Cover_Type\"] \nX_validation = df_validation.drop(\"Cover_Type\", axis=1)\ny_validation = df_validation[\"Cover_Type\"]\n\npipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)\npipeline.fit(X_train, y_train)", "_____no_output_____" ] ], [ [ "### Calculate the trained model's accuracy.", "_____no_output_____" ] ], [ [ "accuracy = pipeline.score(X_validation, y_validation)\nprint(accuracy)", "0.6968279788531924\n" ] ], [ [ "### Prepare the hyperparameter tuning application.\nSince the training run on this dataset is computationally expensive you can benefit from running a distributed hyperparameter tuning job on Vertex AI Training.", "_____no_output_____" ] ], [ [ "TRAINING_APP_FOLDER = \"training_app\"\nos.makedirs(TRAINING_APP_FOLDER, exist_ok=True)", "_____no_output_____" ] ], [ [ "### Write the tuning script. 
\n\nNotice the use of the `hypertune` package to report the `accuracy` optimization metric to Vertex AI hyperparameter tuning service.", "_____no_output_____" ] ], [ [ "%%writefile {TRAINING_APP_FOLDER}/train.py\nimport os\nimport subprocess\nimport sys\n\nimport fire\nimport hypertune\nimport numpy as np\nimport pandas as pd\nimport pickle\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.linear_model import SGDClassifier\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.preprocessing import StandardScaler, OneHotEncoder\n\n\ndef train_evaluate(job_dir, training_dataset_path, validation_dataset_path, alpha, max_iter, hptune):\n \n df_train = pd.read_csv(training_dataset_path)\n df_validation = pd.read_csv(validation_dataset_path)\n\n if not hptune:\n df_train = pd.concat([df_train, df_validation])\n\n numeric_feature_indexes = slice(0, 10)\n categorical_feature_indexes = slice(10, 12)\n\n preprocessor = ColumnTransformer(\n transformers=[\n ('num', StandardScaler(), numeric_feature_indexes),\n ('cat', OneHotEncoder(), categorical_feature_indexes) \n ])\n\n pipeline = Pipeline([\n ('preprocessor', preprocessor),\n ('classifier', SGDClassifier(loss='log',tol=1e-3))\n ])\n\n num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}\n df_train = df_train.astype(num_features_type_map)\n df_validation = df_validation.astype(num_features_type_map) \n\n print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))\n X_train = df_train.drop('Cover_Type', axis=1)\n y_train = df_train['Cover_Type']\n\n pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)\n pipeline.fit(X_train, y_train)\n\n if hptune:\n X_validation = df_validation.drop('Cover_Type', axis=1)\n y_validation = df_validation['Cover_Type']\n accuracy = pipeline.score(X_validation, y_validation)\n print('Model accuracy: {}'.format(accuracy))\n # Log it with hypertune\n hpt = hypertune.HyperTune()\n hpt.report_hyperparameter_tuning_metric(\n hyperparameter_metric_tag='accuracy',\n metric_value=accuracy\n )\n\n # Save the model\n if not hptune:\n model_filename = 'model.pkl'\n with open(model_filename, 'wb') as model_file:\n pickle.dump(pipeline, model_file)\n gcs_model_path = \"{}/{}\".format(job_dir, model_filename)\n subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path], stderr=sys.stdout)\n print(\"Saved model in: {}\".format(gcs_model_path)) \n \nif __name__ == \"__main__\":\n fire.Fire(train_evaluate)", "Writing training_app/train.py\n" ] ], [ [ "### Package the script into a docker image.\n\nNotice that we are installing specific versions of `scikit-learn` and `pandas` in the training image. This is done to make sure that the training runtime in the training container is aligned with the serving runtime in the serving container. \n\nMake sure to update the URI for the base image so that it points to your project's **Container Registry**.", "_____no_output_____" ], [ "### Exercise\n\nComplete the Dockerfile below so that it copies the 'train.py' file into the container\nat `/app` and runs it when the container is started. ", "_____no_output_____" ] ], [ [ "%%writefile {TRAINING_APP_FOLDER}/Dockerfile\n\nFROM gcr.io/deeplearning-platform-release/base-cpu\nRUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2\n\n# TODO\nWORKDIR /app\nCOPY train.py .\n\nENTRYPOINT [\"python\", \"train.py\"]", "Writing training_app/Dockerfile\n" ] ], [ [ "### Build the docker image. 
\n\nYou use **Cloud Build** to build the image and push it your project's **Container Registry**. As you use the remote cloud service to build the image, you don't need a local installation of Docker.", "_____no_output_____" ] ], [ [ "IMAGE_NAME = \"trainer_image\"\nIMAGE_TAG = \"latest\"\nIMAGE_URI = f\"gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{IMAGE_TAG}\"\n\nos.environ[\"IMAGE_URI\"] = IMAGE_URI", "_____no_output_____" ], [ "!gcloud builds submit --async --tag $IMAGE_URI $TRAINING_APP_FOLDER", "Creating temporary tarball archive of 2 file(s) totalling 2.6 KiB before compression.\nUploading tarball of [training_app] to [gs://qwiklabs-gcp-04-853e5675f5e8_cloudbuild/source/1646905496.70723-a012a772a5584bb896fac8fd0e2bad1e.tgz]\nCreated [https://cloudbuild.googleapis.com/v1/projects/qwiklabs-gcp-04-853e5675f5e8/locations/global/builds/9e9e5120-7f9f-43f1-8adf-7283b92794fb].\nLogs are available at [https://console.cloud.google.com/cloud-build/builds/9e9e5120-7f9f-43f1-8adf-7283b92794fb?project=1076138843678].\nID CREATE_TIME DURATION SOURCE IMAGES STATUS\n9e9e5120-7f9f-43f1-8adf-7283b92794fb 2022-03-10T09:45:09+00:00 - gs://qwiklabs-gcp-04-853e5675f5e8_cloudbuild/source/1646905496.70723-a012a772a5584bb896fac8fd0e2bad1e.tgz - QUEUED\n" ] ], [ [ "## Submit an Vertex AI hyperparameter tuning job", "_____no_output_____" ], [ "### Create the hyperparameter configuration file. \nRecall that the training code uses `SGDClassifier`. The training application has been designed to accept two hyperparameters that control `SGDClassifier`:\n- Max iterations\n- Alpha\n\nThe file below configures Vertex AI hypertuning to run up to 5 trials in parallel and to choose from two discrete values of `max_iter` and the linear range between `1.0e-4` and `1.0e-1` for `alpha`.", "_____no_output_____" ] ], [ [ "TIMESTAMP = time.strftime(\"%Y%m%d_%H%M%S\")\nJOB_NAME = f\"forestcover_tuning_{TIMESTAMP}\"\nJOB_DIR = f\"{JOB_DIR_ROOT}/{JOB_NAME}\"\n\nos.environ[\"JOB_NAME\"] = JOB_NAME\nos.environ[\"JOB_DIR\"] = JOB_DIR", "_____no_output_____" ] ], [ [ "### Exercise\n\nComplete the `config.yaml` file generated below so that the hyperparameter\ntunning engine try for parameter values\n* `max_iter` the two values 10 and 20\n* `alpha` a linear range of values between 1.0e-4 and 1.0e-1\n\nAlso complete the `gcloud` command to start the hyperparameter tuning job with a max trial count and\na max number of parallel trials both of 5 each. 
", "_____no_output_____" ] ], [ [ "%%bash\n\nMACHINE_TYPE=\"n1-standard-4\"\nREPLICA_COUNT=1\nCONFIG_YAML=config.yaml\n\ncat <<EOF > $CONFIG_YAML\nstudySpec:\n metrics:\n - metricId: accuracy\n goal: MAXIMIZE\n parameters:\n - parameterId: max_iter\n discreteValueSpec:\n values:\n - 10\n - 20\n # TODO\n - parameterId: alpha\n doubleValueSpec:\n minValue: 1.0e-4\n maxValue: 1.0e-1\n scaleType: UNIT_LINEAR_SCALE\n algorithm: ALGORITHM_UNSPECIFIED # results in Bayesian optimization\ntrialJobSpec:\n workerPoolSpecs: \n - machineSpec:\n machineType: $MACHINE_TYPE\n replicaCount: $REPLICA_COUNT\n containerSpec:\n imageUri: $IMAGE_URI\n args:\n - --job_dir=$JOB_DIR\n - --training_dataset_path=$TRAINING_FILE_PATH\n - --validation_dataset_path=$VALIDATION_FILE_PATH\n - --hptune\nEOF\n\n# TODO\ngcloud ai hp-tuning-jobs create \\\n --region=$REGION \\\n --display-name=$JOB_NAME \\\n --config=$CONFIG_YAML \\\n --max-trial-count=5 \\\n --parallel-trial-count=5\n\necho \"JOB_NAME: $JOB_NAME\"", "JOB_NAME: forestcover_tuning_20220310_094541\n" ] ], [ [ "Go to the Vertex AI Training dashboard and view the progression of the HP tuning job under \"Hyperparameter Tuning Jobs\".", "_____no_output_____" ], [ "### Retrieve HP-tuning results.", "_____no_output_____" ], [ "After the job completes you can review the results using GCP Console or programmatically using the following functions (note that this code supposes that the metrics that the hyperparameter tuning engine optimizes is maximized): ", "_____no_output_____" ], [ "## Exercise\n\nComplete the body of the function below to retrieve the best trial from the `JOBNAME`:", "_____no_output_____" ] ], [ [ "# TODO\ndef get_trials(job_name):\n jobs = aiplatform.HyperparameterTuningJob.list()\n match = [job for job in jobs if job.display_name == JOB_NAME]\n tuning_job = match[0] if match else None\n return tuning_job.trials if tuning_job else None\n\n\ndef get_best_trial(trials):\n metrics = [trial.final_measurement.metrics[0].value for trial in trials]\n best_trial = trials[metrics.index(max(metrics))]\n return best_trial\n\n\ndef retrieve_best_trial_from_job_name(jobname):\n trials = get_trials(jobname)\n best_trial = get_best_trial(trials)\n return best_trial", "_____no_output_____" ] ], [ [ "You'll need to wait for the hyperparameter job to complete before being able to retrieve the best job by running the cell below.", "_____no_output_____" ] ], [ [ "best_trial = retrieve_best_trial_from_job_name(JOB_NAME)", "_____no_output_____" ] ], [ [ "## Retrain the model with the best hyperparameters\n\nYou can now retrain the model using the best hyperparameters and using combined training and validation splits as a training dataset.", "_____no_output_____" ], [ "### Configure and run the training job", "_____no_output_____" ] ], [ [ "alpha = best_trial.parameters[0].value\nmax_iter = best_trial.parameters[1].value", "_____no_output_____" ], [ "TIMESTAMP = time.strftime(\"%Y%m%d_%H%M%S\")\nJOB_NAME = f\"JOB_VERTEX_{TIMESTAMP}\"\nJOB_DIR = f\"{JOB_DIR_ROOT}/{JOB_NAME}\"\n\nMACHINE_TYPE=\"n1-standard-4\"\nREPLICA_COUNT=1\n\nWORKER_POOL_SPEC = f\"\"\"\\\nmachine-type={MACHINE_TYPE},\\\nreplica-count={REPLICA_COUNT},\\\ncontainer-image-uri={IMAGE_URI}\\\n\"\"\"\n\nARGS = f\"\"\"\\\n--job_dir={JOB_DIR},\\\n--training_dataset_path={TRAINING_FILE_PATH},\\\n--validation_dataset_path={VALIDATION_FILE_PATH},\\\n--alpha={alpha},\\\n--max_iter={max_iter},\\\n--nohptune\\\n\"\"\"\n\n!gcloud ai custom-jobs create \\\n --region={REGION} \\\n --display-name={JOB_NAME} \\\n 
--worker-pool-spec={WORKER_POOL_SPEC} \\\n --args={ARGS}\n\n\nprint(\"The model will be exported at:\", JOB_DIR)", "_____no_output_____" ] ], [ [ "### Examine the training output\n\nThe training script saved the trained model as the 'model.pkl' in the `JOB_DIR` folder on GCS.\n\n**Note:** We need to wait for job triggered by the cell above to complete before running the cells below.", "_____no_output_____" ] ], [ [ "!gsutil ls $JOB_DIR", "_____no_output_____" ] ], [ [ "## Deploy the model to Vertex AI Prediction", "_____no_output_____" ] ], [ [ "MODEL_NAME = \"forest_cover_classifier_2\"\nSERVING_CONTAINER_IMAGE_URI = (\n \"us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-20:latest\"\n)\nSERVING_MACHINE_TYPE = \"n1-standard-2\"", "_____no_output_____" ] ], [ [ "### Uploading the trained model", "_____no_output_____" ], [ "## Exercise\n\nUpload the trained model using `aiplatform.Model.upload`:", "_____no_output_____" ] ], [ [ "JOB_DIR", "_____no_output_____" ], [ "# TODO\nuploaded_model = aiplatform.Model.upload(\n display_name=MODEL_NAME,\n artifact_uri=JOB_DIR,\n serving_container_image_uri=SERVING_CONTAINER_IMAGE_URI,\n)", "_____no_output_____" ] ], [ [ "### Deploying the uploaded model", "_____no_output_____" ], [ "## Exercise\n\nDeploy the model using `uploaded_model`:", "_____no_output_____" ] ], [ [ "# TODO\nendpoint = uploaded_model.deploy(\n machine_type=SERVING_MACHINE_TYPE,\n accelerator_type=None,\n accelerator_count=None,\n)", "_____no_output_____" ] ], [ [ "### Serve predictions\n#### Prepare the input file with JSON formated instances.", "_____no_output_____" ], [ "## Exercise\n\nQuery the deployed model using `endpoint`:", "_____no_output_____" ] ], [ [ "instance = [\n 2841.0,\n 45.0,\n 0.0,\n 644.0,\n 282.0,\n 1376.0,\n 218.0,\n 237.0,\n 156.0,\n 1003.0,\n \"Commanche\",\n \"C4758\",\n]\n\n# TODO\nendpoint.predict([instance])", "_____no_output_____" ] ], [ [ "Copyright 2021 Google LLC\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n https://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
cb12e318d406585de1fa0f14f42a0618d064fc16
11,615
ipynb
Jupyter Notebook
Py5.ipynb
Bailey0606/python-m
5fd77ff64e82ab94aa54bb51d4cdb9f0ed1bcd38
[ "CC0-1.0" ]
193
2020-05-21T15:23:59.000Z
2021-08-22T04:31:11.000Z
Py5.ipynb
jafet2399/python
435ff02c5ffdf6ddc4b292dc86adde67a7f7b3e0
[ "CC0-1.0" ]
1
2020-10-20T14:36:09.000Z
2020-10-20T14:36:09.000Z
Py5.ipynb
jafet2399/python
435ff02c5ffdf6ddc4b292dc86adde67a7f7b3e0
[ "CC0-1.0" ]
118
2020-05-18T20:52:13.000Z
2021-06-18T17:22:05.000Z
34.87988
732
0.607146
[ [ [ "---\n\n**Universidad de Costa Rica** | Escuela de Ingeniería Eléctrica\n\n*IE0405 - Modelos Probabilísticos de Señales y Sistemas*\n\n### `PyX` - Serie de tutoriales de Python para el análisis de datos\n\n\n# `Py5` - *Curvas de ajuste de datos*\n\n> Los modelos para describir un fenómeno y sus parámetros pueden obtenerse a partir de una muestra de datos. Debido a la gran cantidad de modelos probabilísticos disponibles, a menudo es necesario hacer una comparación de ajuste entre muchas de ellas.\n\n*Fabián Abarca Calderón* \\\n*Jonathan Rojas Sibaja*\n\n---", "_____no_output_____" ], [ "## Ajuste de modelos\n\nEl ajuste de modelos es ampliamente utilizado para obtener un modelo matemático que caracterize el comportamiento de cierto sistema basandose en los datos experimentales obtenidos. Este modelo deberá predecir también otras medidas experimentales que se obtengan de su recreación.\n\n### Estimación de máxima verosimilitud (MLE)\n\n(Esto es de menor prioridad) La estimación de máxima verosimilitud (**MLE**, *maximum likelihood estimation*) es...\n", "_____no_output_____" ], [ "---\n## 5.1 - Con el módulo `numpy`\n\nPara iniciar, con la función `polyfit()` de la librería `numpy` se puede realizar el ajuste de datos experimentals a polinomios de cualquier orden. Esta función devuelve los parámetros de la recta para un modelo lineal de la forma:\n$$\nf(x) = mx + b\n$$\nEsto en el caso de un polinomio de grado 1. Un ejemplo utilizando este método es el siguiente:", "_____no_output_____" ] ], [ [ "from numpy import *\nimport matplotlib.pyplot as plt\n# Datos experimentales\nx = array([ 0., 1., 2., 3., 4.])\ny = array([ 10.2 , 12.1, 15.5 , 18.3, 20.6 ])\n\n# Ajuste a una recta (polinomio de grado 1)\np = polyfit(x, y, 1)\n# Una vez conocidos los parámetros de la recta de ajuste,\n#se pueden utilizar para graficar la recta de ajuste.\ny_ajuste = p[0]*x + p[1]\n\n# Dibujamos los datos experimentales\np_datos, = plt.plot(x, y, 'b.')\n# Dibujamos la recta de ajuste\np_ajuste, = plt.plot(x, y_ajuste, 'r-')\n\nplt.title('Ajuste lineal por minimos cuadrados')\n\nplt.xlabel('Eje x')\nplt.ylabel('Eje y')\n\nplt.legend(('Datos experimentales', 'Ajuste lineal'), loc=\"upper left\")", "_____no_output_____" ] ], [ [ "En el caso de otro tipo de regresiones, se debe aumentar el grado del polinomio. Por ejemplo, el caso de una regresió polinomial se muestra a continuación:", "_____no_output_____" ] ], [ [ "import numpy\nimport matplotlib.pyplot as plt\n#Lo primero es crear los vectores que definen los puntos de datos\nx = [1,2,3,5,6,7,8,9,10,12,13,14,15,16,18,19,21,22]\ny = [100,90,80,60,60,55,60,65,70,70,75,76,78,79,90,99,99,100]\n#Este método nos permite crear un modelo polinomial\nmimodelo = numpy.poly1d(numpy.polyfit(x, y, 3))\n#Esto determina cómo se mostrara la línea, la cual inicia en 1 \n#y termina en 22\nmilinea = numpy.linspace(1,22,100)\n#Y por último graficamos los datos y la curva de\n#la regresion polinomial\nplt.scatter(x,y)\nplt.plot(milinea, mimodelo(milinea))\nplt.show()", "_____no_output_____" ] ], [ [ "Una vez trazada la recta de mejor ajuste, se puede obtener el valor de un punto dado, evaluando la curva en dicho punto. por ejemplo si quisieramos obtener el valor dado para un valor de 17 en el eje x, entonces sería:", "_____no_output_____" ] ], [ [ "valor = mimodelo(17)\nprint(valor)", "_____no_output_____" ] ], [ [ "---\n## 5.2 - Con el módulo `stats`\n\nEn este caso existen diversos comandos que pueden ser utilizados para crear diferentes distribuciones basadas en datos dados. 
por ejemplo, partiendo de los datos de un histograma de una PDF, se puede crear el la curva de dicha distribución normal utiliando el comando `scipy.stats.rv_histogram`, además también se puede graficar el CDF de dichos datos: ", "_____no_output_____" ] ], [ [ " import scipy.stats\n import numpy as np\n import matplotlib.pyplot as plt\n\n data = scipy.stats.norm.rvs(size=100000, loc=0, scale=1.5, random_state=123)\n hist = np.histogram(data, bins=100)\n hist_dist = scipy.stats.rv_histogram(hist)\n X = np.linspace(-5.0, 5.0, 100)\n plt.title(\"Datos aleatorios\")\n plt.hist(data, density=True, bins=100)\n plt.show()", "_____no_output_____" ], [ "X = np.linspace(-5.0, 5.0, 100)\nplt.title(\"PDF de los datos\")\nplt.plot(X, hist_dist.pdf(X), label='PDF')\nplt.show()", "_____no_output_____" ], [ "X = np.linspace(-5.0, 5.0, 100)\nplt.title(\"CDF de los datos\")\nplt.plot(X, hist_dist.cdf(X), label='CDF')\nplt.show()", "_____no_output_____" ] ], [ [ "Otro paquete que brinda la librería ´Scipy´ es ´optimize´ el cuál tiene algorítmos de curvas de ajuste por medio de la función ´curve_fit´ con la cuál se pueden ajustar curvas de sistemas no lineales utilizando mínimos cuadrados. A continuación un ejemplo de su implementación para encontrar la recta de mejor ajuste ante una serie de datos experimentales obtenidos:", "_____no_output_____" ] ], [ [ "import numpy\nfrom scipy.optimize import curve_fit\nimport matplotlib.pyplot as plt\n\ndef _polynomial(x, *p):\n \"\"\"Ajuste polinomial de grado arbitrario\"\"\"\n poly = 0.\n for i, n in enumerate(p):\n poly += n * x**i\n return poly\n\n# Se definen los datos experimentales:\nx = numpy.linspace(0., numpy.pi)\ny = numpy.cos(x) + 0.05 * numpy.random.normal(size=len(x))\n\n# p0 es la suposición inicial para los coeficientes de ajuste, este valor\n# establece el orden del polinomio que desea ajustar. Aquí yo\n# ya establecí todas las conjeturas iniciales en 1., es posible que tenga una mejor idea de\n# qué valores esperar en función de sus datos.\np0 = numpy.ones(6,)\n\ncoeff, var_matrix = curve_fit(_polynomial, x, y, p0=p0)\n\nyfit = [_polynomial(xx, *tuple(coeff)) for xx in x]\n\nplt.plot(x, y, label='Test data')\nplt.plot(x, yfit, label='fitted data')\n\nplt.show()", "_____no_output_____" ] ], [ [ "---\n## 5.3 - Con la librería `fitter`\n\nSi es necesario, el paquete de `fitter` provee una simple clases la cual identifica la distribución de la cuál las muestras de datos son generados. Utiliza 80 distribuciones de Scipy y permite graficar los resultados para verificar que dicha distribución es la que mejor se ajusta a los datos. En el siguiente ejemplo se generarán los una muestra de 1000 puntos con una distribución gamma, para luego utilizar `fitter` el cuál revisará las 80 distribuciones de Scipy y desplegará un resumen con las distribuciones que calzan de mejor forma con nuestros datos, basandose en la suma del cuadrado de los errores. 
Los resultados del resumen se puede verificar de manera visual en las gráficas que dicho resumen traza por sí mismo:", "_____no_output_____" ] ], [ [ "from scipy import stats\nfrom fitter import Fitter\n\n# Crear los datos\ndata = stats.gamma.rvs(2, loc=1.5, scale=2, size=1000)\n\n# Definir cuáles distribuciones queremos que evalúe\nf = Fitter(data, distributions=['gamma', 'rayleigh', 'uniform'])\nf.fit()\nf.summary()", "_____no_output_____" ] ], [ [ "Por último, un ejemplo que que ilustra la combinación de el paquete ´scipy.stats´ y ´fitter´ es mediante el modulo ´histfit´, el cuál permite graficar tanto los datos y también las curvas de mejor ajuste al agregar ruido a la medición y calcular ese ajuste en 10 ocaciones, ´Nfit = 10´. En este caso la serie de datos utilizada corresponde a una distribuación normal (creada con el paquete ´scipy.stats´) y se obtuvieron 10 curvas de mejor ajuste ante diversos casos de ruido (con ´error_rate = 0.01´) y además se obtuvo un estimado de los valores correspondientes a la media, la varianza y la amplitud de la distribución de las curvas de mejor ajuste.", "_____no_output_____" ] ], [ [ "from fitter import HistFit\nfrom pylab import hist\nimport scipy.stats\n#Creamos la curva con distribución normal\ndata = [scipy.stats.norm.rvs(2,3.4) for x in range(10000)]\n#Graficamos los valores asignándoles espaciamiento temporal\nY, X, _ = hist(data, bins=30)\n#Creamos las curvas de mejor ajuste\nhf = HistFit(X=X, Y=Y)\n#Aplicamos un margen de error para simular ruido y calcular 10\n#curvas de mejor ajuste\nhf.fit(error_rate=0.01, Nfit=10)\n#Obtenemos los valores correspondientes a la media, la varianza y \n#la amplitud de las curvas de mejor ajuste\nprint(hf.mu, hf.sigma, hf.amplitude)", "_____no_output_____" ] ], [ [ "---\n### Más información\n\n* [Página web](https://www.google.com/)\n* Libro o algo\n* Tutorial [w3schools](https://www.w3schools.com/python/)", "_____no_output_____" ], [ "---\n**Universidad de Costa Rica** | Facultad de Ingeniería | Escuela de Ingeniería Eléctrica\n\n&copy; 2021\n\n---", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
cb12eda76490680e3880e100d07b14837e47900e
6,629
ipynb
Jupyter Notebook
tasks/task_12_CAD_mesh_fast_flux/3_making_parametric_reactors_into_a_neutronics_model.ipynb
pshriwise/neutronics-workshop
d2b80b2f73c50b94a56b98f0bb180c03ecb0a906
[ "MIT" ]
1
2021-08-23T22:49:31.000Z
2021-08-23T22:49:31.000Z
tasks/task_12_CAD_mesh_fast_flux/3_making_parametric_reactors_into_a_neutronics_model.ipynb
pshriwise/neutronics-workshop
d2b80b2f73c50b94a56b98f0bb180c03ecb0a906
[ "MIT" ]
null
null
null
tasks/task_12_CAD_mesh_fast_flux/3_making_parametric_reactors_into_a_neutronics_model.ipynb
pshriwise/neutronics-workshop
d2b80b2f73c50b94a56b98f0bb180c03ecb0a906
[ "MIT" ]
null
null
null
28.947598
193
0.603409
[ [ [ "# This task is not quite ready as we don't have an open source route for simulating geometry that requires imprinting and merging. However this simulation can be carried out using Trelis.", "_____no_output_____" ], [ "# Heating Mesh Tally on CAD geometry made from Components\n\nThis constructs a reactor geometry from 3 Component objects each made from points.\n\nThe Component made include a breeder blanket, PF coil and a central column shield.\n\n2D and 3D Meshes tally are then simulated to show nuclear heating, flux and tritium_production across the model.", "_____no_output_____" ], [ "This section makes the 3d geometry for the entire reactor from a input parameters.", "_____no_output_____" ] ], [ [ "import paramak\n\nmy_reactor = paramak.BallReactor(\n inner_bore_radial_thickness=50,\n inboard_tf_leg_radial_thickness=55,\n center_column_shield_radial_thickness=50,\n divertor_radial_thickness=50,\n inner_plasma_gap_radial_thickness=50,\n plasma_radial_thickness=100,\n outer_plasma_gap_radial_thickness=50,\n firstwall_radial_thickness=1,\n blanket_radial_thickness=100,\n blanket_rear_wall_radial_thickness=10,\n elongation=2,\n triangularity=0.55,\n number_of_tf_coils=16,\n rotation_angle=180,\n)\n\n# TF and PF coils can be added with additional arguments.\n# see the documentation for more details \n# https://paramak.readthedocs.io/en/main/paramak.parametric_reactors.html\n\nmy_reactor.show()", "_____no_output_____" ] ], [ [ "The next section defines the materials. This can be done using openmc.Materials or in this case strings that look up materials from the neutronics material maker.", "_____no_output_____" ] ], [ [ "my_reactor.export_stp()\n\nfrom IPython.display import FileLink\ndisplay(FileLink('blanket.stp'))\ndisplay(FileLink('pf_coil.stp'))\ndisplay(FileLink('center_column.stp'))\ndisplay(FileLink('Graveyard.stp'))", "_____no_output_____" ] ], [ [ "The next section defines the materials. 
This can be done using openmc.Materials or in this case strings that look up materials from the neutronics material maker.", "_____no_output_____" ] ], [ [ "from neutronics_material_maker import Material\n\nmat1 = Material.from_library(name='Li4SiO4')\n\nmat2 = Material.from_library(name='copper')\n\nmat3 = Material.from_library(name='WC')", "_____no_output_____" ] ], [ [ "This next step makes a simple point source.", "_____no_output_____" ] ], [ [ "import openmc\n\n# initialises a new source object\nsource = openmc.Source()\n\n# sets the location of the source to x=0 y=0 z=0\nsource.space = openmc.stats.Point((100, 0, 0))\n\n# sets the direction to isotropic\nsource.angle = openmc.stats.Isotropic()\n\n# sets the energy distribution to 100% 14MeV neutrons\nsource.energy = openmc.stats.Discrete([14e6], [1])", "_____no_output_____" ] ], [ [ "This next section combines the geometry with the materials and specifies a few mesh tallies", "_____no_output_____" ] ], [ [ "import paramak_neutronics\n\nneutronics_model = paramak_neutronics.NeutronicsModel(\n geometry=my_reactor,\n cell_tallies=['heating', 'flux', 'TBR', 'spectra'],\n mesh_tally_2d=['heating', 'flux', '(n,Xt)'],\n mesh_tally_3d=['heating', 'flux', '(n,Xt)'],\n source=source,\n simulation_batches=2,\n simulation_particles_per_batch=10000,\n materials={\n 'blanket_material': mat1,\n 'pf_coil_material': mat2,\n 'center_column_material': mat3,\n }\n)\n\n\n# You will need to have Trelis installed to run this command\nneutronics_model.simulate()", "_____no_output_____" ] ], [ [ "The next section produces download links for:\n\n- vtk files that contain the 3D mesh results (open with Paraview)\n- png images that show the resuls of the 2D mesh tally", "_____no_output_____" ] ], [ [ "from IPython.display import FileLink\ndisplay(FileLink('heating_on_3D_mesh.vtk'))\ndisplay(FileLink('flux_on_3D_mesh.vtk'))\ndisplay(FileLink('tritium_production_on_3D_mesh.vtk'))\ndisplay(FileLink('flux_on_2D_mesh_xy.png'))\ndisplay(FileLink('flux_on_2D_mesh_xz.png'))\ndisplay(FileLink('flux_on_2D_mesh_yz.png'))\ndisplay(FileLink('heating_on_2D_mesh_xy.png'))\ndisplay(FileLink('heating_on_2D_mesh_xz.png'))\ndisplay(FileLink('heating_on_2D_mesh_yz.png'))\ndisplay(FileLink('tritium_production_on_2D_mesh_yz.png'))\ndisplay(FileLink('tritium_production_on_2D_mesh_xz.png'))\ndisplay(FileLink('tritium_production_on_2D_mesh_yz.png'))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb1305d8e00b5ac7da3b99258723e9797d60aa33
492,128
ipynb
Jupyter Notebook
guide/14-deep-learning/how_deeplabv3_works.ipynb
markjdugger/arcgis-python-api
4d482711402817abadee2a9f73751cbfe075c5fe
[ "Apache-2.0" ]
2
2019-03-22T16:29:24.000Z
2021-03-23T16:37:10.000Z
guide/14-deep-learning/how_deeplabv3_works.ipynb
markjdugger/arcgis-python-api
4d482711402817abadee2a9f73751cbfe075c5fe
[ "Apache-2.0" ]
null
null
null
guide/14-deep-learning/how_deeplabv3_works.ipynb
markjdugger/arcgis-python-api
4d482711402817abadee2a9f73751cbfe075c5fe
[ "Apache-2.0" ]
null
null
null
2,050.533333
321,963
0.959015
[ [ [ "# How DeepLabV3 Works\n\n## Semantic segmentation", "_____no_output_____" ], [ "Semantic segmentation, also known as pixel-based classification, is an important task in which we classify each pixel of an image as belonging to a particular class. Our guides [How u-net works](https://developers.arcgis.com/python/guide/how-unet-works/) and [How PSPNet works](https://developers.arcgis.com/python/guide/how-pspnet-works/) give an intuition on how Semantic Segmentation works.\n\n\n*Note:* To understand contents of this guide, we assume that you have some basic understanding of the convolutional neural networks (CNN) concepts. You can refresh your CNN knowledge by going through this short paper “A guide to convolution arithmetic for deep learning”. ", "_____no_output_____" ], [ "## Challenge with Deep Convolutional Neural Networks (DCNNs)", "_____no_output_____" ], [ "Fully Convolutional Neural Network (FCN) is a DCNN used for Semantic Segmentation. A challenge with using FCN on images for segmentation tasks is that, input feature map becomes smaller while traversing through the network (bunch of convolutional & pooling layers). This causes loss of information about the images and results in output where: ", "_____no_output_____" ], [ " - Predictions are of low resolution\n - Object boundaries are fuzzy\n\n", "_____no_output_____" ], [ "<center><img src=\"../../static/img/dcnn_img.png\" height=\"300\" width=\"800\" /></center>\n<br>\n<center><b>Figure 1.</b> Repeated combination of convolution & pooling layers reduces the spatial resolution of the feature maps as the input traverses through the DCNN.</center>", "_____no_output_____" ], [ "## History of DeepLab", "_____no_output_____" ], [ "**DeepLabv1** : Uses Atrous Convolution and Fully Connected Conditional Random Field (CRF) to control the resolution at which image features are computed.\n\n\n**DeepLabv2** : Uses Atrous Spatial Pyramid Pooling (ASPP) to consider objects at different scales and segment with much improved accuracy.\n\n\n**DeepLabv3** : Apart from using Atrous Convolution, it uses an improved ASPP module by including batch normalization and image-level features. It gets rid of CRF (Conditional Random Field) as used in V1 and V2. \n\nThese improvements help in extracting dense feature maps for long-range contexts. This increases the receptive field exponentially without reducing/losing the spatial dimension and improves performance of segmentation tasks.\n", "_____no_output_____" ], [ "## Atrous Convoltion (Dilated Convolution)", "_____no_output_____" ], [ "Atrous Convolution is introduced in DeepLab as a tool to adjust/control effective field-of-view of the convolution. It uses a parameter called ‘atrous/dilation rate’ that adjusts field-of-view. It is a simple yet powerful technique to make field of view of filters larger, without impacting computation or number of parameters. \n\nAtrous Convolution is similar to the traditional convolution except the filter is upsampled by inserting zeros between two successive filter values along each spatial dimension. r - 1 zeros are inserted where r is atrous/dilation rate. This is equivalent to creating r − 1 holes between two consecutive filter values in each spatial dimension.\n\nIn the diagram below, the filter of size 3 with a dilation rate of 2 is applied to calculate the output. We can visualize filter values are separated by one hole since the dilation rate is 2. 
If the dilation rate r is 1, it will be Standard convolution.", "_____no_output_____" ], [ "<figure>\n <img src=\"../../static/img/dilated.gif\", align=\"left\">\n <img src=\"../../static/img/normal_convolution.gif\">\n<center>\n <figcaption><b>Figure 2</b>. Animation of convolution with <b>dilation=2 (left)</b> and <b>dilation=1(right)</b>. When dilation=1, it is just the standard convolution operation.</figcaption>\n</center>\n</figure>", "_____no_output_____" ], [ "<figure>\n<img src=\"data:image/PNG; base64, iVBORw0KGgoAAAANSUhEUgAABa8AAAH+CAYAAACbYPLRAAAABHNCSVQICAgIfAhkiAAAIABJREFUeJzsvWuspWd5n3/d9/Ou4z7NnsEe4xNgm4akiEM4tOCYBCECSipSJQpVFTWR8qHlQ9WDguoqUEVqmlZtVZX0Q6maPzSRWpVGbRGKVEjTxhzKqWlCCTE4DuADxmPPce/Zx7XW+9z3/8P9rDUmYAePk8zQ3JfZ0szee+31vO8sIc1v/eb6ibs7jaOjI0SE0WiEiJAkSZIkSZIkSZIkSZIkSZIkfxK4O3t7e6ytrVFK+aav6zU4U5IkSZIkSZIkSZIkSZIkSZI8IxleJ0mSJEmSJEmSJEmSJEmSJNcd3dU+8D3/8ue5vHsJFcesIqpg0ImAg+PM5gsu7uzQqVAXCwbDgqogUug6QQTMBRXFqZg5okqnHW6OU5FSEClQoSsF8wpaQRzRAUJBHBBANH4hjpu1kzruxO/dMIvzOj3m8TWrFRzEHSmCFmVQBkChDDqKCC4eP1o71EG9IEVx8fhw4mcQ90FVEDWqO0VK+7ojIriAioA77hKPUUFEsGqYGaqKqoCAET9cbfn98b3VDCnSLtnBHHfHcEyMTjvEQUSJmxD323D2Dg/ZP17g7brfee/Pceuttz/X11OSJEmSJEmSJEmSJEmSJMkfC1cdXs9mhxweXEYjg42AGYdlaCyO45Ri7O7uUlAOD53SFToRSqcRuiIggrtBUQSNMBrQAqjgLiiKxgOAHhTcFZGCEoGwiGAugFNEMHdUtQXZhnvFDcwNswivRRSrRhGNQNkK9EBniMKiryiOKO37pZ1RERUqFsG2g7hFsOyCu6Ndice0eybE5yNoj6zdEYQOBUIz7piH70XxuP64JNRp9wqKKEtZeaW2MJz4w1DBMAqCW9wX2tM6guPMj4853D/GXXG3CPCTJEmSJEmSJEmSJEmSJEmuE646vPZl1bg1h2s1RCU+3CNHFRiPR/R1jcODIxRlMe+RThGBosJyF9Jb0BqhK2gRpAi19iiKY7RoGFXFzJDWOPYWLGPRmnb3aHGzbFw7jgERnmMRthfATegGHQIULREIawEFLVHpXoXRLaCXFkCbO+a2TIVRicDZRQCl2upuLTNlVCI8xmnnAMSikW01WtarJjmsbiTRml4+1ltQjsgq9HZvT+jtc22L0z3eHJAW8FMNasXNMXFcFXKgM0mSJEmSJEmSJEmSJEmS64jnEF6zrFzTElbAcItg26GFr8JkMsWqc3R0DO4s+oqIxsPVooXdAlppIax7oe+JtjXLkLe1kttzqPofUoQ0BQdE2O1+ReEh0eDGhaIDjGgcSxePU23BuDhG3xrPGm1posVdWsDrDmZ11Z5eXiqEXqTWHncF7XCvgFNKoVqk1cszRggeYXQ1X+lWRCSuu/0+Au/2hoGDqjZLSuhAzO1K61pau9stdCEi9LVv9y9OqUoE7FFtX50+SZIkSZIkSZIkSZIkSZLkeuGqw2vrHVDcHFFHVaINTVOBEOG0e0UQptM1am8sfEY/n1OtMhh0DIYFsQorDUYTW7i0pjEtPpaoLrdfR0DruEcwuwyE+9rjTdWxDGxd2jlcQLWF1AVXaY3spS5b8NboRgQzVqoSEad6DW+1ltZ+9lCFSATf5lcCaBdDxCKMbt8bIbNEqAwRvhON7WCp94gWersKvLWrHcOtAgVEqDV+fllF58Tji+DLLU6HUgoiUGuoXCDc4qodmLSCdgbYSZIkSZIkSZIkSZIkSZJcP1x1eP3kuYsoM8aj4Uoh4i3ojdC3aTscRIWuCJuba+zsLBDpWMzmuAt1AdIpZaBYawtXd6iOlGgFhywjxhZb2RhFrwTIaPNHRzNb2tChE5oOkYoQw4dORXWA0gYPpWlGXFaWDsxQYmTRxWKgsbmmzYV4kgi8xXUVOIcyJDQd4oJYRMhifmVU0g0l/N7eQnjxaKIbjmgLwcUQL+1AV/QkUmgClfB9R/Ic94h2jZi16y9PUY44Khp/TtIjRaJJbpW6bG0nSZIkSZIkSZIkSZIkSZJcJ1x1eH0079m7fJHn33CK4WCAYKDRBi6ioa1wibI0kduW0rG5ucXOziUGwyGHx4dMRmNUBRY1VBZFWpMZvG8rgyGLbj+r+aE9GtnaWs9hCInwN9rgpQ0VLrvJ1nQZTm89KuUpihFvKXucVHWwOgMSjmtzxy0Sa7cIkdVlFS63PnP8J4qLYyKIyUrfISqgddXkbg8FCrZsPzcFiKCrMvQyL49Wd1lpUKSF5kuFiHsE7d4WHkVW2fVKReLmrNTYyxVIl9VzfTtcvnyZ3/u93+ORRx5hNptx4sQJvvu7v5sXvehFDIfDZ/9iuoYsFgsODw8ZDAaMx2NU9VofKUmSJEmSJEmSJEmSJEkSnkN4XV05mleePH+Jm244xXDQRg2LRGu6tbHNHXFduZwHgyEnTmyzs3OJ8XjK8dEx7jAcKFpClyGyDLB9FdA6oSUpIk9xQDcnNM4q6/XmuPYaTeNlFilN/9Hazo6jWuJLGCKOeY0mNa0FjSPWQmaWPz8UKdiy6d1mJKVpVHz5XKH6EJUrPuplaoy3tnr7uVowiwA9QmZQKVQclzirlGhZuy0D82XWHqG1NWf2sqdetCyz+JX7u1LRZTDuhqrEqOS3GVw/9thjvO997+O//Jf/wte//nUWi0Vzjyuj0YiXv/zlvOMd7+Atb3kL6+vrz/o19adJrZWPfezjvOc9/5LPfe5z/PiP/zjvfOc7ufnmm6/10ZIkSZIkSZIkSZIkSZIk4TmE1yqKe+HweMGZJy9w4w1bDLpwMeNOtdYKLks/NdCC5244ZOvENpcuXmQwGjGbz0JhIQWVNiKo8Vi4ogNZNpItPBdop80nzcpFLWHciGaz92CCFg0tiHkbZgSaw3r1eKIZvgyGPVL0OItLNHJbwI1bazc3E8jSyd2QZbO6/WBvw5WdS5OpFMQdp6eIUmsfVW4M80rRGHdswg9cQ3GiEoOS4m3UUkoE8bIcb2yhOUsFSm3Bd+TmYvF1VUHVM
FugpYtA/I9IsD//+c/zD/7BP+ATn/gEg8GA7//+7+cVr3gFW1tbnDlzho997GP87//9v/niF7/I3/27f5ef/umf5uTJk1f78voT5cyZM7z3ve/lP/7H/8iZM2eYzWbs7+9jq+HPJEmSJEmSJEmSJEmSJEmuNVcdXhegEH7qw+MFZ89d4sbnnWQ46JA2tOhCG0CUVQPZWqA7GIw4sb3N3u4uRYX5/JhxN44BQXVwxVXjZ7mzlEYvc3AtMRYJ0dK21vRexsni2srWgtflr72ZqStuFtJqFdQFkdCMuBiqfsUpbbLSezgVbW3r5ZjiFTXJFe+0Nzd3tLEtBhcRihR6M6QUnH7loF6ereXk0QBvmg8Lx0gL95UOjyFIb4F8GcCynQ3hzq7tPojiuhykbKOPVdqIY4cWwykt83768PqJJ57gn//zf84nPvEJXvrSl/Lud7+bV7/61UwmE1SVWit/+2//bf7dv/t3vPe97+W9730vd9xxB29961uZTqdX+xL7E2FnZ4ef//mf58Mf/jA/8AM/wPr6Oh/84Aev9bGSJEmSJEmSJEmSJEmSJPlDXHV4jRtuTqcxKjg/Ms6fvcQNN2wz7iIgrQiORRt66RKuoKUgAoPhkLX1DfYuX6YMBhwfzxgNhgyGA/reKUWW64wtRHYMMHF6M5bW6rp0XddoJ8coYSTBZksPtrUBSW+DjjEQiYX2o6jH0CJgtbWe3ehNI9TWGFo0DNECldZ6rkCExhFkg2PUalBapG0RLM+9Igi2WESLu8RjlVB/OIZK6EwimG+3etngrj09rT0uJQJ774FCEQmvt1vTomjowr1Qa4+WK35vt9oC7nBpm9k3aEy+4Y/ZnY985CN86lOf4qabbuJnf/Znueeee74plJ5Op7zjHe/gySef5Atf+AJHR0fUWldfP3PmDB/4wAf4yEc+wkMPPYS7c/vtt/PDP/zDvP3tb+fmm29GNcYz/8f/+B/8xm/8Bm94wxvY2NjgV3/1V/nkJz/J3t4et912G3/lr/wVfvzHf5y1tTV+9Vd/lc9//vO89a1v5fu+7/u+6VwPPPAAH/jABxgOh/zET/wE0+mU5z//+fzCL/wCP/ADP8CHPvQhfu3Xfu1ZvfSTJEmSJEmSJEmSJEmSJPmT5+qd1xhmFcyakwL2j3u4cInnndhk1JWVh3oZnAogRaOp7E7XFcp0ioiwv7eLGRzP5lRzhqMhRo3ucRtKFJq3ujWql0OJikfY6x66EdVVk9gBLJrMpZRoTVvE4CIaOhJR3LUFuN5CaENVCYV3C2G1Q3D6lX4k9CHuYNaDdLAM7NsZRTQa1O15tYRneyWkboeU1X8FswjDV7qV1qoGKO3aBIlmuzdnt3soRUQoSvtCLD0WkRidxCmF+Jy2tjceTfOnKV7v7e3xsY99jEuXLvHTP/3TvPKVr/yWbWoRYXt7m1/4hV+g73vG4zGj0QiAL3/5y7zrXe/i4x//OC95yUt429vehqrymc98hn/6T/8pv/3bv83P/dzPcddddwHwla98hf/6X/8rX/3qV3nyySe55ZZb+OEf/mGefPJJ7rvvPv7RP/pHqCpvf/vb2d/f50Mf+hD7+/u85CUv4fbbb1+dycz4jd/4Df79v//3vPnNb0ZEOHXqFD/zMz9DKYWu6yjLP48kSZIkSZIkSZIkSZIkSa4rrjq8XqpAxMA1wlkz4fLhHKu73HByi+GoIC1IdSeCZZq+Q4VaK+YwHI2Z9JX9ehnU6avBfE6H0JUSMpDmu3YxaCoR1QjCzSuqGueJp6Oaoy2QFdX2+WhmLw0jK1f1MrQGXKRF4kK1iopQtOlKzKi29H0boqtHAVc82ebRNi+imDuKgocipK89RWPYUZo3O/zZvvwfV5Lk+Ly230dgrld0I+ahT/Hazi8tmFeqG+KyCrQlbv/qvql6nL+3pwxJfjNnz57loYceYjKZ8NrXvpbNzc1nfE384aHG4+NjfvmXf5lPfOIT/NAP/RA/8zM/w+23346IcObMGe69915+8zd/k9e+9rX81E/91Ornz2YzPvaxj/HOd76Tv/pX/yqnTp2i1sr73/9+/sW/+Bf8z//5P3nzm9/Mq1/9ak6dOsX/+T//h6997WvcfPPNdF28rC9dusRnP/tZ5vM5r3nNa9je3kZVV+F7Oq6TJEmSJEmSJEmSJEmS5PpFr/aBpSkyTCXcy9KhUnAX9o/nnLu4w7zWNrZoIBYOaLfWVI5QVRWKwvr6OpPpGt1wSG+VvhrWx/e1aHj1sZwXXH4txhGNWnuqGdUMc6fS1B7uVKvUWulrxbCnXHl83cyoVqENMq6a0Xol+DVogXUP9O0GarSrVSKob+OJqiE1EdfVIKQIocZo54rstKBLJ7gIuESo3VQe7oI4aLv86kZ1p7Zrd7d40pVPXONU7TwGuDgm1u7j8nq0NcNlFY5/Ky5evMjly5dZX1/nxhtvXAXD3y6PPvoon/70p5lMJvzIj/wIL3rRi1hfX2dtbY077riDv/SX/hKj0YiPfvSj7OzsfMNjv+u7vos3v/nN3HLLLaytrbG5ucnLX/5ytre3eeKJJ9jb2+PFL34xL33pSzl37hz3338/h4eHq8d/6Utf4ktf+hK33347L3vZy647/3aSJEmSJEmSJEmSJEmSJE/PVYfXVnsQCbe0FtxisFAczISD4xlPnLvA0WweDWeNcNVaqzl008aVUDoC7OF4zGA0ovbhZbYaAa4vfSHuoQFpbWmzFsrGWiFIDBPGwGMMGbrHaKTTzuBONVt9zXwRA4pqLE3dzccRITnh1XZtH3IlMF42uqNFXltI347alCoRDguipTW9rzR+w0+99GdfmZxczT8uxxSl3V9RVul9C96X124Q91dbKF00NC2l6VYQtPm+VQsqGtoMffqXwWw2o+97hsMhw+HwGYcdvxVf+9rXeOKJJzh9+jS33347o+Fo9TVV5a677uLEiRM89thjHB0drb4mItx5552cOnUK1Stqj7W1NUajEYvFgr7v2dzc5Pu+7/sYDAZ85jOfWQXgZsanP/1pzp49y+te9zpuueWWVIQkSZIkSZIkSZIkSZIkyXcQVx1eV5r3ugWo7kalR7yi4lSEg+MF5y5eZtb30ZI2w20ZzbbM1dvvJULXjc0NRsMRg+GQRb9gvjD63qkV3Jahd8UkWt2q2oJeae7qpsGwnloXONZy6ObMXjakmydbHDotVxzZTSeiGo3kZSPb3GMQUUGki7Y54N6jrW0dtWwDq7j1REs8zqIe/mn3RfNShwsbX4bhEZjHGGOzeS9T8BZGy5VIO55PWxjtdeXZri0wF3EEp7ThSlRbM1xxEXp3aq1UM6S5ur8Vw+GQruuYz+fM5/OnHXZ8OnZ2djg6OmJjY4PpdBoN86ewtbXFZDLh4ODgGwYeASaTCV3XfYOPWyTuxbJRLyK85jWv4dZbb+Xzn/88jz76KIvFgrNnz/Jbv/VbjMdjXve613HixIlnde4kSZIkSZIkSZIkSZIkSa4tVx1eSwt8vTbtBo6UGBE0s6aRVvYOZpw9t8ts1rfmc8Wkx0s0jW2p7PAI
X4sI6xsblK5jOBzR155aDTOL0UOaYsOX5erWPvYIx1Wl6TMKKh3QnNBiII64RSP6Sk+ZWlsbfBmka4Tg0QyXcEyb4TU809FgBudK6CxewAQ3RSgoZRXRO7pSd4hKa3z7SkeyGmtsKa20awzNSNwj0daiXr1ZEOONy4HJ+FAGg/CMl6ZoMatR9HbBHKqBieAarnIRxZ+hTH3q1Ck2Nja4fPkyTzzxBIvF4lm9TpaBd9d14Sj/Q4R/W6m1Pqtg/Knfe8cdd/DKV76S8+fP83u/93scHh7ypS99iQcffJDv+q7v4ru/+7sZj8bP6txJkiRJkiRJkiRJkiRJklxbrjq8djcEKEUjSFSar9kpQEdBWpB7dFw5e2GP+TwUF+Z1KfRYOaJpo4I4dKWwubmJCZSuYz6fRYjthpljrYEdLWxvY5G+auMCuDm19tHSbs7raGXHAGIoOJbxNfR9be1w6BeGVYhAOwYVVUpcojlmfQTpspx2XLbIl45roXoErKIFBCq1DSsuVSO+UoIIbYxx1f72lZIkvj9GIN1BilJU2vgiiDS3tmh7fIi3a9Oj4IJqiRHH6vSLyuHhMXv7R/Q1gnx5hvT6hhtu4EUvehGz2YzPfOYzXL58+RlfFxcuXOCLX/wiBwcHQGg+uq7j+Pj4Wwbfs9mMxWLBaDT6luH2t8N0OuWee+5hNBrz2c9+lgsXLvDJT36SnZ0d7r77bm688cZvanwnSZIkSZIkSZIkSZIkSXJ98+zW956C4/SLni4qyDFmGJ3kGGnEoi3cwtyj48q5S5d53qkNRl3X5M7NS10kgurWpBYVuuGAEydOcvHieQbdgEXf4+4Muy4C83aOZevYlikyEJG0tcAyvmDV0RIBNNb0IOItAG7jha4gvgqGMW+t57LsTzf/dPuxduWXjq30HhGgX3F0O45IC84RfKkDeer9bLqSuCiNhrZbDDw6Ky2K0HQpT3nfQbRro5BNKtJ2G82jVd7Pj+n7BTgU7RiNhqxNJ5y/uE/voT55OjY3N3nDG97Afffdx3/7b/+NH/zBH+RNb3oT4/E3N5lnsxnve9/7+JVf+RV+9Ed/lL/1t/4Wp0+fZnNzk/Pnz3Px4kVqrd8w+vjEE0+wu7vLzTffzHA4/KNedt8SEeGVr3wld9zxIu6//34+97nP8dnPfpatrS1e/epXs7GxcVU/99ly4cIF7rvvPp588slnrVdJ/vgZj8fcfffd3HXXXQwGg2t9nCRJkiRJkiRJkiRJkuRZctXhtXgoH5BwSC9D3Eq0sK1F2Oo0d7RzeHjM4/MZJzc32dxco8TDqdVAwUVQ1abrCN/y9omT7O5eRIG+X9Cp0i8cHURIjkXwKqrgSjxUELXWRI7msRn0tac0d7SqRntbWkPaZRWI05zYiKBEUzwCcodam0JaV6oOs3BOx8+s4ZdG6KtREEqB5U2K8LtEyxvoiraknGiuEyG2eLS+azW0tCVHDPPaPNcgFKr7yrXtZvS1YtXorYIIRQqDwYD19TVGgxGldKiAubKze8hi/vTBNcR53/rWt/Lf//t/58Mf/jD/8B/+Q9ydN73pTUwmk9X37e/v8/73v59f+qVfou97XvnKV7K2tsadd97Jn/tzf46PfvSjfO5zn+OlL30pW1tbABwfH/O//tf/Ynd3l7e97W1sbm5e7cuR2267jde85jX8p//0n/jwhz/MAw88wF/8i3+RO++8808tuHz3u9/NBz/4wVXrPLm2iAhveMMb+Mf/+B/zspe97FofJ0mSJEmSJEmSJEmSJHmWXHV4jfGUgUAPpYaHxiNC7aa+aEGwSgwuzmbG1584i/A8tjan0c5eeaCjDb2adHRhOBqyubXFzu4lpBJh66jD+kpRoZTShvtCZVKroKUADjVa3ObLTrNhBiqKewvelw1wCX9083+0QNtb+3nZKC+hSxFvDuloYUtrQXttgXnTcIgsndShDnGMKnHWGGMM5Ymuov8WcGtToIiDxucFjyq18g1akkXfU+scqYvomHcDxsMxa4Mh3WjIoHSohItcXML9Tfiw4x2I0hzgT89NN93Evffey87ODp/61Kf463/9r/P617+eV73qVZw8eZKzZ8/y8Y9/nP/7f/8vw+GQe++9l+///u9nMpkwHo95+9vfzu/+7u/yb/7Nv2E4HPKWt7yFWisf/OAH+c//+T9zyy238EM/9ENsb29f9ctxOBxy991386EPfYhf+7Vfw925++67OXXq1ErHAvA7v/M7/Mqv/Apf//rXAXj44Yc5f/489913H48//jjj8ZjpdMpP/dRP8frXv/4bAvo/igcffJBz586t3nxJrj0PPPAAu7u71/oYSZIkSZIkSZIkSZIkyVVw9c3rEk1lW4bXvhR0EOoPI0JejSw3dBpEm7nreOSxxzm5tc7zn3+abiC4VUw0wuxl2BhbiIzGYzb9BAeX96mLObXv6boWGLdxQ7eKm6M6xKxvQ4gtUF6e2Y0YcGy6EF/9hDa+GA+y6hSJz6k81cPctCHuVzQeTXWiS2H3KnoXiihFtWlDYKn10JXPWsI5rXpldNKjtR6B99IMruFlcaH2xqJW+h4Eo2jl8PJFhl3Hie2TrG+dQMowJCdtxBIp+HLcsZ1OZfnhV3zjT4Oq8rKXvYx//a//Nb/8y7/MBz7wAT7ykY/w67/+6ytNSimFu+++m3e84x3cc889bG1trZQub33rWzk+PuYXf/EXede73sW73/1u3J2+73n5y1/O3/t7f4/Xvua1DIfD5xT8vvzlL+euu+7iq1/9Ki95yUt4xStewdra2jd8z9mzZ7nvvvt48MEHY2jUjForDz30EI8++igQqpQ3vvGNvPa1r31Wz78cnbz11hfwd/7Ou3nhC++86mtJnhv/5J+8i9/+7U8/6yHQJEmSJEmSJEmSJEmS5PrhqsPrGE6MUFYQ3JZuZ2lhqK+UGuFv9lVAenR0SNHCpctHbG3O2di4otQwi7B5GWBrKbhXxuMJmHC4v0tfF9S6HEh0zCMoj2A5msVm0iLkCNiXjejlSGS12lrbhnubXCzaGt/RatZ2BvMWbLefJ0Xa4yJoVVFUlx7tp6i3iWZ13Icr5e7QgjjetCOgOP0Vl7U5FEJ10vd47fEKitINOsaDMeONCeI9jz32FQ4unWM6WWOnP+Lw8nm64RqDyRi8cvbMWU7deAtbN9wUz9VC9HhOiRHJbyPbK6Xwwhe+kHvvvZe/8Tf+Br//+7/Pww8/zHw+5+TJk7zkJS/hlltuYWNjg8Fg8A1t58lkwo/+6I/yxje+kfvvv59HHnkEVeWuu+7ixS9+Mdvb2yu1h6rykz/5k/zYj/0Yo9GI9fX1bzjHK17xCj784Q8DETKXUlZfO336NO9///uZzWZ0Xcfm5uY3+LUB3vjGN/Kbv/mb9P3T61JEhI2NjWfVun4qt9/+Il71qr/AnXe+5Koenzx3vud7XsYDD3zhWh8jSZIkSZIkSZIkSZIkeQ5c/WCjWbisXagWKo1la1hZajx81SgOhYgg7tDPGa6dAOnY3z9mbTJCBrQwOMYYS1SZY1SRaG1PplOs9hw
eHWDmVFs6pw0R0EILzUuMSFp8zolMW1tL2L2CClbrKvw0c9RYBd7OFQf2cqRRWgtblnoRj8DcqBHkqzb3drOCEEoRd2/htoeCxIHl9TnUNuqoUrDq9H1lUReIKMNuwHA0YjQaMxwOkKLU2ZzLF89y4dwZDg532dw8wembbuXw8nm0HsPcmS/2qXXOtDNsfoB4bTqVCOhdSrthlVL0qYn706KqrK2tMZ1OOX36NK9//evj8xKhuqo+7WNHoxGnT5/m1KlT1FoBoesKpZRvCLoBptMp0+n0W/6cwWDAqVOnnvZ8J06ceMZrGI1GjEajZ/ye50ophVK6bwrOkz89Sum+6XWVJEmSJEmSJEmSJEmSfGdx9emaRNtaRFbOakWxahEQEwG0moS2WSSGDa2iIhRVKsr+bMFg54Dtk2t0A20N7hY8K02XQWtNK+ubW5g4x0dHmEGFVaM47ADSxhQVoSkDPPQVbqH1WLqobdmMZhm6ruJqltrudqmIxhNUvzIEiSiosMzIQpctTZfhrcVtaKTqsasIbZQyLs3MqP2CWhdgQikDRqMx65Mpo26I6jLcbW8IAAf7lzh/9lFms0NUnKoFHw6oonjv4PN4vlrpcGx+RJ0dodJFaI9jxHUsz/Ls/uiFrnv24ayIMBgM/tQGFJMkSZIkSZIkSZIkSZIk+c7l6pvXbcwQA7MeKdL814T2QsBrBMSiytKGcXR0xGgwjBFFhd6dS3uHVO85dWqTQaertnLbJ2Rpa3YcM1ibrmF9T7/oqbUPh7MKBUCEWiuqhniNINkEl2UjWvDWwEZC6RGBc9ttBEQsBh0BPLrKte/RUlqjPALtppFuiXTTjbTRRST8NDGBAAAgAElEQVQGFt0l/Nty5WzzeQ1vtUXbejoesbG+wWAwQjUao2V1xhicVBVE4w2C+eyIvi7a+YyD3Yt89WCPcVfYnI7CY93C/ojIe+ZHe3TDKVKGlALq0sQh8QZAaoGTJEmSJEmSJEmSJEmSJLmeuPrmtVcEw4jlw+oGJnSi0cpuXWwH+hrOaHNj0S9YG280AXQ4os2dvcMZ0u2xvbXBYNDs0q5tAjEc2jgtJC9sbW1z+dIOc4sBxzIqEfI6IBFyL+PbSNIFKJh4NLBVUFesX44tyiqDFiI4tqXDGwEpmEkMODpoR7hIWtvbq9Kb0RWN31s7v8GiX9BbT62VrisMByM2p5sMhkNK6SiyVJNIa6jzFG94NMVFPLQrVaA6YhGOK0C/oPYz+sEQH3WYKNUrYk3FMp8xnx8yxULt0TQuKtqGJX3VHk+SJPmjmM1m/NZv/RZf/vKXWSwW1/o433FMp1Ne9apXcdddd6VeKEmSJEmSJEmSJEmegav+W7M0SbJZjUFDFdBwYYMQ+autWtHuDrXSqWIIag6lqTVQqhuX944QUbY31xgMCmCh13AoRUPdwbLoXFjf3GRn9xJmzqKvdKW0VrVFcxmjt6YFcYgWsuLiSCWUH4SzW1WXGmuW+42iSx92XHNRXY07Wq04tQXrBSP83vO+xxfRRneUTjoGwyHra+sMuo5O4zpECnH1V8Jkk3Bjx1k05B4eoTUt5C9FqTjz2qNFKTqAsog3A8SpVpHaWuoucV53+mr05lhvmMXPXawULcsQPkmS5I/mP/yH/8C/+lf/iocffjj+vz15VogIb3rTm/jZn/1ZXvWqV13r4yRJkiRJkiRJkiTJdctVh9eRd0b4KpGs4qLRfpbWWBZvVg0HjPnsiPFw0ILWCJnNIgwuWrAKl3cPEYftrXUGnYSBRAsmgBtFCqWFyjro2Nja4vLuZar1uCvVQVFqrUihqTyiA27e4zWCcG0ptcUSY+hEPEYnm4AELLzQJhHwSvNlR1ANQhdhvVdmi0UbXRwyHo1YX1tjMBpTymDluxagQ6BEIK++7HuHr0RV8KhLx/BlG3/UpiShPW9faxs9VFyETgaYOl3XsTDBRJAyohuOGYzGDCcTBuN1Fq5goNKF8bsUvDdKt/SaJEmS/NE89NBDPPbYY+zu7l7ro3zH8vu///ucPXv2Wh8jSZIkSZIkSZIkSa5rnkPzmgiGHVSiPW3CahBwacEwr0BraNfKYLKGlC6GFoGBlghr3Sgto907OKIbFLY2xpSuw9tyoopSzVdBMALD0YitEyfY3d3FrbZRw/h+a41AbcVA1QjXl27r+GRcg9lSEdI2Iptve6mvBscFqjsLq9RFH9oOEYbDIesbm4zGk1W4XrQZpZs6W0QQv1IAV9XWWGwDkhoTkssmdkw0Er7w+AGIhHO7LhZ02qGimCtFhwwGA8bTKZPpOoPpGsPJGqojtBQQRVSxdsFRHldUFdUWwCdJknybmNmqcf03/+a9vO5133+NT/Sdwy/90i/y0Y/+OrXW+FcvSZIkSZIkSZIkSZI8Lc9psLHWnmXK6k6ErgrmFuGvCm6GA4vFHAjthpvhIm380KkeIfCy+Vz7yoVLewBsrU1QbUG0djiOUFYubDenKx2bG5vs7uzQLxbgRjdYtrvbuCQGpuG3dujdUC14BSTcJCJNcdL+M4/AuS7i18e2ABG6wYDpdI3JcExXSmhAtLmuawxZWhNoew3th7qu/NlxDxyRdq62Zikazyct0NfmDocIv809ProR4xPPY226wXiyzmg0pYzHlEGHo5i266QpWgT6FpKIGFZDE1L7RQxn5r/6T5LkKrjhhtN87/f+Be65503X+ijfMdx336/zmc98/FofI0mSJEmSJEmSJEm+I7h6bUjIM8CtjRMuw+rwNhvelB+hFDk+OmZtugGUCEst2tkmhmnTadACWgTr4ez5faTC+nSIlNB1oM3P7IJLeKrDC62sra+zf3kXWxi2MHCN5jFAdJoxiwY1CGYtGG6t59ByN51I80IDdN2IYTdkY33CcDSMNrj76nqfYv6gdHrFm4230ci4P2YWg4/LEUa1Nkzpy+XKCOWLr1Qh0mJ62r2e95Xn3XwrZjAaTxmUMeYSQ5TLK5UI0EXCyG1m7O7s8OBXvsJNN93EdDKNFnvviC/feUiSJHl2qCpdN6DrBtf6KN8xlFJW/8onSZIkSZIkSZIkSZJn5urDa3Gq1wg/ieZxlXqlhd30IcXDbQ2OluZwXoa7TTkiIphVcI/BRVFwobpwYWcf1zWmaxM6AXFfNbSVaDi7Cq5CNxow3Vxnb2eH4+NjSt8xmoybcdtRFDcPrzSy0jzX6rjXaCebMOiGTMZrDMcjhsMhpXS4OaUUWloeqpPW1g4VSPzM6jGuGJqQuC9mjkqMWrpEoO0e/+xeJM4lrSUuuhxbbEZx1ZW7RIxolXcds0WluoSyxQwxD72KOmpONYtwXxVwuuGAO+68k1qNyXSduujpF4bXGud9Di+iJEmSJEmSJEmSJEmSJEmSP26ekzYEom1dbam/EERaMu1LMbUzmx0xGg2ghbI4tJ1GxA2qI1IjxG4tZXVB3CPAvrRPdefExqQF3TEK6Q5d11FrO5Mqo9EI39hg3+HgYB8XYTgaggoL68Nv7WDVsFqjLV2GjKdrbE7GDEcjVKCTCLrRGJ6UQhtTbA7qFveuvNVxMa
CyatX58r6IhCaFaInr0vgdUmtc4h5CqELEQYsgGi31VUlPmpKlDWSKKL1VRBxxa7oWjyDbaOeIoLwUpevKqiFZPT5o4XZ2r5MkSZIkSZIkSZIkSZIkuZ646vBaTSCKygiKWbSGm8EZMUOKAJX54pjpdG01eLgMfiNbDQ2IEfoOEXCzGDFs3+oGO5cuozjraxOKtrY1UHujaAEzlgaM0XgSehAzDo8PQYQigtU+Po8yHk/Y2NhiOBpRSodqAcJ5HUOT4b2OtnO7NndUvelSmiekuboFwdwQM7QsQ2eNS5DWqG4N9NCdOJ100VKXCLiF5b6jgAnmFZU27CjhyjaPO2zuuPVA6EgQu3Ik4rkOZvscHB2yMV2nIMwOj+mrUboB7uEhHw4GbcDxal8JfzI89thjvOc97+Gmm27ir/21v8bp06ev9ZGSJEmSJEmSJEmSJEmSJPlT5KrDa7G2c4i38FaxWsPn6YYIqDn9Yk6RrrmXlwYMR1xbAGyoNN2GRyCtTcVhLVBVwCpc3D3AHTY3pvE9ZlSvmGk0sK0CMVY4naxBbxiAK5PJlMGgYzAYIFLQUiglRhZlqfloTXF3o+9rG03s0K4NKJZCeKybKqQpTJAI7c0rahpBuEdL2qhQwRcdBwdHXLq0w+Xdy3SdcOutN7N1YpOu61ApqIRiZGkmEYm2t6qumtHujlmPmVGtogiGsZgv2NvfZzGfs7V1gqOjQx5/4gyL2vOCW25lOp4wX8wRF2o1ymBA1xXa2wZX+zL4E+P4+JjPf/7z7DQFTJIkSZIkSZIkSZIkSZIkf7a4em2IPMWVbEYpEsONFuOJ7o73lbroGXRDVDoMWVo4YujxKcFpEV2KR+Kx5qDaQtxQkCx6Z2f3CEEiwC6tvW1GX735oa2NRxY2NrbY3DyBA0XLSjmy1JkstRrijmBUM6otMOujjS3KYBiDj31fERVKKe18rZKNNMc3iJRVC3s15KiFRx75GmeeOM+F8xeZz2aYOX0/5/HHH+c1r301J7dPImIrI4lr3ANFUVVqG190Cy2IuVM9BjNrO8eFCxf4+pkzDLqOyXTK8fGM+fEsrrk6fY0QvM57Blqo8zlVYDAchh/7OvOGuDt931NrXSlqkiRJkiRJkiRJkiRJkiT5s8NVh9fVe8z71kI2equodigFEwF1rDfmszmj6doVfYbX0G+0QNqWihAMikagXX01RKht8BALl3TvzqWdPUSc6WRE6QS3StcVzGobhYxQWbuCtrDX3CmiTflhhDk6WtqCcHR8xOHRIU5lNBqwWPSMR2MQ4+jwmK7rKBq+6L5GK7uZrWky6qaxdmptKpTqPPjAQ3z5y19l2HVMBwO2piPm8wUXLy04c+Yc589fYnN9gzIa4kv3tFkMLerScR3+bHOn1oq1cLz2FdXwqojFOKYteqhGp0pXOhbzOX3f0/cL3CsqMF/M2dw6QcWpCC52xaudJEmSJEmSJEmSJEmSJElyHXDV4bWJY7T1Q/Hmr65NAeL0tsD6Od2gIEXie6kxbKiCyXLoMFq2jq0atkUEdY+BRACP8UIBXKJF/OSTF8CNw4ND+nnP+vo648mY593wPNbXJ2hHjC3WK75q88pgMGBRF0hrXptV+r5ydHRItZ7RZIiLMCgDShng1bG6QLquuaVbs1pCEyIuKK1lLTGi6O7MZj33f/GLPP61xzm1tc2JjS0GRag2Z29/j/lkzIXdA849eYEX3HIaG3UxdNlQM/o2yqgltCVmFiG2VWpv1NJjzRNeLa7PzfFqKw/5ohqLRaVIz/HhEX3tQQvjfoG5od0ICUv51b4U/lh47LHHeOKJJ1gsFrjD449/nb29PS5cuMDnPvc5Hn/8cUAYDDruuOMOtre3o2mfJEmSJEmSJEmSJEmSJMn/kzwH57XjXpsDRBAXDKH3BVIKVGN+fMxkNAnvtDmq0Zx2YIBG0Org6hRtPWZfjiVaNLTNQRT10Ig4FauVc09eYP/iLgWYrk04PjiiG3Q8+ujX2N7e5gUvup3nndpGiiO+VHxE2KkSOhKzaEjXWun7aDv3ix6RjoF2SFN2xPdWrA0kurcwvp3VMARpDm04ODriC7/7RS6cO8+JjTVOba0zHnZM19eYzedoURY97B3Mmc3mLHpjZG20URRBcYsBS5fWvG7nrTVc11Y9Gum+aEqRSr+ooMrCKrPFEYvFMX0/x72nKyPW1qZUj8HKeV/xWhlq6FGuZXRtZnzgAx/gff/f+zh/4TwQfyb7+/vcf//9fPKTn1wF1SdPnuSf/bN/xg/+4A8ymUyu4amTJEmS64n5bM5XvvoVnnjiidRNPQMiwq233soLXvAChsPhtT5OkiRJkiRJkiTJM3LV4bVSKFKoXkN1oV20sN3wGg1scaOoIh5/WTIHmts61Bs0jYdQaw1XNkoYMyTUIbGMiFmlSLS0a99zuH9IcWFrfcp0OmY8HlK6wv7REZd3dvndz32BW269mVtvu5nxsIuAF6N6T1cKWEVRqlUWi57qPdM1YWNzndmR4LUiOIvFAlBECmbh4nZ85Yhejjc6YMTQ4/1ffIDzZ89xYn2LUyc2mAyHrE/XkFLwbsBMC9Ybi/ksvOEaGhDDUaUpSbQNW0qE1eZYrZjXGIJsChE02uB9P6fWnsFgQO3n1MUCt0oRwWrF3VAVVAqzeR9jkO0i4s2Ca/sX/e/5nu/hR/7yj7C/v4+7s7e3x3333cfW1havfvWrWVtbA2B9fZ3bbruNUso1PW+SJEly/VBr5T2/+B7e+973sre3d62Pc91z44038vf//t/nJ3/yJ6/1UZIkSZIkSZIkSZ6Rqw+vPZQcqgUXpa9GIZQhokK/WFC6QgwjAiK4R7NY2+e8ibAVgRZuq3oE1GYMlqoONL6/DRYeHR9ji54T0yknN9cpKoxHI7rhgPW1NXb39zl77gJfe+Qxzp67wF0vfhHbJ7bQoiiGGQwGA/rFAl2GuKpUKseLQ6p1uHUcHlfcAINaDbEYeKSNSkIzfbRf973zyCOPsXthl1NbJ1gbTShaGE/GDEZDuuEQ4Rg3QduAZNyHaFSLRuMaPLQmCFhcu2KhY+l7jGV43a/82H2tEdBXxxYVMSKc90q/WNDXBSpKrdAverqua7oWgdXHtUFVedOb3sQ999yzuq9f+cpXePjhh7n99tt55zvfyW233dbulzAej+m6q37pJkmSJP+PsbOzw4MPPsgjjzySretvg0uXLvEHf/AHHB0d5b9iSpIkSZIkSZLkuubqBxsJnQYVpIRe2ixCabPK4fERG+MRTsFcVu1rwVEktBhyxRXt1dGiWG+oOuIOHg3nMEZEwOpeOTo6xqtx8uQJ1taH1MWC3ipYYTQc0qmyfWKTSzv7HB8cc//vfok77nghp2+6kdF4QF8XcU73OAOOMmB24BztH3F81OMmHB8fsjaZcsOpG3Fvmg7i3OG/vtK87mvPmScusLuzz4nNLTanU4oIo8mI9a1NppM1jmfH9G1ccm1tytbWFtPpWqhTWjAvGvfHaN4QKmg8RzULN7hFqO5eo+nutDcGCot5+LDjjkWj261itce1Y
slV6voap9W7n0wftrGELiHKRXNchy41tYalaXJ2oYLqV9XVvUIvRe71QGTtayddOO+jZ3ry8pyKCkxnIePgySrM3w8kvvrC4fXZfb6HV4Jh1FwTK4ra0MbUwGWlrcQTSv1a0hjuO8Bm6//XZ+6qd+iul02v9/r/PqUVVWV1cZj8f7vRTHcRzHcRzHcRzHuaHZu/NaDJPsyDAp4a1qCUalD1yT5LA4t5nlmmGG+bNo3t5KytsbNOrnEsJWLUfdV0WLmiS3rQUrGpDB6AGlpV0MITmFLm1x8uBGra3u4qYO/WnytWGGFmW2aBl+SHFfF/92VaKYQeoz+LKvBESENrYoAUobvCpN+iBaaghvQ7bd+09kaKLXQLre45RIqWyYSkKN0Fmi6xagSiiDG6020cva0sKwzsOnN5I//uM/5Gd/9r9nPF7Z76X8wPKd7zzF1tYmcHi/l/K2RFXZ2NjY72U4juM4juM4juM4jvM253VoQ4o32XLCXP+XUhqUG7mWTUpZEZJMELG+dd0rOOjtzeXXUuO2agtJefSgSO/YFgYtSR3caCIo/XzFPvDuw+XS8u4QRCN1YqP17e4ybDGl3EiuYbLRh9jWleC4DnYsoXVtjdvSNfVV6nL9sYn9E0N7uvyuXEu/1uXP/f22IfSuRzEjdR1WpjR2yXIrPmWdSH2DgVQC6/Kx6DpEda9//PvOlQvPcuH0tzh8/D1MNo7m19x1wCxx5tk/o1vMOHzre2lHa/1zGxsbxBjZ3LzK44//GcPPHzjXm/rfy+rqKk3T7PdyHMdxHMdxHMdxHMdxnD2wd21IZ6RFApW+JZy6hIYcXKcSpoYSONcGcTKDLhE09KGvJcMklXZxDoFrUE0uHGdXNJCoDuyhvSwEYEmR8f+zd99xct3l3fc/p0yfnZ3Z3dnetU3V6rIkF1kIywZjgynB9CTAk5BAuElII/fruZ/klQAJyZ2Q8AoYYkIwwWDccZesLktWW2l7773M7E6fOeX5Y1YrrVdrycRWsX/vv+zdM2fOGc1O+Z7rd13mXKsQQ5pv5ZG+HwNZljDm0m3pfIX2hf4cc0MW0+1OJFNKh9ZS+n4vtIY+v3264ny+nfZc6xDmg/R0H22QkWUZVUkH5uZ8T+uL2p6cP4y5/zCNCwF2unLcvCi8vhDOGYZJKqWhGwa6bmDM9SI35q8emBeC9Lnz1XXjwimnO3r/pk+Da8I0TXqa99B19lk27voKdpcPRbVelftOxEI0vvowup5k064/wup3cT6k/uIXv4jL5WJ0dFS0UrgO2O127rvvPqqqqq71oQiCIAiCIAiCIAiCIAi/gd84vJZMGYy5Fh2KNNeH2kQyTfSLguvzLSrMucbXsqxgSiaGqYMBiixjzg1xPF8XnN6TmR6uaMrIkgqmkQ7EJRlFUua2O1+xDCAjS+kgNh1UMx9wp/tVp4PpdAsTab6iOl2wm24FcqEumbljNJFRmY/NJeb2nz4u5tqWpGczGuc7e1xo+SGZ6RBcAtkiIyvKfEBtvu5szwfZpnRhQKN5vg/23L6YGyiJyXyPa8M00E2NlK6jY6Tbuczt4HyPa8UE6fxwR12buyigoCiW+YGPN1LYqmtJpkZaCAWG0bX4Vb3vWGSS6dF2LDYXhrFwUN3OnTtZv349qVRK9BC/DkiyhNfrxe12X35jQRAEQRAEQRAEQRAE4bojmRellrFYDEmSsNlsF1p7LGFyahpd19M7WbDHC/+3aA+X2Kd0qS0v8b/mxb9asB9z8Q0ueR9XSLooUL7krZf6+aXv9/zW5wd0yRe36jBfl3Fe1Dd7wc/mN59vgH3htqZJMpUkHAqR0lLzgbY5V719fkClJMkoqjr/76ppKRRFweVyz/WvzUBVlMue05XSUnH62/bT3fA8U6PtaMk4Nmcm+aXrqN1wP1kFdfPheX/bfoY6j5JfvpGS6luw2Fzz+wlOdNNR/zQWm4uatR8kGpmk/fQTtJ9+iujsNMU1W/DlLqNm/YdQLQ66G1/Aas8gr2Qtfa2vMNhxhHgkgN3ppWzFTmo3fBh3ZgGSJJNMROg48ySz0wNUrXk/2QV1yIpl7mE36W8/kD6usg0ULdvGUNcRus49R3fDHixWO4XLNuMvXkXNug/i9hZd9m9GEARBEARBEARBEARBEIQLTNMkFArhcrlQLpFN/saV1znZWf+jAxMu5TcLP202Cxlu1+U3vEpSiSj1h35I87FHAMgtWY3dmcnMZB+tJx9nqPs4W9/3ZxQtuxlZtjA+cI6WE4+BJFFQvmFBeB0OjtBx5mkc7ixKa3eQiM4wNdJKdHYaLZkkONGLoSWJ1+5AlkN0nX0WwzDoPPMMSJDhLSTDW8ho7xlO732QcHCE9Xf8AR5fEVoyRn/rfsb6z5FTtApfXtV8eI1pMjF4jtYTjyMhkVe6lvDMKNOjHSRiUXRdY3q0HdViJ5WMcSUXUQRBEARBEARBEARBEARBuHK/cXgtCEsZ7DpC62uPIkkKN9/9xxRUbEJRbSQTEc4dfoiW1x6j+fjP8eZUkOErRtcSpOJR9FRiUfsSw9BIJiKoFjumoZNbvJqNu75CODhKODjKhp2/R2Hlzbgy85kYakRLxpke66OkZjMb3vOHZOXVIMkyY/1nOPrrb9J17kWKqrbhdGdjmgapZIxkIoKhL2z1YZJuT5KMR9C0BAA1a+9DUVRCwe/gySpiy11/QlZ+DU53zlUbGCkIgiAIgiAIgiAIgiAI7xYicRPeUul2IQcIBycoX3kHRcu2kuErwpnhx5tdRvXae3F7cxntPUNwsgddT72p/VusTtyZ+ahWO4qi4vLk4ckqxWpzI0vpIaCKYqFs+U5yS27C7S3A5cmjqGo7+eXrSMbCTAw2kkxG3uSZSdhdPpwZfhRFxWJ1kOErIsNbiKLa3uS+BEEQBEEQBEEQBEEQBEG4HBFeC2+pRGyW4FgnsiyTU7gSqz2D+XYakoQnq5QMbyHx6CyzgQF0LfkWH4GEIyOTrLxqLFbn/E+tVhfenEpkRSU03Y+eurqDHgVBEARBEARBEARBEARBeHNEeC28pVLJKInYLLKiYnd4keWFnWksVgdWRwamoZOIBjEN7S0/BqvdhdXhQbpoOKYky9icXmRZIR6dedMV34IgCIIgCIIgCIIgCIIgXF0ivBbeUoaewjA0JFlGUlSQFg4xlGQFWU5PDjUMfVGP67eCLKvIsor0ugGKspw+HtPQ4W24X0EQBEEQBEEQBEEQBEEQ3joivBbeUopqRVYsGLqOoSUwTWPB7w1dQ08lQZJQVfuC6uhLMo03HTQbuoahpzAvmsBomib63PEoFtvl7xcwDeOy2wiCIAiCIAiCIAiCIAiC8PYQ4bXwlrLa3DjdOeh6ivDMGMbr2nMk4yGi4an0sMXMXBTFkg6SpXTV9usrseOxIFoq+aYqtOPRWRKxmXSF9XmmSTQ0gaFpODNyUBQbkiTPh9imoS0Mu40U8WgQQ9NElbYgCIIgCIIgCIIgCIIgXAMivBbeUjZHJrmlNyHLCiM9rxGPBuGiUHhiqIHZ6UEysgrJzC5HUa1Y7RlIkk
IkNI6uJea3TSWjDHe/RiIWXngnkoQkSZimORdqLwyXY+EZxgfOkUxELvwsMs3USAumaeLNrUK1OlBUCxabE0PXiYWnFgTts1P9TA23kEq+fqCkBEiYpvG2tDwRBEEQBEEQBEEQBEEQBCFNvfwmbw3TNHnhhRc4ffo0qVSK7du2c+ttt2K326/WIQhXgayoVKzcTV/LPgbajtDg+0/qNn4UuzuL8f6z1O9/kEQ0xKqtD+DJLkWWVbLyarA7PYx0n6K3eS8Vq+5ETyXoOPMkY31nwDSRLuqdLcsWLBYnqUSMwHgnOYUrsDuz5sNkWZbpqH8Gu8tH+YpdALSeeJTh7lN4corIL1mH1eYCJHy5VfQ1HaTr3PP48mvILVrD7FQfjcceJjI7jiQv7JutWuwoqpVwcIzZ6X4crmxsDg+KYlnU31sQBEEQBEEQBEEQBEEQhN/cVQ2vDx06xM9+9jMikQiSJLF5y2YRXr8DZeXXsOXur3N677/RfOyXtJ18AllRSSXjKIqFNbd+mtoNH8bu8IEkkVe6jqp176fp6CO8+uzfc2rv9wDwZBVRtvwO4pFgusraMDABu9NLQcVGxvrPceKlf6XhyE9Zd8cX8fhK07fLLiSvbC3Nr/2CM/sexDR0YtFZXJ4c1t3+BXKKViArFgCq1tzD+MA5BjuO8/LDX8VitYMkUVC+nuLqrcQjz2MYWrr3NuDLrSK7oIbe5kPs+e8/xusv5eb3/TmFFZtRVOs1ebwFQRAEQRAEQRAEQRAE4Z3oqobX0WiUQCBAJBIhFouJtgvvUIpioahyC5nZ5UyPthGY6ERLxnF58sguqCMzuwy70zvfb9rmyGT9jt+jpPpWpkZbSSUiuDML8Bevxp1ZQO2GDyNJMm5vIZIkoahWVm79NL68aoIT3ahWB4UVW+ZalIBqsVK7/kOs3voZJkdaiIUmsLuzyC2+CV/uMiw29/yx+nKruO1Df8P4wFlmJnsBE19uNTmFK1CtDm667fPYHJk43DkAOD15bH3/X1C6fCex0ATuzEJ8/mXIylX7U3rbmKbJaN8phrtexTB0KlbeSVZe9XzQLwjXA9M0mZnqo7d5D8NdrxIKDmPoGg5XFrkla6hcfTf+wpUoFtu1PlThXcA0TQLjXQx1HiEWmaKgYhMFZRtRrY43vE1wsofe5pcZ6nyVyOwYsqziza2kYuWdFFdtx+70XsWzEARBEARBEARBuH7d+ImbcF1SVBuZ2aW4M/MpWnYzpmkgKxZU1T4fWp8nSRIOdw7FVdvIL1uPaejIqhXVYkeSZBzu7EX7d3n8VKy8E11LIEkyqsXOWH/9/O9tDi/+opVkF9RhGBqyrKb3JysL9iMrKplZpbg8efP9tlWLHVmxIkkSLk/ewu1lBV9uFRneIgw9lT6nueO80c1O9VF/4Af0tx7BYrXj9Vfg9VeI8Fq4bhi6Rl/bPs4e/A/G+hpQLBZcHj+SrDA12sFo71l6mvaw5tbfpnrtB7A7fdf6kIV3sERshu7Gl2g+/t9MDrXNX8T0F65aMrw2dI2BjsOcPfhDhrtPkYhFMXQdkBjtbWSw/SgrtnyMlVs/hTuz4CqejXDDMU3CM6P0t+1noP0gM1P96FoSmyOD7II6yle8l8KKzVhsrkU31VJxxgfO0t34IhODjSRiQWTFQkZWMcVV2ylf/h4yfEXX4KSEG4mhp5ge66S78XlGek4Qi0yjqnayC+uoXHUXRcu2olouv8LW0DWmRtvob9uHaRoUVmyhoGLzgpaBwruXYWjMTPbR17qXVCJCXul6SmpuXfK7l2maREPj9LbsZaDtALPTgxh6Cpszk5yCFZSvfO+SK2YTsVmGu4/R0/QS02MdpOIRVKsDn7+S0rodlC3fic2R+XafsnAdMAyd2al+Os8+zXDPCaKzEyCBM8OfXjW+5v1kF9TNb2+aBuHgMJ31zxCNTC25X0mScWXkUrX2A4tyBtM0iMyM0nnuWQbbDxOZHQNJxuevoGzFLipX3nnJ93ThnccwdMYHztJR/zSTg43EYzPIikqGt5Ciqm1U33QvTo//wg1Mk2hkirZTjxENTSy5XwkJmyOT2g334/YWLvidaRpMj7bTfuYJRntPE48GsVgdZBcup/qmeylcthVZvraZlwivhbeVolqvuJ2GrFiwXnFQKqFa7G/4oViSlCt7gZcuv6+Fm8vvuDeOVDJG26nHGGh7lchMELvLha6lEIsjhOvJ1EgLjUd+ynDXKUprt7Fy6yfx5VYhSTLR0ARNx35G59kXaTj8E1yePMrq7hDtfIS3nGHoTA410njsYXob9xKPhtBSKWRZJpWMYbL0C+fEcCONR3/CQNsxMv0lrNtxDzlFK4mFp+hueJ7B9tdoPfk4GVkl1Kz74BW/LwnvLoahM9Z/hrMHf0h/6xHi0TC6pqVnhMgyw1319Dbvo27j/aza+mlcmfnzt42FJ2k79TiNr/6MmYlBUskkpmGAJKGojQy0Hmaw4zBrb/8i+WUbRIAoXFIqGaW/dT+n9/07E4OtpBIJDF1HkiRGexsY6jzGips/zqqbP43V7l56R6bJ7HQ/jUd/Qtupp7G7MlEUKwUVm0gPSRfezeKRAN1NL9J49KdMjXRhszvRb05RXLUdSVkcohi6xuRwE6f2/hv9bUdIxCJzr42kXxs7T9Pbso8Vmz/G6lt+e24GUjrwnp3u59yhh2g/8zSpRByXJweb00M8GqTz7Iv0tR6kv+0AG3d9Ba+/8mo/FMJVpKXiDHYc5rWX/i9Twx0k4/G5Qse67t4AACAASURBVAOQFYWhjtfobd7Dhp1/yLI1dyNJMqahMxsYpOHow4QCY0vuW5JksgoqKazcsiC8NnSNiaEGjj3394z0nCERj2JoOkgw3tfMQMerjPfXs+nO/yVW573DpZIxWl57hDP7HyQcGJ/7nKanP6cpzQy0v0pv0x623/u/ySlcAYCJSSw0SdPRnzEzNbzkviUJ3N5ciqq2LwivDV2ju+kljj//DwQnBkglEpiGMfee3khfywE27Px9Vm79FPLrikGvpusqvG5qauLUqVNomsbNN99MVVUVTU1NPPHEE5w+fZpAIIDdbqeqqop7772XHTt24HK5MAyD06dP8/TTT3PmzBmmp6fnt/vgBz/I7bffjtPpXPJ+BwcHefnllzl27Bh9fX3Mzs4iyzI+n4+6ujp2797N1q1bcbneOLDs6+vjySef5NVXX2VkZASAgoICbrvtNu677z7y8vI4deoUra2tmKbJ9u3bqaioQFUX/zNMjE+wZ+8ejh49Snd3NzMzM6iqSn5+PuvXr+eee+6huroai0VUxQr/c0OdR+g8+yyyqmJ3uRZVxwvCtWYYOiO9J5kYbMTlyaFm/Qcpqb51vsLVk1WCYaQIjHcxMdjCWP9p8krX4fLkXuMjF95pZqf7aTj6X3TWv0B2QRV1mzcz3HWcsf7mN7xdIjZLX8s+hrtOkl24jHV3/D5ltTuw2FwYukZOwXIstgeZGGoiFpoklYiI8Fq4pOBEF02vPkzX2T3YXR5Wb/8YxdW3YLG5mB5to7P+14z2NtF68
nFcmfnUbfwYqsWGlozR27KXc4d/wuzUCLmly6m66f14/ctIJsL0t+6jp/EVehr3YXdl4c4sEBXYwiKGoTHWX0/9gQcZ628mp7CKmvUfxOuvZHZ6gK5zzzLcVU/bycfx5lRQsXL3khdBEvFZ+lr303n2BaKzM0iSjJaKX+UzEq43hp5icriZs4cfoq9pP4lYGMMwMA0DLRW79I1Mk9nAAGcPP0TXub1Y7Q5W3vxhimtuxWpzMz3aRkf904z1tdL82i/JzClj2Zp7kCSJeDRA59lnaD7+KDZHBhvu/hKldbejqFa0ZIzh7uOc3vd9us69hN3lY/PuP5kPvoV3FsNIrwQ58dI/M9rTSEZ2Hmt33E9uyRokSWak9yRtJ55gtLeJ+oM/xJe7jOyCOkzTJJWIEgsHkGWZ0uW34JxrO3oxSZJxZeZjv2hluWkahAKDnNzzXfrbjpHh87Pujs+TW7KGeDRAd8MLdDfsp+3UU/jyqlh586fEheV3KENPMdB+gBMvfZdQcIr8sjrqNn0Er38ZqUSE/tZ9tJ58hv62Yzhe+R47PvItrPYMME1SqRjR0NRc+9fbl/gOLGFzeXFmXHhumobO+OBZjj3390wN95BXvpwVmz9OZk4pM5N9tJ58lOGuBs7sfxB/8Wryy9ZfvQfkda6r8PrUqVN8+9vfJhQK8Y1vfIOTJ0/y4IMP0traSjgcRtM0JEnixIkT7Nu3j6985St8/OMf59e//jXf//736ezsXLTdgQMH+NrXvsYnPvEJ3O6FV/51XefZZ5/le9/7Hs3NzczMzBCPx9HnKgdUVeXw4cM88cQTfOhDH+LLX/4yxcXFi8rlTdPkwIED/MM//ANnzpwhGAySTCYBsFqtHDhwgBdeeIE//dM/5amnnuKpp55CURS+9a1vUVxcvCC8NgyDvXv38r3vfY/6+noCgcCCY7JYLLzyyiv88pe/5Atf+AIf+9jH8PnE0nhI96OuWXcvkqKk/yDFi/oVmZnqp/m1R4iGpihfuZOR7hNEZpZe7iQI14KuJQjPDJOMR/GXlJGRVYJyUbAnyQqerFJcmXmM9jYQmRlDS0av4REL71S6lkS1Oli59WNUr71v7kt2y2VvNzvdz1jfaQxDp2z5Tkqqb8HuSr9/K6oVf8kabr3v/yWViGBzerE5xdJkYTFdSzDWf4bB9qPYnBmsvuUzrNj8W9hd2ciyTEH5RjKzyzn1yvcY729hrO80ZXU7yfAVEQoOMdB+iJnJYYqrN7Jh15fJK1mLarFjmjq5Rauw2jNoevVRxvrOMDXaNj9vRBDOi85O0Neyh/GBFvLLV7P5zq+SV7oe1WJH1xL4cqs4a/0h4cAI4ZlRdD2Jqi6eQ2HoGhODDXSceRI9lcTtyxbzkAQAZqcHqT/4I7ob9pBTWE1JzW30Nu9haqR7ydtoWpyJwUb6Ww5isdlZc+unWbXtc3NzlhQKKjbhzV3Gq89+m5nJQQbaD1FadwcWq4vIzCj9LfvRUkmq1t5C7cYP43BlAenv+TZHJrOBQc688hDDXccJTnSTW7z6aj0cwlWUSkQYaD/IaF8Tbp+fbe//c8rq7sBicwISeSVr8fiKOfLMt5ga7mCo+1Wy8msxzfSFFcPQcXqyWb3ts/NVsa8nyypWR8b8/2vJGP1t++lvPUqGL5ft936Dkppb54sb/IWrUC0OhrteY3Z6AC0Vx/IGs1WEG1ciFqLt1GOEApPkldWy4yPfIiuvGtVixzAM8krXYbV7OLX3IQY7jzE91pFuu0v6eaTrOjaHm1VbP0VuyU2XvA9Jkhe0P0omozQff4SpkR4KKlex48N/hzd3Gapqo6B8E/6ilRx47Bsk4hGCkz0ivD4vlUoxOTnJ1NQUhw8fpq2tDafTyZ/92Z9RU1NDMBjkqaee4uWXX6arq4uHHnqI2dlZfvWrX+F0Ovn6179OXV0ds7OzPPPMM7z44ot0dHTwH//xH2zcuJE1a9YsCIoPHjzIt771Lc6cOYOiKNx9993s2rWLoqIiEokEZ8+e5Re/+AU9PT38+Mc/xuVy8ZWvfIWsrKwFx93a2sp3vvMdDhw4QCKRYOPGjXzwgx+kpqaG6elpnn32WY4ePco3v/lNwuEwQ0NDOBwONE1b9Bjs3buXv/mbv+HUqVMA7Nixg507d1JRUUEkEuHo0aM888wznDt3jm9961sYhsEnP/lJPB7P2/uPcwPI8Baw+pbfRgKsdo/4snUFtFSc9tOPM9J9iqKqLZTW3MrUcCuRWRFeC9cXWVaQZRVkKb3E3TTfYEGxlO4/LF4DhLeBx1fCutt/D9Viw+7yMT3ahnSZ5e2maRKc6CY42Ycr04+/aBV218LPEopiWdR/ThBez9A1JEnGk1OCJ7uU8uU70/3R517vbI5M/MWr8RetYqy/hUhogkRshgxvIVoyhqJayStdTvmKXeSXrl/Q0iEzp4K80rV0N7xELBxI99vERLRvEM4zTYOZqV6Guo5jtTupXLWbwoot86ugFNVK0bKbycqrQkvFsbuyUJRLt+8KBQZpr3+amalBCirXk0pGmRrpuJqnI1yPzHT1q8VqZ+XW32L5po+hpeIMdb36hjczdA1dS+D25uPKzKNy1ftwX9QySXF68ReuJK90DdMjPYSCw/MrnOLRAKGZESxWO1l51dgdF9oySFK6R6wvtwpZUYiFp4m9QU9Z4cZ2/nmUXVBJXtlaSmt3zBcaADg9uRRUbMaTXcT0aC8zE73pi26miZaMYhoGqsWB21uAM8P/Bvc0xzSJRabpbd6LYWhUrNpF2fKdWG3p92ZFsZJVUMf2e/83ydgsNkcmqhhK/45kmiZaKo4kKeQUVVKz/kPkFq+Zn6kjK+DOLKCk7nYajv6ceGSWUGAoHSabJqlkBNPQUVUXbm/hFT3/TNNgdqqP/taDWKxWVm37NDlFq+Zbg1gVC3ml67jrcw9i6NqCiu1r4boKrxUl/SCZpsnzzz/Pli1b+PM//3NWrVqF0+lE0zQ2rN9AKBTi0KFDNDc386//+q+sWbOGv/zLv2T16tW4XC40TWPjxo1EIhFeeeUVWlpaqK+vp7a2dj68DofDPPbYYzQ3N6PrOp/73Of48pe/TElJCVarFdM02bFjBzfddBPf+MY36Ozs5PHHH2f37t1s3Lhxvl1HKpXiscce48SJE8RiMXbu3Mlf/dVfsXr16vlj3rlzJz/4wQ/4+c9/Pl9J7XQ6URRlQcDa19fHD37wA06fPo0kSXzpS1/is5/9LMXFxdhsNgzD4L3vfS+bN2/mm9/8Jj09PfzoRz9i9erVbN68Gav13d3bVVYs81fJhSsz1HWUzvpf43D7qNv4ERzuHNEyRLguyYoVX24VDpePUGCI2cAA/uLVC9oqhAJDhIOjWKw2MnPKxVAd4W1hsTlQrUVv6gKpriUIBYeIR4L4i+pwefIJB4cZ669ndqoPSZbx5S4jt2QtzoxccfFVWJJqdVCx4r0UVGxGVdMXUF5/oU5RrahWO7IkIUly+vkkSfjyqtmy+0/mBjtmLprfIckKimKd/9IinofC62mpODOTvcxODeLJKiKv
bN2iAbWqxU6Gr/gN95OIzdLffoD+lv3kFNZSve5eOuufeTsPXbhRSODxFbN+5x9itbmwOX1Mj7Zf9mYWqzN9Ua58A4piwZmxeMm8pCjpVXtzr43p8WUSsmJBllVMTIy53rJLHZwkK/NhkvDOY3N6Wb3ts9Suvx/V6sTuen1/aQlZsaKcn9M191wxTYNUMoppmigWC6rlyiqjDUMnHBxmcrgVmzOD4upb5oPr82RZSV+kFoO839EkScLl8bP9A3+FriWwO7MWv9ZIEhaLA1mSMUz9os9p6bY1pmEgqxZU69Itky9mGjqTw82EAuP48koprLx5UU9rWbHgzal4C87wf+66euWVZRkJCdM0kSSJBx54gHXr1s23+7DZbNTW1bJ161ZOnjzJzMwMyWSSBx54gA3rN+DOuLBddXU127Zt49ixY8zMzNDZ2UkymZzvWz0+Pk5LSwvxeJzs7Gze9773UVlZid1+IQix2+3ccccdrFu3joGBAXp7e2lubmbNmjXz4fXw8DAHDx5kZmYGr9fLAw88wPr168nIuLAUxOVy8fnPf56mpib27t2LYRgACyq1TNNk7969nDhxgng8zu7du/n0pz9NXV3dfKgP4HQ6uf/++zl79iw/+clPaGlpYe/evSxfvpzs7Au9kwThcmYDgzQff4TI7CTrdvwueaXr56qsBOH6I0kShctupqRmGx2nn6Ph8E+QJJnS2ttRVDsTQw2cPfwQ06OdFC7bQNGybdjsYkWK8HaQ3nRRv64liIen0bUkkqzQee4ZRntPExzvRUslAAnVasNfVMeqbZ+hbPlOLFf4wVN4d5Ekea6tzBIDm0yTcGCYwFgnkiyT4SvG4Up/PlQt9jes7k8lwoQCgySiIXz5lWT4iucCHkFI05JRQoEhUok4Tk8eTref4EQ3o32nCc+MoKhWsvJqyStdu+RQMcPQmBhqpP3UE6gWO7Ub7sfrrwLp11f5bITrlWp14MkqTQczV9hJRpIV7E7vks870zSIRwIEx7pQLVYys8uw2T1IsoLTnYMvr4qZiUGmR9tIxGYW7CeVjBAY78TQNdzeAjKySt6K0xSuQ7Ks4MzwL1m1augpwsEhwsExrDY72QV1yLKMZhpoyXTbEFW1I8squpYiFp5AS8WxObzYXb5FF4UNI0UoMEQ8MoMnq5DM7HIis2OM9LzGzGQfJibenAoKK7bg9FxBJbdwQ5MVyxte/DUNjamRVpKJOJ6sPLz+dKhszlX+67qOarGhKBYMXScaHieViGJzZOJwZy96/ul6en+6ruHJKsbpziEw0c1w1zGioXEUi42cguUUVGy+Lr6XXFfhNTC/MrGyspL169fjcCy8amWxWCgrK5uvjq6oqEhv53zdVX9VXbDd1NQU+tyUWICioiK+//3vMzk5SSwWY+3atdhsi5dgZGZmUlFRgdVqJRwOMzY2RiqVmv99d3c3/f39aJo2fyyvHw4pSRIVFRXcddddnDx5knh88SCS2dlZDh06xPT0NA6Hg7vvvpuysrIFwfV5Ho+HXbt28dxzz9Hd3c2xY8f4xCc+QVZWlqiSEa6IlorTfupxhrtOUFx9M+Ur78ThziIaGr/WhyYIS3J78lm34/dwuP10nn2Wg4//H1SrHUmW0ZIxZFmlev37WbHp4+QUrhCrCITrhqFrpJJR9FSK0d6zzEz24y9ewYotH8flySM42Utn/dMMdZ0mGQ8jKyply99zobJHEC7DMHTikQCjfadoOfFLhrtP4S+uo3zFrgWDod7o9uODDfQ278XEJL98Pdn5tVfhyIUbiZaKE4tMYZoGup6g5cQv6G16mVBwHENLgSRhtTvJL1vL6u2fpbjqlkWVY6HAMJ31TzMz2U/N+nsprd1JZHbkGp2RcL16q7/TxiMB+lpeYWygCU92AWV1d8y3X3Bl5lF90z2MDzTQ13IQt/f7rNj8cdy+IiIzo3TUP0X7qadwenxUr/0AnsusLBDemUzTJBIap/Xko8TCMxQuW0fRsq1zvzNIJWOYhkkyHuLsoR8y3H2SyMwYpqmjWOxk51dTs/5DVKy8c37lqKFrREPjaKkUsqoy0H6A9tNPMjM1iJ5Kz1BTrTZ8uZWsufVzVK25B1l8NnxXMk2DwEQ3zcd/DqZJ2fLbycypPP/bdOW/YWDoKRpf/SnDXccIBUYwDB1FteL1l1Oz/oNU3/SB+RVTpqETCgwCoFqdnDnwA9pOPkYsHJifuWe1OcgtWc3m3V8jr3TdNTr7tOsvvJ5TXl6Oz+dDkReHt263O12lLUmUlJSQlZW1aIgipCuez7cJicVi8xXPcKE6e9myZeiajmpRL/kmKUkSLpcLRVEwTZN4PL5gmEhfXx+hUAjDMKioqCA7O/uSx6IoCps2bSIvL4+JicV9siYnJ+nu7iaRSFBQUEBdXd2CKvDXH1NNTQ25ubn09vbS09PD+Pg45eXl8xXh7xamadLXspeeppcoqNhExcrd2BxLV1uapklv88v0Nu8hv3wjlat2vytbCwz3HKej/hkc7ixqN3yYzOwyUV0l3AAkYpEAwYkuIsFxZEXB5shBVqyEk3Eis9NMDDQyU9lHVn7tgpYignAtmYaOYWgYhoGWSlBcvZXV2z+HN6cCRbVSWLmFrLxqTu39N4a76+lt3kt2wXIys8uu9aEL17no7DjNJx6h/fRTJOMRdC2Jxeaget37qN3wYfJK1y1aAvp6pmEwMdhA45GfMD7QTGHlBqpu+oBoxSYsouspUokoqWSS0d6zhAPD+ItXs37n7dgcHqZGWuio/zW9TQdJJaKoFgeFlVvmb5+MhxhoP0Bvy36yC+uoWX8/DpdPhNfC2yoRm6Gr4XkajvwU1WKhZv19FFRsmm/5oFqclNbuACQaX32YhsMP03ricRSLFUNLoWlJvP5SVt78SSpW3CnCw3cj0yQaGqfhyH/SdfZl3F4/a275HBm+orlfG2jJCHoqxeRQF8GJAXQthcXmwDB0EtFRgmMDTAw2ExhrZ+2O38dqc2GaBsl4GF3TmJkYpOHIT/HlVbJq26dxZxYwGxig48zTDHWeJh79Z1TVQeXqu67tYyFcdaZpMDPZy7Hnvs34QDt5ZStYvf1z84M7TdMklYhg6DqBsUHO7H8ILZnAanfOrTqJEBwfZGKwmamRFjbf+cdY7W5MUyceDWLoBiPdJ5kYaKS4ZhslNbdhc2YyOdRI87FH6Gk6RDIeYudv/SO+3GXX7HG4LsNrSZLwer3p4PkSF13P94qWJInMzEwsFsslg+eLe0ovNb1almWC4SBnTp+hvaOdsbExpqenicfj6LqOpmk0NjYSiUQAFgTgpmkyPj5OMplEkiTy8/Ox2+1LXikuKCggJyfnktXUExMTBINBDMNAVVV6enpQFOWS2wIEg8H5xyoYDDI9PY2u6zd8eJ2MhxkbqMftyceTXYqiXqaPt2kSnOyhr2U/FquT0prb4Q3Ca0yTwFgn3Y17UFQbpbW3v+vC61BgiObjjxCdnWTtjt9N9ysUgx+EG8Bo3ylO7vkXxvrOUbnmvXMXXkqQJJlEbJauhudoO/UkJ1/+Lpg
GlWvet6hvnCBcE5IESJiYeLIKKK6+haz82vnKaqtiIb9sPUVV25gYbGFisJGZyV4RXguXpetJQoEhJgY70ZJJZFnGkeEhHg2ipxJI0hsH14auMdJ7gjP7v89g+zH8JctZc9vvkFt8E9JlQm/h3cc0dAw9hT43dL60bgdrbvltXBm5yIpKQcVmvLlVnHz5u4z2nqO78QWyC+qwOTIxDJ2Joaa5diE2ajfeT3ZBnVg1KrytYuEpWk/9ivr9PyKViLF884dZsfmBBcNqJUDXksxO9zMz2U8yHsXuysTh8pKIhYiGpgmM9TAx1EjRsq2X6IMsvJOZpkEoMMSZff9O68knsDndbHjPlyir25keJk/6InAyEcYELDYbVevupnrtfbgz8zH0FBNDTTQc/S9Ge5poPv5LMnMqqFl/P6ZpoutJDMMglUrgL17Jlru+ToavCEWxomlx/EWrOfb8txloe42WE7+koGITjitYUSW8M5iGzuRIC0ef/TsG2o6TXVDBtnv+gqy8mou3IhkPYWIiKwoVK++gbtNHyPAVYRoGgbEOzh7+MUPtp2k+/ii+3CpWbH5gvlIb0yAeDbFx1ydYte0z2ByZyIpKfslacgpX8sov/pThnnO0nfoVW3Z//Zqtbr4uw2tIt/1Y6sPMxT9//dDDN2N2dpZHHnmERx99lJ6eHiKRCKlUilQqNR9Sm6Y5/zP5dZWppmkSCoXm25FkZGQsGTYDOBwOMjMzL7lNJBKZr+oeHBzkr//6r99wAKNhGExMTJBKpUgkEszOzi5oi3KjCox3cmrPd6lYeSd1Gz962fBakmVq1n+IkupbsDkysV9BlZChp9CScXQ9CUtc1Hin0lIJ2s88OdcuZCsVK+6cq6wSXxyE61siNkt34wuMdJ/GX7ycFVseoKB8E4qaDv9M08Du8hEOjtBZ/yLdTS/hL15NdsHya3zkggCyrKKqNmRZwe7OwpmRu6gliMXqwpNVgs2ZQTQ0RTwyfY2OVriROD15bHzPl6nb+FES0SBTo230t+5nqPM1QtNDrLnlc1Svu++SvQpTySh9rfuoP/AgE4OtFFSsZd0dv09RxRZxUVtYgnShgCi7mPLl77nQmxiwO73kl22gpOZWpob/k4nBBmanB/AXZRIODtN59hmCk33UrL+Xsto70p/z32WfxYWrwzRNQoFBzh3+MS2v/QpJklhz62dYufVTuDLzuPi7TzQ8SeOxn9Jw5GGcGTls3v1VCso3oqhWdC3J5HATDUd+QtuJJ0glItx819dxZeZfu5MTrhpD15gabeXEy/9MX/MhnJ4sNu/+KstW371g6LHV4WHN9t+hYuVuLDYXmdllOFxZ6Sp908TrX4bbm8+hJ/8/pkf76Gp4nrLl70EinW1JgN2ZwbI178eXu2x+RbRVcZOVX0vVTfcw1HmKicEmpsfaKXJvvTYPiHBVGbrGQOdhjj37bSaG2sgrXcHW9/8lBeUbFrTkUhQryzd/nKJlW1GtDjKzy3Fk5Mx/1/DlLiMjq5j9j/4FY32tdJx5isrV71sw1NuTlU/12vtwefLm92tzeiko30jFyvdQv/9hBtoPseaW38WZkXPVHwu4jsPrK/WbBtehUIh/+qd/4uGHH6a/vx+Auro6Vq5cSV5eHpmZmdhsNiwWC4cOHeKVV15Z3KvahFQqNT9gcqkK8PNUVZ1vY/J6uq5jGAamaaKqKjab7Q3Da4DS0lIg3UbF6XS+IyoXgpM9BMZ7KFw2i2kal78B4HTn4HRfmz+gG8344Fk6z/6aeHSWeHSaxld/uuACQSwyzezUIKlEnPYzTzIx1Ehe6TpKa2/D7vRdwyMX3u2ioQmmx9pJJRPkFK4gM6dsPriG9BAzd2YhOYUr6Gs5QGCsk2hoQoTXwnVBUa3YnJmolnRQY5pGOrC5+H1bkuaHrGipBIahXbsDFm4YytxwH7e3CNPQKKjYTHHVNhqO/ISOM8/RceYpfHlVFJRvWnC7eDRIZ/3TnD30EOHgOJWrdrLm1t/FX7RKtFwSliQrKorFjqwoONxZuDLzFn3/sNkzyMwuQ7XaiIWniYYmSMZDDHYcoq9lPzlFy6ndcP8VFZwIwm/CMHSmR9s5ve97dDfsxeHKZO2OL1C97j4croVDy3Q9xeRwM+2nnsI0DOo2fZiadfddFEyauL0FGIbGkaf+loG2QxRXbad2w/3X5uSEq0bXkoz0vMbxF77DWH8T2QVVbN79NYqrti0IriH9XuzLXYbXX5kOoy9euSRJWO1ucovXUFa3g6nhHxMc7yEamsDlyUO1OJAUBavdPXcxcGHBpKpa8eZUYne6SURDhKYH4dp1bhCuEi0Vp6P+KU68+C/MBkapWHk7m+78GtkFdYsKYCRZwZtTkV6xKUmL2sWpFgc5hSupXL2bsf42AuM9RGZGyPAVo1qdSJKM0+PH5clddByqxU5O4QpkRSEcHCUWmRLh9dW2Z88efvGLX9Db20tWVhZ/9Ed/xD333EN2djYWiwVVVZFlGdM0CYfDHDly5JKDFi+u/D4fZC/FMAw07dJfRs/fpyRJlJeX87d/+7fU1NS8YSX3ebIsk5ubu2SP7GstmYjQ0/Qifc17CU70oGkJbPYMsgtXULP+Q+QVr0FLxWg79TgdZ58hMjNFb9MewsFhyuruIL98I32t+wgHhympuZ3hrqP0te4ju3A563d8iZmpXgY6DpNbvJqS6lux2jMwTZOxvtO0nX6M8cEGdC1JZnYZ1evuI5EIX/I4dT3FcPcxus49x+RQM6lEBLsrk4KKzdSs+xBef+WioTM3msjMKNHZSRLRCP2txxnsOLng96ZhkEomMQ2DvubDDLQdo2ZDgNySm0R4LVxTWjJKKhEF00S1OlDkxS2SJFlBsdiRZIVUIoaupy6xJ0G4+hSLHVdmATaHm1h4mlh4EsM0kC9q6XC+72EqGUdRLKKnpvCmSJKEpFiwOSzkFK2ipOZWhjqPMT3WRWC8c0F4HYtM03ryV5w7+BCpZHoZ/aqtn5r7nCOed8LSVNWG3emdmwVkXPJ7jyTLKBYrsqJg3T3u0AAAIABJREFU6BpaKsb0WAftZ54iPDOBy+On5cSjKMqT87eJhieZHukgFY/R17KPVDKCL7eKsuXvEQUqwptiGBqTwy2cePn/0t9yBG9eGRt3fTndKtK+uLWklowRGO8kFBglM6eY/NL1rwsmJSw2F1n5dXiyi5kcbmdypIlaRHj9TqZrCQY7jnD02W8SGOuluGYLm9771Te8wCvJyhuuZVZUOxm+IuS5QfOpZARZVnC4s1FUFTDhUsV7koRisSErFkwzjqbF3pJzFK5fqWSM1hO/5MRL/0I8EmLVtt9i7e3/D56skiXnmEiyjMTS7TxkxYInqxRZUdCS8fnnn9tbAKRzoKWyTNWWLpTVNQ09tTgTvVpu7CTuNxSLxdi7dy/Dw8NIksRHP/pRPvnJT1JaWrpo2OL5auhL/kNK6aGQ50PuSCSyZDh9/n6Xau/h8Xjm+2Wbpkl+fj41NTVLVmrfKLRkjPoDP6DltUdRrQ5yCuuwWF2EZ0bobniR0d6T3P
y+PyMrt5rp8Q6CE31oqRSh4CimaZKVV0N2wQoGOw4z2nuGwHgnY31nMfQUGVkl6R5AQ020vvYYeipGQflGrPYMRnpPcOy5bzM51EZ2QTVZhdUkYjOceeXf0wMVUokFx6lrSZqO/zeNR/6LZDxMTlEdXn8FM5O9NL36c0Z7T7N59x+TV7Zu0ZWuG4nXX8nKmz9ONDwJLH5OxyNB+tuOEI+EKK7ZhDennPzyTdjsGVf/YAXhIqrViWp1pAfnRgJol3jjNPQU8UgAPZXA7vRcvme+IFwlsqzg81fiySpmrL+JicEGipdtw3lRhUMiNkNgvJNYOEhOYbXoZyhcUiI2w/jAOQLjHXj9lRSUb8ZiW9gSRJZVLDY3isVGMhxIX/ibk4yH6Dr3LA2H/hNdS7Fq2ydZseUTZHgLRI9r4bIsNheuzAJUi5V4JJBe4ZRfu2AbQ9dJxiNoqSSyakFR7cQj04Smh0hEIgx31zPa17jgNqZhoKVSGIbOcFc9k0NtlNRuI690gwivhStmmgbBiW7O7P93+luP4C+uY/Ndf0xh+SbUucFmr2cYGsl4CF3TUFQr6iVaLEmSjKJaUSx2DF0nFY+83aciXEO6nmKsv57jL3yHwFgvy9bsYuN7v4rvCi/wnl+Vv/jnRnqonmkgq1ZU1Y6sWMjwFmGzO0nGw4SCQ+SzYdH+UokIqUQMSZaxWMU8n3cyXUvS3fAcJ17+LolYhA27vsiqrZ/B6cm9ok4LSz3/ME2SiQimoSMrFlSLA1mxkJVbjSzLRGYniEWmsbt8i/YXj6Rn8ymq5ZKvkVfLjZ2M/obC4TB9fX0kEgncbjfbt28nNzd3UXANEAlH6O3tJZlMLsr6ZFnG7/djsVjmhzemUktX+g0ODjIxMXHJ8DonJ4esrCwURSEQCDA5OYmmaTd8eB2c7KGn8WVkWWXL7q+RV7oWWbGia+ney2cPPkRv8x6y82tZf8eXSMZDdNa/SNVNd1G74SN4skuRkNC1JOHgBJLUwOrtn6Zo2VYc7hwc7hx0LUkyHkZLpnuGJ+Nh2k8/wcRAM5Vr3suaW34bt7cQXUvS1fAcZw/8B6l4YsG/52jvSZqP/TfJRJRNd36FoqrtWCwOEvFZzh78ER31z9Ny4he4vYV4soqv3QP6P5SVV02GtxDDuHR/9OnRdgLj3ehakmWr76Z8xS7sTt8le2UKwtXkzPDj8y9jyHKC0d5TTI214fIWLLiYNDPZw/hAPYlYlJKaalwZi5c+CcK14stdRl7ZWsYHmuk69wKZ/gpq1t6H1Z6BlozR37qf3pZ96LpGTtFyMrPEsEZhsWQ8RG/LXtpOPklx9RacGbnkFK5YsI2uJYiGJohHZ9JhzFyVmKFrDHcfp/n4IyQTEZZv+Rgrt34Sd2bBomXKgnApqsWBN6cCty+P2ekRRntPUVC+cUElYjw6zfRYG1oygdOdjduTh6xYqNv8USIzI5fcbywSYLTnFOHgFLmly8kvW0d2fi1257trqLrwPxMLTdJ26nF6GveRlVfB5t3/i6LKLSjq0j38ZVnFas9AlmWSiQjxaGDRNqaRDh3j0cB8ewfhnSk9nHGQ+oMPMjHcQdnyW9h059fw+ZctOaTONNIXTU7u+RdmA0PUbriflTd/YsH7qmmaxGNBRvtOA+DOzMXhzkZWVDKySvHlVTDa10h/2wHKl79nQfW/noozMdRAPBrGk52HJ6vk7X0QhGsmPdi4kZN7/o1YaIa1Oz7Lmlt+5w0LWkzTJDIzyvEX/oGp0TZq1t3HTbd9ftHnulQyynDXqxiGicubi8PtR5YV/MVrcLg9hIPjDHYcmm99c+F2MYa7j2MYBm5vPg7XtSuuubGT0d+Qpmkkk0lM08RiseD1epdsz9HY1EhDQ0N6mOIlKlVLSkpwuVxISPT09BAIBCgoKLhkBffx48cZGxubHwZ5Mb/fT1VVFSdPniQQCHDu3Dm2b9++ZCuQmZkZDhw4gN/vZ82aNddtz+tYeIpYZBqHOwuvvxK3t3D+D2nllk9QUnMbDqcPpycPSVKwOTKRZAWHKxtvTgV2l49EbAaQMHSN7IJaylfsmvujkjEv8ViGgkOMD5xDtdooX7GL7Py6+avtlavuor91H7PT4/Pb61qSvtZ9zEwOUrPhXkpqd5DhLUKSJJyePGo33M9Iz0mGu19jdqoXd2beDbusVlFtb/gBLhqaQFEtSJKM1e7B4fZjWaJSQRCuJqvdTcWq3Yz11zPa18BrL/wjU8Mt5JbchGqxMzPZS+e5ZxloP0aGL5eyFbtw+4qu9WEL7zCGodN59hmaXv0Z4WA6hNFSCaKz0yTjcRoO/5TO+l8jywqSLFO2/A5Wb/9c+v3M6aNy1V1MDrcw0Hac48/9Iz2NL+H2FhCZHWdioJHQ9Di5pXWUL98lhkEJl2RzeHF58tC1FP2tR/BklWJ3+uaXfRq6xlh/Pd0NzxOdDVBQsYbMnHIAQoFBeppeZHq0m7Llt1K34X4RXAtviqyoeHOXUVi5iaZXH6Pr3LP4cquoWPleFNVKIjpDX+s+ehv3olqtZBeuICOrBNViZ/XWz2AYi4t8TBOmR9uIRwIk4lFKqrdz022fR7XYsdhESChcGS2VYGygno4zT2OxOVix5eMUVt78ht97IN3P1euvwOXNITIzwUDbAfLLNmC9KDxMJkKM9Z8mODGAw+Uhp2jV2306wjWSSkToa3mF/tYjZGYXsfa2L+DzVy4ZXAPpHsMWK/HYLIMdp0glIniyyyip3p7OK0yTaHiCluM/Z6D9OFa7g+Lq7dhdWUiSjMvjp3zFexjtbaS3+RX8xatZsfkBVIsNXUsw3HOCluO/BCCncDneXNHw+p0qGZul6djPmBrtobRmC6u3f+6yKzElSUKx2EklIwx3nSMeCZLpr6R8+c75z3fJeIimYz+jt/kgqqpSWnsbzoxsJFkmK6+a4pqbaT/5AucO/wSvv4ri6u1IkoSWjNNx5gn6Ww+hqColNbde04vK78rw2m6343a7kWWZRCLB1OQUmqZje9172/j4OD/+8Y/p6enBMAwkSSIRTywIn6uqqigsLKS/v5+uri4aGhqorKzE6VxYqdrV1cVzzz1HIBC4ZAsSh8PBjh07eOmllxgYGODZZ5/lzjvvZO3atYuqr03T5MCBA/zd3/0dgUCABx54gD/4gz/A7/e/dQ/SW8ThzsHu9BIY7eHckR+zfNNv4S9chWq143Bnzb1op0P3S4X6F5MVhZzC5TjmXuiXEpkZIRqawuH2kZFVsuBDizuzgMycClT11PzPErFZpkbbAPAXrcLu9M4fkyRJZOZUkJFVzGjPGULBYXQ9dcOG14Jwo5Ikmfyy9Wze/TXOHflPhjqOc/Ll76Fa7UiSjKYl0ZIJ/EU1rNr+GcpXvAeL1XXZ/QrCm2KaxMJTTI20MzM5vujXkZkZ/n/27js6jvu+9/57ZnvfBRZ90Xsj2MDeRapS1bZk2eq27MR27OQ6vjnPSXJynzz35tzYsePENZYcdVmNlEhRoth7LwAIEIUgCtGIRvS6bZ4/QFIsYFExRUnf1x88h+TMTt3Zm
c/vN9/fcH8/AKqq4IlOP1/iRlF1RCcWMWPZ9zEYLTRW7qK+fAc6nY5wKISiqiRkTmfKgqdIyJgnZW/EpAwmK77MBbQ3HqGubDPHdr1E68l9RMRmYjDZGB7opLu1kr6uVix2J8m5S/DG5aGFw5xpr6Gt/jCjQwM0Ht9OZ9OxK47lYTTbyZh6NwVzH8VkubxOrPjycrjjSCu8g66W43Q0VbF7zT9z4uhqLA4vg70tdDZVMDLYjy9zOhlT7sRotp8vuzApTWNksBOd3oiiTNQXtjqipFHlSyrgH6GpehtHtvya8dFBYOJtkqG+bgJ+PxV7/kRDxSYAdAYT8WmzmXXb/0ALh2it209vRwsABz74JaU7np10GYqi4o5Oo3jFD4lJmkZkbC6ZU1dSuv15qg6+xfjoAMk5S7A6ohkb6aP15B5qS99D0zR8WfNJSJ97Y3aGuLHO3uM1HN/I6OAggfExNr/611e8dqk6AzFJRcxb+Q9Y7V6yp99Hx6lSOk5VsfW1vyU6sQC7O56gf4Tu9hrOtNUSDPpJK1hM5tR7z9cuNprspBbcTlv9ARqP7+HA+l/QeHwzrsgkRoa6aD9VxsCZdtzeePLnfhOz1X0j94q4QbRwiL7uRk5VbiM47qe17gjv/O6hK/4W6vRGMoruYtZtP8ZktpNb/BCtdYfoaW9i2xt/R3RSIU5PIqGQnzOna+hurZ54Ozm7mNzih1DVifs/k9XFlPlP0NlUTlfLSba89mPiUqdjtkXQ391Ie2MZY8PDJGRMJXvGA59pibkvZXjtcDjIyclh69atDA4O8u66d5k7by5paWnne0zX1NTwH//xH2zfvp0ZM2Zw+PBhBgcHaWhsYGRkhIiIidDV5/MxZ84cysvL6e3t5dlnnyUlJYWZM2diMEwEnPX19fzsZz+juroah8MxaV1sRVFYsmQJ8+fPZ82aNZSWlvKLX/yCv/u7vyM/Lx+9YeJQhUIhtm7dyi9/+UuOHTuGw+EgLS0Ni+Xm7B3r9qZSMO8Rjmz+DdWH1tB4fCuOiASiE6eQnL2EuLTZ111PWdXpMVk91wyOx0f6CQXGMTijJspdXNAjXVX1mK0eVL2OcyMqBMaHGR/pwz82SumOZzhx9J2Les6Hw0HOtNXiHxthbLiX8Bd4EDh3VBorvvEfBANjE6PPGq7eW0GIG0lvMBOfNgd3VBq9nXX0tFczPNBBOBzCbPMQEZNFREwmdlc8BpPtpnwbRXy+qaqOrGn3kZA+l1Bw/OoTKwpmqwebM+b8P+kNZuJSZ+GMTCJ3VhVnTlczPtKH0eIkIjabyNgc7O44eeNFXJGiqBOl1pZ9D5PVRV3ZB7TWldF+6jiqqhIKhdDCYTwxieTPfZisafdhsrrOvvY+hH9siHAozMjgICODg1dcjslqITblNNoVyoyJLy+d3kR86iyKb/0RpTueoaX2CAM97aiqjlAohN6gJzV/IUWLnyY6sUhCaPGRaFqYsZFeulprGRu+vLb08EA/wwMTjcR6gwGbM4ZwKIAWDjM+2k8oGETTNPq7O4COSZehKApaeGKQZFCwOqPJn/MIOr2RqgNvUnVgNXVlH6Dq9YRDIYL+cYxmK1PmP0z+3EexOm6+DmPik9PQCAXHJ573w2H8Y+OcOd18xel1+omSM+FQAIPBQlLOEubf8/eU7niGjqYaBs60o9Pr0bQwoWAIk9VGwbwHmbLgyYtKfyhnx0WZfftPMBht1B3bQkPFTnQGPeHQxPhrscl5TF/2PXwZ8+Wa+gWloRH0D+MfGz5bZ3qYseGGK06vNxiISWoFJhpS4tNms/iBf+bo1t/S1lDBYO829Ho9mqYRDAQwmEzkzb6XqUu/e/6NPJjIx2KTZ7DogX/m0IZ/53RjBYM9HSg6HaFgAEVRSZ+ylOJbf3TRfJ+FL2V4rdfrueOOO1i/fj3l5eVs2LCBnp4eZsyYgc1mo6WlhZKSEpqbm7n//vspLi6mqamJoaEhdu3axf/6X/+LgoICnnjiCdxuNw8++CB79+7l0KFD7N+/n+9973sUFxcTFxdHX18fhw8fpqGhgZUrV3LixAkOHjw46XpFR0fzgx/8gPb2dvbs2cO6deuoqalh5syZ+Hw+/H4/tbW1lJWV0djYiE6n45FHHmHp0qWX9fS+WeiNZjKn3ktEbDZN1dtoqd1Dd1stnU2V1Ja8hy9jFtOWfo+o+DxQrt6KoyjK2R5CVw+kQqEAmqah6vQoisql+dVEK+eH/6gRRtPCKKqK2eY5+wrFxetisXsnXpWMSj3fSvVFpDeY8URnfNarIcQV6Q0mnBGJ2F2xxKVMJxQMoKFNDFBmtKDqjBJaiz8fRcFij/xEgynqDSZcEUnYXbEkpM0hHA6hqDoMRgs6nZHLfrSEuIRObyTaV4jj9p+QM/OrnGmvZrCnlXDIj8niwuVNJSI2C4fHh8ninLgXUhWSc2/BG5836YC3l1JVPRZ7JEbpdS0mYTQ7SMpajCcqg+7TlZxpryEwPozFHklkbA4RcdnYnTHXLNkAgKLgic5g8f3/m4B/GItdel1/mRmMVtIL7yTaV0Q4fHmHrwspiorR4sRq96IBs279GwrmPspkg9JfMiMGg+V8eTlV1eGKTKJo0dOk5t/GmdPV9HXXE/SPoDdacUYk4Y3PwxmReM03gMXnl6KoOCOTufWR3xAYH7qu6SfeFPGComCyuMmYchcxSVM5c7qGno4axkf60ekNODw+ohIKcUYmYbV7L8sazv2uL7j3n8ib/XU6W8sZH+7FaHYQEZuNNz4PuztexqH6AlNVHdFJU/nKD1ddR2dJBeXsM8k5BpONlNxbiIzPpaf9BGdOVzI23Iui6rG744lOnIIrMhmrI/p8r/9z9EYLSdmL8USn09VSzpnT1QQCo1jskUT7phARk4nNGfuZ52Bf3BTuGmbMmMFPfvITfvrTn3L8+HF27drF0aNHJwZr8PtxuVw8/vgTPPXUkzgcDnbs2EF7ezvd3d28+eabHDlylPvvvx+Px0NhYSH/+I//yM9+9jP27t1LeXk5dXV1GI1GgsEgDoeDxx9/nK985Sv89Kc/RVEmTrZLM1idTsf06dP513/9V5555hnWrFlDRUUFtbW1GI1GNE1jdHSUQCBAVlYWTzzxBF/72teIj4+fdLDJm4XJ4iQ2eToR0ZnkFj/EyGAXpxsPU314FQ0V2zAYbcy+4ydYHTHX/jC4VnY98dqhqkyEWuHQxP3LBfMEA2MTtbLP3tfodEZ0ehMGo4nMqfeQnLMYneHyWuMKCkaL84qjVQshbhxVZ8Ao5XvE55WiXHMMAiGuRtUZsLlisTi8RPkKJ94E0DQUVY/eMHFuXRSwKApmq1teNxafGr3Rgjs6HUeEj8SshWjhMKpOj95g+chlj/QG82feo0vcHBRFxWybKC35UTk8Phwe38dbrqrDYovAbHUTEZtFKDA2UTZUVdHrTeiNFgmtvwT0BjMRMZkfa95zZY8iojNxRSaTmLVgoiycoqAzmDAYLFctuaDqDDgjfNhcscSmzEQLB1FUHXqD5aJBccUXlYLRZCfqE9TU1xsteKLS
cXoSSUifSzgUnDj/9EYMRutVz79znRgdHh9J2YsJa2F0qgGDyXrTlMy9YeG1TtXxt3/7tzz55JOEw2Gio6NxOC4uF7Fy5UqmTZtGMBjE6/Xidk9+g33LLbewYcMGgsEgkZGRRERM/uO2ePFi3n/vfQLBAB6P56LpLBYLd999N/n5+ezdu5fq6mqGhoawWq1kZGQwc+ZM0tPT8Xq9APzDP/wDM2fOpKqqilAoRGZmJi7nRLFyk8nE4sWLSUtL49ChQ5SVlXHmzBmMRiNpaWnMnj2b7OxsYKKus6ZpqKqK0Xh5D0Gj0UhRURH/9E//xKOPPkpJSQknT55kYGAAnU6H1+slPz+fwsJCkpKScLlcN3VwDROvfylM9Go22zw4I5OIiM3G5U1h95p/prP5GP6xwesPr6/BbHWjN1jwjw1OvB6rhVHP9uoOBEYZGewkHPqwJd9otmN1eNE0jXAogMXuxWS5uBC9pmnSm1MIIYQQNxVV1WM02UEGthOfAUVR0BvMEqyILwxFUTEYrdLDVXx8n6iDgoJOZ0AnDc3iE9DpjR977Jyb+Tf9xvW8VsDn8+HzXbk1NCIi4opB9IXcbvcVg+0LuVwuXIVXHg3TbrdTUFBASkoKo6OjhEIhVFXFYrFgt9vR6T5smUhPT+exxx5jZGQEmAisna4PX6U0m83nB29cvnw5weBEK8e5z9Lr9TQ3N9PX10coFJpYtysEz3q9nri4OKKioigoKGBsbOz85xkMBqxWKxaL5XMRWtccWUVtyVqypt9HasFtGE0Tg7aYLE4c7gQMRguaFp4YahyY6CKtnR3U8hqvfF2B3Z2AzRlFT0cdfd0NeONzUc+OGN3f3ciZ9hME/R++imEw2ojyTaGpZg+t9ftJyVuO0ew8H1b3nznF0W2/xWC0UTDvUVyRydLyLoQQQgghhBBCCCHEn9mXtmzIOaqq4nQ6cTqvXtNPVdXzgfOFxsfHef3119m2bRtdXV08/fTTrFixYtIa1JWVlbS0tBAKhUhNTSU6Ohq9/sqHQK/XT7rMzwtFUTGaHfR0nOTI1t8SCIySlLUYk8XFUF8bVYdeZ7C3nbTC5RjNDhQmekED9HaeZHigE0XVEf6IgwU5PAnEp8/iTNsJKve/gtniIjqpiMHeVsr3PM9wX8dERn62I7Wq05OSt5ym6u20nTxI6Y7/InfWwzgjE+nvbqRi70s0HN9C5tQ70emMElwLIYQQQgghhBBCCHEDfOnD609Kr9fT09PD1q1b6ejoQNM0UlJSyM/PvyiYbmho4IUXXqCtrQ2TycTSpUuJjIz8wpeiSEify5SFT1Cx52X2rfsph42/Qj07cmkwME5i1lwK5j2Gxe5FUVXiUoupLXmX+vItnG44SlLOQooWfvsjLVNvMJFb/BCDva00Ht/Oltd/gsFkQQuHiYjLwpc1j/ryLYRDwbM9vMETlUbxrX9DybbfcbLsAxqOb0GnNxIKjBMKBkjKXkDurK/L6NJCCCGEEEIIIYQQQtwgiqadr9fA6OgoiqJgMpm+8KHqp6myspIf//jHbN++Hb1eT1FREUuXLiUjIwNVVTl16hTbtm2jpKSEwcFBbr/9dv7lX/6F3Nzcq/a8/qIYH+ljoLeF7tNVDPa2EA76MVndRMRk4Y5Kw+6KO1+TJzA+THvTUTqaSgmHAkT7phCXWszYSB/+sUHsrjjMNs9FvZ9HBjsZHujEbPVgc0aj6gyEwyGG+k7T3VZBT0ct4VAAV2QyUQmFGMx2xoZ7MVmc2Fyx6M4WoA8F/QwPdNDTcYLejlr8YwMYzU4i43JwR6Vjc8Z87NpBQgghhBBCCCGEEEKIi2maxuDgIDab7aISzudIeP0pCAQCHDlyhJ///Ods3LiR0dFRnE4nZrMZRVEYGxtjcHAQg8HAypUr+eEPf8i0adMwm2/OQuh/DpqmEQqMEQyOg6adHY3cPOnIpaFQgKB/BE3T0BvM6PQf/3wMBccJBsbOfpYJnd58zc8KhQIEA6No4RDq2RF+b5YRVoUQQgghhBBCCCGE+KKQ8PoG8fv9tLW1cfToUfbt20ddXR0DAwPAxACTmZmZzJs3j8LCQuLj4zEapQevEEIIIYQQQgghhBDiy0vC6xtsdHSUwcFBRkdHCQaDwERdbKvVitPpxGg0yr4VQgghhBBCCCGEEEJ86V0rvP7iF1y+wSwWCxaL5bNeDSGEEEIIIYQQQgghhPhcU689iRBCCCGEEEIIIYQQQghxY0l4LYQQQgghhBBCCCGEEOKmI+G1EEIIIYQQQgghhBBCiJvORTWvFUXBP+4nFArJoIJCCCGEEEIIIYQQQggh/mw0TSMcDl8xi74ovDYajRJaCyGEEEIIIYQQQgghhLghDAYDqjp5gRBF0zTtwn+45K9CCCGEEEIIIYQQQgghxJ/NdfW8vtqEQgghhBBCCCGEEEIIIcSNIgM2CiGEEEIIIYQQQgghhLjpSHgthBBCCCGEEEIIIYQQ4qYj4bUQQgghhBBCCCGEEEKIm46E10IIIYQQQgghhBBCCCFuOhJeCyGEEEIIIYQQQgghhLjpSHgthBBCCCGEEEIIIYQQ4qYj4bUQQgghhBBCCCGEEEKIm46E10IIIYQQQgghhBBCCCFuOhJeCyGEEEIIIYQQQgghhLjpSHgthBBCCCGEEEIIIYQQ4qYj4bUQQgghhBBCCCGEEEKIm46E10IIIYQQQgghhBBCCCFuOhJeCyGEEEIIIYQQQgghhLjpSHgthBBCCCGEEEIIIYQQ4qYj4bUQQgghhBBCCCGEEEKIm46E10IIIYQQQgghhBBCCCFuOvrPegWEEEIIMSE43EpV/SBubzy+OCfKdc+pMdBcSX2/geiEFOI9xk9tnbTgMG0NVRwrr6axpZOhsTAGq4tYXxp5BQWkJ0VhoYOSw40YPIlk5fgwX/+KTyowcJIt765ld7XKnDtXsmRWBnbdp7M9N0pg9AwNVUfZs6cOV84sFi6ZTpThs16rLzhtiJqDm3hv/RH0qUtYuXIJaZHXd6urhcc501bL0X17aA0kMmPBCqYkyQETX1JaiOG+01SXHeBwTS/e9AXcuzznz/vgqI3RUrOfdas20OOYyb0P3kN+rHwHhRBCCCHhtRBCCHET0Ohp3MPad/cTjJnNrcvTPkJwDaBgcpho3bKGjdt83HLnrUxN9/BJ8t7QWAelu9/nzTfWsvtoPcOqG19yCr5YNwatloNbV/G7rlGskbG4DIO090bzwHf+kqQsH+ZPFDSHaT22ndWv/jfvHBqnOewlKSWFKXGfj1uW4FAT+7eu44033+fAsRO093q5/69imL5wGlGkXxe/AAAgAElEQVSGT5jqi6vy955g9wev88zvt2Aq7MWTkEnK8uRrvGYYpK1mH+++/iJrNh/mZHMPqfOfIjb/FqYk3aAVF+JmofnpaCxj8ztvsH77QcqqmxmzZnH/U1ncvTznz7rowFALx3a9ye9/+zoj8bUYYwvIeTD7E/2OCSGEEOKL4fPxJCiEEEJ8YYVoK1/Hi3/ag5K0gJXzphDr+ui9zYzORObeuoKR1W/z/pogoXv
uYmaG+2PUB9MYOl3Cqhd+z/Ovb6Ku38PsFQ/xtfuXkZ/qxWrSo2ghxobPUHN4I6+/+hbbDtYwbJ7N0qExgtpHXuAlVByR0XgjXNhsYbxeD07r56fKmc4STf6cu/jKQAenThzlyGmVkbEA2ifeL+JadBY3kV4vbocNc4QXb6TjOhqBdET6Crj1nttobqzh4L4KPAOjBENywMSXkKLHE5vF0vu/gcGs0Vi6n1PDMYyNh/hzfyN0JjtubyzRbivdzkjiYj/O75cQQgghvogkvBZCCCE+Mxpnajfx3LNrOBO5mIdvXUxWvAvDx3hiV1QTEXG5LL93iI4/vMY7bymYvnEvU5JsH6kX92jnEV753S/43fPraVPy+Pp3/4qnHl5OdoIHi/HCPnCppKSkkZEWh/PffsXbB/wEQ0HCH33VL+NOW8z3/99MHhqEiNh4YhyfnwhD0ZnxRCdTkJ9FYowbVfk09oi4HjqzjyUP/k+yFj6NaosiPt512bkfGmvhWEUrIYOP6UUJqCiYbB6Sc3PJTE/Apb/5e8eHxtqoqGxmTEmgeJrvzxjwaQy2VXKiqR+TbxYFPnls+OJTMVpcxKfkU5BXSEqshWOtn+4ShrvqOFl3ilDMQqanfthQqxq8FC39Nr9+7W78Bjfxid6P+AaSEEIIIb6oPj9Pg0IIIcQXjL+vgndeeZlDHVHMWbiIgiTnxwquz1MMuBOmsWJJJt2l77H6vX0094eue/ZwoIXNb73Iy3/6gJpuJ0vuf5RHHr6TgmTvJcE1gIrZ7iVn9gM8+ch9zM7yEAqGuP6lXZnO5CIhLZeiolwSY1wYP4d3K6peh06vk/DlRlKMuKOSyJtSRE56PE7LJedseJCKXWt5+aXVlJ4avLgnqapDp9ejKjf5EdMGqdq3jpdfeJOjDQN/1t6wgcFGdr3/J156YwenB6QR5stFh05vwGj4dIt2BEdaObLtDZ59fgPN/ZecU4oemzuOrMKpFOSkEGGTgiFCCCGEmPA5fBwUQgghvgC0IUo2rWbNxlMkFBUztciH5VPo9amoFlJnLqDAN8aud1ax41Ado9eVcGm0lW3lvfe3UNLYT3TOIpbfsoiCRCf6q9wt6EweCpfdzpLZuUQYQ4Sl2sJ5N3kM+iUToLl8M6+/8AIbDjYxGvg8nqhBWiu28cbzz/P+/lN/1m3Qgj2U7nib5//4JmWnBj+FckDic+lTvIhpwX6qD63nud+9xMGTfQSkPUQIIYQQ10ne/xNCCCE+AyOnD7Nxw3Zqx3zcmZ+Fzz15netwoJ+GylJKSo/T0NaDHxu+rBnMnzeV1DjnpINZGZ1ZTC1IZe2W9XywaS5FuclMSTBddX20UBsHd+7kUGkjI2EXi4pnU5iXguU67hQsEVO4/wkbfqsH9yQrpIWGaT1ZzuFDJZw41Y1fsRCTlMvM2cXkpEZPugz/UAfVZUfp0nzkFOST4P4wQQ+MdFNbcYj6XjfZBVPxWTsoObCbQ+WtaDYfM+YtYGpeEg7j5cmLFhqm5WQFJUdKONk2jNERS3pOAYX5GcR5bR9rcDD/cAfVpQc4cKSKzkGVuKxiCpxD1wjyNYbPnKKqvITS4/V0D4Tx+PKZM6+YnNQoTJc1GGgM9zRRdayEsuN1dA2E8fjymDNv1iTTa4z2NVN+pIoxRyJ5BYkMN5Swb98R6jvHccbmMnf+HPIzYiYZXFNjpKfp/Hp19YdwJ+QyZ94sctOiL1uv8aF2qkqOM2CIITs/gf6a/ezaV4kWVcSiRfPIirdOsu0BOk9VU1lZS/fw2QRLMRKTnEN2uof+5mqqT3Ywfj7cUrB5U8jNyyM52kz/6ZNUVVbS2htAZ/aSm5tHTnr0xNqHRmhvrKSiboiolAKKsrwo4REayzby3K9/xSvvlzPsNlB2YBOrtUo8sZnk5ucTZ790HUP0tVdzdP8+SqraCBijKChewNyZOUROUoNdC4/ScaqKIweOUN3QwUjISJQvi6nFxRRmJWA7+/XWAn001lZxvLqVsTAoeifJmbnk5SZiGu+i/kQVlSc7CWiAYiImKYu8giwizWM0lW/h+V//Jy+uK2XQoXHs4GZW66pxx6STm1+IL+JaZ2+Q/s4GKkqOcKy6iZ7BANaIZIqK5zJjSjqus5eI4GgnJVtf41e/fIaNZS3EajXs3bya4WonCWk5FExJw6ED/3A3teUVdPqdZE9NZ7zxMDt3lTHqyGbBksUUJJ3bqWGGe1qoLD3AkWN1dA8GMbviyCksZua0HGLdxol9E+yn+WQ15ZXNjIZB0TnwpWdTUJCC2d9DY+1xKk50nN03RqISMsifkovXeu57rjHa30Z16QEOHaunZ0TF68skMy2FxPhIYhPjuf7y+RpjAx2crDzK0WMnae8dxxaZTMG0mUwrSMM5yeU0ONZHU20pBw4eo/F0P2G9E19GAcXF08lK9lz20BUaH6D5ZBlVjSNEZ86jIGGU2rK97DpQw7ASSd70ucwuziPSHOTM6XqOlx2nc/jDxNdojyY9Zwp5KW4ID9N+qoayknqGwqAaXaSm5zEtP+H8MRjpb6e6dD+Hy2rp7PdjcsSSWTCD4hkFJHiM17VX/IOd1FWXUnlqote/wRZFWvYUCtI8jPa1UXu8hJOnR9EA1eCceIOmIBGDv4fKfW/zm3/7T9YebMCZ5uPgtrehwU6UL4OiGTm49aCFx+k9fZJjlS3ovVOYPy3u8tI/4wO01JVx4EAp9W19BFU7CWl5zJxVTG5qxGX7WQv76e9soKysljFLOgvnJjPYXMGubXs52RUmNnMqc+fOISveKg2OQgghxE1KwmshhBDihgtSe2g3B0trsCUsJCk+DvMkYeVA6yHefP6/2VxtZNay25m/rJC20vW8+cd/ZPW7t/GXf/UYS6f7MF36xK1YycjKID4iyP6duylbPpfs+MzLp7vAeNcJyitqae4aRWdKIzMrmfhYy3U9zCuqlaScHDRUdJeEqIOnS1jz6ktsLB3ClzuTgrQU+lqOseW193jhWQ+LH3iSx756y0TIGRqgoeIgWzduZNvuQ1TV9VJ414/4q8QcEtwK3Q2H2PL+O6zbfIDK2haiZzzCA8vq6S5Zy/o9FbR2DzIW0BGXdQtP/+j7PHTnNC5sEwgMNrDpred4a+tpEgrnkJ/opnrfOlY//wuGVA/xCXFEumxEps7l/nvuYG5B9DW2PEh79XZee+E1DrcZyZkxi/R0HR1V7/Hs8SPsLe8gTMRlc2nhASp3r+GttQfpU6OIj9LRWXeYdW+9xIsvz+Lhb3+HB+8qJvpsKDcx/VpWvXuAHiWKhCgdnXVHWLfqJV54uZiHv/VdHlpZTJSxnxOle9n0/gfs2F9CTaOeBfffRs7BAba/9R4ldV2M+EPoTQ6Scpfx6He+w0N3zSDS/OFyqvau4601+ziDF1+0ga76I7y36iVefHkmDz31Hb5+92yiTUM0Vh1m+6aNbNt9kIraILPvf4AZVaNse+VVdlV1YYpZQEhzkPXw7En2mx6zKURDyfs899p2WkcjmHP7Q3z7L2Zht9oIWcI0HlnD62v30e6PYOay+3jkia
m4bAYUFMw2C+NdFax5cTPB9Hv5QXYhfe3VHNq5kQ2bd3K47ATjkQv49vfjKczyEh7qpLmphZbuUcJhP4Gxftqaaim39BI/7iAhLfei8FrThqja/yZ7D73GpqOn6BsaZmRUw+ObxcPf+T7f/uZSYi0fBqbDXdVsfvtl1uxqxZNezLSsVEa6qtjz7m946VkjxXd8kye+eQ9FKXYUvRV3hIvx9jW8/OpaKnsTue9bP+CvsxKJ1VtwuFzQv521q9aytwbm3/80P/Jl4LR30dLcRHPXuW0Y4HRzLeX2PmJHrMSn5l81vA77z1C2YzUvvLKNsahi7rprEYXBBra++w7/3+urWPi17/Ltx24lyakw0NVGU1MDPWMhQv4gIwOdNNZWoOuLJGCwYtK3cXzbBrbtPsCx6lHyb7mTxa02dr/4HFuOtaOPnMHAmJ2C7ywm7O+hYu9aXv7TFrrUZGbPySPF3UPFgY38+xv/jTNzBY88+Sh3LkjHorPijHATPLOOP720ivLuOO745vf429wUfHoTDpcb3dBu1qxazc7KEMV3PMHfJGbhteqBMGcaD7D6lT9xuDOGFfesYIZjkGO71vOfLz9H2tJH+NHfPHBd4bUWHKD26EbeeH0DTf54pk7PxhfTwaHt/80Lf/gjc+5+nKe/9RUK488m2FqAjvqDvPPKy+w8ESaneA7p6W46ag6y9tk3+eMfUrnz60/yyFeX4HMqDHTUsm/TKt5ev4vS440YfPNYea+fko61vPbuAZp7BhgaCePxzeCBx3/AD761BLvVQqCvhlXPvsqBhhGi0ubw1Sf+hlnus4GrYsRqMzF0ej/Pv3GE6Nnf5IdTJ7574UAfNYc/4JWX1tEUSGT2vALSPAMcP7iN37z1HObkRXzjqae4d2nWNfeP3mzD4zHRvWE9L7y6g9GoYh7+7v9DXpoHg8mKy21jYP9W3nz9Xer9qdzx9b8iPc+HobeLpvpausdCBMdDjA330HSyAsugm1TNgS/FwvFDG/lg0w72Hz5OnyGD+578e+ZOi/uwQVEL0N1UyruvvsCmY6NkzppPdrqH7pOHWf/Cav74hwRu/dqTPP7wCpJdKqGxM9SWbOGNN99jz6FyTo96mbPiKwS7BnnjuVUcqTvD4MAQijWO2Su+wY9+8n3mpZqvfYIIIYQQ4oaT8FoIIYS4wbRQK+Vlx6lvGiVlZgKREc7L6nhpgWY2v/kc//38BqJW/A9mzJ3HjDQL+SkqdWVHeHbNy7yTlUtachxZ3ktDKwWXL5G4aDe9G0qoqKln2YIMEixXjqKHTzfR2t5JX0DD7I0jNjoSl/n6+6Hp9Jf3HB/tPMKffv9LXtoyzJIHn+Qb980lzmUgOLqIoozX+fWvX+Slf//f9A+O872n7iYnMoRicBHpUBnoaKCyupfYeSMEQhPb5IzNZt6yOZQf3suWujraRt9G9S9j6aKv8k9f+RHB07t48Q/Ps27vWtakF5Kfm8WCLNvZHdrLgfWv8Mwf1mOe9Sgr7r2bKTE6ZufYGer8Bc+/s4vOoaXcujyPjJQ4XPZr9UTUOHNyGy/89tdsbozn/kef4O5FGTjNCiNz8tjwShdH9h8krF0SXmuj1Ox+i+de2Y81ZwWPrZxNrF3PWF8xr/3m33n2tVX8wW/E6XLz4IoszIxSs3sVz72yD3P2ch6/e87Z6Wfx+u9+yTOvruYPfhNOl4sHl8Vjc0URaQ3T01JD5Ykgo+tgZOlClj/xDzzqGKf24HpWr9lI6c43GNPMuDwevro0HRNjnNj7Ns+/shtD+nIev3cucWeX89Z//Qf/9crbPPM7A063m4eWxqIz2jAro3Q0VFBeraJs34ZZP5vCZXeh6d7nULufgHalJEzB7s1k5vRCNq7fwP7qLor1DryxkViMeoxJ01iycCYH9+zk8MEB5rniychKxGWduG012aOIinBijYglKqeIbJ+NcF8Pdo8DXbCXuspKSM9lZGyiArveGsuUBXdyR9NJaspLqLOkMGfp/Tx2RwZmixOn68Lb4TCtVXvY5bYwd9Zj/MsTKej6S1n1wh957f3NvLcugylFU7m7OAIFGO+p4r1Xfs1/rWqg4NbHeOyRZSR5TIT9i5ie4+OPv3uGVb/7V3r6hvnhDx5hRrIFV1QaU/IzibaNsb60le7eIYKahmqw4k3IZvacaZTu+YA1m07S0dXPeEhDb4kmf84d3NXcQO2xo1QbEpm1+D6eujsLk9mBy3W1W/oQzeXb+dOzv+WDk8k88v35LJpVhFnJQtffTNn+X/HO6hgycvN55JZEHNEZLLh9Jc2NdZQeaiUqcQor7nuSJVkWzCYYHWzHog/R03Scsko/YyYnbtdcMhbdDcb32VU7jj+sooX6OLbjdX7176/SFbmEp773JPOzIzAoAeYX5xL/yh945uVn+LeuHkb8f81Dt6ThjEyhMD+beGeAdUda6eqZ2DeK3kpEXCaz5kyj4sB6Vn9QSXtXH+Nni+yHxls4smMN725vYfrXHmbZnKk4DEHinUG6T6+mvqsPv6ZxrToYWniQ6v3v8Iffr6bXu5Qnvn0veT4XBs5g8zezb8szvPkSuLwJJH1nMS5dmM66vbz625/zdpmF2x/9K755ey4ui8r4omKy097gv3793/z+5530DY7zw6dvI9qdwNT582moK2Pz2hOcafej1xtZNH8BP/6/T2IeqeK9V//Ac6u3sf7dJKYWz+S+GbFMm7eAeSW72LZ7I+2OAlSzh0jn2euUYsDu8eL1unG7Y5m2YClZcfaJBqn97/Drn/2RBuNsvvXX32JxrhejLsj84nyS33iW3z77Aj/vOsPQ2P/ksTuv3sipGqx4E/PJz8vEEfwTJ1pO0zswjgbozU4SMqZTPKOJfe+9xrbSVrp6hwlp4I5IZNbyu+nobOLw7nqssVksvecJ7si3YLZa0SlD9Nkd2E1jNB6voDfCyeBo8IKa7mHOnDrKm7//v7y6T+OWb/4NT9xdgNuqwz88i7zM1fzXL3/PM7/spKd/nB//YCU+ixNfzmwWzDlFyba1VJ9oxq+YMIXmcuvT/4fvRPo5vvMtfvfrV9m+4V1SC+Yy9S/mfYTe+UIIIYS4USS8FkIIIW4wf18j9Y1tdA9ame51Ybdf/nMcHGmhpvIEDS39RBvt2Gx2TEY9Rm86yYnRWEJ7qD1xivYz42R5Ly/NYIqMJMLtwDhaRl19Gx1nQiT4rvSzrzHY18fA4ETQYLI5cFitk5SuODf5ENUHd7Bjx15qWnsZOV/fQcXqSmT28rtYMieBqg2reWv1LkxFf8HiZYvISnSjUwCXG+eKh3igrobaX73FO6/8iYysFOLvm0p8Wj4OcydH9u5i35HeixZrtLhIyCgiLzORSLtKry2F+Svu5qF7ZxLrscJoHF21FRwrf4PaqmpONXcxL8uGCoy0lbJr21YONSh8/esZZKRE4TYpOAuWsXTRLvYeqKDebyGjaAH33DENj81y1WMYGmtg27pVvLOpnaKvf4MVy6fh8xhQAJdzCovmz2bblh0cO33xfKOdpWxc+x4nBvN4at5SCjO86BXQYlwsX
biXXTsPsOXQNnbuWcSc4gwS/WVsfHcdJwZyeXL+soumX7JgL7t27GPzoa3s3LOQOcVppCUXsHBhMXt2bWN3VSdJ+Yt44BtPsTA3GqtRY3j+NBK9Nn75m1cp27+JLdvmMHNqCknBMjavW0d1TwaPP7aMKRlR55ezeOF+dm7fy8Yj29m5axFzi79GWkoBCxfO4tCebeworwerj2lzH+C2qW7GvvZN+sYMxMT5rrj/VL2N9BmzmTN7CruO7qK5qZmWllEKIp3oTU4yZs5h1qxCth/ex+m203R0h8nxTMyrMEJn5wA2bxaz5kwlwmIirE9iyuxb6DxZxq4te2i+YFmK3owrIpqoSBdmvYreYMXjjSMxKel8r85w4MPpze40Ftz6Vb55Zz4xbhtKIB1/Rz3lRyupO1lLbV0LweIIDNoAx/au583XPmDUex+Lbl1BfsrE8QE3zkX3cl9rPVWVf2DD6jdISc8k9VtLiDSYcLlduByXnGOKit5gxh0RgcftwHBBiHjRNhgUdJNswxVpY3S01FFdVc8QqVhsduxWEzoM+BIT8UWZ2FFVz8m6VvzLEjGb7URGRRPhsqJXFEwWJ1FxiSQlGUELEXB7mLdgNhUHNrPh8HGwRJE3837unR1F4KGH6BlS8Mb66D55gPfffJUdTRYeuvs2ls5IxXW2lI/LNZ+77+2grrKc57au4a2ENLKyvsfsJCNOlwu30wL4L9gBCnqDGacnggiPk0srAvn7TtNQU8GJDphrteOwGTFgJCa9mPkLu7E0Bglds263Rk9DCevfeJXDnfE8+fgDzC1IxKQDsFNYVMzM/LepONRJ2+luxjSwDTVwcOtbvPpOLXHLvsuddxaTFG2aiMhdLhbdfjcdjccp/8V61r4VR0Z2Lk/enkJ0Ui65uTkkePT0qlHkFd/Go48sISnaiRpMR+s/xaE9xzjReILK6ibuLS7E4yti/tLbmb/5AJtaW6mtqqI/nI737HU6ONRNV9cw9pT5LFuQjNWg0FN/jE1vvcjGE3DP03eyYlY67nPHwDmbO+45Q0NlKb95bz1vxaSSm/9jFqRcrcSUgt5ox+mKwGnVweCFh0jFYHLgiYgkwm2+oJlAQW+0EhEVQ5THhl5RMJjsRMYkkpRkAjRCQRu5MxYz2t/CjjXvczBw8VKDIy2U7nqLF988jmP2U6y8dw7JMef2s5N5y1fS1XSc0v/zNu+teoX03Hz+8p507B4f2Tn55KRE8F7NGDFJU7nn0ceZkxGD3RQmyTZO/aGd/PumZqrLKugMziVlknJTQgghhPhsSXgthBBC3GDj3R10neljOGTF7jBjnqSHs86awfxb7+aMaRaFK6aRfK53taLHoNehU4MM9g8wOjKOxuW1OlWjA4fdhkk/SMfpLvr7xsB3WWHfszTCYY2wpk3UKtXp0enUK4/qrJhJzC7m9ggv6qu/5dmX36eqI0Dy9K/ynb9YwvxpqZj7j7Bn9z7KG42sfCiD5CTnRHB9ltHuY/a8WRRs2EHFrj3s3lPGojlFTE+0oka6sdstTDZ+paIzYTKZMOgVbO54UlNTiY88W6vaFkt8fAwRTh0n+vsYHBomDKhoDLadovFUK33+CAwGPfqzK6MaIsjOziIlMYLK+mECYRWLy37l4P7s/uqqPsjOHfto02fy9bxsEl2GD4+BYsDtceNy2lAIXjBfiIZjB9l/qJxmdOzZ9Aoth3TnP7OnroZev0JwpImTtQ00nx4h1HaIAwfLadZ07N30Cq0XTN9bX0PP+MT0dbX1NLcHyc6z4HQ5sVlMqIqduMQ0MrMT8TgmNsgUm8OyO27lyKFDHF9dTdXxKupODaD1HuHAgWM0BTX2bX6V00cuWE5DNT1+heBIM3Un62g6HSKnwIrL7cJuM6NXHSSl55JfkEpMlBGiYki+2u47yxKRw7TpM8hL3sOxijLKymtYVFiMTQWzO4XUtHQS3Js5UV7GsfI65mTkY1JgrLuG6oYBLLEzmZLtQkVBNZjQ6yNwOR1YPlH4pOCOTiEzO4+ESPvEMdVFEJfgI9ZrpaKzn77+fvxhCPec4Oi+XRyoClH8aAaZmREXnbMGSwxFM2ZRXLSeA2tL2L93H8uXz2NRxkT5E0W5wnoqE//3qUVoipnk3NmsfOgp8pjCghmJZ7/bCjq9Hr1eh39kmKGhAcY1uOoLF4oOg8mCy+3C6bCgV+zEJWVTUJRJbJQRoqJJBLRwD3t3HWTnzgqM0Q+QlZ2D84LjoqhmEnKnMmveDN7f+gZlh/aw7/DtzEjKvuq+UZTJ/+/c/uo/VcqmtW8yJSeCFTMSMBpcZE+fhydtBI/u6ntUC/dSW7GXLdtP4S6+jRkz4s8G1wAGfAW38N1//A/mNQZJnbYAjy5MV0Ml+7dvp37cxZzMXNK9pguOm4rNm0VR8QKmpW5ifcURdu/Yz21Lk/EZDBiNZkwGFbM5Al9qFilxrokHM52HqJhE4rxGyrqG6OvtJwTodA7SC2awZMlUtv+2lIqjeymtX87yLDMQovNUHXXNPSROe4h0tx4t3E99zSG2bSlFdS4nO7/gfOPBxDEwEZtRwOxFc3hn/QscL9nDrr33MCel4NoPiIrCFU9fQL3Sf15hDp3eiNXuxulwYzOrcFF4rdHTXMuBrZupGbZxf1Y+mVEX72eLJ5XC4sUUZ77H2zVl7Nq8m3tuSyPZpKI3GLBYjKg6lciYFLIy43GcPcndEdEkJ0XBeAMDfT0MBoDrK/8thBBCiBtIwmshhBDiBgsODzE0OkoAA0ajAf0kKa1q8DLz1kfInBfC4nThMAzTWH6QnTt3sn5jCacHw8QHgoRCoUmXoSgmTCYDBr3G8OAgo6NjwJXCaxWLxYzJNBHABsfHGPf7CWpc4RVyPTZ3NDaXi+IpGaxxKhyqG8fry6FgyhQSvFba9tdysr6ZvpAdp8OG9bKi3jqiMzJJTUrAEdpHfW0dbaf7mJ4YNRGMcKUX/JVzG4iq06HT6S6YTo9Bb0CvUwiHQ4TDobOvnSsoTATzgZE+evsGGBrXiNYrgA5PZAQetxOdXkWnXiW0P0cbo76qkpraZqyRC4nyejFeMpOiKqjqxQGkFu6h4eRJGptGMGebMRk1gsEPw217/BRufzCVRWNhYnOKiLFOTN/QNIxpkultcYXc/mDK2emnEms/17V0IlhS0KHT69BdFNrpiEzLJicngxhLBd0dHXR31KM2n6T+1BCmDDNm0yXLic3n1q8msWAsTEz2dGIdE/sS5dzRMGK2WDBfz+ieF+4jnZPcqdOYUZTGgbXllJUco+H26RR4dQQGWmhtbaVzMEznQDllpeW0rMgl3RHmVGU5LQMKqYunEW+7cMcrZ4PPj7Qal64Ver0Bg95wUThmMpowmfSEQyFCwSAhNIbbTlFXW0eX34zd4cBuvXTBKp7EZNIyU/Fo1TQ11NPQ2M2ijLhPsoIfgw5vSjFf+04W45oZl8fK8Jl6SvbtYPMH77O7uptw2E0oGCR4zd7JF1AUFMWAyWzBar342IdGO2huqKGuZRRnih2H03bZ99lgjycpJZvUGJW9rU3Unain
P5j9sbbQ6E4is2Aq2d5dHNjwDH9/upL9X7zbEOEAACAASURBVHuUh7+ynOw4H44YDd01uqgHh07TcKKCE5165kXF4rVfPIPRFkVu8a2kTw2jN1kwMkp72ymqq5rBNB2703XZ2AWKaiU+IYWczFjWVrRzqr6GU50hEhPg3EVOUS+9jqnodAaMBhUtHCYU+vA65ojNomjOIqa8c4Ca6hL2769hUVYRhkAXDSfr6BiI5s55WZhVCI100dZYzYlTI5im23G67JcdA701hoSUPNLjdGxtb6W2qpbeYAFRn8kTojJ544Q2TndHE8ePNxI25mB3uS8r7aGoFmJiU8jPSeDNklaa649T3x4kOVmP8uGOPtswe8Ge1ukwGAwomjbx3Q4jhBBCiJuQhNdCCCHEDaaFQoTDGtq5kHHSsE3F6vRiMnVTsf8dNm4rZ8yWyrTp8yieeoyyY9WAhqZdKW1SUVUVVYFQMEg4PHnIfY7NG0WkeyJ8Gentoruvn6EQ2K6W5CpGXC4ndqsZhVGsdhtmsxGFMAM9PfQPDBMMWQmHw4QnqTdrdEUR7fXgMIUZ6OtleHAYiLrqel6Lci71vmS32BOSSfLF4aSEmuO1NJ4aJDXPiYKGf3wcvz9ETHI6Sb54LNdIr7VwH+3t7XSfGUEfZ8RoMFwhaL9EuJ+enl76BhSy06ez/O4HyfFMtjAFvcmKzdxOWW8P/QMqWWnTrjm93XZ9g43pTJF4o7x47Crd4+P4x7ro7e2hb0AhLXUat6x8kNyIKyzHaMFuN1/f9l6TSkTqFKbOnEbKltc4XlZKeWUHeYuiqD92hBPdetKnz8dec5TyslIqT9xGcn4Px4414TckUliUeFn5iD8bhfOhmnb2z6GBfnp7BwiGjIRCISbLvQx2D5FRXjwWGBgcYLB/ALjR4TXojFY8XgNnWipY/+JGjtaPk5g/k+J5c2muOcLxHq5yLbm6yQ5BcGSQ/p4e+sdD2MMT17vL5lNteCKiiPJaCXQMMdDfx4gGlxdBujbV6GX60q/w7adbGfzdW5QfXE9bYyUHdm7lwcce574V0/Beo0dtYKif3s4OegMTDWPqZRumoDea0Z/9HC08ytBgDz29Y4QJE560IVHF7nITHROFUWtmeLCPvv4wJFxjg67QAqMaPGTmzWDBnDQOvXuCo/v3UndPIb7hRuoaWtH5FlGUPFGOJjg2zEBPN33jITzhEJOtnqJacbmiiIm2EWwZYbCvh+Gw9gmvwp8uTRtjeLiHM2fGCIe1C8L8C6lYHS5i4qIxUs/IUC+9vWGu9QqIcv4PIYQQQtzMJLwWQgghbjDVaMRg0KMSIhQMTxoqgEZf8wHefP6PbDpuZM7t93D/oiKSojXGjjoxTVZT4yIhQqEwIU1DbzCg0139J9+akEZqUjxeyzGah07RUN9GZ0+QmJirz6eqKqo6kRgrnMtcJnq5oUwMgNbXN8TwyOXduBWdBbPZhNGggt6Aqr9m9d6PzRI9ndvvXsmxyiZ27nybVbmZxEfcRbqji4MHjnJqJJm77r+LuVPjrn1zpI0yMjLK2HiYkN+PPxDg2kPBAQTwBwIEAiOM+1VM5ihiYy8f6PL8YoKt+P0fTm+8xvTXTTFiNBoxGlRMFjNGs45AIEDAP4LfD4ZPaznXQWfxUTR1OlNyN/FuZSmlZcdZXOjjyKET+G0z+eYTena+1Mh7x8soO1ZFrqGTmrYRIjOmkB372b7fr5w957XwCIODgwwOhsF88Tk8UebGismoTPSC1382t96h8S5Kt6/m+Ze3MBY9h/u/ei/TMuPwNwyx7ZqDk350yvnSJxojw4P09w9MUt5IxWg0YTEbUVQder3+2vW7r0iPMyaflU/+HVFJWbz03Cu8v7uGnevbaag7SX3D9/juE3eQ7Lpyy5QWCBIYG2d8bJjB/j6GAhoXFR6/fCvPl8/wj0zMMxwC9yUboTMaMdssGBQVnU7PJzsF9HhTcpi5YB4pG16muvwAB0ruBH0dTacD5N1VTOTZ6+zE9VgBwoyNDNHX1zfpMTAYTdgsJpT/n737DrPjOu88/z2nbuqcM7oRGqmRAYIECQJgDqYkm5RIKtKW5FlZXlujDbN+/Mwz2pW9z478rCzvji1rZi2PEiWRogIhJoEkEgkCBIkcuoFudEDnHG8Odc7+ceo2MgGCWT4fPiS7b9etdOtW9/3VW++REsfnx3lnty6862a3RkA6ESM8OUnEhdKL97PfTzAvl6BXYe17y9fOsizLsqyPEjuesmVZlmW9zwJFxRTl5xIUCWLxJKnUpXVkyfHDPPkv/4V/fvw4Zcs3cd/9m2maX01Brv+a+olqnSKVSpHJOBQWF5N7lQEIfbmL2LBhLUsWlCDcUY6+eYjTbQMXdGy+rCtUjReXllBcmI9U4wz2jzA+nrp0Mu2ailXlo7yqipKSoqtu1/USviJW3P4Yf/2fvs79q3zs+80/8p/+w9f4n/76O7w2VM/n/+dv8Bef30x9yTUEeSJAMBjA74ep0WHGx8dJX1PRaoicUIigP0pPRzvdPSNcvh7eZbS3h97+CVQwRMAfpbuzg+7uq0zfN0j8Wm571ymSiSSptJ/yqmoqKmsIhkIEAzF6ujrp7h6+4nLG+nro6R0g9q7dXh+kfsUa1t2wjLyZVo4fOcyO371Gc1+ChpWbuO/em1m7bjmBsRaOHn6F53acYCpdxLI1yyn8QMMpQX5RESWlRTh6irHhIYaGkpdOpl1clcFVDiWlFZRXlr3/q6pnOLHn13z329/l4EgRN9/7SW67YTE15YUErtIH+nr5cgsoKiulMKiZHh9lcGCQ9GWmU0rhuoqcohLKK6sous5gV2USJBIpAsULue0P/we++ff/xHf+9s/YuDCHvubd/OrnP+OZHae5zFlolgwGCebl4k+P0d9zhvau+OWXlY4TDU8TTYYoKCijrCREKjzJyGA/o4lLTwTaVbiZDNoppKSsmqrKd3YBw583h6ZVG9m4upiB9mb2vvxb3mzrZlosZsPaytkLAE5OPoVl5RQHYWZynIG+vstsv0YpRTrjEioooqKmhpL37hri9ZE55OeXUVEaIh2dZmygl5HL7WelUJkMSuZRVFpLzVUuvFqWZVmW9dFhw2vLsizLep8FK+ZQW1VOkT/C9FSM2CWJo0vXod3s3PkaPfESFiycT1157uyAh1rrS9piXEwlw8yEYyQyJdTUVVBaepVQVuSx/LYHuOe2G5hToOg8vI0Xd+zjzPBlArlrUDx/MQvn11Pqj9PZ2kpP79AlQXg6Ns7ExDQx5rB02UIa5uRd17KujUDqCIMjmqY7vsj/+fff5ht/9XX+xz//c/78z77Ig/feRENlwWUHibxkTrKU6uoqyktDTPad5lRbB4ORt05zNSCccqqrq6go9XH2+Gu8tu8A3VOXPi8yeIjtO3ZxuM1HTVUVFWU+zh7fw2v73rzi9Dt27uLgqclr2hNufJSRkREmM3UsWbaExgX1VFdVU1Hmo/vEXl7b+wZnJy9dTnToMDt27uJAy+S7eqd9bvky1qy7gaV1KU4eeJ4f/+hFpuRc1t+
8gqrKpaxes46lNREO7foVTz3Xhr+iiZVLiq/7j1its21s3glBYd1cGhc1UhVM0Xe2g/b2nksC2kximqmJCWYylSxYtJhFCwrMs7P9wpVCK3XB21nP9tW+/HLNNuhr3obkeDuH9rzMjjeHyauYx+KlteRd0KT9MvOZbTqv0VqhL9P2463IUDX185awsCGPqaFeOtpaGbo4cNQJZqYnGJ9MUTO3kabli8nJticGs2/0hS0isj3HLz46UxNtHNm/m9dbYgTzy2lceSsP/+lf8Tf/x//Kw7dWMNzZSvPJNqbe4mpcoLic6oa51OTGOXP8TXZu38/YxWmvjtN1Yj8vvbCLnpif6toGljY14KRG6D3bStslgbcmHgkzMTqJUzaHxqYVzC18hx+/RIi6xcu5efMN5M10sufZn/H864OUr9hI43ll3zJYQW1DE0vm5RMeG6TzVDMD8YtfgySR8ARjE0kq6+axbFXTVdsmzfaWB7TSF7WE0bjKJZ2+Qpuq84+pa34P+qmoqmfZivn40uMM9LRw6syl+zkRjTA+Mo4oqaFx2Rrmv0WVvWVZlmVZHy32t7plWZZlvc98uQtYumQeNeVpRodGmJ5KXBQfKcaGBhmdmDa9O6cjxL3SXjc+zMjoJDMx02NV68uHpsmJUSampkkXLmBh4xyqLr6X/RKC/Kq1PPInX+aT966lIN3N8z/7Po//cjddl6ua9mh1+RAtVLaaLVtuZuWiIvpb3uDgsZaLAl7NWFsrZ7oGKV55O5s3rqO+yPHmqVBKmz6yWnFhbqbNP5fNPUwVYTZoO38alehh+1PfZ+veMRrW3M1dm29i9aqVLF+2lAVzqynKC1z7H0Uih0UrV7Js6Vx8sQ5efWk7rx3o4lw2p5iZmmZ6JoLSaVLJNOmUBpHH4uUraFpajzt1iuee+AE/+9Wr9E1lI88M492v8/Mf/ILj/Zq6hUtZuXIFy5bUo6ZO89wTP+Cnv3rlMtM/xfE+RV1jA8ELNkKZfuMXhUsjbc00N3eSu2QjmzbeRENhAYuWLWd501zUdCsvPPlDHn9qN72T55Yz0bOfJ374FEd7XOoWXrwc03v9evNg4RSxbNVa1q2ey1THYU71JahuupFV8/KQTiFLV61h7aoGxjtOMZkKMH/5GuryL/dqud5AnXh91s873oSfYDBIMCBJxuPEYjEyZI815R1TJlAz87jwfTU7nT4XNgeKlrD+ls1sWFXOaOdxDh88RPf0hcf4VPdZ2k934W+8iU23b2FRuQ8Q+IMhQjkhRGaCkaFBhka9fa0TDPZ0caa9j5ibIZNJk86W9Qs/gUCIYMAhlYgTjUXPbcNl+wAbqfAEo0NDjMViRGammJpMmml1konJMUZGZ0ygr9RsKCx9AYKhEAGZIRGPEo2lyL6/XDe7JG0eu0wQKWQhS1bdxG1b1pCX6Kb56Jscap66YB0z0T7OdrbQHZ7D+o13cOvaCiQCfyBITm4IkZlidGiAweHsvkky0t9NW2s3EdfFzaRm941KR+g8fYLDR1pJaBAyQEFpPetvu43bb1tHqcNVW/s4wVoWN61n/YoypgcO89vHv8v3f76HgRnXW8Y0p/c/w6+27mRQzGVOoY+y+SvYcPsdrKxMc/b0cfbvO070gkMgTn9fF81tkyxafQt33X0jxT7vLPYW7xnzepy/ny+UW9rIivVbWDff5WxHD6MzhdzgDdR47jXIZ8GyG7jz7vUUpPo5fWI/bxybvPA1iA/R13WSjolK1tx8N3fcVH2udYvOjqmgL1wD4eAPhMjN9RMPTzLU10P2V4RKTTHY08bpzklc1yXjDfwLIHx+AqEcgo4imYwSCSc5d0wpQKG0Nx6E1ue9ByXF9Yu56a57WVut6O1oZu8rh7nw10mC4aGzHG8ZZe6yDdz3wC2U+LObceWxIbTWaNe7eHTBNIqZ4dO88Pjf861/+BGvtUxc7XqxZVmWZVnvIXs/lWVZlmW930QRazasp2nRbt7o6WZkbAyX/PN+KUvKyssoKswjc6aFZ372QwLxHuYXztBxppfTR3vJ+GCo6xivvvQU8fH1rFu5gsY5Bd7zNRM9vQwMT1K9/OOsXLKQkmsY2U7IHOpX/QFf+485lJR9j8d/e4Af/T/f4Gzrp/jMow+yad0CSvK8wQl1mvG+Y7y8Yy/NneP48qqpqCylINekJ8IpZO29n+Fz7WcZ+deXePHpraxc1sjDdy4h3wfxiZNse347bZGFPPzHn+GuDXPJ8VKT9Mw04UiUhEoRmZkhFksAJonQboRIJEYioUgnk6RS54J1rWaYmp4hEsuQ8sWIJRKkFPglRPtPsO/VV3nuxQEONx/hmfnVFBXkkZubQ05uLgVF5dQvWMH69atYMKfkrVvdIihfvIUH/uBNjrX088bB3/CPf5dk6OxDbFxRRXzoKLuff4Y32qZRKsG+bT/nnwMujz54J2uabuO++97gSPPjHDy9i//6n3vZ+/wqFs0vRyZG6eoYInfebXzhy3eyfE4B/sot3Hf/Gxxp+QkHWnfx3/4vM/3i+RXIxBhdHYPkzb+Nz33xLlbU510QwGt3mGP797Dv9ZuovWMJhQEIDx3m+aef4dBgDR/7wqPcfctcQlJSuXQT9953N4dP/og32nbz/32rl32/W82SBRU4iTG6OgfJadjMZ790Fysb8pFoUrEY0VictIowPTXNzHQGrqXtyiUkZQtXs/aGtTTs6qFk2VpuvHk1JUFThlu6YCVr1q2mYfco85tWsnrN5Qdq1CpKJBImHHVJECUcjpDSmGpefJSUlFFeXsTM4dO88dp2XqgZIzE4Tl7tSjbdkCIajpJwFdFwmJlw+Fx/YJ0iHAkzE06STsSJxaIkFBT58mja+HE++/l2+v/Lr3jtxd/y3JplfOmP1lIUgOR0O3t2vMSB7mIe+PRj/OGdi8n1jvGcqjnMbVxATfAEB3b8ku/mu7TcWEOkt5WzIwP0uSWUyE7ajuzmN08upviTm1i1pIqSklIqKooIv9nGgT0v83z9FOmhMQKVy7j7nrUUXCbTDxQUUlxWRqGT4PSBF/nx9woY3jyX+GAXHaeO0hV1yESGOHV4F7/8heTGRQtZv66I4pJKyoqh/cxRXnnpGYqnQwyPutQtu4PVeXGikRgpZQLxyckMlJ9/gUxS0nAjH3v0C5zpGuHZwzv47a/XsbT+UZoqA2g3TOuBV9i+q426Wx/hC5+9n3nF5gwYqqilYeEi6kJHObLnN/zT30lab6kn3t9G52AvXW4ZpbKdjpOv8euf/5riT21maZFmpu8kr/W43LhpKbctLkAg0JkM6aRL2fwlrFm38pJ+1BcQIeav3sInHv4Exzsf5+TJl/mnv+ng5aeWMK+umEx4iJFpPyvv+DwP3dFEgV8gfHVsuPthHmvt5P/98QG2P/NL1q2exyc2VOPDZaLnCLtffIke/418+k8e47blJUjMIISJWJhI1CXjT5NMnHeHi44TjUwyOZ0ik0oSj0aIKy64YCR8hTQ2rWHTrU3sH0vQ2LSeNfMuHupSUlS7mnsf/iKtHUM8tf9Vtv7iNyyf+xgra4
JoN0rn0b1s23aU0vUP8Sd/8ocsKPF+C+kE8fgU09l1iISJza6DQ3lVLUuWLoADp3n1uX/hWzlj3LjAT9+ZNrq7zuJUVaJaBzmx/1meeKqWj22+iaVz8iksrqaqVHCk5zSvbPsN9aqcsYFJChrv5Q82BIjHJpmayZBKxQlPT5NQkC/BF6pm3ZZP8qUvnuH//pdX2fXcE9y4fgGfurUWH4qpgZPs2fY8rZmVfPorX+bu1aVeCK9IJuPMzERRro90Mk5aZS9jaOLxGOPjUyidIpmMmr71BQ46M8bpQ7/ln771jxyP1dM+lsP8v/k0de/PUACWZVmWZV3E+eY3v/nND3olLMuyLOvfFkFucRHhvlMcOtxP9Yp1rFjWMBtsgSSvOJ/URB8dbe20d5yh+Xgzg5EC1mx5gM1NQQY622hp62BwHBqa1rN21QKKsumvnuHAi7/k2VeGWfPAZ3nogfVU5l1bI1PpBCmqmMOytTexpqkWEe7lyP5XePF3z/HC717k5e07eOl3z/LLn/2QH/70ad7sSDB37QP8u7/8Gp97cAsLq3LxOyZ0DOSWMHfRYurLHfpP7Wfnjld482gzJ4/s4dmnX+DUZDUPfO5LPPzAjdSV5OBGejj02u/42Y9+znM7D9I7GSM8EyYcnkbmleLEB9j77E94/KmXOX52gpnwDJFoGJlfRT4THHjpJ/zg8Wc50DpKJDpDLB5H5tdSU1VJUR5Mj3Rz6thRTpw6TWvraVqaT3Ds2FEOHzrIm/tfZ8/unbxxcoKC6nnMbyh9ywBb+vOoaphLVbFkvLeVE8ePcuCNPbz80ssc7XSpqy9FJ6cIU8P6W+/g7js3s3ZZA6WFxdTOnUtFgWast5OOzk46O8/Q0txC75hk4YY/5LHHHmHzmnrygxInkE91QwOVhTDW10n7+dOPwsINf8gXHnuELd70AMmx0+x55RXeOJWkui6Xyf4jbH/hZXZtf5Ynn9jKibEK7vvMn/L5h26loTQHKUD686lqqKeyCMb7umjv7KTLW073qKbxxk/whT9+lNvWNpDjDnH4la386/cf57lXjjMyHWVmapLxkVFSIkRRSTlFuW+vPkL68nHiQ/RPSJbc+hAP3buMAu8FkL48RHSA/imHJRs/wcc2LyDn/F7NOkr36Td57smf8OSvt3GwbYRINEw0Mk3EDVFUXE5lcS6hkGJioIsTR45xqrWZEy1D5Ncto7HG5dALT/Dk07s5NTBDJBohMhPGV1BKnhPh+O4n+MGPf8Urh7uZiUaIRCK4/jJqqmupKi+ldv5i5tXkMtZ5iF07drH/8ElOHnudbc8+z6GeIJsf+hJfeHgz8yrOtf6RgUIK8xyiox0cP36SluajHDzcwpRoYPN9t1AlxujsjNOwZgMbblzD8qXzKC3MIRhUTA13c+LwUU61tXCiZYBQ9Uo2bbmROSUh5GWOWSeQT9BJM953mubWTjrONHPqzDDBmtXcc//NlDPKycMn6B4cJROoYfWGDTTWleFXEQa7Wzh45BStp47T1u9St3AhRW4Lv/7JT/nt9kMMTkaYmZlidHiUlPaRX1JJab557YUTpKRqHgsXziGY6OaNPTvY/epBTracYO/O59m2u42iFZ/gy//uM9zcVEXI69kjAwUU5AVITHRw7Fgzp1qOcvBQM+O6lk33bmJOaJL21jBzVm9gw41rWdE0nyIxyolDr/HSK0dpbz9Dz9AwPe1H2P7cC+zv8HPHI1/mkftXUpLzluk1/mAh1fUNVBdLRrrbOdPZxdmznbS395DwNXDnJ/+ELzxyJ4uq881rKRxCBZXMW7iQ2uI0Z47u4eUdr3L4RAvH3tzFc8/upDeziIe/9BUeunsFZXkOsfFuXt/2c370k6fZ2zxIOBYnMjOFGywkP8eh682n+O///Ql2HupmJhYjHg3jBsqorG6gbPaOA0EwN0BsZpypSC53PPpFNi4quOTuEeEEKKyoZ+GieeS7AxzYs50drxzgZMtJ9r/yAs9vP0Fw4R/wp3/2BTatrCHH5zIxcIqdW3/E4z//NTsPdjETTRALTzKdkOTkFFNfVUggL5/cXMF453GOnjxN87FDHD7ehShbzV13rCM3fIbTYwWsu/kWblizmqUL6yjMDeHTKSYGmnnj4GnOtB6jpTNCxeL1LK9P8cazT/DkU1t55Wgf4Vic6Mwk4YyPnPxy6ioKCOWV0dC4iPpy6Dr+Gi+/vJuDx1s4fmgPLzz7Im2RuTz0pT/n0ftWUZ7vR6em6Dqxg5/+4Eds3X6EkZk48fgMM3E/+QW5MH2EJ//1X3ny+f0MTMdJJKKEYz5Ky2uor/Ix2H6Yl7Zuoz1awOK1d3D/Xcu47A0flmVZlmW954S+9oZjlmVZlmW9a9IMHPsV3/rb7zM197N8/d9/gfXzzhtUUSWZGO7m1InjtHaN4RTWsXDxUhYtqCNPjHPy4Buc6k1QVr+ElSuWUFtRSMDLZeJDr/Cd//3veGVsFV/9D3/Jx2+uv6jNw7VQJKNTjI6MMDzUR1fnWfqHxokmXXyBXAqLy6muq6OmqoLSklLKK8oozAtyyfhvOkNkapSBvm7Onu1jdCqJP6eQsooKqmrqqKutpCg/aKoRM3GmpyYZGx1jaiZKMqOQTpD8wmLKq6ooCGoik0MMjUwSS2RA+sktKKKiqoaSPEl0cojB4Qki8TRaOITyiqisrqWitJCgmGD/Mz9m675hCqrnUxaIMD45TSQaJ5lKkUzEmBzp5kzHDGs+8Zf8+7/4LKtqrhLAetvW132GtrYOeociBIpqWbhkGfNLU4yMTZOQJdTVVFBeXkJ+rrd/dJrw5Ah9Z9tpO9NJ/1iCnOJq5s6bx7y5DVRXlpB7fk/iK0zfMG8e8y8z/czprfznv/lb/vHXU3z8q3/JY4+sQw/1MxWHgtJq5jTMZ15DDaVFORe+XjpNZGqU3q52zrR30jcaJ1RUxdx585k3r56ayhJyAw7aTTA9OcbQ0CgzkThpVyN9QfILiigtL6e0uIicwNtPeeIT7ZzumMRfspCmhSWcHzPGxto41TlNqHwJTQsKLwrpXOLhacbHRhmfnCaaff1zCygpq6CirISCXD/ajTPS28rRw8fpHlNUzF3CyhWLqSiQhMcGGZ2YIZ50zXGVX0xldTXFeQ7xmRGGhseZiSZRWhAI5VNaVUt1RSl5QQnaJRaeYKi/m66zvQyPRZGhAsrKK6isrqW2torSwpxL3hvpxDRDPWdoaWmjfyxJfuVcFi1eyNyaIFND/QwMKcrrKiktLaGoMI+gT6LdOKN9Zzh6+Bjdoxnz/l+5lPrqktnw9zIHKsnoBL3tzZxoPsNIxEf1vMU0LV1IbXmI6b5mDhw8xYysYOmKVSxeUEtxro90fJKeM8c5fOwMU5l85i9ZybJFteTKOCNDI0yHY6RcjXQC5OYXUVZeTklJsdkn5y07k4wwNtxP99lu+gZGSKgghSXlVFRWU1tXS1V5EcGL1j2TmGGor51TLa30jsTJLW8w+6Y2l+jYAD19Kcrrqij1Bob1ZaYZHOhjYDyNFCliEXMHhi9UQFlFDXPq66goy
Nk0QigaIoeDyedBv8lrEsi0QigWmaX2p/EqqqpPq/SxEOh6mqqqK6uppx48Yxffp0VPX6k52/jnA4THV1NZWVlYwdO5aZM2de8XqmaRKLxbBt+5LvS5KE2+3G7XZ31y2nuQpM0yQej2NZVuo1WZY7rUkuRSwWY/fu3WzYsIHy8nLmzZ2XtHXtHuLxOLt37+bTTz9l1KhR3HPPPZf9bXwZ27ZpbW3l6NGjBINB8vPzGTJkCNnZ2dd1H36XzKwBKouGNCNJLanXTVtwqCnB0soEq1ZZVzjDn0ZLc4RTh6r42RQvud7Oz7s1ZrG0xsvqD05Td2J8t1w/kYizf89xCgL7eaSs8/zHFoJaEmwLBK76fJZlEYvFOrU/SZJQFOWK7S8SibBmzRoOHDjAnDlzmDhx4vX9QVdJNBplzZo17Nu3jzlz5jBp0uU3ByzLJhaLdvqbLoWiKPh8PmRZ/qZvN82fyHX3ZLIiISNhCUf4lCwbNVls0BZJkTnped3hp/yFr7VATkVoS44VhrCRJcdqxLG1+CJ2u8N9xEakEi1sw8Dl0jAsm2A4gup2YVom7W3tKLKCx2+SnZNDZmYWqiKjIAOOEGCZBrFY1IlCTtqUyMoXftqKrJDQDaKxOIZuops2iqIRTyRwu9xYpkFC13GpMuFwGNPQHasTIbBtiwLTJhoKYBpxNJeXrJw8MjJtBAqJRAxZkcjK8BOLBjAME1u2U9d2/tRkcUvZRPXIqEAiGMabl4WsKAjbQkJGoGBjJAtZgikEKgJJAMjOpCTpKS6EQFYlbMlA1w1M3bESkQodATxNmjRw/PhxFi9ezIEDB7pMFjVNo0ePHowZM4a77rqLsrIyXC4X4Agg27dvZ+3atfTq1Yvy8vJuFa87hJqPP/6YwsJCRo8efVXXi8VifPbZZ/zxj3+koaGBhx56iLvuuuu6J4ppupdTp07x2muvsX37dkzTTL0uyzKqqpKfn095eTkzZ85k+PDhqYWmruvs3bOXFStW4Ha7ufnmm7tVvDYMg7179/Lee++hqipjx4694vWi0SifffYZK1as4ODBg0QiEQAyMzO5+eabWbhwIePHj08vnG8Azp07x1tvvcWGDRu6bOppmkZubi7Dhw9n+vTpjBkzplM/dP78edasWUNmZiZjxozpVvHaNE0OHz7MypUrmTFjBpMmTbqieG2aJgcPHGT5u8upqqqitbUVy7LweDyUlJQwb948Zs+eTY8ePbrtntN8PU1NTbz33nt88MEH6Lqeel2SJFRVJTs7m6FDh3LHHXcwfvz41MaXaZrU1taycuVKxo8fz9SpU7tVvLYsi+PHj7Ny5Upuvvlmbr311mu+3oULF1i+fDkbN25k4sSJ/OAHP6BXr17ddMdproaTJ0+mxuBLbZ4UFBRQXl7O9OnTGTVqFIriRFvGYjH27dvHu+++iyzLTJs2rVvF63g8wYEDB3j33XcBuPPOOy97PV3X2blzJy+++CIXLly45DE5OTnMnTOXR77/yFWJkGm+eUzT5MCBA/znf/4nZ86cSb0uSY7u4vF4KCoqYvz48cyaNavT+GoYBkeOHGH58uVEo1HuvvvubhWvDcPg6NGjLF++nHA4zJw5c7623bS3t/PRRx+xfPlyzpw5g67reL1ehg0bxqJFi67Yhi+FJMl4XT765WiMLfJ0es+0wbIExjGJc+eu60+8KgLtAq8qMaanh3xf58jrxohF7gmZ5ma72+5B1wWhgM2gLI2be3WeP1sCNElc9YaobdmcOHGCX/7yl9TX13d6r2PzpLCwkJtvvpnZs2czcODA1PvxeIJt27bx0UcfMXjw4G4Xr/WETmVlJe+//z6DBg26rHht2zanT5/iX/7lXzh58uRlzyfLMv369ePnP/85AwYM6K7bTnOdXHdPFgwF0A0dWVWxbQuXS3F8lpP+zLZwBGqnaKBTvNESyR1eSSAJGwkF28YJ4ZYEdtL/WkoVd0xGaksSIunjnDwU0zQwTR1FkUkYOsFwCMMy0FwuR2T2uLFNA2GZaJoHTXGuH4/FMHUdIaxURJGze6Tg83nx+bzohokkQTgcwTRtLCQUVWBaFoppYlkWwgbTtMAykGXn73XEYhsrESceNhACLI/AMk1i0TCqy42iuvG4XcRdKi6Xhm1bSLKMbRtJb28Fp9iiQBAnGGpDkS3C7e1kGkXYio1tCyzTxrIlLAtkWSFhxIkZMpqQwVKdSGzLwkpG7NnCRFMtkMKYpoxtgqZ5O9mKpEnzP51QKMTevXvZs2cPffv2JTs7G1l2NoJCoRCff/45mzdvprKykueee44JEybgdrsRQhCNRgkGg50iFLsLYQtisRiBQKBLRNoljxeC48frWL78HT788EOOHj2KbdtMnjy5kyCV5sYiHA5z4MABtm/fTlFREfn5+an22PFeRUUF27Zt44c//CG33347Xq8XIQSxeIxgMHhV7eNPRQhBPB4nGAwSj8eveD3DMPj000/5zW9+w5EjRxg4cCA33XRTKlJy5cqVtLW14Xa7GTt2bDrq4VsmGo1SU1NDVVUVhYWFFBYWptpgS0sLhw4doqKigq1bt/Lkk08ye/bsLwREwyQcDjsb7Gb3RTvBFxHiwWCQWCx2xTYohODgwYO8+NsX+eSTT+jRo0cqe6W2tpbq6mpOnTqFbdv81QN/hT8jnQnwbRGPx6mtraWyspK8vDyKiopQFCWVJXXkyBG2bNlCRUUFjz76KPfffz+ZmZmAI9IFg0Gi0ehlI0y/KToixEOhENHo5SNaL0ckEmHLli28/vrr1NbWkpmZSSwW66a7TXM1CCEIh8Ps37+f6upq+vTpk9rMsm2bpqYm9u/fn5oTPv3008yYMQNFUa6pP/pm7tW+6jHYsiwunL/Atm2VJBJxiouLu4iEuq5jmOm54beJbdu0tbVRXV3N+fPnGThwIH6/PxkoZ3PhwgV27drFpk2bqK6u5h/+4R8YMWIE8MV4GAgE/kztz+n/AoEA0Wj0a68XDAZ5//33+dWvfkUoFGLixInk5uZy7Ngx1q1bx8WLF8nJyblG0VNCkq8smGf4YezY7stmaWyQqNly6fckwOWSGFDaffcQi0okQiCf+tPPJXD6v507d1JXV8fgwYPJyMhItb/GxsZU+9u6dSs/+9nPGDNmjPNZYROLxVJr4u7mq2vwK2Hbdir7+FLnicViHD9+nIsXL5JIpP3Bb0SufxvOThAINuNy+5FlGb/fi43kRPjKTkVGWwJh2yiy5NQclGVAOH7Xjju244htkypO6LiFODYXyM6PW8YRrFPvS+D1eZAlQdjnJhYVGJaFS9jopoUtHNsN2zZJJKJoqhMhbtsG8UQcI6FjGgYSEqZlYZkWkiTh0jSysjIIBEMgBHFDJ6Eb2EioSasTy7acwog492OZVrLYJKguF6pLIxYPY8QTCElC2D40RcY2E0iagipryIpCTk4WYHGx5SLRWAxwRH/DMjFMC2FZqC4bPR4nFg1hWxqxcARV9mIYJqZhYRgW8UQCr9eLbsWJW4JIwiIWBpes4VLUjkeIImvYNiQSjp+2S1NRXRpOgP
zViwMdP+zOKXSdd4IvZ63Q3USjUf74xz9SV1fHnDlzGDVqVCoytoOOBY8QAq/XmxZG0lySgoIC/vqv/5pbJt2C5tIQQmAYBrW1tbzyyitUVFRQVlZGaWkpffv2/bZv92upP1nPCy/8FxUVFYwaNYqMjAxqamq+7dtKc5Xk5eXxyCOPMGPGjNRmiWEYnDx5kqVLl1JdXU2/fv0oKytj0KBB3/btXpGzZ8+yZs0ajh07xj333MODDz5I3759EUKwf/9+XnzxRaqrq9mxYwdDhgwhK+vrrR/SdD85OTncd999zJ8/P7VBYpomZ86c4d1332XTpk2sXr2aoUOHMmrUqG/7dq9IKBRKCZ5jxozhqSefYsTIEaiqSkNDA2+//TbvvvsumzdvZtKkSQwbNuzbvuX/8WRmZnL33XfzyCOP4PP5EEI4ItyFC7z//vt89NFHrFq1ipEjRzJ+fPekZHcXtm1z5MgRVq9eTWNjY7dmyaS5PgoLC3nkkUeYM2cOkiSlxLq6ujqWLVvGtm3bKCwspLy8nN69e3/bt3tFbNsmnoijKDJTpkzhxz/+cWrDp4OOzK7ujBZPc3XIskxJSQk//elPGT58OPCFWHfgwAFefvll1q1bR1lZGWVlZTd8xpppmhw7dow33ngDy7L42c9+xm233Ybb7ebc2XMsfXMp27ZtY9++fYwbNy6VzfCnIkmQlSUxZXL3rftPnpRZeYXTuz0wZKjcbfcQCsk0npFpO/P1x14tkiRRWFjI3//93zN69Gjgi82RY8eOsXjxYjZs2MCAAQMYNnQYbs+N2/5kSaZ379788z//M/F4vMv7sViMVatWcfbsWe64446/iPX9/0Sue1TKy/Vz8aJGKBhCVdx4PB5Q5JTKLASObYiSLLooQLIFQlhJQ2wQWI6LtfSFj5jj2+yIw1YyehtsEBKqoiBJMqqi4PWqYFuAnRJODcMCIbBsA0kWOP2dRVyPOhHctsAyTCQkNMWFbYOh6wRiIWRZwu32oKkqPq8HXdeRZQXDSmBYNoplI0syHrfb8e6WcSLLFQVFUVBVDb/bi60L2iIxLMN0ji/UyMjIRnMrmJZJJBhAYONyu/F7vbQgIWwLw7QIRqJEYglsW6AqCqpm43a7cOd5aT8X5uyJk3gKcpAUJzobScK0bUdQR8GwLMKRCJneHqgWaIqCpsjJyHYJ25KxVB+SIqO5XKiaC1m+tkEhFArx/PPPU1VV1WVHSpIkXC4XvXv3ZurUqdx111306NHjzyYQJxIJKioqqKysZPjw4QwbNqyTeH3u3DmWL1/Oli1bKCws5LnnnmPw4MHf2MCY5r8Pbreb0tJSykeXd5oIjhgxgvPnz3PixAmOHTtGe3v7FQc3IQQnTpxg/fr17Nmzh+bmZlRVpbi4mDvuuINJkyaRk9O58nUgEOCzzz6joqKCM2fOoCgKgwYNYs6cOZSXlzt97RVobm5m06ZNnDt3jkmTJlFeXk44Esbv9/Pkk08yceJEli5dyvHjx/+0h5Tmz4bL5aJ///5d7GhGjhxJe3s7tbW11NXV0dzcTFlZ2WXPI4Tg9OnTbNq0iZ07dtLY1Igsy/Tt25dbb72VKVOmkJ+f32nzMRQKUVVVxaZNm6ivdyqYDxw4kFmzZjF27Niv9adua2tj69at1NXVMXr0aPx+P1lZWdx+++0sWLCACRMmpNJLs7Ky2LFjB0ePHuXcuXNEIpG0eH2DoKoqvXv3pry8vNN3PmrUqFQEbH19PRcuXGDkyJFXPFdHNM/GjRupq6sjHo+Tk5PDmDFjmD59OoMHD+4kmkQiEXbt2sX69eupq6vDsiyKi4uZPn06t9xySxfh5asEg0Gqq6s5dOgQQ4cOZfDgwXg8HsaPH8+MGTO47fbbUucoKiri7NmzVFRUcO7cOZovNkNau/7WUVWVnj17MWrUqE7fdyKRQJIkDh8+zJkzZzh9+jTjxo277HmEEFy8eJFt27axZcsWzp49i23b9OzZk4kTJzJt2jT69u3bqQ+MxWLs37+fTz75hGPHjqHrOn369GHatGnceuutXcbwrxIOh9mzZw979uyhpKSEW2+9lby8vNT7DQ0NrFmzhpMnTzJ+/PguKdppvn3cbjf9+/dnzJgxndrGkCFDUiLiiRMnaGpq+lrxuq2tjcrKSjZv3szp06cxTZOCgoKUN3ZJSUmnawSDQSorK9mwYQOnT59GlmUGDRrE9OnTmTRpUpcgna/S0tLC1q1bOXnyJKNGjWL06NHE43EkSaJPnz6MGTPma/vQNN8ekiTh8/kYPHhwKrIVnL6stLSUc+fOsXjxYo4cOUI0Gr2ieN2RMfDpp59SVVVFY6MzB+zduzeTJ09m+vTpnfomcALDduzYwdq1a1NzwJKSEmbMmMHUqVO/ViwPhUJs3bqVw4cPM3ToUCZMmJCK5p03bx733ntv6ppFRUUU9S5i0aJF9O7d+xvXD1QFsrO7L7guM+PKNcRkWcLv6757kGWJr1kiXhdut5uysrIu7W/w4ME0Njbyr//6rxw6dIhwJHxF8dqyLOrr6/nggw/Yv39/KsuytLSUO++8s0t7EkIQCARYt24dFRUVNDQ04HK5GD58OHPnzmX06NFXfN5CCNra2li9ejVtbW1MmTKF8ePHM3To0Eve28GDB9m1axeDBg3ikUceSW8k36Bct3gdjYRJJKLoCZ1ANEwwEiWvRx5ejwdkCRmnIciS0/E6th+OtzNCRpYkLNsEHP8dCScNQZJlhO0UEdSSmrYkSciKjKwqeN1efD4PEhZtLS0I4Qjb2BI+twefx4XX7UIWNkYihizspL8zTnS0rGKbFvGESVtrgOaWFoKhdgQCRVHx+b24vW5AQlYUDMMkoZtIsoymqrg0N5Yt8KgKsiIhhIKquPD7/CiyRtyQCMVdSHIGudl5tMfi5FoaGW4PrefPcOFCA6apk1+QT0FhT9yaRrtu0h4IUVt/CltxioBk+v2gGnjUGJZXAq/mDEpWNh6vG1lxCmXKto3mUonF4xAxScQNsjwybo+KW1VSxSCRJLBVNNXZIJBlDVXRkGWZa0kmMgyDgwcPsnXrVnr27El+fn5K/LUsi7a2Nnbu3Mlnn33G7t27+fGPf0xpaemfRSAWQhCJRAgEAp2sGxKJBJs3b2bp0qVUVlZy/vx5hg0bRiAQ6PZUqjT/vfD5fMmCsC5kWU5F4FwKIQS7du7it4t/y+7du+nZsyd9+vQhHo+zceNGKioqePjhh/mrv/orCgsLAWhsbOSVV17ho48+wufz0adPHyKRCB988AE7duzg2WefZdq0aZe9v/b2dt58802WL1/OyJEjue2223C5XJSUlPD000+nbFBu9MiMNFeH1+slNzc3Vdjr6yZxB/Yf4Pcv/56tW7eSm5tL//79MQyDqqoqtmzZwn333cfDDz+c2pBpbm7m7bff5r333kt5wOm6zrp166iuruYHP/hBKhLtUgSDQVauXMnSpUvp168f48ePZ9CgQfzN3/wNlmXRq1evTr6ImqZ1+m2li5Xd+LjdbnJzc/F6vVf1nbW1tbFixQqWLl1KJBKhp
KSEnJwczp8/z549e9ixYwdPPfUUkydPRlVV2traWLVqFW+++Sa6rtO/f38AtmzZQnV1NQ8//DD333//ZQWcSCTCxx9/zCuvvEJWVhYjRoygsLCQuXPncuutt5Kfn99pgfLlQkSKoiAr6eysGxmXy0VOTg4+n49EInFFsUMIwalTp3jjjTdYvXo1mqYxYMAAFEXh8OHDVFZWsnfvXp566qlUhGMoFGLt2rW8+uqrBAIBSkpKcLvd7Ny5k+rqaurq6vj+I9/H5/dd8pqxWIyKigpefPFFhBA8/fTT+HxfHBuNRtm2bRsbN25k1KhRjBo1iiVLlnyzDylNt+HxeMjLy8PtdqcK2V6JCxcu8Pbbb/POO+8ghKCsrAyfz8eJEyeoqqpi586dPPPMMymRqKGhgeXLl/POO++gqiqlpaWpAuHbt2/niSee4N57773s9VpbW1m1ahWvvfYaxcXFTJkyBUmSUtYOHdmyaf7ykCQJr9dLQUEBiqJ87fhr246H8W9+8xs2bNhATk4OAwcOxLIsdu/eTUVFBfv37+dv//Zv6dmzJ+CM1ytXruT111/Htm0GDRqEYRhs3ryZqqoqHnvsMRYtWnTZa4bDYT7++GP+67/+i7y8PCZNmkQ0GmXXrl2oqsqUKVM61QbQNI2SkhJKSkq+seeUpnvoyLbv0aPHVbU/wzDYt28f//Zv/8ahQ4coKSmhb9++hMNh1q5dy+bNm3n00Ud5/PHHUxmmjY2NvPDCC3z00UcUFBRQXFxMMBhk2bJlVFVV8dOf/pTbb7/9ktcTQtDe3s7LL7/MG2+8wXe+8x0WLFhw2WMDgQBvvfUWJ06c4B//8R8ZOXJkeg1yg3LdI1ZWbg5FvXthFYKhWyDJuDxeNLeGoihEYwaRmA4oSS9nHO9qkoUYLRtJdoowIglsnEhrSUgIp4QjIJAlBVVVUTUVTVPJ8HvI8HsJtLejJwwUWUWWLWRZQZIEqibjdqnYpkEiamElDKcauayAZGMiSEQiNEcMwlGTmAmJhOUUYTRCtAeD+Pw+PB6P41knBIZuICkKsqxg2U4JSVlW0I04Hs2D1+PF4/HhyuzJxXOtBEJtJIwEqG7ye/ejLWrj90MsFudiczMgoSgqqqKQn5tDLBwhEonhcrmwFAVZcRFLWGi2io4XxaWSUZSJJvvx+DNQ3QooznNUbUGm30+sNYSMgtelYepxZJdT3FFW5JTlCpJAkt0IYdMR064oilMo8xpxu90sXLiQWbNmpaKwOtJIPv/8c1566SVWrFhBaWkpjz32WLcWyrkSQojUwBmPx5k1axbr169PR1unuS4aGxvZt28fiUSCESNGkJeXd9nBrampiWXvLKOiooK77rqLBx54gD59+mBZFlVVVfz2t7/lzTffZODAgdxxxx1aGu6pAAAgAElEQVTIsswnn3zCypUrU7+bIYOHoOs6q95fxdKlS3nnnXcoKSmhT58+Xa4XiUR47733ePPNNyksLGTBggWpCMbMzMxUZE0oFOrWZ5Tmz0dzczMHDhwgGAxy++2307Nnz8u2x9bWVla9v4pPP/2USZMmsWjRIkpLS7Ftmz179vC73/2Od999l9LS0lSl+IqKCt59912ysrJ47LHHKC8vx7IsPvnkE5YsWZI6/lK2Ch3Vv5csWYLL5eKee+5h1KhRZGVlXXY8OH/+PMePH8ftdlNSUpKOBvsLoL29nUOHDtHS0sJ3vvMd+vXrd9lNPdM02b59O2+99RbxeJwnnngilS7c1NTE22+/zdq1a+nRowfFxcX07duXnTt2smzZMoQQPPXUU0yYMAEhBFu2bOHVV1/lvffeY8CAAUyYMKHL9eLxOJs2beLVV18lFouxaNEixo0bR0ZGBhkZGRQVFXX5TENDA7t27aK1tZVx48Zdsq9Nc+MQCoWoqanh/PnzDB06lNLS0sv2geFwmE2bNrFixQqKiop46qmnUgvU2tpalixZwscff0zv3r3p168fPp+PAwcO8OabbxIIBHj88ceZOnUqsiyzc+dOXn75ZVauWJkaw7+Kruts376dl156iaamJh577DEmT56cyp76sl2Iz+fj3nvvJRaLpRfMf0G0tLSwb98+4vE4ZWVll+xTOohGo1RVVfHWW2/h8/l45plnuOmmm1AUhVOnTvHmm2+yYcMGCgsLKS0tRdM0qqureeutt8jJyeFHP/oRQ4YMSfVrv3/p97zzzjsMGzbsktHewWCQTz/9lFdeeQWPx8PChQsZPnw40Wg05acuSRI1NTXU1NQQDofJzc1lxIgRDB48OF2o8Qanw3d4586duN3uLllRXyUQCPDhhx/y/vvvU15eznPPPUe/fv2wbZvDhw/z4osvsmLFCgYOHMijjz6aqkPy2muvIYTgJz/5CSNHjkyN47/61a9YunQpo0aNYsiQIV2u17Fx98ILL5BIJHj44YcpLy/n9OnT1NfXk5mZid/v56233mLHjh20tbWRm5vLhAkTmDlz5nUUS5aIGLC6JkRti+5sJiW7UksImqMWrr7dvBktQWNU4n990kimp/PvJ2baNIoCpmd037xWlmUUTwZrjoQ40+541gvAthydrS1mXFPA4pUQQtDa2kplZSWyLF+xMLwQgubmZl5++WUqKyt54IEHeOyxx8jOziYej7Nz505++ctf8tprrzFmzBgmTZpEPB7nk08+YdmyZZSXl/OTn/yEnj17EolE+PDDD/nNb37Dyy+/zPDhw9HUrn1VJBJh2bJlvPbaa5SUlPD4449fNlPaMAyqq6v58MMPGTt2LPPmzfvajJY03x7XLV5nZmWh63Enshdnt0WSpWQFcI3W9gimZWIZzs/EEaQd5KQ3tuNv7fwncHyjbVsgyyDZgCQhqwqqS8Pv85CV4cPndROLRIgEQ+ixBJIAf4aHzIwMsrJ8+H0ebNskqptoqgtTSfphay4UVSKhmwTaA9jeQjxZKpGEhcebwIhHkWWFaCJOe1sQf4aJLCuYlo1lWWDbKLKCYRioqoKMjGEaeD0ybo+b7NyeKBl9iOptCDtOItKAZRWQm59PqLWJs+eacLl8+LNynagyWSEYDFNQ2JOiot6EolEK83KIGCaS7CYSimJaBm7Ng8frR/UquBQNJBlZkRwjcBynFp/XjW3bqLKCy+1GERKaqqAqTvQaUtJXPLkZ4NSRkbCEjWXZXKNziPMdJqPwysvLu6R1jxg+gvb2dn7961+zefNm5s6dS05OTmpBe+TIEdatW5dKGXG5XAwYMICZM2d2qhYPTod39OhR1q1bx759+772+K8ihKClpYWxY8cybdo0NE1j9+7d6YjrNFckEAiwZs0a6urqUsWhgsEgBw8e5ODBg9x+++3MmTOH/Pz8y57j8OHDVFdXU1BQwLx587jllltSg2F+fj579+5l1apV7Nq1i5tvvhnbtqmoqCAUCjFjxgymTJlCdnY2ALawqa+vT2UW9OrVq9O1EokEa9eu5Q9/+AN5eXmpyMWvs3RI85dBKBRiw4YNtLS0oKpqqmDjkSNH2L9/P+PGjWP+/Pn0Lrp8uvLx2uNs374dn8/H7NmzmTp1
akpEKSjoQU1NDW+88QY7duzglltuwePxUFVVRVNTE3PnzmXatGmp9q4oCmfOnOHMmTO0t7djmmana3WINq+99hqKovD4Y49z5513XtECpKW5hdWrV7N7927GjRvHhAkTOlmkpPl26YgQtSwLTdNSxcxqa2vZv38/Q4YM4f7776e4uPiy52hvb2fnzp2cPn2aefPmMWfOnFQkdXFxMYFAgH379rFr1y6OHj3q2Mjs3MHp06dZsGABM2bMSPV9Ho+Hs2fPcuDAAdra2roU39F1g8rKSl555RXC4TCPPvooc+bM6bJxYhgGhw4dYufOnZw/f54jR45w7NgxJkyYwAMPPHDD+9f+TyEWi7Fr105eeuklXC5XyvP15MmT7N27lz59+rBw4cIrev5fuHCByspKYrEY06dPZ+bMmanFdlFREY0Njezb5xTnmz17Nn379mXv3r0cPXqUadOmcffdd6cWvxkZGZw/f54tW7bQ0tLSpf2ZpsmePXv4/e9/z7lz53jwwQe57777KMgvSB3zZbuQhQsXMmHCBLZt29YNTy/Nn0p7ezsbNmygvb0dSZJSc8IjR45w+PBhJk6cyIMPPtjFcuHLtDS3sG3bNlpaWpg9ezazZ89OjYl9+vQhEAh0iugvLCyksrKSlpYW5s+fz5133onX68W2bTRN48SJE5w9e5bGxsYuonk0GqWyspLf/e53aJrGM888w7Rp0/D5fIRCIWKxGO3t7Xz44YdUVFTQ1NSEYRi43W769u3L3XffzaJFi64oxqf589ARgfruu+9SVVUFOP1La2sr+/fvp66uju9+97upoINLnsMWXLhwIRW8df/99zNlypTU8QUFBZw6dYr/+I//YNOmTdxzzz0IIdi2bRunT59m0aJFzJw5M1UwMiMjgyNHjnDgwAHOnz/P4MGDO10vkUiwb98+nn/+eaLRKH/3d3/HXXfdhdfrJRwO09bWRltbG0uXLqWlpYXCwkKEEFRXV7Nx40b27NnDc8/9Hf3797vq5yRJEhlZgykr/1s0r8SwoRIDBzpiiaM4yRT27N723KtXX/71/1tCNBJG+UoWhgDcHi833XQtRSivDY/Hy53T55GXV4iSVO4jEcGGTTZYUJIZwDq+hF27t1/1OTuikleuXMmePXsAJ9M+EAhw4MABDhw4wNy5c/ne9753WcHXNE1qa2vZvHkzRUVFfP/7309ZMAkhyMvLo7KykhUrVrBhwwYmTpxIW1sba9asIZFIcP/99zNx4kQURUnNQffs2YOu6zQ3N1PUq2v/t3r1ahYvXkxRURE/+9nPGDt27CWzTDqE9bfffptEIpHs99LzvhuZ6xavVUVGVSQUNZkmIIMkySiSjKpqNDZepL01Sl5uHpqmYdiWYw3y5T2fL9mCSCIpagOSEMgSKLKKy+XC5/eSm5VBpt+LLAtaL0Zpb2tHCIE/w4vP5yc3JxuPW0MIi1gshm0LFEXD7/GSnZODpmlYpo1lQUNTG6VlYzhdfxKXpmIpEjoSA8qGE9PjnD5dj6GbSLLAsm3nnm2wTBPbtBCSREI38PoUFFVFt3QSBvTIKsDjzyDSnEDFJsvvRRKCYDBEY9tZSkt6UdCjN7FYBNuIE08YBIMR+vTpQzQeRzd0mgNtmMJGzXCR0CMoCFyaG0VxrFaQZBRZdlxAbIGwQdNcCCGjSG68Lg9uxZv0B5dIBXB02LfYAllWkxHvOH7kctJD+xsiKzsr5UvY2NhIOBzGtm1kWWbLli288MIL1NTU0L9/f3r16kU4HGbNmjV89tlnPP3008yfP5/s7GyEEGzdupUXXniBw4cPdzl+8+bNPP300yxYsCAl8n0VWZa56667mDlzJoU9Cjl0+BCyLDsbEmnSXIaO1PaOVNCOAnmxWIyBAwemivJcbqJo2za1tbU0NjYyceJEiouLOw3qubm5DBkyBE3TqKurIxAIEAqFUtEIpaWlnYTnsrIyfvrTn5JIJLpEApqmybp163hjyRu43W6efPJJbr/99rRX138jAoEAH330ERs3buzUHuPxOP369WPUqFH069cPl/vSE0chBCfrT3Lu3DlKS0sZMGBAJ+/07OwsBg0ahN/vp76+npaWFjRN4+TJk7hcLkpLSzsJz8XFxTz77LNEIhGKioo6TQhN02Tr1q2pieBjjz3GrLtmXdEXtqGhgbfeeou3336bAQMG8NBDDzF48OB0Qd0biFAoxPr166mqqkpFhna0wZ49ezJixIiUpcLlaG1t5eTJk8iyzODBg1N2SeB4Gvfv35/+/fuzc+dOGhoaaWlp4eTJkwCUlpZ2EoZ69+7ND37wA9rb2yksLOzUFztp0LtYsWIFjY2NPPTQQ8yfP/+Sm426rrN7924WL17M2bNnkSSJkSNHMmvWLEaN7Fr0Oc23QyQSYevWrezbty/V/kzTJB6Pk5ubyy233EJZWdllN7w6BKC6ujpyc3MZOnRopzHW5/NROqCUnj0LOX/+PGfPniUzM5MTJ05gGAalpaUUFHwRCVhYWMiiRYuYNWsW+fn5ndqJbVscOnSI1atXU1tby3333cfChQud9p6caneIi5s2bWL06NHMnj07nWlyA9Pa2spHH33Epk2bAFJjsK7r9OvXj9GjR9O/f/8rZnW2tLZw9OhR/H4/I0eO7PR9ezweiouL6dOnD+fPn6e+vh5VVTly5Ag+n4+hQ4em2nZHAb8f//jHqTH4ywE58Xic6upqFi9ejK7r/PCHP+y0UWNZFvF4nGg0SigUYvTo0cyfPx9Zltm/fz+ffVbBK6+8ghCCH/7wh+m55LeMbdtcuHCBJUuWpMa5jkxny7IYMWIEY8aMSVl9XArTMmloaODkyZP07NmT8vLyTmOm3+9n2LBhZGRkUF9fT0NDAz6fj4MHD6KqKiNHjkz1l5Ik0atXL5555hna2tq6bHAYhsGBAwd4/vnnaWpq4tlnn2XevHlkZmZiGEZq3tDW1kYkEuFHP/oRZWVlyLJMbW0tL730EqtWraJfv34888wz1zAGS3jcuZSUzsfvF4ybIDNu3BdzSEmSuj3r2u/3M23aXZfVGBwL2u7LaFAUlb59i+nZq3cy5R7a2gQnTlmYJvTocZFAYN01nbMjwvrNN99MfRcdBWsTiQSDBg3i5ptvvuJGl2EYHD9+nObmZsrLyxk0aFBqHJckiczMTG666Sbee+89ampqsG2bixcvUlNTQ25uLiNHjkx9d4qiMGDAAH7xi19gWRZ9+/bD+lIATTweZ/369Tz//PNkZWXxT//0T0yaOOmy63XDMNi1axfbtm1j0qRJ3HrrrShpu7gbmj/B6EoGSUFC/sLnSwJFKJgmNLeEcGt+QsEQfr8Pt8eLbulIOI3+C6lUAiEhUtYWTpqHLMvIsoRLVfG73fi9HiRZoq21jZbmViRkCnoUkJGZidvlStaKtInHTBJx3YkEs0GVXeTlaXi8frzeTNwujebWAL36lNLS2kqgyUSYCVRJolfffpxraMDl8ROLBJEsgbCdgpCOiQlEY1GIOQKt6vIihEw4FKKlpZX8/gpZOfkEGzMx9DDBQBy7/jTnz54
i021iC/B5vOiJGLphoWkq4UgYVXWKJbS3NBOPR4gJG0UGRQVZUbCFhCqpjl84UqqQpS3AFiDLCoqkgeXCrfhwqy4kBad4ZlKwRjjlMWUkJL7YKECyk8ddf0u4FLFYDMuycLlcaJqGJElcuHCB119/nerqah566CG++93v0qNHj5R/1q9//Wtee+01hgwZwtixY7l48SKvv/4627dv58EHH2T+/PmXPf5KBXrSab9prpW8vDzmz5/PkCFDUgNmIpHg/PnzHDhwgOXLl9PS0sITTzzBgAEDunzetm2am5uJx+Pk5+d38rgEZ/DNz8/H4/HQ0tJCPB6nvb2dYDCIz+cjIyOj0yTL4/F0iigLtAcApy+tqqqioaGBs2fP8uyzz3LrrbeSlZWVTj3+b0ROTg533XUXY8aMSQnFuq7T0NDAwYMH+fDDD2ltbeXxxx+/ZLG8jnoE0WiU3NzcLiKJoijk5ubi8/lob28nEokgyzKBQAC3292lPbrd7k7tvsOKRgjB3r172b59O8ePH+fhhx9OFQC6VHsUQlBbW8sf/vAHPvjgA8rKynjyySeZPHlyOur6BiMzM5PbbruN73znO6lFgGEYNDU1cfjwYTZu3EhbWxtPPPEEY8eOveTGQzgcpr29PeWT/dVFaUZGBtnZ2ei6TjQaSR2vaRqZmZmdNkk6iph2RG5HIhHAaVOHDx+mvr6egwcPMnfuXObMmUOPHj0u2QbdbjcTJkxAlmUaG5uoqzvOkSNHWLJkCYZhsGDBgm/N9izNF/j9fiZNmsQdd9yRajemadLc3MyRI0eorq5Otb+pU6d2+bxtC0KhEO3t7eTl5XXpk2RZJjMzk6ysLE6dOkV7ezuxWIzW1lZkWSYrKwvXlxa/HQVMOyLzO2wYOoo0L1myhD179jB16lQWLFhAUVFR6npftgvxer3cd999qXac5sYkPz+fWbNmMX78+NRriUSCCxcuUFNTw/vvv09zczNPP/30JYuBddTkaW1txefzUVBQ0Kn9SZKE3+8nNzeXuro6WlpaKCoqoqWlBbfb3SVAx+PxdBqDL168CDi/iYMHD3Ls2DFqamp48MEHufvuuzttPmdlZaWyCHr37s2oUaNShZovXLjA0KFDWbx4MR9//DG33XbbJS2Z/lTqT9byysv/yYn6k5ddfvYvHs199/0fFOXapAohbE7W72XFe//PZc/t8fgZXT6dqbc9dU3nBtD1GDU1n7Fu3QuXPX9hzyIWPfQEY8dNvubzf5WOgorf+9736NfPiUTuyDw5c+YMBw8e5NVXX6WpqYmnnnrqkpu0pmkSCAQIh8P079+/y5imKApZWVn4/f5UZLQkSVy8eBFN07pkFGiaRnFxcSrTKhgMAl/4av/6179m586dLFiwIBWQ9lUyMzOZOXMms2bNSm169+nTh7a2Nv73//7fbNq0ifvvv/+yVg+XRJJQFBeKApom43L9uUVIxxb2WtvsN3oHkoRL+2Ju5XIJFMVCCFAU1zWvDSVJIi8vj+9///udvMjj8Tjnzp1j//79LFmyhIaGBp577rlL2r1YlkVTU1OqOO2l1sQ9evRI1TmxbZtAIEAgEKB3795d2o/b7U7VpQBoa20DnDnp7t27eeeddzh79iw///nPmTJlyhUDe4LBIB988AGGYTBv3rz0fO8vgOv+dQkpaTchO37WclIolYXExeYAMgoZvgwsK0E8HicWjZFbkIdh6Y6YKiclVOkL0RqSLwuwsVFkJ7rb63YE0HA4RMP5RsyESW52DpnZWXi8HiTAMg0M3UjudklIkoxpOBYhAhlfZgZ9evfH63MzonwYcSNCfmEBTecyUFxuLDNG/YljtIWi2MKZmJqmgWlaCNvGFs7OZUJ3OgDTtjBNA6/Lg0ezMfUITQ0nyc3Lh4GjaTqXiWlLnD1Vi0yCgvwCTCNBIhrB0HUkWcKlulAkiXg8Ql5ePr2KehGOx1CFRCisoygGhmUQaA+Tk5WN5HIDNrawsJJCtm07O2uqomEZVqoYZkeBSiFscLYYEFJyZ9527FqElEyl+YY1roaGhlR63YwZM8jJyUGWZT7//HN27NhBSUkJ9957LzfddFNqMZqTk8PWrVvZtGkTu3fvZtiwYanji4uLuffee7n55ps7Hd9Rfbvj+DRpvikyMjK47bbbOi2ULcsiEolwpOYIv/rPX7Fy5UqKi4udNOCv/IYsyyKRSCCESBWg+yqa5hRMNQyn3+r4v6IoVx1xGo1G2bhxI+BMTmtqamhpaemyMErzl43f7+eWW25h/vz5qYhpy7KIRqPU1tayePFi1qxZQ1FREb169eoUVQ3OgqIjSkdVtUtGn3QUqDNNE8uysG0b0zRRFCVVjOXriMVibNu2DUVxLLaOHTtGQ0MDvXv37vL5Dq/tl19+mS1btjB+/Hgef/xxxo0bl7a7uQHxer2MGzeOhQsXphYetm0Ti8U4ceIEr7/+Ohs3bqSgoIA+ffpc0m6jI1JRUZxaJtJXOk7ndceSxLbtVL8oy3KyzX59G4zH4+zatQuXy4Wu65w4cYIzZ85cNipSVVWGDhlKSUkJuq7T1tbG+vXref3111m2bBklJSWX9DNO8+fF4/EwatQovve976UiQW3bJh6Pc/r0ad555x1Wr15NdnY2JSUlXaIQhbDRdSewRVXVS6YPK4riZGl+aTzuaH+qql7VXFnXdQ4cOMDx48dJJBKcOnWKkydPUlZWlmp/jY2NrF27lvr6ehYuXMj48eOTaw7za86e5tsiKyuLW265hYULF6bGso45YU1NDa+//joffvghPXr04Mc//nGXz9u2ner/NE27ZBSgoihO7aPk/NGyrNQYfLVFFcPhMDt37kzNIWtra2loaOgkPvp8PiZMmMCYMWNwu92d5gtZWVnMmDGDiooKTpw4QU1NTbeI1y3NF6nbuZaZvdooyuz6LA43JVjxmU5BjwSqem0CpBAWJ080cPHQev7X5K5CVMyw+bxJsHG9HyH/4JrvPR6PcrzmMK6zW1k4susme0vUZPfpbI4dnfiNiNeSJJGfn8/dd9/NTTfdBDiim2maBINBdu/ezfPPP8+yZcsYMGAADzzwQJdzfDlSW9O0Lu3JsXx15oCJRKJT/3ct7S8Wi7Fnz55Uf3bs2DHOnj1LQUFB6joul8sRqwUMGTKkU7aW1+tlyJAh5Ofnc+HCBZqamq5NvE7zjSNJEllZWcycOZOJEx3LEyEElmURDoc5fPgw//7v/86yZcsoKyvj0e8/2uUcQghisVhqTfzV9YAkSalAR8NwvLpN08S27dQa5GqIRqOsXbs2tf7es2cP9993f5c1UQeWZVFXV0dVVRWlpaVMmjQpnfH5F8B1i9eKoiLJilN8UZawEciSjFBkzp1rxKN50Vwysu1CVlR0PU5z80UyszJQFBnTsrGRUCUFWwgUWUKybBAgyxKqquByqXi8btweF4au097ajlvTyOzpx+3xork0VFlG2AJLSjoaqQqy4XhKy5JEQk9w8WIT7nAYI2FgGr3JyMnCI0fJ9Hro1buUgCQTbLtIe3sY3TIRpoHbrWFaOqZtY9
pJP24cIVvC+XGFwxHO2Y30yM8iI9BIRstRFKsXfp+bnJwcouE28rNVMjK8eP0KsXgYReoooqglbTycCU1mViYjRpYjqy7aIjEaW8I0NpwlEo4QCoXxun3J4pQSQjiKvyNMy8iSjMftJh7TMU2BqQgkAUrymUiy5FhkCxnHStyJlrcl59+2JfFlN5erwTAMtm7dSiwW61R8pr29nb1797Jt2zb69u3L7Nmzyc/Px7ZtDh06RGtrK1OnTqV3796dBsOCggKGDRvGxo0bOXbsGOFwJHX85MmTL3v8hg0bOHbsGNFo9LKdU5o014osy3g8HjIyMjpNrHJycsjPz2fL1i3s27ePffv2MXPmzEtGMXQM0Lqud0kh60i5siwLt9uNoii43W5UVU0tsJ1Ct1deLQshGDp0KLNnz6ayspKKigoGDRrEo48+2sUXO81fLpIkpSKgvxyR3NEe9+7dy44dO9i/fz8XL15MRed0IMtyahPFMPQuIklHCrRlWfh8vtQiRtM0QqEQhmFcVXsEGDBgAHPmzOHIkSPs2bOHP/7xjxQWFnaKLBRC8Pnnn/Pb3/6WHTt2MGvWLB5++GGGDx+e7sdvUDoWnRkZGZ02F7Kzs8nLy+PYsWNs3bo15YF5KfFa07SUOGMYBrawUfhiUWKaJqZhpNJ7O443TTPZBm3g6xcx/fr1Y+7cuTQ2NrJlyxbee+89ioqKLuuHrLk0NJcjoOTl5WOaJjt27GDbtm2cOHEiLV7fADiLW6f9fTlzpKP9nT17lvXr13Po0CFOnTrVRbzuWBx3iCodC+QOOsQgp66NY1moqiputxvLstB1/apqpQgh6NmzJ/PmzcM0TT7++GOWLVtGv379GDFiRCqlfu3atZw6dYpPP/2UQ4cOpT575swZTp8+TSAQ4Be/+AUTJkzgnnvuSWcQfstIkoTH4yEz8/9n782j7KrK/O/P3vsMd655SKqSqlRVBiojGSCEEDSAEFBQkDiAgoIgjq2reX+uVteye3X36+yveymvzeqATAYFURmkERQhYVLAkEBmICETVamkxnvr3nvO2fv949x7TagKksjYns9a9Ufdu8+uc8/ddc7ez36e7zd92HOwurqaZDLJ1q3beOihh1i3bh19fX1jKoeklJWgdfn+90p8369s7sVisUr7fD4/RlP91c6z/AwuB2VuueUW/uEf/qGSESmlJB6Pj1vdpJSiurqapqYmNm/e/IYZfAeBxjUeJ7Ym6agZmxXpKMltuwT9Axr7KCMV2hhGRjR1CcW72sdKnmSLmqw/yuMvFRkYOPpzz+cD8jmPKdXOuP3vG/bYuxMKhfzRd34ElFIkEokxVXO1tbU4jsOjjz7KTTfdxJ/+9Cfe9773jTm+/PyWUoZrDG/sHND3/crmnuu6hz1/C4XCazrPsqTIypUr2bdvH/feey+rVq3ia1/7Gk1NTZUKl5qaGvbs2YPWeszx5YrtQqHwmv9uxBuLEIJ4PD5m/JUrOU855RSefPJJHn30UVauXDnu8bFYDCFEGFjW5rCpXHlNrLWurJ0dx0EpRbFYHPd+OR5SSjo7O/nwhz/Mr3/9a+677z66u7u56qqrxpW0K2dq9/T0cPrpp0ca/+8Qjj14LS1KIdEwr1cKhDAE2jAwOIBUCXytQUqkI4lZSQpFxfBwFsdxSKaTeL6HNkDvH+wAACAASURBVCBMaH5R1sw2aCylSCWS1FRXY1sW2ZEstrKI1dTguOGAxoSKF1rrShDY84r4QUColBHKZFiWhUFz8GAvvlcglU7R0FzAjVXTMbmBwaRieLCW3v37Odjfx7CXI5fLMZLN4vsBQRDeXGUpUKxE+HmNNuRH87zcU8TL58kODZKpqSWZqSEo+hTzWWzHpliIMWr5gEaokja4bWHZFrZlhQFxpahvquc4odiwaRO9BwaRQlAseFjKRgcabTRChwaXumx4WQpAx2Ix+gcHQ41uE2BhhZrXJZsCZFlX3CCECSfhQiCFQIqj32XyPI97772Xhx56qLJLdejNZ/78+Vx66aUsXbqUWCyG7/v09PTgeR5NTU1jbiKWZdHY2IhlWfT29pLPj1YMRI7Uvlxi0tvbS7FYjIIeEW8KUoabP2XdwPG0zaSUNDY2Eo/H6enpqZS0l9Fa09vby+joKI2NjRUTlKqqKl5++WUG+kMTvHLWd7FYZNOmTQwODtLV1VUZ6/F4nNNPP52PfOQjzJw5k29/+9v8/Oc/p729nXPOOSfSKvw7oDwey9nV42XvSSmpq6sjlUrR19fH0NDQYcHosszNyMgIU6ZMIZ1OY9s2NTU17N69u2KIVx53nuexbds2+vr6aGtrq5Qkx2IxTjnlFD760Y+yc+dOvve973H33XfT3t7OhRdeWNG93rJlC6tWreLpp5/m3HPP5eMf/zgdHR1H1KSLeHtz6Bh8tYVGedGaz+crJneHBlBGRkY42H+wslGTTqepra2lWCyOae95Pjt37mDPnj20trZWSqVd12XRokVcdNFF9Pf3Mzg4yAMPPEBbWxuXXnopdXV17Nixg7vuuov9+/dzzjnncMIJJxyivxj24bpuJVMy4u1NWbqwPAcdb/yVpT9qa2sZGhqir6/vsHtg2ZTq4MGDpNPpSmlzXV1dRZ4kn89XNm5832fPnj3s2LGDpqamyqLXcRxmzpzJxRdfXEnoeOSRR2hra+Mzn/kM1dXVDA0NVUr4H3/88TFz6JGREYaGhip+Me9617ui4PXbmHD8icomh+d5YwLDQghSqRQNDQ08//zz9PT0jBl/IyMj7N+/n2QySVNTE8lkkoaGRjZv3kRvb+9h7fP5PDte3EFPbw/t7e2Vv5dKpVi8eDGXXXYZW7ZsYd++fdx1111MmTKFiy++GNd1GR4eZuvWrRQKBWbMmHGYzIQxpiJjV5ZresMoFf6OtycuBTQ2Ct57jiQWO7o1qg4kTz0leXCzGLdvIcBSgtZWwXnnHf36d2hQ8siDAm/dkft/M+seD12TlOWLXollWZVA4+DgIH0H+ph0iBliWVpueHiYSZMmUV9fj+M4TJgwga1bt7Jv377Dxl+xWGTnzp3s2rUrrEAtZVbHYjEWLFjAZZddVvEOuPfee+nq6uKKK64gHo+TyWTo6upi27ZtbNmyhTPOOKNyDywbAQ4NDVFdXR35ALwDOHT+l8vlxt3kVUpV/HF6enoYyY4c5oMTBAEvv/wynudVJN6qq6upqalheHiY3t5epkyZUmlfrnAaHh6mu7u7oiOeSCQ455xz+OhHP8qkSZP4yle+wg033MC0adNYsWLFmAScbDZbeQYvXLjwVT1bIt4+HHNufKhLLSo/oCkWc4zmhmhrm4A2Pr29fXhFD6UkSkkSsQSJeArPMwwODGBbAkuGN3ojQCiBdBRu0iUWd4nFY1i2QhuNZSuqaqupqq4hmUoTiyWIOS62ssLjdSkT2YhQ59kYPD+gUPQYLRQqJo7ZbI6BgWF273qRXS+sp+el58gN9xIEOSY0VTOxuQ5lCUZyOUYLhdAUsWwkKcBWCkspLBnKc9hKoQND/+AIu18+yO49L7N96xZ27
JXF1UnQ6Xpxi7y8f0aBdTI63klt14TtqGWHSe2cleIsFZXN4CTNbVN1xPXNmsDi+uzBL62w4yPdbB7GTvQh083UFD01LMzwwzO9mPTm/A4vAu9MIW4hNSVR32tGxcmaXMTOxjcugUofkJ7C7fmWUS8Qhz/gECM+MYzTZsaVkrWGJxNdEZTDgzCnC4swnMjDHae5Tc0hvRG94PUMejQfzjncQiYezubKl/4qJRVQNp6fk4M/KYGuliuOswhWtuOyd9XDIRZWq0nUgogN2ZIXOeiItG1enx5q7FbLURmBlnpLeetPSCczrnJuIRxvob0FIprGlezKv0GeSafLKPxWKEQqEzvaqzsrI+NEjc0dFBU1PTmdzWZ3svxYfVutBrpa+vj7m5ObKzsxel+0ilUhw9epSxsbEle157PB6Ki4s5evQofr+flpYWbrjhhg9NBRIIBDh27Bhut5uysjIsltX5di44N0ZgZgSLzU1u6Q14cqrODJWxOjLJLtqC3enDavdgd+XicOeh6vW4vSXkl+/AbHMTDc+CopKMx7C5sqnc/DkycqpQVT3aEscyODvCcHcdiqJSuuF+8it2nHn4MprtjPQeYXZq9MzyqWSc/ta3mRxuo7BqBxWbHsLlLUZRVDQthZaMMzXcxkDbO5Ss+xQWh+eK7RGXkV3Jhh2PkkxGl/x8Zrybw4F/IDU5SNmG+ymouBVrWuaqzHkkri0Wu4e0jAJ0+nomBk4yO9mD3Zl9ToA64B9icqiZWCSIr3gTVptnBUssxNkU3JmlZOavZXK4nZ7mN0nPrqCo6g4MJhvJRJTh7loG2t8lEYuSnl1OWnreShdaXGXMVjeZ+RtwuN9kfKCJzhMvY3V4SUvPRwNmp/roa36T6bFu7K5MPKc7EAhxMdicWfiKtjLYfoihrnq6Tr7Cmq1fxGRxkkommBpppfvk64SDc+SUbCA9e81KF1lcJVRVhyM9n5yy7Zw68Aw9Tbvx5q2nsOo2dDoDiViY0b5jdJ98HYD0rHKcGYUrXGpxtVBUFVtaFvmVtzAx1EFv81tkF26mZN296PRGkskYE4Mn6Tj+IloqSXp2GS5vyUoXW1wlVEVHRvYafMUb6Dl1gKZDT+H2lpGZvwFFUUjEo/S1vkV/6zuoOh05pdet2tjPlRmB+4SMRiMWiwVVVYnH48zNzS2Z7sPv9/PrX/+azs5OEokEiqIQi8XOCT6XlJSSlZXFwMAAnZ2dtLS0UFhYuCiYPDg4yK5du/D7/Uvmxbbb7dx000289dZbDA8Ps2vXLu68806qq6vR6c7Ns6xpGnV1dfzzP/8zkUiEz33uczzyyCO43auvki3kt7MyNz1M14mXURQVt7fk9BvwQpzpBSiqAihLBvXPpqgqmXnrsLtyPjJ4HJgZJjg3tjAhR2bpOb2GnRlFuDNLGe05eeZv0cg8E4Mn0VJJfEVbsKV5zwTYFUUlI6eGtIw8JgaamfcP4s1dh2q8Mk8do9nxkek/tFQKg9Gy0EPGmYM7qwKDcXW+GBHXFpMljcI1OxnrO87kcDsN7/yQef8g6dmV6PRGAjPD9DS/wUDHISx2JwWVO7C7c1e62OIqo6WSDHS8S9fJ1wifnlAxGp5jcqiVeDRKZ8PLTI+0LAy3U1RyirdRuuHTOFw5WB1eiqruZGKwifGBFo688R2Gu2uxObMJzU8w0l3PxFA7Lm8eBZU7sbuk/oqlzU3109H4EmN9xwCNVDLBzEQv0XCIyeF2Dr3yd1js6YCCzZlN6fr7ySu7EZ3eSFbBJgqrd9J8+Fla63/L3PQgnpwqNC3F5HALw131AOSV30h24eYrfq4PcXG9V09ajzzD/OkJFeOxMNOjnSRiMYa769nz9LcW2o6KgjuznMotnyc9qxyTOY28shsZ6amnp+ltGvb9mMnhZpyeImKRAGN9xxntbcRiT6Ow6nbcWWUrvLditdE0jdnJHpoO/5LZyR4AEokoM+M9JGNxJgab2fvsXy+8dFMUHK5cqq57BE9ONVa7h+KaexjrPc7EUAe1r/8Dg537sTkyCc6PM9xdx8RQBy5vHmUb7sf8gQlthUgl4wujtff/lFhkHli4/o33nyIRj9Pf+g6vP/EnKIqCoupxZ5axYccfY7GlY7KkUbLuXoa7ahnpPUnt7n9muKcOuyuHSHCa4e46RntPYXd5qNj8EFaHdMAR59K0FP7xLo7t/R6x8BwAyWSCsb5GUqkkI11HeP3JR0/PaaLicOey5Y6/xGr3YHFkUL39y0wMtTLceYx3X/hbcsuux2RxMjc9yED7fub9k2QWVFC+8UF0OsMK7+3SrswI3Cdkt9spLS3DaNxLMBhkz5493HjDjeQX5J/pMd3X18fPf/5zXn31VUpLSwmHw4RCIQYHB4lEImiahqIoFBTks3nzZpqbm5mcnOSpp56ipLiEmrU1Z3pzDw0N8fjjj9PQ0IDJZCIaXdzrVVVVdu68jVdeeYWpqSlqa2v58Y9/zDe+8Q1KS0vPrCuVSlFfX89jjz3Gvn37MBgMPPTQQ8ua2HElODOKKN/0GU7uf5JTB3/FQPsBXJnFZGRXkl20FW/u2mX36tHp9FgcXtTznEyRkJ94LILDnYPB5DhnUgSd3oTF7kHV6+H0SIl4JEA4MEU8FqHrxGtMDjefExxPJqL4x7uJRYKEA1OkknFAArpCXE6qqie35Ho23vrHNB3+FSPdR5kabsViT0dRdcTC8wTnJrG7Mqnc8jmKqu/GZElb6WKLq4ymafjHu+hqfJ3ZyfFFn08O9TI51AuAqipoqSR5FTtwuHJQdQZ8xdex4Zavc/LgE4z0nGB6rBeD0UQ8FiWVTOLJKaPmhi8vjBiSF4fiQ0RCfgY7DtDV+PaiDhGJ2VmCs7Vn/u3OysWTW0Ne2Y0AOFw5rNn6RVKJOB0Nr9LV+Ab9rfsBjXg0gtnqoHLLZ6i+7ksybFksomkawdkRek69wcRgz6LPZycnmJ18G1gYoZpTOkBB5Q7IKkdRdaRnVbD+5q8B0Neyn+bDz2O0WEklEsRjUdIysqna9nnKN34Gk9l+GfdMXBk0wsEpepvfYrS3bdGngRk/nQ37gNPpPXMKKajciSenGr3BRHbhZjbd/meceOcnjPSeYnq0e+EeHI+SSiTw5paz9qavkl9+i7y4E4ukUkkC/iHaj71KOLB4QkX/2DD+sWEAdHodvpJ11Fz/ZbClo+oMeHxVbLnzL2jY9ziDHceYGR/AaLaQiMWIx6JkZBew9qY/oLj6zit2lLe4dDQtRTgwQcfx1wjNzS36fHZqgtmpXQAoqoInp5j1N/8R2D3odAYKKnZw3d3/J43v/JjBjmNMDLagMxiJRcKkEgnyyjezaeef4M2tudy7tmzX5FlhNBq58847eP3112hra+OVV14hmUyyadMmLBYLY2NjNDQ0cPToUbZv3862bdsYGxujv7+fQ4cO8dhjj1FVVcUDDzyAw+Hgc5/7HLW1tZw4cYK33nqLSCTC9u3byczMZH5+nmPHjlFbW8vmzVvo7++jsbFxyXIVFhbw9a9/ndHRUY4fP86vf/1r+vv72bJlC1
lZWSQSCfr7+6mrq+Po0aMkk0m+8IUvcMcdd2C3r84GnsnsoGrrw6S58xho38/YQANdjbvpbdqLIz2H/PKbqLru904PjfnoALyiquj0hiVn6D1bIhFDS6XQ6Qyoqm7RhKo6neGcHD8pLUkqlQBtoQddYHbk3Em00HB5i3BnlmBLy16YpfUq5XDlsv2ebxGNzJFdsHnVJusX1yaT1UXx2ntweoqZGDyFf6KTcGAKTUthMjtweorw5NTgyanB6vCc91ohxMelqDoKKm/FlpZJLHKemeAVZSHH5lkBQJMljaLqO3C4cxntO7aQXzMawGi04fQWk5m/Ho+vWnp8iY+UllHAltv/nPJNn15q+opzGC1peHPefxB57wF6484/xVe8jYmhUwTnxlGUhV7anpxqsvI3kJZeIPMGiEUURcWTW8PND/4XIsGZ8ywMVrsXd2b5mT/pjRZ8xdsw29IpWHMrUyNtREJ+9HoTjvR8MvPW4c1dd3p+GeXD1y2uSYqi4PKUcOMD/5lwYOo8C4PZ4sKTU33mD2ark+Lqu7C7chjrO4Z/vIt4NIjRbMflLSEzfwPenBpMH5jMTAhYiCFk5m/kji/9Pcn40ik436OoKha755z2nMFkI7/iVqx2L6N9x5geayMWmcdgtOL0LLQBM/PWSxtQLElRdLgzy7nj9/4XiXjkfAtjtrqw2t/rwa9gsjqp3PJ5XN5iRvuOMTvZSyIRxWxxkp5duXD9y127qiervSZbpYqicMMNN/Doo4/y2GOP0dXVxTPPPMOePXvQ6/UEAgFUVeWee+7h0Ucfxev1cvDgQcbHxxkYGOBHP/oR5eXl3HzzzaSlpbF9+3a++c1v8v3vf5+GhgZ27dpFfX09NpvtTC/re+65h0ceeYTHHnuMkydPp6z4QJvMYDCwc+dONE3jySefZO/evbz66qscOnQIu91OMplkZmaGQCBAfn4+Dz74IF/5ylcoLi5elFpk1VAU7C4fZRseILtwC/MzQ8xPDzA+eJLe5r00HXoagE07/xTzsvLTKudty+p0ehRFIZVKoGmphQe7s76TSMTe/zsLD3KqzoDeaKJ0/b3klF6PfomTVlFUbE7fVT37r8nqpLDq9pUuhhAfymROw1e0FW9ODZHwDPFoCE1LoTeYMVtdGEw2CVqLS0ZRFNyZZbgzL3w4u9HsILtoKxm+KiIhP8lEFJ3eJPVXLJvZ6iav/OYL/r6qM+DyFONw5ZJffgux6MKLGKPZgcnqWrXDRcXKUxQFu9OH3ek7/8IfQm+w4M2pwe0tPT1aMoSq02OyODGZnVd1JxHxSSlY7BkU19x9wd83mh3kFF+HN3ctkeA0yUTsrHuw/ZwOTkKcTVF1ONy5VLo/d8HrMBgtZBVuJiOninBgimQigk5nxGR1YzQ7pP6JD6UoClaHl4rND13oGjBZ0iiouJWsgk2Eg9NoyQR6owWLPeOKiHFdtuC1qqp85Stf4YYbbiCRSFBdXX1mosP33HzzzXzve98jEolQXl6Ow7F0bt5t27bx3e9+l0gkQmlpKWlpSw8N37x5M//6r/9KKBSiqKjonOVcLhdf+tKXKC0t5fDhw/T09BAKhTCbzeTn57Nhwwa2bt1KSUkJiqLwV3/1V5SXldPR2UEqlTpnkkS73c6DDz5Ifn4+hw8fprW1ldnZWQwGA7m5uWzevJnrrrsOl8uFqqpomoZOp8NgMCy6QNntdu666y4KCwv5zGc+w6lTpxjoHyAYCqKqKi6ni5LSEtavX8/GjRvJzc390MkmVwtN01B1RpyeIpyeIlLJBPkVO0jPrqB+13cY7DxI9fZHlhm8Pj+jOQ2dwUQsEiAeDaJpqTM9qROJCOHAJKnE+znOjUYbltNvOI1mBx5f9aJ0A++liRFCrA56owW7pFUQVyhFUTCa7RhlWLxYQTq9EWtaJlYyV7oo4lqjKHIfFytGUVSMJjtGk9yDxeWnKAoGoxVDukyILFaAoiy8LLY4V7okH9tli3oqisLWrVvZunXrhy5TUlJCScn5Z1YtKCigoKDgvMvl5eWRl5f3oZ9nZWVx7733snXrVvx+P7FYDL3egNvtwuv1YjS+nzLh+uuvp6io6MyEiw6HA5fr/SFFTqeTnTt3sn79eiYmJohEIuh0OpxOJ5mZmZhMJoaHh5mbmyOVSmG323E4HEvmqjabzaxfv57Kykpuv/12ZmZmiMViqKqK2WzG4/HgcrlWb2/r0zRNo6/lLfrb91FQeRu5pddjMFpRdXrsrhyy8jdisqaRiEXQzkzWqAAaGhrnHQv7IexOH1ZHOnNTQ8xND+DxVaE/3Tie9w/hH+8iEY+dWd5gspORU81gx2FG+45RVHUHJrOD9/KNBGZHaT3yLHqDhbL192Fz+iSQLYQQQgghhBBCCCHEJba6u+xeBnq9Hp/Ph8/30cPf9Hr9ksHwWCzGm2+8yZGjR/D7/Xz2s59l+/btmM3mRevo7OxkeHiYRCJBQUEBXq/3IwPQJpOJ3NxccnNzL2znVoF4LEjPqTeZGm4hEQvhK9qGwWwnHJigp/lNQvNT5JZtR2+0osCZyRvnpgYIB6dR9cazAtvL43DnklWwCf9oD+3HX8Dq8JLhW0NofpyWuqeZnx5aCIufjj/rdHoK1uykr3UvQ52Haal/hvKNn8HmzCY4O0Lb0d/QduQFCqp2ULL2bsnAJ4QQQgghhBBCCCHEZXDNB68/KVVVaWtv48knn2RsbIyJiQmysrIoLy8/JzA9PDzMM888w9DQEAaDgZtuugmPx7Nkz+urhaIo+Iq3U77xATobXqH29X/E5sxEpzMSjwWZnx7Gk1NB1baHsdgzUBQFb946LDYXfS37mPcPkVt2PeUbH/xY2zWY7FRu+RyzU32MdB8l4B/GYk8nEY9gMFrx5FYTCc6hpZILPbwVBa+vmo07/piT+39G65HfMthxAKPZvjCBo3+EdF85pevux2L3nOmRLYQQQgghhBBCCCGEuHQUTdPO5GYIh8MoioLJZJK0CMukaRq1tbX8zd/8DbW1tTgcDu6880527NhBQUEBiqIwMjLCgQMH2L17N+Pj49x444383d/9HVu2bDknNcnVSNM0AjNDjA00MjncTGhujFQyjsHswOUpJjNvPZ6cagwmK6AQDkzS0/wm4wMNJBNxfEVbKaq+E/94F8G5UTLz1pOWXoCq07+3AabG2pgaacWZUUCGrxq9wUwyEWNyqInh3jpmJrpJJRPYXTlkF23BbHExO9WHw52LN6fmTEqRWDTA5HAz4wMnmJ3sIR4LYjDacGeWkZm/gYzsSgwm28odTCGEEEIIIYQQQgghriKapjE/P4/NZlsyQ4UEry+CcDjMrl27+P73v8/BgwdJpVL4fL4zObHn5uYYGRlB0zRuueUWvvGNb3DHHXdgt187k0RoWopoeJZYJICmJdHpzZitLvR606KezPFYiEjQj6YlMZmdCzPvXkgPdU0jFgsSDc2iaUmM5jRMZgeK+tG5wuOxENHQLMlkDJ3eiNnqXrKcQgghhBBCCCGEEEKICyfB68skFArR2NjIoUOHOH78OIODgwSDQ
QAcDgcFBQVs3LiR7du3s27dOqxWqxxjIYQQQgghhBBCCCHENUuC15eRpmnMzs4yMjLCzMwMsVgMALPZjMvlIicnB7vdLsdWCCGEEEIIIYQQQghxzTtf8FombLyIFEXB5XKdSRcihBBCCCGEEEIIIYQQ4sJcQCJhIYQQQgghhBBCCCGEEOLSkuC1EEIIIYQQQgghhBBCiFVHgtdCCCGEEEIIIYQQQgghVp1FOa9TqRTJZFImFRRCCCGEEEIIIYQQQghxyWiaRiqV+tDPzwle63Q6otEosVjskhdMCCGEEEIIIYQQQgghxLVNVdUP7UitaJqmvfeP93pdCyGEEEIIIYQQQgghhBCXg8FgWPLv5wSvhRBCCCGEEEIIIYQQQojVQCZsFEIIIYQQQgghhBBCCLHqSPBaCCGEEEIIIYQQQgghxKojwWshhBBCCCGEEEIIIYQQq44Er4UQQgghhBBCCCGEEEKsOhK8FkIIIYQQQgghhBBCCLHqSPBaCCGEEEIIIYQQQgghxKojwWshhBBCCCGEEEIIIYQQq44Er4UQQgghhBBCCCGEEEKsOhK8FkIIIYQQQgghhBBCCLHqSPBaCCGEEEIIIYQQQgghxKojwWshhBBCCCGEEEIIIYQQq44Er4UQQgghhBBCCCGEEEKsOhK8FkIIIYQQQgghhBBCCLHqSPBaCCGEEEIIIYQQQgghxKojwWshhBBCCCGEEEIIIYQQq44Er4UQQgghhBBCCCGEEEKsOrpvf/vb317pQiySCtDR0EYizYXFqIf4HH2tR3n7jbcZ1+WT5Taj1ykfsQKNmd5GOqYNOO3nW/bj0RLz9Lef4Ej9Eerr6jjW2ER7Vx+TszEMVjs2kx5iQxw+0IUx3YXVZOCTbj0208LLv36aN49O4s7z4bQaUS/eLl0WseA47ScO8eauYyQ82bgdFvRX2D5ccbQ5Trz9Ar99+RAzOi/edAemZR50LRlmvK+ZQ/veoH3KQXq6E6tRfjBx+aTiAUb6mjn8zps0D6s4nBk4LJf4fasWoK1uN797aS+DIRsZ3oyrst7H5/po7ZogljLjsBs/8T1KXOW0BAH/MC0NB3m3vpMwdrK99ktbb7QQfU37eeWFXbSMabgzs0kzXwP9LbQwA62HefXFVzk1lMSd5Vv+fqfizE72c6J2H8dax1HsuWQ4roFjdpVKxeaZmpzEH9LhsH3863QyOkX7sT08/+K7DIUs+PK8mK6x6qClYsxN9nGi7h3qGruImorIduve+5SQf5D6vc/z0t5WYuZs8jKtl+i6phELz9DfdpR3Dxyl12+kKD/9EvcgSxKcGabxwC5efulV3tizn4aWISKKg8xMF4bVUhdSCQL+IU7WH6D+eBcpZxHeZV23UsRCfnpajrD/4BE6JwyUFGZc5l55GolYiNHeJg4fOERj2yyZpXlYP0YhtFSMwFQfDXWHOHy4joamXvxhhbT0dCyGlWudackYsxPdHD10gLpjw7grinHozv89IVaClkowP9FLW2srp04c5fDBgxzrTFC2Jper8DFugZYiPNNH9yhkuM3LvndpqQSzYz10tHcxMDqLZnLgsOivmGdB/UoX4IPic13sev5VhnTrub94kqO79vDmW+9w9FQ7Pf0xvvDtG1hf4sZs+Oj16HRhjrz4U46V3sWnb6/Gbf1kQeREaJgj+3ax662DnGjtI0gamZnppDms6FJR5vwzRFUHuQW5GEOdNPaV8p/+dz5uh+UTnjQpeg8+w79//5e0zeYR95bxh3eV47JcGVUsPtfFO2+8yZ63D9DY0kHfsI9vVWyi1JfOtfAcupIiE0d57mc/4Jl9A1Q/ZCbnP36W6gLHec6DOH2Ne3j9td3srz9FR/cwGx/+Z8rLCsiwyw8mLr1keISG2kO8u+9djp5oprNnmLK7/x9yiyvISb+0Leeov4nXn/spP/ltIzk7wzg9mdxSk3HF3NCXJ0lfwxs88ZtO1tz1Jb7wqQ2krbqWgFgVtCgjXSc4/M4e9tc1cKqli1nzRv7wz3LYUJV9SYME8bluDrz2FN/54R7M634PgyuPz97ou+qHC8YDfdS+8Su+871d6NY8hOrM54s7cs+z30mmBprY/8ZLvPHOUZra+3Ct+QyP+rZT4bvST+4kkWCQSEzDkubEdC0ET7Q4s+NdHHrzJXbVTVB285d59IsbMX68lTA71Mqbz3yf7/ymj5q7pvEWl3NDwZVeH5YnFZ9nqKuBPbvfpv7YMZra+9FnbeU//F+3srH49JHUIgx1HuKp7/0Dr3V7uH/SRnnlp8m8yIcoMjdKc91uXn1zP0cbWhgJu7n5s/+RnTeWXrqHcC3O5EADu154hY5QDuU+I61HXuDJ2mlKt9/PH//lt3jwuhVu22gx/GPt7Hv5ZfYerKepYwS9ZzN/VnIb1R953UowP9VP/d7Xebf2GMca2xgN2dj6wLe4c0f5ZQtsxEPTdDbs5fW3DlB/5CQ9o1Fy132R9fdsJ2NZHYU0ooEJTu5/gWd+8zr1zQNMzwZJKGYyfKVsu/XTfOUPvsimAssl35dzixVleqSJ13/zInsPHqOt14/NdzflD+zEt4LBdHHt0JJRJrvf5qc/fZNE7h384dfvwWf+6LqXSiaYGW3nWO1BXnnuVdqnNNY+8D/54ucuU6EvK41EZJqWul088/RLjGd9jX/5L/dwvtCglowy3LqXF3/3LsMxJ1leNxZDnHd2v4ij7F6+9NBGroRQz6pqxYQnjvPUD39Nv6GaO++txGO1Es8tozRzF08cOUrXtIu7Qwk07XxrUrBlr+G6rUO88uqveCbxCF+8u4b0Cwpga8wNHuZXP/kJv33tXdonnWy/836+cMsGinI9OCwGUrEg0+N9HHv3dV554U26B0ZIpn8BfyRO6oKOxLn7ojfqSESChCIKer2CcgXdO1SDA19RBTmG5/lFYyND8yrBaILz/oTiE1P1BlQtRigUQVF0KOpyrkgqtvRcSgqNPPuLkzS0TJF3d5jEJ6/IQiyLorOQkV1IRWkDe17roLFxAMeWILHkpb9qqDoDOi1BJBwmpSmouqsvUpKM9FO3fx+7XzxIv3ENmzdWsjHPvNLFEquRosOS5qGgrJS0I+/Qc+oYfk8O85HEZdi0HlVJEYuE0SU1dHrdVfYSaWmKqkOnasQjIVKJ5e63gsnmJicvHSU4QEPdCUpdO4gmrvyWVnCsmTdeeJq3ezK49+Gvcvfmq+1l4oJUfIbe1uMcPHSK4YkpJkb7aGk8zNFBNw/n30NSg4+34wqqTkUhSSQcIZHSoV9VT3yXlqIasDmzKC7NobXxDU7UNZK+qZhw/OxzQkFVVZRUjHAkhqbquRR3fJ3BQrqvgCy3geGmWk7Fa1gbiF/S56Cwv4/DrzzJr14b584/+wPuu9VCabrGWP/3OdbSyog/fgm3vkyKitHiIicvG1tyiLpDjfg2Fy3juqViMDnwFZaS29vKay21nIiUUnrrpT2mi0qhN+HKLKAwr51DL5/gSKOCLidIcpnfjwXHaHz7Vzz+y1pIL+W2+69DCY1wsv4d3j7wMi1tg4Q1N3/7
158m6+O9ufqEdJis6eTm+jDMNHGgzk/1Dddz5xMjvgAAIABJREFUFdxOxBUiGQvSe/AZvvv95zAWDVJ9++18tuqjTwJVp8ftK6eqqoune5roms3lrtJKzhPzvnJoCYKTp3j95ToGpyYYG5tkoq+W3+1uouCuO4kDH/WaKxmf4+QbP+bffnYAy5rbuePmDRRkuTAyy/HdP+fpH/0TjoLv8cg2x+Xaowu2apoy8flWnv7ed9k3UcPX/vRTXLcmE5NeoXjtDWQYGvjRD3bR61/++lSDm+obPoUuOcO//+xxfmv4Sx6+vQKn5eM1TUIjB3j8H/6BnzyzhynLNv7oL/+cLz5wKzVF6Zj0ZwUDtTjbt26kqvRx/v4ff0VnLEI8pS0E2j/RiaOQs/lh/t//uYbJhJctW3OwXUHjH3SWTKq3eLGNvMZPX6xjeH6lS3TtMDjX8nv/x39l/aeDZFVuI99jWVQVE6Fu6k9EKC0vxJthQ0GHJ38tO++7jZeeeIWjbdMrUvYztBCtdafQ5RWT7/NKb/1rgGp0UbRmC7k5Iep276X22OBF3oLG7EAjPXNOfHmFZDnfr1R6ezn3ffVbFNwwSVrhBqrzL3FqhMtOY6qzjsP1J+ge6ifw7gEa77qJNbnly2jgacwNnaJ3xoInp5gc99UX2BcfpMeVWcwWbxbBgUb273kb/0V9gNUIjHXQN5HAnl1Joef9OqWzFnDLg3+Gs/Q+dN4K1le6r7JzcWk6Sy7X3/so/yPvTjR3KeurFwdrU7EJurrHiCmZVFVmoqJiT89n8y07aDu+n1dfblyRsl98ScZ6TvDGC7/m3dBOttx7GTapJYhEoiSSemx20+Wrc1qKRDxOUjPhySvH49AYad5HKKqhwQUF5RxZa7j3q/+Z3OtncBfWUOm5dq7Zis5Muq+Cm+80Eplq4MWnD7EoXKuY8JXdyB/9p39k57SR4uotOC/BITJYnBTV3Mit82McfP6XNPVe/G2cQ4sy1n+SN158k+HIDsoq83G7DGy49ff4a1cZvbN2qje4iM2NMdjXQ8S9lZq8lQgH6LE589h08w78g+/wy6dPLPN7Kma7l6ptt2MxRKl7/hecaL+kBV2SzmjDV7aV24wKQ4ee5oXGseV/WQszMdjEO+90U3Hn1/jUjnXkZTpQYjN0n9xG8Y/+ie/9poV3d79G45c+xd2l5xlufjEpemyuIq675TbGG57hR298jOCLEBeBouqwpGfj86SjebNxLSMPj6Lqsaf7yHFrBEMqFnsZW6+7ukbraak4kWgKizuPGl8h/f59+ENJCs7zvVQiROe7P+J//P0TBEv/hP/2td9nc4kTHUmC/lp+daiexqPDZL3Tw8Pb1q/6Y7Yqgtdays+7T/8bT7wV4it/+2muW+M9Jzev3mrBpFM/diNSUe1U3vAZ7j12gH/90Y/JzP6/uXtDFpZl5v1NxXp58Yff4SdPv0n3XDZf/Ytv8kdfvosSr5VFabQVA56CjTzw+99kbqCb/+85jUQqdVHeApvTK7jtgYqLsKaVomCymNHrVvvpcHVRdC6qtt9N1falP9dSM9S99AS/bCjg63+YiTfDduYzncmM0aBb4dzqKUaaXuOJnx1h05f/gMxsL9I/9NqhM1kwmwzoL/JlIz7Xwa5nf0G78VY++1D+OcFrReegZP0OStZf3G2uFlpqmsZDdXT2T6MZ9Qw0H+BQ/Ulu2lJMWcZHNwfi893sef5XnIxv4oHPF5HjvkyFFitPMWAymTEadRC9eKtNBAc4+PqzHBrO4o4vlp8TvFZUK7nl28gt33bxNngFUFQrvtLN+Eo3L72AFqK7YQ/PvtpG9uaHWVOZeeYjVW/AYDRe9GvmStFSswwNdNHeGSJjcx4FhedLe/bJxef7OPDaC7x6eJbqm+5k582bKc62X/KHOVXvIK98K/fkbMJitxPue5uB2t9C64WvU2d2U7L+1qv2frYcit6A0WLFrFcWB69RsDpz2LQjh02XvCQ6dHoTpsuQ90ZLzTE+3MzxExNQZcJkUgEFS5qPjbc8wEYgGZ2m+eArPPtyOxu/vGmFgtcLVJ0Og9l8ASk2degMVmzWlQxlKOh1Bmy2xZ2DPkoiNMf0yBDGyof4/Bd3kHsmd1saG264nfB0A6++2kpgZpjRqSRczuD1aaqqw2I2rfpA1mqTjM0z0HqAg93ZfOGzHzfd00WmxZkda2fv662se/hzlFov8R1UizM33sFbrzSx7pEvUHaB21MNNopu+Cr/61+uI5FWzqac5Z3jyXiEkRP1tM3pyNi6lfVFK3r0Ly5Fh8Vdye2fzsNgdWAzxTnY/5Pzf09LMDd8gMf/8YccHFrDP/3zl9hS4jxzXmtakuDUGEHFTUb6ZU5RdIFWwTVJY6zxtzz+k93YN3yWWzfkYzV+sFgXfrIpBi83PvQAnsk3+dlPXqV3OrTMIT0a/Yef42fP7qN7Ikz+pod45MEbKcxYInB9FrO7kge++gjb8qyoiibpMc5Q+IRd0MVFFaP70LP86CfPcbR1jMgqHA8WHKnj6R//lBf3NjF1iYdYitVoIUXSxUyTlIpPcPjVp/j5r1+nuW+W+DWWDic43Mjh4+N4113PDVtLMYd7OPzuYU61j33kkFAtPsmR3c/w86de5mSPn3hKzsZrzcU+F7WEn5P7f8cvnnyO+rYJYssda31NSzLeVcuLTz3BC3uamAlf3QctERxlsLeD/nk3eYWlFGZd+gCOakgj05dDGgO89st/4dv/9b/zg6feonVwbtnpAC5ww1gd6WRne3HazeiupPyAV4DVcDQvVxlSsSBzk8OMzS7U2A9WJS0VYqh9P089/mN21Q8QuQwp2S4V5cx/VrYQysc8XxWDnZyy67n/vhvPClwv0JnMODOzcJtNWOxevBnXzoiJK10yHqC34WX+/TuPc7Q/tcI55ePMjp7k2cf+iZ/v6b/0ET8twdx4E8/94B954s0+lE+wPUXVY/dUcvsDD3L3jmqcy7r1a8Qi85w8dIx5nZnSzddTcKmD9ZeVgs6Yhi/Xh8dtx7DMa04iFqTxtZ/y7IERym9+mDvWnz1RsIrZXsPX/vZf+P53/zd/cFfeaggMn9eK97zWkoO8/ounONjt5Jv/eQuZaebzHjgtOUfn8WOcaO0noKWRV1TB2vXleGzGJXqKKjjybub2TXb+26tP8OrdN5NzTxnO8+Qf0JL97H72JU70+Ylpbm687z7W5DoxnvceouCt/BSP/rmb3Ow0lprbQEvO09dykqa2Xsb8IVSjg+zCNWzYsIZMp2nJ4Hh0fojmxma0rK1UFbnOmYE4Fhyj7WQj04ZqNlf70IV6OFp/nO7RMPaMYjZt30yBx7ZkbxwtGaCv9RStHT2Mz0GaJ5uC4goqSrKwmS5s5tHo/BBNDY2094wSSlkprL6OvHCcjw7la8yPd9HS2knvwCihhJmsoho2byrH6zAv8bsuLN/a1klP/+nlC6vZvLliieU1glM9nGgYJL16PQXpCkMtxznZ2oc/rMOTX8PWLWvIclmWOPYagYluWls76OkfI5gwkllYzeZNlXjTzIu
Wj8wOcKqhF1tpDYUZGl3HaznV5cdTcQPXrSvAuWQPgRhDHU30jsy8n+tNMZFbXkOeK0JvWw8Ts+Gz8qcrpGVXUFmchcOiY6q/iY7+SUKxFDprNhuqi3GlLbw90xIBBjub6PY7qakuwuM0o6UCtB9+kR9850e8drgbS1U3R2v3Ex514clfS1mBZ3HeJC3B1EALDY0tDIwHMDhyWb9tG+V5bszLHMmwIM7UYDstrV0MjEwRTuhweotZv3kDhVlpp8+vFP6Bep77yXd54vkD9M+W0dZYz7uGKVzOPKorC0hPM6OQYHqojebeGEUVZbh04zTUH6dv0siabdupKs7Eevo80ZJBhrtbaW7pZHgyQEpnxZtbxtr1VeR67GdmXQ9NdtPWNYQ/uNBHx+wupKosH7fDwOxIOx29o8ydDhjozJlUrCkm0207qx5oBCa7aW5qpXtgiphixVdQgi/DSVZeNm6HjWUfLi3KeH8n7R2d9I/Mohkc+ArXsLa6GI9zqetkirnxHlqbW+geGCcYV7G7c6moWUdFoRfboh4/SQJT/bSc6kPvq6Cs2Et0pIWGhmaGZzTSc6vYuLEcX7oNohN0tvcwOhk46wFej7egkuL8TOwmhdjMAK2d/UzORQEFu7eE8uIc3PaFVoeWCjHa205zcwdD43MkVQsZOaXUrKsiPzNtGddWgDjjve30DI4TjGmADk9+JcX5WThMEYa6OugfniKcWMjXZPMUUV6cR7rDQCI0Qu2uX/LYD5/mQNMw64qaqT+4F39mGgUV5eT70jGpoCVDjPa10T2up7CkhNxM2xLpdibp7mihtb2PybkYeouLvJI11FQV43VaFv02iYifvvYmRsIZlJSVkm6YoOVEA609U6g2H1UbNlBe6F32qKALooVpP3KQ7nkn193/MMbOZxnr6KLz2LvUHrubbetyyLUv3n4iPMbRt57mB48/xb4T/VRmtXHk0NsEetLIKy2lIM+DWYVYcJyulh5idh+FBU6mOxtpbB7AkFXD5o1V5LhNp9eYIjA1QHtrM529Y8xHNazObMoq11JZmkPa6dReWmKGvs4eBkb8xLWFXvH5pSUU5Gagi07S29XD4Nj86fpoJLOglKKCbOxnum9phGcGaW9toaNnjFDSiCeniNxMD1m+DNLTXR+jp5dGeHaU3q52OrqHmA2DPT2PyuoqivO9WJa4rCcifvq7Wmlp62HcH0E1pZFTVEFNdRnZ6dbFdSQ6y2B3C4OzFvKLK8m2zdDR1EhTxxgps5fKtRuoLPVh08eZGOqht3eY+dj791SjI5uS4iJ8HiukAgz1dNPXP0lUW0jFU1JSTMGZ7vIpgv5hOlub6egZZi6iYXZkUlJRw5qyPFy25TUNo3Mj9PT0MDwVWSiDPYui4mJyvRZC00P09PQwPrtwLVWNTvKKiinMTUeNTnLywAv88LGfsbu+ixy1m4bavTDsIKugmOLibKwqaKkIU8PddA1FSPeVUFbgWnwuRmYY6m2juaWbMX8YDA58heVUV5eT67EtOs5aKsbMWC9tXVNYvUVUlqYx1dtCY2M7E2E9vpIa1laX43Odv8dOdG6Mvt5uBifDZ/5mcmRRWJSPNTFBT08/s5H3fyPVkEZOYTHF+RmokWkG+rrpHZ4jhZF0XwElpfmkGRW0VJTp0R66BoKkZZVQWeRG0aKMdtXym5/+gJ8/f5Bh/Xo6m+rYu2ectPRcikpK8S66cScJTPfT1nSCtu4J4jonxVUbWF9TgutjJILUUnHCwRCxpA6bw47hvWu1liQaDhGOaZhtdsyGj2pXayTjEQKBMJregsO+VHvrXMHxYfo6uwjasyksKSVzmWVOxsOEghFSeisOm+ljjSDTWbxUbb+fr+dUcOLYEerq6tnz3PfY/1Yx191yJ7fv3M6awvQl2/arUTIWYLS/g8EpjeyyDRSeHQjTEsxNDdJ+6hRdQ9PEsJKVV0BeXjZuRzrZmctNm5UiPDdGd1sz7T0jzARiGGyZlK5ZS01VEWkfOJW0ZISp4S66hwK48jeQ7/DT2dRIU+coUezkl69l/bpyPLazGwUa8cgsg12djAVMlG6oQD/Ty6njJ+gZC6C3+6hcu4Gach/mjxXr04gGp+jr6MYfd1K1pZK0D1RjLRFmaqyXluZ2hsb8xJQ0CsprWFdTToZtqTq/sM7e1kZOtg0wHzOSVVSJM3WhcwWkiAQm6W1rorV7mJn5KDprBsUV61hXU4Lz9K01EZ5lpK+Ntq52jjT2MBtPoM0M0Xh4L2q3DoPZSXZuHobZI/z8+//Gr3c1kMy20Hx0H3sCVtIycqlcW4Lj9PHTkhH84700n2pncGyaKA7yy6pZv64Szwf2W0vFmZ/qp6t7FEPWBkrS52g9VkfLcJK8ym1s31yw7NyzmpYgMN1Hc8NxOofm0TtyKF+zhoqyXJZ5azr36CUiTI90cvJkKwNjs8Qxk+ErpKpmLaX5rg8NhGjJKLOT/TQ3tTEwMkUUO76CUqpqqshLN33It05/NxVlbmqA5qZ+wmd1ktAZ7WQVrKGyII2MvAoylvhuMhxiemCAOXMBN9z9aTYuo9dpMhpkrL+ZtoH5M0/aOqOd7MJKClxRupqbGQ+e9RSpmnB6i6ipzsWQDOMf6eRUxwRJdFgcXsrXVrEo862WIOgf4tTRY7QN+NGMGZSt3cKWdflLpnTUUjHmp/ppOtlK/8gk4aSVnOIqNm6qJtN+7rmdiAUY6WmjayBKyfbrydbP0HWijoa2MVKWLMqrN7Kx2ncBvfM/UCYtSSwcIBhVsac5Pvq5Q0uRiMfQdOb373kfvjCx8CQn9j7DDx57kgNddr75ByWL65aWIhGPEInrsduMoKWIx8KEImBPs334PVHTSMajJHXmZT0rpRIRRjv28cvHH+NXrzdT8fnfx2e68IOnpZJEQnNEUhacaWZIJYmEoljt1oXtJaOMdbzDL3/47zz1yklKH/oyvqV+LE0jlYgQTZmxmJTT+xUmkjRi+2BDWtNIJiLENQvmj2iOaakE4UCAlNFMLNhFbf0QenMWG7dvZMnL8/vfJBkLMx9MYHWmsajf7FUhSTTYyivP7GMy6eALO27i3EG2CnpjOhtv/zwbV6qIF2DFg9dzHW/y/FvNxN33sbHCjfk8V6bofCvPfO+H7NlXS/vAJBHNTLonh+pt9/FHf/r7bC1xLzqxFZ2XbdurMD33PL/7zTvcvz0Hx3mGAYZH6tl7uJvZUBydpZzrthdity2v14eiz+LmB+5ENVs+0MDVmB04zG9+/TyNo2byCvJwm0MMNNfz0nO/wp63iQce/n3uub6YNLMOLTFDe0M9tXV1HDt+kpaOSW75k+9Q4HNiMcBY+3727ttP7ZGTtLb3/v/UvWd4nOd1rntP72hTMMAMeu9EISoLwF4kiqREqhdLllziEu84dpKdK97XdspJ9vF2bLmqi5LYewVBkAABovdO9N57BwZTzg9QEklJJBXHV3IWfn4f5n2/9db1rLWehffWf0Awlk9e5hWKalvoGZpFpNASkfosP/ofzxLj7cLdtsXKdDMXDh8i57aIoAhf5CtjVO
WeomNCgJu7CU+jFpVcijYwnSe3xaBzfhhpwwp9tZkcOZpFv9UNs7cJZ8koReffZmaonMGpL883dtgmqco+zZWiPgRyJULrJK21lXSOiwmI38XLrx4gIVCPXPzp+1NUZZ8ms7gXZHe/LyIgbhcvv3aQhEA9Msc49WVFlJSWUVldz+12JU//9VOIu4q5nlVAU9cqwKfRehG7bh+vffMp4gJ0n1Wzd9imqL5+hsyiHuxSBSLbFG11VXSMC/Ffs5OXXj3I2mADMscUbbUVlJWVUV5ZTUOLkP1//SoufXmcO5tFXdcMhqgX+PU/vYZzkPuXaECIwDrIjeMfkFfXz4LDnU1PPc+BgEhEIjGOpW6uHT1Oye1hloTurN/9BLt2hiC8Y5EJhVa6Sk9x/MYgEbtfxz/QC1tPDeVlpZSVVVLX2ILN+2n+/kcGtM5yVqa6qa9rpKltkLklC46pXuqqy5jqUxOU7InJqEVx11R3OGapvv4hVfnZVLX0MDQ6g03kTFDc43z3L19mXaQJ5SOAbrblQQovneBcbhfaoBjC/HQIRurJOnKeMxfW8Ow3X2LjGi/U4mk6GutprG9jeHqRZcs0nU3VlFgGcTMl4qZepr2ylfLSMiqr6xhgLW+8EMlgZRZnL5fQOyUm9em/5cff3EmoScHsUA2ZZ89R0raMm9GMXmNltLOaa+dPIHANYeu+p9m1PgKDkwSRVIpjsYPMw+epaB7Gdc0L/PT7T5Gg0SEUCrEuDnHr3HmKaruwuqXzl3/zOpsSVHfAfjuDjVc5ejyPeacAwkO8UFuGKb/yLo29GvZ+5xtsSwr5zDB4kCyM3ybn0kVKWudRuRrQOgsZrs3l9NGPMYRt5bkX97I2xPiZkWZbHqb8xgUu5dzGrnLH7OnE0tQA5bmXOHFUQ2TaHg7sSyfE7IKYWVqrirmVd4uy6npaOu1seuVlIhrHKL6aTXldK8NTFqROXqzf8yqvPb+NUHcxjsVubpw6RWFdLwt2N5J3PclT5jA+ZQMSiEVYJuo498lVhsRR7H/Og6AAIau8tvVcv3SBgsZZNAYz7s52JgbruH75FHa1PxmPH2R3evRdAOdXiRCJVMR0VyHHL+TT3Gcl9eBf8doz29AYBAiFDmaHa7h46Sa1raN4p73K9765n7UaZyb6W2lsrKOtb4L5pRUmBluprZAz4uKO3dkNqWOEzuoSSsoqqa27zYJ2C9/99vN43g1eOyz0NeZx8fx1OqaluJtNKJmlo/4WF08fxdU3iSee2se6WD+cZDDZV0dxwU0KiqtpaGpHHXGQJ9a3MdZwjesF1bT3T2IVqAhYs5NX3niRLUn+qP5MQTYrMy2UFLchdk9gTdw6jKZebl4voDm/kcKCMrZujMMz0u0+sMLB1GA7TY21tPaNMbdoYXKonbrKQiZc3YmXSFieaaOtsoSyimrqmxaI37eHMO0keadOU9AwjNJnKz/9q2+zZ1ModssYtYVXuXytmlmRG2aTG7b5EWoKszh1VELw2p3s37eFmEA9YoEEichCR8UlLueUM7gcwMFvv8Gze7W4CITgsDDQeIOruSW0DclJf+bbvPLcTgLdBICdsY5Czp+5RveSlrDIIHSCKZoLjnOyzUHq/mfYtzuRhzClrGrAOk1z+Q2u3qhiwqrG3eiGYL6HW9dO84nVxJb9z/L4lgRMLp8eUBYGW0u4ejGLhiE7erMXTqJFBpuKuXLuGCrPOB7bv4/0pBBc5TA9uOoMyC+qor6xFYnvDnZlDGDvvk5mbjmtfRMs2+X4RG7mhVdfYteGQERCC9112Zy5UkjnmAX3oHU89uSzBAbdKe4nECJiloaC01yrmCRkw5O86BcAgN0yQVPZdS5dLWPc5oyXlxaWxqkruc6Z4+C3Zhv79m8nPtT9oYaiUCxBsDJCWdZ5copakPhu4pXXXmWf3heBUIDdMkldfhY3C6qZlEZy4JXXeGG/K7bRXpobamjpHmF63oJ6rJvGqiIWenVEokCtWmSovoTikgqqa5uYlEVx8MVvEnA3eO1YYbijguxLmVT3LKM1e+MitTB8u5SsC8eQGWLY8cRetqyLRKsQYFsc4XZNMTm5hVTXNtI7Z2TTri30Nwxz9UI2lU19TM6toHEPZdv+l3nh4DYC9Q++8wmFdsY6Sjhx5BK1vYvo/RPZ+eQL+AWKEdqtDLcWcPpiLk39y+j84ti0cz8m/ztBAUIhKzPd5F84Q+2Ikd3Pv4LetYfmhhKKSyuorm1kTBTKvuffINjXFdvCOD1tjdQ0dDEyOceSaoKu1moKJQMYAxLQ6P3uAa8djiW6G3P44Px5soqbGBidYsEixhiQzJMvvMLTTySiewRj1jo/QHXJLQrKmplatOPkGce2nWloV9opKiinZ8KK0GFFqDYRl7aJxDDdfUXv7MyOdlBRkENpwxA2qRq5RIq7bwRhgW4sjE2i9AwnKkiP2DFNc9Utrl0pYnDBztxIC2UFrczOulGcdYif91zCyRjGuvRtrI823NWGg6XZYZqriyit6WTOKkUudbC4uILaEMa6jA2Em1U8qohlTpgD4/HwDiV2bTL11RWUlpRSdOEtinOuEJe2mS2bUonw1/+3NHgd9mUmh9qpLC6itKKK2oZOpJ4pvPC9yM/Ba8civU2FXDybzYDDh7hYP1SLg1RfP8SxUSfW7X6Flx4LfmgBQ4d1ju7GW5w7fY2eZQOxCRG4OY1QV36O8yfPsXbXczx3YCMmNcxN9tNQnk9hSRU1tY0sKPxJ3TSOsvc8569X0jW0CizqTaFs2PEUzz37GBEeAsb6mynJv0V5VTW1jWO4+iWwbdKXurOnyKtsY2R6GZHcBd/QZB576jmeejwR/UOQUtvyLAOddRQXFlNRVUNTxwI+8Xv5q5hgnO5ygC5M9lB+8wq55f2INWpsc4O0NN1mcNaJuIwneOaZJ4j1/Rzkd9iXGO6s5MqZC1R22zD5++CmXqCh6AKdLc3UDS19vbG0zdPfUsy5k5dpndESmxSN1mWcxsornD9xljXbnuWl57bgpRFiXZ5loL2aouImWtuGWXRYEU4PUF9exIKzEIWrmZAlUA6X09Dey9iMBYV6jLb6IuQTTngG2DCF+aMROVic7qcq/zLXinoQO2lwzA/T2tRI77Sa2A17eOa5fST4q1ieG6O1tpjikgoqq2oZXdGRsusl/GZO8+7RXLompUSufwb1z35AvPHhlxu7dZau+qv8+qMTZJc2Mzy5hEjuiskvjA07DvLc01vweWRichtz490UZZ7makkfanMAHi4SZgabKLh2kk9EPmzYuZ/9j6/DpLl7IdtZmO6n4sYFsop6UBi90DuJmR6s5uKR91AHpfPUc8+xba3pK0EUgUCEWCLGMtPKqY9O0GXzJXXTFpLjdCgUX32uOGwLDHbWkl84QuKz3+ebL27EqHiEkr0CB0vTveQcfofrjdMoXM2se/wNng8QIRA4WJxqJ/PoSQqaZ1G4mEje8Tz7tgchZDVq3L4yS1vpCc7nTxC35/sERt37+3brPL2N18g++gk3KroYn5pgfkWBV2gKB179IS/tCubz4
8SBZX6M+sILXMxpR+TsjGB5nPbGWtpHJESk7OL5V54jJUjJwlQPRVlXuVVaSf3tPlbkobzsLGEh7whnsqvoHRxh3q7BNyyVg6//kOe3+H5t4MphX2FmtI2ivEJaBy2onWXMj/XQNyonOn03uzdHrUb2OlaYHW+nJL+SzsEhBgfGscqCef5HLxGi/jQQaoGh9jw+/iCfFV0sT3/rSXwlC/Q3ZnPk2AWu59ykpLoHoXMIBed/y3JjCJv2pLBYn095fSu9g6PMW+XE7vgOu4L6uXLuCsX1PcxaxBi8I8l47AA7U72ROFaYm+xLrjT0AAAgAElEQVSg+GYlnYODDA6OYREF8eJfvUKI5tO+LDLalc+H7+ZicVvDM985QIDSyuxYE5eOHONSdg55BZUML6uR11/k1/+3idiM59ke5/LIurNZ5uisvsLpixUsqc3o1Q7mZmawSyTIdBv4y5cSmBtfbe9ydg43CyoYXFQha7jIr37RyJpNz5IROkdZThmtPd309A8zt6xh22s/Id1nhsprRzmRWYvFKZZXfvAqgTSRe6uewZFhhgZHmbMa2P+DH5Fqvm/NO2wsTHZSeCObipYZZBoNMqw46Sao6FxG5RpG4lr3L3G+OlhZnuR2SQ4F1T2siKTM9d9mRBjCU6+8Slqg8mvOrv/e4rCvMNNbTF7NGCJpFEEhWuZHmsi/WUTH0DwihRb/8HiSEkJw/v8Rw8p/MXht43Z+Ng0DM2jjwzBpZA+OSnQsUnr5CCN6N3zjtxASPcrtqjxuFF6npqaZsQU5//gPzxDmrr7PeyXEFBGOUXmGhptZNAzuwkev5kG1G2fa6mgdncNiAydDEH4eynt4uB8sAuSqL16UFwYLePsXv+BslYK9r77M1pQI3DU2JoZj8TzzG/790Ac0NPUw93d/y5Pr/VGxwPzCCvMDlVy+kEXHqJzApxax3nGeKpzcMXvAh4U55DaN4m//CNFIAD4+qRyM28zY7Uu8995Vzn88gVdUAt7PxGLQrA65wz5OzpHf8OYfigh95qekZaRilM0R5DbBz372HtcyF/BP3MPmRH+UXrbVwpMPFAcjjRf4zS/+QM1iHM++sJ3kGD804kX6W4o48vtrLC0tfzH22rFAbdb7/PF4O4Gpm0mJDcJNaWc0Rsuvfv5rzn7QzKRFxf/6632EmZwQORaoy3qft4634p+6mZQ1QWhVDkZjtPz6H9/k7IctTFpU/Oyv9xFqsGG32RlvKeDypUJ6pwyIPxESZPIkYuM+1iSP0FiaxdX8fFpaulgQOPH3P3icUJMGoWOBumsf8tbxFnyTMlgXF4xW5WAsRs+b//wm5w81M2FR8rMfP0mYbpGFpWWGm25y4Xw+XRMmXC/q8TdqMfl6crulg96eYRaWvirqQozOO5oIk5Ajp2tonQhkyzfd0bmpkMkF+IUnEu9/jnOXa2mbiWDH6/74++hQ3PHSOLn7oZPOML7khK+vCSe1mJmxRVaWhyjPvcqNyh68N2ewsLwaoyiU6Ylet4OUwhvUtQ8g1waQnJZBlL8bLu7uqOUCPg+vtdNZcZFrC774BmzgmTQn5npucvjQRbJPv4NbQBT+HjoC3R/m2LDSmn+MP/7uLeodW/mbPUmkp3jjmAtg5nYBvzzxAcsaP3yNTxBuluERnMLWDcXkVbUyMaYlLD6NLclBuKgNOMummViapK7gKhdv1GI1WjC7zmA2uOHlqaG1o4WBoWmWLHYWR6s49tabHM2dI/WJg2zeshYvVyFz4314Z77P7w+d4pcNnUz96Mc8u2MNBo2R4MgY9OIPaK0rQSndwPT8KmWJSutFWIyE7qIznGitpl/qyej0AvY7RVmtCy1kHj1EZrWB576dRPrGIOTWCXTCXkpLmhgcnf0SzsUvytJEPWff/yOXqh3EpW9nfVI4Jp2MUdM05fl/4ERhA0sSLW5v7CbcrATbGEXn3+P3711HFLKDJ3dvY02gDpYm6QzK4t3fHeLI7xvoG53je689RpSPHCc3I27iMWpL8ylpEbKkVjAcG45PxBaeT0qnJf8Uxy/d4si7EvxCAvDcFY13SDxrAjK5nNlAVY8rYTtfwdXVGfkdD51EZcDbQ8nClAWX6EBCgkxo5CKWxmo488Hv+OTaGDHb9rN5Rwo+OgkLk/34XPuYtw9d4Nf17UzM/g+e35OE2eVBgJEIZ4MvkdHenPykj5rSXvSJY8wvO0AgQ+cZwBrRGIVXztJaU8qsdhtT86vrTunmR8qmdZQU1NDSsYjBJ4qUjdsJMapx93LFMdPL8tIo9cU5ZOY14poQyszi3WvWRn/9Vd7//TsUDpnYvu9JtqaF4CKxMD4QwuVP3uLYpbe53T7C/F9+kx2pgUiVrhjdZYy0VZKXU4dmUIJjLpEw/yC2H4hjcbCEM8cvcv3Mh6g8AvHz9iTa/OdgdrfTV19ETZ8A/01RBHm54GJIJSnuMoVVmTQU5VNWncGa4CTc7kMsFS4+rN2YRllxDbdb5tCZI0jesJ0ITzVavYKF8V5GO6rIvXqRsnYl8yoFC7F+KN1NuLR20tU7wNTMEg7bJJXXj/H22xeY1iWz96ntJIW5I7LO0hOay+F3P+LMu/9G99AU3339AEmhWgzeIYT5OHFiqJmS1kWS905hcYBI6oTJL4L4iXqyL52isnQac8ooCyurJ4xtqYdbl49xLneW9Qe2k5ERi5NgFk/lBI01RfT0rkYkP0wcthnq80/x4eEcFt1i2bYjnQh/d6QrHSz2VvOHTz6mZciKytmVpzaHoBDaGWrO4/A7b3G9Vc363QfYkRGBVmFnciiM7JMfcOTC+zS1DjL9/dfZuzkCmcIJg0HBdF8d+TdKkfiuYFuaJCrQTMa+SNaN1XD59Dnyz3+C1NUbP38f4s2+xMQGk3v1EhfL6pmWrUGm0qH9NMRRoEDr4Y5abMGKGp/gcDx1ahy2aepuneadP55iULGGPQe3kxbpgdQ+R2/YLU58+BEXP/x/6Rqc4DtvPMv6KPcH3skkCle8gyLwcrvAcEspYzNeDE8s4gDkGj3+4fEMN5dw5WQj5aMCkndMY3WA3MmTNampJFTVUlc/jqsxmIT120jwUaM16hFbJ1lcGKetvoisS8UIg9RsmbPcdX+wM9pexIn3/sDlOiFJO55mx9Zo9EqYHg4n59zHHD5ziIaWPqbmv8WBnbG4iOQ46wzILCPUFl6nbsaDJZuFmbVB+CfuITp1iqqcC5zPvMLRJTVm/yC8dwQ/EMAXK93wC/JFK5+jqbycGXkkShdPdE4KRHYzkbHhFORc4WxFJYvqGPTewXgZNKs6lWrQ6Z2R2CzYRC74BBiR2idYXByns6mErIu3sPtK2PD46ncLxWq8QpNITamlvrqaNomBkOg0dmz1R+VkwN3l3ki2se4aivJdCDQFsONACuKFNm5cPEPm9ZPY5HoCQkPYHv3FSPZ7F8AyPU3VVNZ0oPAKxHY7m8sfl9LSWkeEjwwU3vh4TlNx/QwXi1bYOuuEf+AOPmP3cCwz0FrMuSPHKGp3ELx2PQnhnggX+2hqvEr28VaGJmWkPflt/AP0
OAulOGvNhEbH4r40QW1+I+Oj4B+ZyJZdGQS6iJA5eWJwld/Tx+HOSi6fPEl+wxymyCTWRvvhqhQxP9ZMWVk+F+el6F7djOFrGmciqQoPvxjczUHExCfRWFNBaVkZVVkfUp53lZiUTWzZtI7oYOPXjPb9M4t9BcvyErMTgzRXXOd8djch6T7M3bXpLYy2U3DxKJeKVtj97W+QsdEHycoozozSerqC9v7pR6BqszPeU8uFD3/FezcWWHfgJ6RnbMRVPIubYJyqnN/w8SEB7r7BvJDhgcNux7o0QVfjLS6eLQN9EHMrYiJ83EnbeZDkmQ5Kb93kVuEFOrpHWBGq+OHrG5HabSxM9FN76woXymYxBU1iF8fjYYjj8YOJjHVUkp97kxsXWukbmUeiduPFHUE8yDdjt1lZXpxnfKSbshvnye90Zqv7Rlbu+ujl6V5Krh7lZHY/QRsfIy3aE4VgjttFF3jv9+/y4R/6WLDK+fEP9uHrJARWGO2q4OTbv+VKg5j0g99gV0oAbkoHI50VXBqq4OrIMjzMR3+XfqcGbnPlo1/x1sUREvb+LRkZ6WilCxiks1Tf+D988qEVg18Yr23zQqJwwTcija0qHerlFjKvNKNxMRObto1ULzFimRonZ1eE7smkb2yiOL8TnIxEJW1je7gSlbMeZzEszw5QkX2UI5c68N/4BBtiTChFC7SUXea93/yBQ2/1MGuRov3xk7jbl5mfHqX79i3OnMjDpgtG4OrLrIsU30BPuq7X0t09xMIjUZPYmRzqIP/qZUwuHqRsCWRhqJmy4gJyL9Vyu22YZYGKH35jHa4PRS8czI93kXP81/zxVCOeKS/x1M5UzFoFy1PdVBVc5tBbH/NmQztD40t877WteCgFgIOFqR5unX2HDy90EpB+gC0bI/DQKpho19B16wwfftDM+KwET5/vs8bwFQtfIAT7CpP9t+me0ZKybw87Nsbib9ajUcm/FFRbnh+lsfgyhw8dJrtqgfXP69Frv5j592UilCjQewUS6ifjnY9LsHqskPasL156FTKhlIDINUQFXeHw0etYPRPZ9I1IQny1qwF2IikaVwNGZxvDoyICY4JwkQmwz3/atRUmhpq4elqFm1Ms+17ahmCmhWvHPuRS5imWBGYSEn9CrH5VFyuLEzTkH+PtTyrx3vgUm+K90UgtdAfref+Xv+DwO51MLkjR/ewlfCRqdHo3HKOVXLrSgNJ9FNejCvx0Pmw/sAb7dDNZRz/gyqXjLIm8Wbv2h4Q7P7rHcGVpipbScxw5kcuoOJxNmxPw0smZd7bTVvgWvyxtYGL573njiRCkCJHINDirBbTnfcIHuWP4p3yPV+4aYsv8BPXZh/jN2/l4rXfh4HdAIBAikbvgbjSjkyyyJFQTFb+V9VHeuLoY0CiUKAw6lLaLnHz3PLOaMNQhcZxq6GFZrMfHZ4my7NO8d+kGNa2TuHj8Axt8RUikGlycRHSdOMy714fxSfwe37hr3q8sTt3pSw4eKc4c/AsAAUKxHFeDB15GKfNzAlzN8WzbFoXZ1RlXzaPDfg77EmPdufzuX37Jrakovv/TXfjK5+mqv8oHH+Rgeix5de592p6HlLlZAS6ecWzbHo2XqxNajRSRSImbXo+k8Syf/DETi34zaa+OU3r+bX7/0QUyr9chNDQTvuslwqJc0blKKTz1Me9lD+AW+CwHfnLvCrBblxm4fY2PPzhN27Ivqevj8TEoWBi8xbEPztK2IMZvbRLRXuL7vmeFmcEqTh46TF4zxGdsIsxTysBkNr//XQ5L2iSSf7T2IQ5bB7aVOfraOxibtz4CNvblIhCAUCTDzTMAb8PX48v/OuKwWhlvrKJ12o7QxRmJpZBDh+pRmn3xCzIyP9rOjY//hUuZG3nljWeJ/bPYn//58l8KXjvs41SX32ZqwUqU2RuV5ME0FQ7HEkJ1FE+9vIe14SZkjlkGOzcT8Muf89aleq6d+COJ6amYHw/FRXHv5qY0e+GuklHeXEFt8wybQj1QfKU308HEwCAzyyvYAZWbO24yyVdPaPsMdQW5VDV1Mjy1hO0zPlARKjcfUrdsI9LLyo0jf+TQqRKCnn6TPTs2EOi+ml5u9PDGrBfQ2fA93rp6mj/8NoAQ/2+T4O2Cf0QSztLbnDlVQM/Y3Yx7q9QRKZu2Eev3MSWto6xYVUSn7WT3phg8XRUsDPsyUF3LezdaKbhVzzd2haHXrEYILPTkcex4JnUjgby8Pp4QPxNqMbi7vciuK9k0D9ZgEZjY9NiTxIT646J+8K3futjC2ffe5Uz+Ii/8fD87NsdhuJPfZfQwsNiUxZWCZibG7/2/+f5bfPLuScaM3+K727cS6aVBJABHkIHu/KtUvnuT7JNH2borBS+9BvFoAZ+8d5JR99f51ratRHs7ffZ+T8E1KtpyyD55jM07U/BK9yUgOo1t/TmczKqib8qKi2kte57ZQXyoJ3L7LH1tiej/9ed8mFnH1aOfsCEjDk9dCJKxQo58cJIR7Td4fds2Ynw/bced3qJsKlqvc+PUMTbvSMF7ow9+4YlsWhfFycxyuscmmbF4kLZtPxG6x9m+s4VZaQD+Xm5fqT+pysT6x7cTc7GMnvER+vpnWV6yI1BLUbn6sfGJnay5XEZPxRjjU3Yc9s+LKQpFFsZGVwhP3U5CuC8auRSJZwhJzmJqsy5T1thzT1tilYGgMBE+7hqkIgEKJ09CI9eQEK7/bP3ZP5tqDuwiHYkZT7BjQwQmNxXLk9EsdLXQe7KU8qJaBven4O/+ELofxzyNpbeobOxCHGvA7KHFWaVEoAohISYAl7OFVJfU0D+aTojZhKdfGEv+RjRyMUKxBq+gcNbEx+AigpXFWVROapKjLpBT0kDP1AQiQwzbD67H+Yld7GkdQuUdj49+iZKTH/PxsRzkyT9m566tJAS6rDq2PL3xNEgY6mjnzeNXef8tM/5+BrbGe6PW+mB2d0Z5n+UjFMtx1nrh72PESSWl/z4kemmojuKyBiaEwZg9Dbio5AjxJH7TfraVHMVZBQ+lCnZMU5Z5mE9O1xKw54c8tmcLQYbVfcKg2kJKTCYVTQU0NnYyNrUEZgWDtVc5/NFRKsZj+OttO9mUEo5GKgA88TIbcEx00/V/DnP+o3fw8vfF8GQKnr5RrM9I5uy5G1S1DiFUeLF2/RNsSQpC7yRkIFJIT0c3ZwvLqa7vZvu6aEL0fmzc8zh5BVW097fT0znE1NQS6D8FzARMD48gNMawYUMSQSYNYsE0xdeO88nRa1gjvs32XTtIjNCugjeeXngaZIz1dPLrj3M49I4nvr5GHk8LRPmA24NQLEfnbcbg6nSfcSpAIlejN/vjbXS7iz5i9Zla60VwaCAeWhVigQgXgx/h0fHEeokBO4sKCcp1QtpK8ympaOJ+OuyV6SaunvyEE1f7SHvtRXbtXEeQToYAMJnNuMpm6O3+N47nHuYjozdmDz1JwV5Ep6STEHmFnFsNLNrl+EWksG9vKgGeLthmYnGM9dL9/jVqKuvo7ttEpPk/vzq23dJHeWE1C0p/oiJDcJMLEMuDSEl
NIDOnjKz6CgqLKslIjWKt/93GkgCVm4ngkCA8dRrEAhHOeh/CouJJ8JXgsC0z7+KKIC2WvLxbVLSNM7/iRGj846wLkfHEnlaGF12JifVhtDmfM0cOk9dl4LUnd7F9YyyudwbQbDIgnh+m5/95i6vHPsDd7I2H+3b83ZwxeRrRuijvvRsIxMhVLph8vfHQuyAVzNzzvcvjrVRX1tA9Hc5Tnh6rYCIKIlN3sbV+iUlXwRfG94viYKytkLNHjlM7HsErz+9n23ofFCIBOFxYl5bAjZwi8ptv0949xJIjBPFsKzcvHuPYhRZC9/6Q3XsyiDLK78wRE1rVMkO93bx/6SSfaD0wmzzYFO1JROIGEm/lkJ1dzphdhjkonj0HNhLsrYX5ZhQLQ3S2n6ahtpbWjjHW+nnjv2Yz27aUUl7dRtdAP0PDwyxhRnJn3BwLk0wtqgiOj2ddUgAamZCxlhIuHPuE7NsKnv3+bnZvikN7J9/XZDIgWx6nt+u3XD91CJ2HGZPnXoJ1D7gmCkQonDzwMBpwUYsYu+uRUCRF7eyBt5cJvZsCRj+fU0pnd/xDgvE2uiARiNC4mQiOSCAhRALYWV5UoUjKYLC1lqK8Mobva9Y610nB1RMcPlOPacu3eHzvFmJMijt69kSrsTI20MkfT57lsIs7Xl5mdsQbMAfGk5YSz82sK9TU2FFp/UjefoDUKD9cZIvEGh0MdTRzubmB+vpWprYGY3jQ5wtlGAITSV2Xwo2btfSNjzM2No5daEQqUmEOTWH9+lRu3qxleGaWubklxJ+mvwmErMzPYZe7E5O8jhh/V9Q2OZGJGYx0NVGYU0Tv3fqUOeHhE0SQvydOciESiQte/hHEJ0R8djd13HUmOQQqfMLTePqJJIK8tIiW+3AXz9J5+y3ab9dT39TPluivTpkHsC7009zazbwklCd3J9JkqeLkh+fJXZbj+eprHNy1DuXwDYpOdNM9KGN2bukuwNPKSHsJp959k5PFyyTv/S7PPZOOn16BfXkU+WI3WZ/cpHJ+DQl28WpWoECB0TcGo28MK7ONWLvzQOJOZOJ2nnnuIL7381c6LAy2FnH8nTc5WThH9NZXOXBwG2FerkiEYJ03MNPXTX7TbUaXMjD8B8OkhRIlBu8I9J4BRMUn0VhbSXlpKVU3j1BVkEXE2nS2bF5PbJgXj5ig+WcVgVCOiyGApI3rGeoqJetG931v2Jkc7KSmrIwRy3o8vbxwUckAE3HrtzA4A90rj0JxscJofwtlhWUMzcegdTfj4aZGhJLA4AjCvOTkljVQXdPJwXRPFE7uRCSmkdJZzdXz5YyLNZiDknnm2TQCTK6wNEL92lB0v32TI7kVZF08R3xyPPvigkhKS6W96hqXy6aRqfVEpT7JnrRQjC4S5kZaiA315Le/epeyihtcOBNLcqI/kQ/gChZJVXj4r2H9+ik6Sq6Q32G5T0Xz9DTf4uyZXOa8X2D/3i14a4SAHZOzjYGGm+T/voqczPMkbdyIz3odlqkeyq8d4UhmKwE7f8IzezfgdSfVzl3vhGW6m/zM6wxPP+pIWhkfaqc0v4j+mSC2uZsxadWIUOEfFEmkn4orOY1UVLbzwlYvFDI17r4RuLlKGK13QyIUotDoCIyIJyH484lpWwojyE+PTCBAoHLDLzSOhIQ7oIVjgc62Ik6fyGLC8Bw/2bcVH6fV7za7ChhuyOHmr8vIzTxL8qYMXkzTERybRMpIHReO5dG5vIJDpGPzwacxKad5/IlOrOoAQh+Ft9lhA5EMU8h23jiQhLdByfJ0H1X553n317/lckMB548fIyF5DbujHkxpY7dM0F51gXfeucygyy6+/+JekoLvFHz1NOJucMI62crf/fNlTnykwisohNd3+CCwTNJZfYn3D2WzGPQqzz+7ixDd6h1XJ48nY3McJ25eo6Wump4J21eA1zZmx9q5eeEEhd3uPP+jl9mUFon7VwCH1qUpOutyOXc+k9zcW5RVtzBlUzF/9PcIrUt899tPEfww/QlEqLV+JGzaQ+rRHK6NzNPf049NGIBAKMXVGELylj2kHrtO9vgCoyMzSD+jEhRis9pYml/BJ3EXm9YYkAjurs1sRyiW4xWxkyfT4/HzcMI+14mnpIeyvzpBZ0MelS0/IFavAscy4/2VnPr4JD3iA/z4wHYCnEWAAx93FTPNWVz9xzzyMo+TtWMXf7HdSHhCKhNtEbx1pI4VoRA3cyr79qYS6u2CbbYdo6Cdop+ep632JtUd3yU89tG8P5aFcWqvv8e//+4M3cTxvb/dz+7UANRiWJ4xYe/N4tj/zOLYoSgyMn5MhJMIucaDwPBw3IQzzFlE+EbG4fFZFscKs5Nd5F4uZNwqIjkkGg+ZAKFAjt4/gc07oDnrXZRORjbseZVnD/giEkqQy6U4FDFMt7uwsmxF4KZmZWYOr7WbiQzzw1k8QbzfAvXfeYfqgstcK3yVDX7+yNRGgiIi0AqnmV0W4hMVh+dnfbEyN9VF7qVbjFmEJITGrNKCCEQonHxJ3rQT68A1/iBR4x//OK+8cBC9WIhU9ujgpN0yz0B1JqdzWzCkP8PmLRvwEK0Q5quiq7SKabP3Xe3twj6Yze8kSgLjHueVFw9iuNOeRKQhJDqalV4nlpbsOHsEIR+4wrV6FZuffhU39Tm6hXEEm+Qo3XyIWrPILcE0M0sQHZlMsOvn57bdukRv7Tn+/d/+QNlEKH/xd8+wKzUIZ8kyw50DvP23rVgkToQnr7uHJsVhtzDWcZM//t9fcaF0hZ3f/p/seyIFD9ksFX0fsTw/ytDQ7MMdtg4rc2NlfPCLt6ib+NNqcYllzkRs+S5/82rCn62op91uZaSnjxkbSCyzlGXnkZCwi/S0KMw6BUsTnbiKhvnnf/kl/zRh4ef/+3XC3P4bppPdJ/+14LWtn/auKSwrDpzd3JCIRA88jAQCBZFpO1i3NgK9WgRo0bt7oPhOI0UVPRR11ZGVWcEzGb44K+41dsVqHW5qKQLbEB1t4ywv2/hSosrVnmGxWLDZV1maRVIZEqHgq4sVCSS4GLzwX56j5u03OV/czewy+Ce/wndeScJVLWVp4CanzhbQNann+Y2RGFzuBvsEOHkmsvexRM4VdVGTc4wb5U8Q6hGGm1aJ3GbGVS5B+CXlYoQyF5w1EkQiAa5ecaxPjsSkXeXT1BhjiAhwQ3mrjZH+fhYtqxE8AhyMN5dR2z3OiiAalVKMUPipngJZlxzC+9duMzMxj1JvxNPd+SH8fg4GKy9y5lo1VsPTpK3xxvUubmeBUI1/gBm1UobgHvDaSlPeeXIq+zFk9FGcc5kW5adasTM4b8MuEDIzWE5F3RC71vowcud9fXofpTlXaFN9/v7ArBWHUMjMUDmVdUPsTgzA202L2WxAJZMgFGqJSV1HXHQAOrkAUBIav4MXny+hqLaX0vZiCsu62JHgw8Sti+RW9uGa1kfpzSu0l3/eTt/Myp12KqiqG2R3UhC+WiVePh5oFFKEQhkh8SmsCffH7CzGL2gND68oIkAblE762sMUNt2iJKeY7j3xeGkNiAUCXP
wSiAowcKOqiqLcCvq2RuHh7IaIVYqb0g410U+GYjIoESJE6eSKUu2Hp16N7E8iZxRiCkkhZW0M3vpVEEasDSUyxISrWsTE0ADT8wvYHDyEV1KKX1QyGzIEOCWswaj9NDNBgEIhQywSMjM+ysziEjYevDFJFBpcFXLMHq4o5WJkAi/ik+IID/RFI/EjNAYQwOLQdTKv5NPYK+flH0RiNjrdk5GhNkSzbWsyl/PrKSy5SE7RTmICTfi4CBEKBazGYHxRRELhl+4F9pVlli0r9LblcPVGAj6em/DTq5Co/dm4PZ0VL3dkDzkTFofKybyUS7sliGcSIvDWfc6fLNGEsOel7yLxSkcVsg5/DyXYRinNzqKwqgv9+m8Q4me+B7QVyvTEb9lMwsV8Gi6Xcy2rkIykMIzhWmROalRyKWKBFO+wBOJiVg1BAGNwCD5GLQpHL6MjM8zP20AvwtU3lU3rYskrb6O+sICGtnQi/SNQi8BhHaCitBMnr2RCg8woRbA0UsP1rHxqO8TseyEcXy/XeyI5VboINm1O4WpeFdcrM8kp2EZ8uB+Bugdf1AUCIYKvnHB3xu9rTXlgUecAACAASURBVHshCrUzCpUPHnoXlDIhc/c8t9Nfd5MbeWWMCGOICg/E5Cb7fFULpJjCN7IxJZOC8hMUX8umbGsq4QGxOInVqFUKZBIBYn0wsfFrCPFeXbu4BhDoZ0LnLGBgbJTp2bk/Q2FSB5MdZZQ2zuEREUVYkO7OGKgISUwjITqbsqZiKm8VULVjAxG+kQ/hiftcBCIZaicZJrMHbs4qRIIFvENiWBMbQZCnjKDAiNUe2MfJzswlv7gJVUgGocEBuNx1sRRK3YhM20BSXA7lH5dx80Y+6evi8E00ro7lV4y1QChEKBB+YXd1WFewWCwMt5WRm5NPVJArkd4uiJUmEtavZ0qqQfmQb3TYx6ktvEFO4RDeTzzPmijzKnANIFASkvIE3/yhktQxHalrA1EK7QzdLuHmzQK6Ld7sDQ/BT393hJUE98Ak1qUlcLOglsr8XIoy1hMXvg5XsRKlSolcKkTl5kfUmjgi/PWr4+Tsg5+fD+6uIm5PTDA5NY0dEMs9iEtNI3FNHo3ZNVRW1dC1KZooowSw0d9cR/+8DP+0GHzcJDjskzSW5ZGXX4fU/CKhoSG43UVUKZS4EpqURkriDYqq87iVm8fGDckErn9YARfB6nr8ivUmFAoRfq3FKESm0CCTe2E06HBSCO8Dr+2MtK1GWrbNa8kICyfAqLhHz3q/BFLTksnJraSmKI9bxekkxmxCJxahVCpRKmWIJK74hcYQHxeEViYAJHj5+OLrpcVaPcXkxARzVgeGh2TciWQGImJWeT/r8hupra6hf3sYAU5ChFId/gHBBPu6UFPfQG1NLcMZvpgUAhz2adpbOpmxuZGUEImLRIRQosZNZsZo0OOsFN0DXn89EeBiDGRNQhqR/q6r4ycx4usfhK9RScvIJBPj4yzb+dJaKJ/K3FAnQxPzaMwRGCSzFE+MMjUvwys4no2b0gjy0DBr9yVp67PIonVs2JnwWUTk0mQbty4f5uNzTbive439+9fjp18dJ5HMBVcXLc5yAa4aM37+ftwfS7IwOkBPewfzag98AgIwfEmwycJYMzfPf8iHp+sxpLzMM8/tINLH+c58tTM3Ocb45BwSZ7c7Dt0/TQRiOVrPENa5+xIZm0hKfSXlpWVUlJzi30uyCV6zni1b0kmI8sXpT+AX/ZNFKEaucsZgMuJu1KL8kv3TZrNiWZqlt62KG5nXiTBuIcAgR+nmS1zqZjzHnR4hGkyEi8GXhI07YTqIhCjjHd0LEItXQZvlhXmmJiZYsgtwkchx1rnjbtShlgpYdvMkMmEDcaEeq21p/FibsYvx/kZKK96lr6OBsvI29qSkoXd3x6B3RiSQo/cMIXlDEoF3aCjUfjFs2TVNT0MJ1e9W09pQQnXz80SmfnV6vFAsReWsw9PojlGnQcC9kTWW2UFul90gt7IXf5cBCjNPUXLnmXVhgL45EVIWGepro7qmnafXaRjqquP6xWsME8LB1DRMd3HECSUaXHXemPRyeGTwWoizzpv49J0sDHqRFGv6bC8WicUoFFIsS4tMjY+yaOeB2cSPKitzI7RWXOd6WQ9e6cMUXT1F6Z1ntsVheudESFlidKCNyop2XkhPxdVgxOhpxFkuRKbQERiZQmJMIDIB+AfHf43WRThpvVm7cRvx4XeyQnRaXJ3EzA3WUtF4jp7WMm7eamF7VBwPIN9gbryP8uzzFHbZSX06gTh/zV3zWYjK1Ze4DbtJPnaVMy1lZJ6/ye5NL+I81kdx5hkqBqS8+MY2AnWfw0oyZxNJj32Hf7DFsOQWT6T7lyjcscJEfx3Vl89QPeZOxt6drI/3R/0Ap5ZAJMVJ501UwgY0ei/8/PK4fr2A5rqbnFi04WIK4qcvx31hj/zC74hVuPvEsmVTGFffaqOu4Dq3x9OI14sQiJRo3cNIjDFy7tQQVXk36HwpmhCNEBxLTI51UtswT9L+jV+sLSCQ4KT1J23rBkLupNU41G74x8Tg7XSc24tjDPTP4kCFbXGKnuosLt3qwC1titKs01Te+Rm7ZZL2GTFyLEyNdlBSdJvXtxsRiyWoXJxRiISINXqiU9KJ9lN/1k7Amgg8ZKfpnx9icNACjwBe2ywzdFac4t9/8S75fUbe+Pm3PgOuAcQSOVpPI4qVWXqby2nutxLhJAVsLC+N0ds3g0LtS2RsxGdc7bblWQabc8gqG0KujiY6IfrOPVKASAz25S4ammdQOcexNsUfjepzHq+l5UUG2zsYtwpwU2oJidtAxoZg1GJw2CUEx0RjVgvoW5qip3MI8AdsWJbH6OmdQa4yExUX+Vlf7JZ5hptzyCwZQqYKJ2Zt9J17uwChSIwQC221zVhlGsLXpmJyUj1gvXy5OOx2VuanmV1YRNRTTW3nEp5Bcpx0YezYs4vbRj0gQCAUIxSu0FZ7mxWpmojENMz3tWe3rTDe28+8UIy/h5SqG71EP/ZNdsa7kB4bz5jVQJiPBBwrrFgGaGufQarQEpOayKcsQQ67hYmePH7/r7/kfIWcb/7zX/D4+mDUIsDhwGFfZmHRhlzlTdL68M+DmxxWZkdq+OiX/8oH57tI+cYvePXAOrycbEz21XPhfCUOp2DWbQh9KE0WCJGqvEjbuY+QZfufYK8JEIql6HzdH6HN/7g4HHYW5hdxOGxYVwYZW4pg9/ZkPO8EAqh1/qTtfJ7HL5/ifx37HX7RyfzTGzF/NjD9P0v+a2lDbOOMTViw2gQolEqEwodYkwIZzi5qxHe/J5Dhl7iL5NAPqe6bpKO2lpH53XjrlPeAJAKhahWkFawwPjaBzfYgiEyASqlAJFo1iq3Liyzb7avpAV92eAgUeIXEYg4KYLL4MCeyqxiZFLIhchMb1yfjpZXQdbmQmu5JLAJ/jO5yJPcbRAI5ISlr8XW9TPtQK2VlHcxtCsRNKUMgFD6gkrHgzjMBUpkcseguQ14gR6mUIBSB9Q4Y/6nYlpZYttlZnh9nctayS
kUiBJDi7WtGrZAy47A/WkqEY4GavHxaB6fRbw7BoFF+IdVYJBZ/wYB12EeoKq5jaFqAB5MMDXQzfdcqtrrFsGOvN0sWCPZQI2GYyuI6hqbBnUmGBu97XxvDjie8WLRAiKcaqfDT4oeCOzqRoVBKEN19kRcoCE5dT4T5LHUd7XS09DI300tVSR2Dkw60TDE82M3MPf2KYtse02o7Jqd72hEKBAgETugMTsg+82o/mhEjlHmxYUsSR7JqqKy6QUXzfqIC9GjlAlZmhhidWcZqW6G5+Aa1nTsJ83PFWWLldkEOk84RhAea7ytEujo3/jQTSoBCqUZ6d1aEQIxarUQiFmG1LLNisz88ilGgIGz9M/zAextyvTcmnZTpwSaqyqsoul7L4NQSNvUyKzbbZzQcD/1JgQABAuQaN1w1CqSfTjoBgIPxlirq2weZtXuh16lQ3F9RRCDDLzYGPw8dpbc7qa1pZ2xXIj6PTgd2jyiM4UQGe5JTWcbRt/+Vwc4G9j25l/SEAALiE7BJZA9MYwU7Q43l1Lb0IdNtwah1uRfsFqgITtqFITgDsVKNWinDOlNJdW0bQ2MreOt1qJRfTD/SeEQTEWRCp6qlraaOroFR1oZqEdz5AxFSmQTRXSiGSCZHLpUgFthYsVg/2zsEYj0JGeuIulRAa3ExRRUtrI8PJtggYaa7guo+GX7pQfgYlQhwMNFeR2NbH1NWLTqtBqXi/jGQ4hMZSYCXO/nVTTTUtzE0Mkegzvk/Ngh/stxZM19AQ+dprW+gvWcYqasWV436C44IodSdiPAgzEYnGhoaVotXzsbgpFnVskAAYokUifjutSRFLpciEQuxrqxgs9n+88Fr+zS1RcX0LjqT7u+Fmnmm7xjPDqU/UVFh+OZWU9VWRGFJLevWBhNh/HpXl091JhAqcXZ1Rn2fpWZb6KapsYXugSUCU9zQaL6YCqsyhBAS5IfRuZDu2020d/azkGD8D32yTOtHcJAfbtcucfnwr5gaamXv/n1sSYvAHBaBUSB6KFftykwn9fUNdE0qSfJ0R+d0931BgLNnJNv2e7HBJkap1iBjga6WZlrb+xGpo3FxdvoCmCCU6AgODsLPy5Xy0hZaWtsZmk7B1fWO/gCRRIJEIvncGSiQIJPJkEqE2GwrWK3WO3NEhHtwAmvXruF64RlqyiuobdpCmNEb4coQDbXdWOWehEf5oRSCdb6PluZm2nvn8Yh0xcVZ84UxUGoDCAoMxOR6g7a2Flpae5hPM6N5RGfG/8fdewfXeaf3vZ/39IreeweI3sEmkhIlkpKoLm2T1itp7XVZJ3au9zo3yczNxpM4GSdx7MRra/tKK61Wq15IimLvJEB0gOi9d5ze3nL/OCCIRhCktKud+53BDIfnnPdX3udXnvZ9vljcbi16GenrpqtrGMGcQVhY2LpMDZU2gszMbLLSIrl0oZ/urm7GFvYSFS3ceq5Kg1arRbPCo6nV6dDrdSB5EUVxmaJtc2iIyymivKqEoxeO0d7SSFv3ITIqIxAULzbbIot2Hz57P21NjdwY2E9ivgXvbC83eqYRIoopzo1YFcywmTNgq9DpDRiNK4vHqtEbjBhNOiRRDAZobPoEBdSRZOWVYMpIRJy/ztjQEDZTGo+UV1OUHjQSh8Rt48FnEtglagmNCAuecYqHwbbLHP3oBONCDod37aEweSUvsIOFhSmm5zXEV6SRkbmWa19kZnyM/t4xQmLvJyNzvXEbxUVf6yWOfniSSVUeh/c8QHGKAdfCJDNT4wz1ddBY38SEN5m9j+4k9gs0JgtqPeGxmdREJ5NfUsX29iau19VR33iEH9SfJbN0H4cPH2R7ccLnLjD2ufop3GYNoSIsNpHs/Bw4e4n3fvpfWRht58mnHmPf9hyS07YRl8gWlGoN0anlPPOdaO73mkhKj8LnnKavvZ5Lp49T2zuPLEcgBnyrKNMEgn1SazQYDKspFPTWRLIKqilO/w29vXOMDw1iE3cRevN3qNFoDehX3ec0hCemU1RTTvKv61icn2R4YBo2MV6vnCPVOuO+gnNugu6WZqY8kCG5GB4YWPFxAGN8OU99IxXFkkxOvA4xsMjYYAu1jdOYs/aSmh65xuknoFKpV+03d4aGiIRCnvz2/8NOp56kjGj87jkGbtRz6cxJLnfOIcsmRL9/Fd3JvUPBNT9Nd1Mjk26FVMWzetyIaKNLeeobSUimePISdUgKaBGCepYgoDMYCY+Kvje5F1RodUasIStlQoU5PJnCmn0UJR7hzPwc/Z3d2KRybh/jILI4O0JT3Q086khCo2MJWaPuC2ojUXHZVBYn8XbzCIM36uiZfJqMhX7qLnWgmHaSkbXasKTSWEjI2csf/GkpPqxEhq3ugCL7mRlu4M1/6mawT8fT/+aPuL9642KGK6HWmohNL+dgWikBn53xB/dQWfIWP/rBq9QOtXH6k5N85ZlS8u94EKuwRCRQ/sB+ct5sZrT3Mudrpyh/NAFBEfF6F5icckNggcGO81xqfomc3eFIngVGO+vp8RXwve0JG6x7FRqtCcuKyHFB0KA1hGAxCMiShG+JFtTrXKC7/hrDDoUYlZ+RVfIjIVkKePwbUYj6CPJS9UjykiVmSZ9TqTToDSvujyo1WqMJgxpkOYDfv/mpFUSAxak23vvJjzje4qDsib/kuUP5y4ZrCBYg9LuceGQFjeTDt5R4oYhu7NNttPT6sEZnU1IStSSLMq7FCa4cPULHvJqY/DzKy1dkK/vdzPbU0z6jIbq6lKKMlQZ2Ga9ngfbGLgK6EHIqD/DgjqzV/VFERElBJdyKjlYkL/bpVpp7vFjCsyktvdUXt32SK0c+4cacisi8PCoqbvUFJYDL0cP1xmkMlmLKazLv2nANoNIZiMytoDD6Ha73n+fVn3xMwf/7DEnGMLbt+SoJuiXjvOLH7eihrmEKvbmA8pqsNe3J+DwLtNR34pXVqL3jiKnfYn9VMiYtZBRtJ+Nm10Uvi4N1NPR7MYXmULM9adkZ7bEPceQn/4s3Tw+Q98Tf8Y2H87lZ91MWRRyjA4x71IRnV1JTfNP5quD3zHPl3f/DD9+9jir9BV5+6SGSrRILo9d545/+gQ/ajDz5h3/JM7vi7mx6ENQYQ7J58Onse5jR3z0EBHQ6bfDk1OjJr96zRI90Eyp0pkS2V2cReOsUR974gG+/UEze2iy33zN8uZHXih+/P2ggVanVW7isb/wFjSmL/KxIDBcGcC3M4QhI641pggatJnhh8nu9KPJmWolAaGIC4fqgAumYHmPGHUBU2JT/UVBZSEqMQq/VgKAjOjb4bwGZqZFR7F4/sqLc1jBnTswgKdyIVjXN5OgEgUCAuyBH27hPG06qQGhGNkmhJnrGe2hsHOPRklRClqIuJUlCVtQk5eQRbTFuzkMOKNI03d3jOD0BUi1WtOqtiZUiTTI8sojXryez5D4O7M0hbEObiYApLIEwfQ8jo4t4fEvf35NL2IbTI2AKiyfcsjV/lj4indR4KwYduOx2Ar7RYDteHRnFu3no/rzl9PaN2oncoJ17W/Ya0msepCr3E9qH
2jh/8QaP1GQTkajhxvnjDGvzKMicoXWonotXe3mgLAOzpZez5yZIKT5EakLYHSLkvzioVMHLKoqCskVzmyEsifzQaCZ6G/j49UYmXBoiY2OJjgrDqFOzqCjIysbRzpvjlhn2FmRmxydYdLoRZVCCNot1MMYkkRAZgkkdYHZyGrfbw72uOW1IHo89/00GJxZ4+3Qd770+xI3mOq4+/CTPPnOQ4oy1XPxrITM5MsqczYEmwYBmg0wUlcZERPStghKexUmm5x24A/LSxK1vQNBEk5QYhdWiZWxuinmHC5/CHTyrt3eWRefsZlflB1xuPknthVp6DtaQHhVBV+11PNZUsrPTl6JqJeYnJ1mwOwnIkbd9rDE6gbioMCxakfnpGZxON/BlGa83hiLbmZqex+7woSzpw+uHoiEmIZ7I8BBU4jyzM3YcLpn1JdvXYkVq2xfX5WW4p1q4crWBxvoZJhenuPDuSvlWmB9pZdQWQHSPUnvhCq0P7iA7Nv0ejS4bO8sC9llm5xZw+G7WT9hITiOIi4shIkxP9+wc8zY7ni0ZD9dDY05n3+NfoX9wglc/usbRt0fpamug9sBjPPX0I1QXJt/xXPPPTzE1NYtDMqDVajdYu2pMIRHcXI2K5GBudpaFRQ9KJLd5mRoiY2KIjgpHLY0xP2fD5pAhfPO+3HR+rH2m1pxCRXUVJbnnOHKjnsamNu6vScIy3U7XuJeI5HxylwrVBZzzzM3OYfPKxN9mnxU04cTExhAVYaR1bJ6FxQVcMl+S8XpjKIqT+flZ5hc8KMHgrA2kSU14VDSxsZGopX4WFxZYtMkQfac7wc2T5O5Woj40g6LSSoozT3PpRguNTZ3sK9uJdq6b5rZRAuZ0inMG6Wtvpqm5j1152xjramN4NkDKrnKSQn43E6xacrDfHN/moxQwx2VTEp6OxqRnpnGMgb5hTHHVZGRlLhuDBLWekIgYQlb8UnSP0d54lWtNc8RW51JUnLUq00F0TTI21MuIM4KatExSY1eruopsZ3x8kL5BP7EVaWSuM26D6Bqjs6WOurZZhBgvI51n+eVPriOLEgoCGp2eiMydVG0rpig/5Q6O43uDoNIREpVG5Z4E0tISMQZe4Ue//IzmHgeRqQWUFn25xuvNYInOYe9jz/Nc7wxvflrHe78cor3lOtcOPcGTTxygMi/mzg8BNIYQEjPyCJ0bpvnsr2nqnkMXHospMo4oqz54R1SUrXODCnpCw2JJTgpD6fDjdTvxKsodbwQqfSgRsSnEh2mY8ftxuVx3+MVmUHA57EyMT6M2p5FTej+H98dsHLuk1mMNjULlnWR2rJ/heYjWmTBvNX3pDlDrLcSl5WJdGKP14m9o6JhGHRqLNTKe6FA9AgqK8nkiAFdCwe1yMD46hWCMI6t4H4cPblT0LBgtbAmJWmcUWz6r7hnr7xCCxkxoVAaZKVZOTfpx2RZwy9zeu6L4cLtmGJtwoBDJxvEwKoymEJJSE9DRi8sxzeSUixjfGIMjLlTROgwbWJ1VGh3WsOjbXOtEbLNDXDs1zrAjl4KOflw7kzFsNRZgiTs5ddsOngg14J/ro+d/XmR6+AYDU9IWjNfBrJbkvD08UPFL/vlaP5dOXuaFg88S5ptjsO0KLfNp7N8pcOlGN+dP1fPszvsRZ8dorm0lpuIvyN0KvQuAAMJSliNScI2Dgs/rZmxoHFlnJa1gD4cfS9hYflQajNatOjlWOOC2IOiiZ5H+2g9569MO1KGVPPTUo2SGrx5XwOthvHcQm6wl2RJP/BL9i+h1MXmjgR6HjuSqErYlBP9fDjiZ6DrDR5914dOYicsqJz9+mbALn8tBT30TcyozJcWVpK5c/4oPp62bhpZZzGG53PfwQyRbVlBhBPzYRweZcAkYoqLIzksCQPJ5mLpRT5ddS0JZGfmJmqXvu5jsPsMHxzvwaUzEZVZSmHDL7iKLHuYH62kZkQnPKaYs37KVSV4HlcZETNYDfP2ZCpr+5Tpn3vkXflaQxf/9QhmWmDRuPlUWvSwMXqd5RCIsvZjygjXtKQHc9g5qGyZRhFA0xjQOPVrBRuWNAj4PQ/XX6HNrSSqtpjQ9qK9IfhfDDe/y019dwWvZznMvPkzyivUg+j0MNjQxoRioKNtNbkTw3SiSj7mBU7z602OMeEJ49tAzpKu6OPrWBc6cucqQLZyv/9XLPPXYA6SF/R5ddL8gCCp1MFhIENDq4snOjVy/v6rUhEVHolMCjHdfpGUoQN623+/Y6y/VeC0IOvS6YISJJAbumfgcwUBkpBWNWgV6PTrVBgq0IgVpQJRg9NudTteQ7GJyYiy0jizgmu2gc8DB3uw4DHeoYK3TapcjoVXqm80oSKKEgoIcWGR+wY8oKazVhlW6UEItwcjg26fEfzEIyXyIrz11moHpE5z69WvU5Mfx1J4sTOIg5y60oURu59mv7CE5ynJH/lVFsbGw6CUQkPH7fcjKFi0OcrAgpRhwgy6etPRc4jdR4mSfF5dbRBLdCLoEUjNySfgilD7BgsWsQ6MW0BsNqFQ+3G4RUfQg6OJITc8j8S4KRHweGKJK2b+nkGN1fTSeOUfX13aRaJrh+PFB0mqe4aFtCj/42VHqzl5m4KkKNO7TNNsSeSw/leiQ3+/NRvKMc+nob3jvRBfWtFKqqiooKcrCxiXCzFom/Hd+xt1AlmVkRUER7dgWvfj9671PKl0IFrMenUZYjlK6Zwgmsqsf58/+rZWE9Dd5+8Oz3Lj6CUP9nbTf6OWP/+xF7q9I2yR9WcHlcuMPBHDabHh9Pja7n0MwekCWFRRFwm6zLxVF1a7e/wQtFosJvV6D4LtbOo310BhT2bWnmo9OXed80wXq2x+lJHWG2iYbsek7yUiNXJ5mSZaRFRlFcmCze/D5ZNCuiUfSWrCYDOi0X0SmwG8LweKvigxeZzCNzq+wLhpQZzZjMhpQ3zbi7XcMxUdv/WU65kLZdXAXJbnR6/fzmmKSoj7i+OlaBlsvcLX+INvLUkn7Avc8RZaCa1GRcTkcuN0eFPRr3rUGs8mE0aDlprJyz9MnGEgpfIBv/YWRqKRf85v3T9DYdIKxwW7ab3Tz4h++yOF9RURskocreT14vT68HhGXy41PuoP3GnlJ3sHncuJ0uTakZdCZTJhMBjRL/r/Pp+cbSCuupLKigPPN16i/3kjHgXIi+m+wIFkpL8xfosQAZHl5r3A5HbhcbhTWFo5SYzKaMBl1fDGZO78NKMhy8M/vduF0OvHKYFkzz1qTCZPJhHYpouu3uhYFM5mFJVSU53L+zQ6aGxrpe6QQfUc9PRMK1Y8/j9j3Pj98q4OWxiYG9pjpaR/EKSRSUvrbMax+EdAazGgNgGJjfHSQ3gEXsbmpZGRunurqnhmhr7ODYXcY+9NzyMpYTUHhmhlnqLcPlyWetIysdZQgt4zb4RsatwF8C5OMDfQx7o2gNL+KfQ9UkGhQoVZrMRgthEZEExsXQ7hV/4XXELgFGffiOG11Zzl56gLNXbOk1DzFY5V7eaAyhTuoC18qVLowssof5k++ZyQu5de8+/EZ2i59wlB
vBx2dvXzr5T/gkd0ZrE2WWgs54GCo/QLvv/spffZwKu7bQ0VxHiGeMAYvWGAL1QVWQ0Cn02E2G1GpRbQ6A9otLF5B0KLTmTGb1KhlTTCD4nNADATwuL2IsoDOFEHutm2byrxvdgiP3YZLlIiQAgT8X5A5WXQx0nWZD97+hI45K+X37aWyOI8IeYDJK1bg8xjp10MUA3jcHiRFQGMIv+O4gXvX3bcMNTqdmbBQE4JahUanvyOVpSJLSJKCGPDjWlzAJYFuzUDUWi2m0JDgOhVAEETEgBOXO4Df72Bx8S4DyAQdkfHZ3HdfEu/89Dzv/eIVUpLieOlw7t3tBYKGkNhMah7cT/5rlxkVvXi26skXdITHZnHfwRreOHeczrqT1A88TIWmn9oLzSTvf4mHkk7S9u8/pfXySdrGyggbaqa+28Khr2+dMm5jKMiSiMvlQZZ1CNqQLcnPVmKp76YPLtsUdSc/o2tRR27hdvbtWBtNLuJyTtLU0IOst5JaVENuVJCX2+Oy01HfjFNrIbP4Jt+1iGNhmAufXWXG60dviiSnvJq45YNbwu2coLG+D7UplqLK0lX3EdnvZn6gkfYJiMwuZ8/OxFX98bud9NY3MSkZSEyuYkdFBKDgdTvoqGvCoTWTWVKxxHct4bKPcOH4Faa9fnTGaHIqVvYFAh4XQ431DAcMVORXkR2+RYfEOqgxh2dx8IU/pbbxr3n9Wh1v/egVSor/O0+U3jrTRa+HoYbrDPn0lBbUkBOxuj054GGu5yoNI150xgzK9z9OSeJG60rG55mj4WorPq2JnMqdJBsFQMbjHOPUW2/RMCVQ+MijHKhYqcuIeJyDnDvZiqSLoGxXzTLVd6VFGgAAIABJREFUiOhz0nHmHU60LaAyh+FfrOOD99VIiprM6sc5lF9ESXHeEhXx//8gqNSExscRohYIqIyYbsOXqNbr0AgyHu8kE1MSbPsdd/Qu8eXShqijiIrUo1EruJ0upE2joTdHUCkTiE3PJsqoXx8lpXjxeCUUNIRHRaDRbD50fWQVB/Zkc6prkkl7P+dONPLsjmRCDXeoCrrhh2oiY6IwajUo4hgDfXZ8O+T1J6kiI8kAWuKTEtHpfnuVXzSmDB598Xsoej3/62e1/PpH/4Puq9lEGDxMK9X86ff2c+ihPMLNdxYRQdCg1aoQVAqzY6M4PD5kLHdOORQMGAwaVMzT2dqPzVlIXIhpgykUmewfQR0uo9GrUTFPR2sfNkch8bf7/sAI2ug4wlZwTt0Wig+fT0JWDCSmJmE0hWIwBNvpbOtn0VFCQuhGFZ8lJgeG0UbHEmo2bfDge4AQSvn+fRS8fYljbee40vQNrIOnaXWl8Ux1BVUhbs6erOVY2xnq2vfS11aHOfNhslNib0/h/nsARZ7n0gc/5h9/8DaO1K/zb/YfZl9pPEYtdN5VGuVWoSIsIhyzQQ/SNCPDCzgcIpjWrClFRpEVFLREx8ViMhm516uUHHDjlS1kVz3Kt+MyKa3axZGP3uPIqUZOv/9TMMYQF/11KrIibrM2BAwGPVqNipnBLoam5nCJievSHUHB63SiaPWoQyIItRrRq2WmR0dYtNs3XHvKkuPOGhVLpNWCQXX3KuWtbupIr9xHddFx6jvbuHy5mUK9lz5fDNuzs4gPvdlhFaHh4VhMRgRphvHROWw2EdYVf1WWDJvBqFSz1by2xS8dgiqEsDArJpMa5+wEk3OLOANgXDeUYASKoIkgOjoU61qL2u8YAUc3Vy61IkVU8fzLL7OvMHwDQ46X1uQAUyMDHGvo5crFWg7sqyK54ovjYtNawgixmjFqZGanxpmbn0difbE4eSn7whweRWRYGEYVBDZ84uaQRS8+yUBywf08H51KYfl2Pv3kAz7+7CpXPn0DSR1CdHQ0D1XG3/YSpNLrg2ewd5TBwVEmZgMkJK43iMgBN96AgEpjJjQkFItJg3t+iunZOWwBBfMay6SyLCNhRESGExr6+WbZGJVPVXUF+SdqaWu4Tt3lVEIW5tCFbaMgN2b5LqQxWwkJtWLSKizMTDE7O4tExG3egYIpLIKI8PDPqcx+8RAEE1ZrCFaLDs/kLNPTMyz6FSxrrQNK0HEmqEMIj4wgPOy3qZwIhCTmU1pRSdanLXS2NlJ7MQ3jcC8eUxZPPHEY2/lhTh77Oe0tDZz5TMI1YiM8fS95ib//1d0l7zSjw30Mz1ooeCid1E37rOBamGdmcgbZGktiStqaIAORmbFR+nrHsMY+QFpGGgbZi8vjR1aZsZrUuGcmbvFdrzBuy34PHl8A9FZEjwe3w4m4xLF74MBDxP/OiiVKOOdHaL5ymhOnL9HeP48+KoOqgy9QU13KtqxUIkN0v4eOn1sQA35EjGRVPswfJmRSXLmDYx9/yLHTdZz64BfIaguxid9ld+Ymd2jFw0jneV79x7/neK+FR15+gccOlRNtUTPbeeMee6YE+bj9AfSWMKLiE9jKFqkoEqLow+9XYY2IIC4x9h7bD0Kt0WDQ63AtzDLU0818oIjoDeRL9NqxLS7i9AR/o8aHyznL5JQHJfdzyoDiZbz3Cq//4//go1bY/63/yOMPVxNrVbPQP3zn398D1GoNeoMOt22ewe4OZgMlbOA7QvI5sc3P4jOmEBey/vPfBhRFwWCyEpucurlMCDr0xggiIwxIQy7mxoeY8iqEmzc4i2UZRTBgtsYRF6tH7TFi1Ms4Fifpau8nsLt4A8oFmUDAi8ejwhpyay8UBA1hsbkc/FoF2tERXjlyitd+lEhCwr/lsYoNAgc2gUptwByeRHykgRlD+F2dX3pLNLnVB6hO+4yLw42cPF5LWHoDdUMxPPf9g+T7BGrSjnGx7xrHjjdS5q7DlbCX7Tn3FqF7C0FqHINBj89tZ6C9lWl/FfEb+JHkgBf77BhOfTrxW1DTtwzFj9M2REPDEIrBSkpB9bpil3LAyfTAJc43LRIet52Hn34guLaVAC77II2NY+gtGct8137XAt11J+jwWtAEwBwST1lV7nLUuCJ5sM+20djhxBK+m/Ly1dkKfo+LwcYGxiQz1cW7KYxbyRfiwz7fwemTNxBC4tn96DMUx2qWqDgGqW8cRWdKobB8qS/uRbqvfcYNjxmtH0yWeMqq81ZEsMt43XM013UgG0IpqKrk3q4+CgoCKo2RxIKHePm736St9x9oa/uMN3/9OPuKHyUYqCzj9czRXHsDSR9CQfX69vxeF13Xahn16UjI38WjD+Vs7LBX/Lht7VxtmEZvyqRiZzFGFSiyD/t0PZ+d7EYxRFF63/0krHB4S34XQ9c/5JO6WYwhD1CzPXGZasTnmafhQgMLGMjIP8gzD1cQY7ISFZtIclIcIXdbKECRcNu6+Oz9c4y5pc/luFNrjMRve4DDe9N/a8ZYQa0mLCePVJOKHvn2TjBFllEQEAQdut/XdLEV+FLVEpUmiayMcPRagfmZWcR75PtUZDsTk4uISgRVe6uIshjWDUzyLrDoCCATTVpmJLo7EF4K6jge+NrX2JUTg0HtovbIa5xsGMXpuxeTj0BcUSmZkRa0LNBS147N5V
031oBzmlmbF0WfRXllCua1hrYvFCosYVZ0pgyeePGPef7ph6guLaCwZDfPfvPbfO2JnaREmu5AcxCEoI4jJTkco15grLOWjvFFPLclYLv1/4ImkdTkUAx6P02nP6SuexLXBtEK9pELvP9pA1O2CFKSwzDo/TSf/oja7gmct/n+B8camJhzb0meRNcoY1NORH025WUphIenkZIUhtHgp/nMx9R2TuDYoB3H6EU++rSB8dmttbM1CETm7GNPRTph6lHOHnub1167RkzJTgoyE0jI2cN9FemEyN18+u4veedigMKyXBKjzXe5mG/xaMh3k9J5j/BM1vHRux9zvslOZlkVeZmxGJcicO/UtrJV7vVVEIjKyScjPhKTykZXayczc/Z1BlvRPce8zU1AnUxBcRqRkatvUYqsrOugKIrI8voOecZruVjXw8S8itj0Ug4+8yL/13/4G7738gEywuzUn79M5/A0t99C1CSmphAZasU1eZ3T5xoZmHCu63PA0cvZU1foHppFsGaSl5VAZKiaid52BiamcYprfqC4mZu34fEKZBQUkhwX9blTmY1RhezZVUZGPLRcPMav3jiLEJdJZkbiCp5fgciMHNITYrCoHfTe6GFianHdeCT3PAs2Fz4hkW0F6cTGbMERtOKiJitriWskRCkYAbvZ75S7ESrBQmZ2JgmxEUiOAXp6R5meX2tWVXAuLGBzuDEnbCM7PZHIO1UF/K1CYrT1ErVdbpK3FZGXFYVBp0O37i+E3JrdVJXkEqn30FV7gbqmTmY9K+Zned6Ue1qPGksKmRkpxEVqmRnqZWBkDNu66fOyuLiI0yWTnJNLWkrCctSfwM19YHXDiiQhyevvDb6ZNuobWukdl4lM3MbeR5/nX/31f+Sv/+QZypIk2utqaekcwrXJca6PjCMuLpow9TzNVy9zvbF3w7XV23CFq9fbmfUZSUlPIyUpGtk1Ql/fEOPT62XEY7Nhs7vQx2SSlZlG7Od0cAjqULZVVFFRmoFvrInP3nuDy4MQn1VESsStK7HGlERqWhqJ0TrmRvsZGBxiYW22i+LDbrdhd0okZGSRkZFyx8jL5X5wyzC/EpIkIUmb3ZuCMrVloRJMJKWkkZYag+IeY2hggOHJdQPBY3dgW3SgjUojIzNz06yuLwJqfSwFJeWUF8Yy0d3Ax2++xqV+LymFOylMSyG/sIyywihGOy7x7hsfM+C0kF9RSNhdcX3dOreVJSfD7wLeuQlGB/pZ0MeRnJZBwh3S10VRIhCQMIVGEB0TuyoKTZEXGRsboG/QT1xCKhkZ4bgmblB38Sz1vW5AZGZiNMh3HZdMemZaMMNF8TPSUceZk2fpX5BR63TojQYMBiNmq/W2xeoU2YvLacPmuBc32NqHidimezj7wY/5r//p+/z9jz9h0B3N7sPf4rt//l1e/uZT3FeeTdTvueEaZOZH+mi8coUxn4GY1BIOPv0Sf/Xv/gN/8UdPkWWapb3xCvVtU5s6uQPOcVovfcK7R5oQzelUbC8meonIdSvENLd5Kg77PBOTTiLjUygoytuSA00JOLHPTzDjtJCQuo2CvHssXgKAgMUaSkJiDKJ9jLbrZ7jYMrduNIrkZKSrnhOfXmFRayEsLpEYo8T81Cg3mjvuQHt157kR3dN01h7hNx/V4dalULW7jFjrivn9HMv/ZjC7grLiriRgtlhJSo5Dck7QUX+K842zG4zbzVhvI8ePXGL2iyHb3hyKiNezwPSsj4iYdCpqCu5QvFBLaHgCBQWpaEQ706Pt3Oj3rvtWwOdlYWYO0RROSm4Z2dEWrCEppKWYsc+OUHvqE5rG1qeE+uxT3Lj6KRfa1lfeFNQm4rN389WXX+ZAvpa2S+/x01feoH54ffubQ0IMOPHIZpJySsi8QxHzVX3QmIlNKefBfdn45oc59fZrfHCqFmPBY+zeFktsUgn792Xjme3iyK/e4Pi1Wcoe3Evs504VEdAbzaSkJSB4F+hpPM6J2un18iP7mB1t4ZN3TzP9hcuPiN83y9S0B53eREJaxpqsLBnXwhjXjh6hyx3Fjkf+gCfuS0ZDUK4Xp1pp7fdjDQ/yXSsBJxN9V7nQ4CM7WWZyUUNYTCFlRSEokhevX0HyuZjqvE7Hoo7YjAoKktRIPg9eEYJR2TM0X+8CUziFNdXLRY0haBhvP/sOJzr95FY/wwtfrSJUvcR3PdVCS68PS1gOpaVRKKKLqf4rnL3uJSdFYWJRRWhMIeUloct9QfHhXOzketMcJms6FdXJCHIwm3yrCHhsdJ17h/M9fkBAZ4ykeO9X+eZjhSjueVouXGDEu/TeFD9uWyd1TXMYLOlU1qQstXfzvBXxusaou9KFbAiheM/jlCRsbNuSA17mu6/RPC5jjSykqjwCAQnR58c21MyNCT86fTS5Bam3jL1KAPtUE++9+gFddjXRGVUUpy4VFUVGDMwzNLqIoDGQXLCPRx/az97d1RRkJ9694XrpqVLAwUhPB21t7bS3f46/jm5GZ9330IetQxC0WBOqqcq1IImzTE5ucA9SIOB241dUmMKyyU79nUUB3DO+3JgaIZzy6nzCTFpmRgax+wL3dBZ7p+u41jZHRN7jPPFgDqGm9T4Mz+QY0y4fushiinNCMKyttrUOKuKLHufPvvt1KjMicI5e4Cf/+xVONIzi8t/+RiLL8oYcwNa0/Ry+P49IK7RfOErT8AIeceX3FCaa6+md9ZGx82n2lcRhWeLbUiRpSUmRWM/IsbkCI8lBLlxljYFH8vTz4Y//kZP9cRx46qt89avP8eTjj3Lo4H6qi1MJNWq3fvkWwqncXU5ChBXv1DXef+8SgzNOpOUGJRbmF/AFAiiKH583SHUgqKKo2lVKXISZ2b6T/OyVX3K+ZQz38vwGmO45wy9++B6z6jCslhS27ywhPsLCbN9Jfv7KLznfvPL74vL3Z1ShWMxrje8ykrT2/SiMNl6lfdRBWs3j7CxKJNQUTcWOEuIjrcz3n+IXP/olZxtHVrx3kenec7z6o/eYJASz6WaByiDnl8J6Bf5uoDaksnd/NamxBjouvMfVyRiqq/NJjDCg0adw3/1VpMZoaDlzHJu1kIKsJEI3sDAoLKWJs+RVW9knQYfBoEGtFvC6nHjFADIgL8na8hgUJSjTa8Yjy/IqLsM7jdYz3kPvyAx2rxe324d4sxKW4mF6eh63J4CyRPOx1EH0ej0ajRrZ58btCSBKBKPplqKIlxu92Y81bVoSt7P/vmJSYnT0Xj9HU+8oNt/qdz/T2U7fxCIxpQe5ryKLGKsa0GI06tFqBGzT48zYXMsGZ9E9Tm//OIsOH7IYCBpJl54WsA9w8Wwt/WPziAqo9eFkFu3lK996kurcBPB7CATETZRBgbjCGkpykwlTTXHu/V/w2puf0TFqR1zSVFwzHRx58zc0DjtQtCa02gR2PLCbgsx4/NONXKnrZmxm9SU5YO+hs2cMp7aAfQ9UkZkUurTxb8ZXLq94txu8YFUIRfftpjQvGe/wZa72yCRnZCwVarwFc0Ile3eXkhFvYKD5Es2dA8x7V7+D2Z5O+kfnC
[remainder of the base64-encoded PNG omitted]\">
<br>
<center><figcaption><b>Figure 3.</b> DCNN without Atrous Convolution & with Atrous Convolution</figcaption></center>
</figure>", "_____no_output_____" ], [ "Figure (a) represents a DCNN without atrous convolution. The network is constructed with standard convolution and pooling operations, which make the output feature map smaller after each block. This type of network is very useful for capturing long-range information. For example, in figure (a) the final output of block 7 is a feature map of very small resolution (256 times smaller than the original image) that summarizes the features of the whole image. However, such a low-resolution feature map is not useful for semantic segmentation tasks, which require detailed spatial information.


By using atrous convolution, as seen in figure (b), we can preserve spatial resolution and still build a deeper network that captures features at each scale. This is done by increasing the dilation rate at each block. As a result, the field of view of the filter becomes wider, which is required for better semantic segmentation results.
With a distinct dilation rate r, the filter will have a different field of view. As a result, we can compute features in a fully convolutional network for objects at multiple scales without reducing the size of the feature maps, which standard deep convolutional networks would otherwise shrink by applying strided convolutions or pooling operations.


**Note**: DeepLabV3 uses atrous convolution with rates 6, 12 & 18
", "_____no_output_____" ], [ "## Atrous Spatial Pyramid Pooling (ASPP)", "_____no_output_____" ], [ "ASPP is used to obtain multi-scale context information; the prediction results are then obtained by up-sampling. In the ASPP network, four parallel atrous convolutions with different atrous rates are applied on top of the feature map extracted from the backbone, so that objects at different scales can be segmented. Image-level features are also included to incorporate global context information, by applying global average pooling on the last feature map of the backbone. After applying all the operations in parallel, the results are concatenated along the channel axis and a 1 x 1 convolution is applied to get the output.
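

A minimal sketch of such a module is shown below (again assuming PyTorch; the in_channels of 2048, typical of a ResNet backbone, and the 256 output channels are illustrative assumptions, and the batch-norm and activation layers of the full DeepLabV3 design are omitted for brevity):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling: a 1x1 convolution, three atrous 3x3
    convolutions (rates 6, 12, 18), and an image-level pooling branch, run in
    parallel, concatenated along the channel axis, and fused by a 1x1 conv."""

    def __init__(self, in_channels=2048, out_channels=256, rates=(6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_channels, out_channels, kernel_size=1)] +
            [nn.Conv2d(in_channels, out_channels, kernel_size=3,
                       padding=r, dilation=r) for r in rates]
        )
        # Image-level features: global average pooling + 1x1 convolution
        self.image_pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_channels, out_channels, kernel_size=1),
        )
        # Five parallel results concatenated -> 1x1 convolution to fuse them
        self.project = nn.Conv2d(out_channels * (len(rates) + 2),
                                 out_channels, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [branch(x) for branch in self.branches]
        pooled = self.image_pool(x)                      # (N, C, 1, 1)
        feats.append(F.interpolate(pooled, size=(h, w),  # broadcast back to h x w
                                   mode="bilinear", align_corners=False))
        return self.project(torch.cat(feats, dim=1))

aspp = ASPP()
print(aspp(torch.randn(1, 2048, 32, 32)).shape)  # -> torch.Size([1, 256, 32, 32])
```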
", "_____no_output_____" ], [ "<figure>
<img src=\"[base64-encoded PNG of the ASPP figure omitted]\">
f7lX/6FgwcPYsx5huZ6UEoxMTHBhg0bvHjp8Xg8Ho/n58IC0w3LSy9ZJictixYKGg3L6QmIYlh90vLcfxpeeMHQ6UAcuyiJPXssR48KfuEX3LH9K68aDh12l504Ac+/YAiU5JJL4OhRw0MPGw4ccOaiKHIZvQcPaqanLdddpxgbs/zkIcPk5LldMVI64XLrVsmpU259fXVYv2622KuvTxRu8p/PJe7xeD4ezJ3zstd/U7R+i6J9HNvLhDRWI3HuPp3lVGo1hAowxXi2RCLsbCO3zg3TzRZBoKhXY0ThprTCtYWbbklQIZDG5RIIQaPZcA7CYmHdlnOLy3dUxTq7reLibVZDSyEmFgKeKR6bNcV4uaQo6pEYa8mNceHQRW5kEAaUShXCOEaJgEAFzjkqlHuXxiLeVgDkxtNlT1gVQhVKppOCC7sktnCbuv3tCoKEgDzPkEIV2xAoGaHCgLhcplSuEJbKhKUKSsXIIEQqhVBONEZITPHkSCkRSoGZTRb1eDye9+LEiRNMTk6S5znr1m1geHjBfC/pI8O+fW9x8uQYY2NjTE9Pz/dyPB6Px+OZF/bueY1//n/+J08+tYdaNeTpJ2CgfzbzUinFTTffwv/6u//bWaU9xhiefupJ/u5v/waj9c9uGm00g4NDfOlLX+Kzd9191uVpmrL98cf5n9/4uyLyy5EklmNjlvFTmoP7lrDx8nv57B03z+GjvnBYA52O5fRpVxx72WWCu+6SDAzAzIzlyBFLGMLllwuuvlpy9IjlkUcNe/dblr9puekmyW23Sh59zHD0KFxyieAzn5EsWSKYmbG8+KLhjTcsIyOw9TNObHz9dcMzOy0vvGhZutQQxTA05GIszoUQMDjojjqnpyBLnUB54qTl/gdyTp+GagXWXyq4cqPL9fV4PJ9s5ijz0hbtbBbRrRsvnJbGNfX0xsgtkGUpzo4pe6JiV5jTRVO3ss5ZmGQ5pycbgKVeKfdGpoVUGGuQylkFi4F0oiimVrE0G42icc66MzVCOOemtVhhi5xHtzRjLVIKZ8DEYqwu8h+7mqwtXKFgUtdknlrtRL9AEcUl4iAiUN0xcIWUCmucs9HlYFqsKca7bVeIFEUWqBNru65Uuo/Izu6X2RzKQrC1bl8ZGRLXh6iUK0RxhTgqo0olgihEyBCrFEopQBYj5wJjzKx7sxgTNzov8kE9Ho/n/eMK1SyDg0P8+q//Ljfe+Jn5XtJHhq9//S/5wQ/u7+1Dj8fj8Xg+iRwZPcixV77Lr6yYYmEtdL0DndnLfzqWsENIfuO3fvusgpEsy9izZw8Hnvkh//XKs7P2pjqa147E7Fy25JziZbvd5s3drzL+8k/40oYzG2nMAjhZyXl9eoBXd132kREvuz4UIWBkRHDjDZLly53xpdOx/OIXFMZY6nXBwICgXDL89CVX+nX6tBM2ly2T1GsWIS0D/bBurSsB27/fsG+/+86yapVg0yZJuewyKt98SzM+bhkdtdxwg+Keu51z8p3oRmC8/rrL6z11Gp54wo2qJ6kbZz94yNJswK23Kn4mCtHj8XzCmBPxUmN6IqW1gBFFsLgpMi5nBTmspdPpUI4rgHJN34XT0WAx0hIUveLaOqHQpIaTp2cQRlApBSBd1iPS9HInu+KitZYgDCmXK7Sa05jc9MbHu7mV3fuzxtklrQVt6M5uO6+8EORFAY+xpngMAiVDAhVRLkUEUUQQBAgpCKTqVY879ym9M4OmK0pa1yju2r9Ntza8qB+3s8U8VsyKukrQ/VWEgxb72ZDlhoGFi9HaUC5XCcMyAoWR9HIyuyP23fsx1tBoNDg4Osrw8AilOEYK6fIxXVjnXLwkPB7PJ4woilm58hIuv3zTfC/lI8PIyMLi5JLH4/F4PJ9c8jyjT3X4zKoyy/vPbkxOs5yn8/yc8TTGGIzWLK7CL6yunHX56VZO+7Cm026f87611lids2YgOOftj05nzBzRdDqtD/DI5pcghOEhWLBAEBRH/eWyoFK17NsHr+92o+NTU5aJCXccn+W9Q+GzMAZmZmB6GtIUXnvNcvKERkgnip444W53esKZhxa+QznZ25macvdrDOgchoYEn71d0EnguecMx47Brlcsl15qWbnSuy89nk8yc5N5iS0arY3LvCyESFs0ZFshikHrov3bGJQKAIEstDqMRchihNpoJ1oKsEJirCTJLKcnm9iBCnGlRCBAWFuUArmsTIPFCjdiHZYjSlRoTTdIkhSlFVEcY4Ur0wmKjEpj3buzKLIrDa6dXBsDBpSKKEXOyRiEIYEKAUEQBEVxkBsFtz3Rz4mxTn/tzdIjrCjyNS1Wdj8RikxQC1a4BnXJbBFRb+wd91hdvidFlihYoymVS6S5RYQlRBC59vGui0cWDeS6EEplMZ4uYXBoEIQgCCNM7vJDJdo9Fx6Px+PxeDwej8fzISCEIJCCUiApnaN4Kgqkm5h7x9u765zrtnEgCZUhPZcaVyCFeMfbl0JBqD6aB0hSujzKbplXnsPBQ4ZHHzUcOmRJEtDa/c4y53R8rzE8Y7p9F257M63ZGwwNuQzNUuxKeA4eNLQ7596OwGVaLlrkhFUpoa8PtlwnuPZaVyqUpnDihGFqynLypGXFCnFOUdXj8XwymBPx0ha/pHRnvxCyyGykcBkWzd9Y0qxDFAUgLKbrfBQCKyzSGmcSRBcincBa5YQ7697ATk3N0I+lr1ruuSfdyLVGKdV7MxVSUSqXscZgGoZOuwOIYpxakFvtZEqDEzGNQViQKiAulSmXI6IoQkpBKN1ukkoiZFGY0zVCnvEOP9uoTiGkImcdobYoBpptXe92iEOvs124zE1btIkbXPMawvWQC9H1XjoR1iJ7wmlutKtNsgaN2xHCuv08eztLoAKq1ZrL25QKLUADhV12Ll4SHo/H4/F4PB6Px+OZJwRnOihnZiyvv2557TVLHMPmzYJPXSppNCw7njScOPEe2xNu1DsMIY5cZuatt0jU20RfKZyAeeKEZccOw8TkO29r2TLBtm2SWt1tMwigXhfEsTPyDA6ACpxYmmX+GNXj+aQzJ+KlNK6x23bLegwIKQu3JQhjQAnAkKYdyqWSa8p+2xi0m4p2bkcjixFqARhd5Ea67eeZYWqygQSqlRgp3Hi2xSKMQArlchxx1y+Vyk6c1IZ20qEkXJmQ0XkxIS0Io5hqpUoYhaggIAgihJCo4ixfVwC0Zjaf07WnSyddFg7T7kPpujuFFUgkxoqepOmmt4smcSwIiQCUU3F7zsru9kTRUNQtMer5OwuB01rjMiyNwaKLEXHryoyg53btpC3aSYdKuYywgrSToI0hT3OMgTzLCALlWocusjNaJ06c4Fvf+hbDw8N8/vOfZ3h4eL6X5PF4PB6Px+PxeDwfGdIUJk47Q9DixbD5asnq1ZI9ewx5Xhxf2p/xshQj3da6Qp1aHfr7BVNTLqdyeFjQ1+f+vn+/O15duFDQ1y9Yt07QbJ57LUK42w70C6xxrsuZGTh0yLJ+vRtDP37CFflEA07U9K5Lj+eTzdy0jRs3IW1673YSYwxSSTAGKZy7Uecp
wrpRa1vEQLpi7Z6X0Al7RmCVKJq4Z7Mbu/pgkhompmawFurVUiF8WrRxcqlUyjlArUQqQaVchVxjWk7bi0olgrIrshFSoVRAGAaucVuI3ni2KERAo7UbaxcKJRXWFkU8s1Kiy6gsLJkGgbXaiamFm9G5KfPCaBnQaic0pqeZmZlBKcGiRQup16tIFSCFckU9hVjqitNN4eKUhbg4W7xjjEGbnGKAnSzPmGm10HlOvVan02lz/OQJkjxj+eIllKMSaZq4D6NMo4IAKYsCn7l4QcwxjUaDhx56iJUrV7J161YvXno8Ho/H4/F4PJ8gurmIPxt7aVxy2Xsew3Rv/7MY445hL86joLklCKBUdBJNTcOuXYZDhy379roRcmvh1GnLwYOWahXCCHINR45adu40rF0r6O8XXLrelfvs3Wv58X9oFi8WHDli2bPHMjIi+OztklWrJNu2qXct7AkCqFYFUQSfulSw81nLzmcNJ064Y+f9+91k55LFgiVLvHI5H6Rpyq5du3jwwQe5+eab2bZtW69Q2OP5sJmjsXH3SeCiKw1KujMowlgkRc6i1uRpThhESKEwVvRcjdjZfExBkXtZZDz2WliFRFtQQmCFIskMU9NthBDUqqVZ8c0ajHEipBUgrUAqSb0+QLXahwXCIOiV6XRthqKoHnedPS4LMzcZxmqkdGJiWFSimUIkVcX6XFlRd190NyuKs1bFfLmxWCk4enSM8fFpJk5P0G63ydIcYzIWLxlh01VX0t8/gMD0xtORhYhL10XZFXjdh6w2rh3d9nJHLZOTkxw7fhwlJOHKiJlWm0ajgUCQZ5pcadc6nuUoCSbLMcIilUS/n0//DxljDK1Wi06nc86gbo/H4/F4PB6Px/PRxeKm8s71VV8by1tvWf726zlhmJ9xWZbl/PR5Ta7PfVtjLZ2O4cUXDX/9f+ZnXd6aydn1gqGavdN9f3xStapVwaXrJW+9pTl2DJ59ztLX54pwbrhe8sxOw5Ej8PTThptvlqxcIXjjDcuJk/DkU4ZWS3L7ZwWbN0umG5aXXrI8+6wlji1ZBv19sGGDYOlSJ0hG0fsTuQYGBDfcIGm3Da+8ZvnpS26HSwmXXiq4+WZJf78XzOaDPM85ePAgP/rRjxgZGWHr1q1evPTMG3PUNp5jrC4yLt0YsxABAlEU6FhMbsjSlKhcgW52o9VFSuRs6Y0t8jKRxWi1cUU/WONG0Q29hvHEaiYnGkgspVKECizWaLqZyl3XI0KgwghJ4Q4VAilkIUK6X8VdApAkHdpJG0NOFAYIIYmiGCEtSSdBSkUQBCBwbs9iRBuKtXVby7FY7cRXY+DQgSPs2bsPq11RT6ycsDo5k3Dw4BGWLV9BrVJDRWG3n7xoKBcglBs0txZhnEhsdOG8LNyhQgiEteRpRp6kaAE2N0ghUEKR5zlaa/I8L9rOIcszqrU6GouVEqu7RUEej8fj8Xg8Ho/Hc2GJopj9rQp/9P8doByefSBycsaS9VvefEui5JlKojaSo8cDXtvf5te+e/is22baMm0WMLSqn927z1YhsyzgyImYvW80eONE46zLO7klGlrNrw19NCa/pIRlSyVf+RVXwtPfLyj8N4QhrF8v+fIvOzdllrp27+XLndi4arVgYsIyOCBYsUKwaLFgeFhw6pSlXIaVKwXlkqC0CO68Q3Hlla5IJ02hXoMlSwQLFwqq1fM7mAwCWLFC8vnPCzZvtoyfcoW2g0OCpUsEIyMCpS7AzvK8L/I8p9VqkabpfC/F8wlnbtrGhcHgCm+s6DooNRLQWHKTY3WKUgKpXPM4xX9tMQw+m+VosVb30oUFTpBzBUBF9iOqlw2ZZjknTp5GCUGn0yHPNNVKhagUMTg4SKUSIyROmTROCNRYsAYVKHJTBHwI5zq0uabVbpHplKgUYIBAKKQIsMaSZylxXMJaM5sJ0s3rtAIpXGt5d8bdWEOWafbs3cfowVFKYUz/QJ1QSYzNmWk1ybISE40ZTh4/xbLFCwkjhbGFM7QQVbXQrrioaAw3xhTj4haTa7TSFNPuzqlqLVobrNHIIk8z14Ysy1FCkbQ7aK0BQckal5EpVNGePr/q5YkTJxgfHy9EVsvo6CgzMzNMTU3xxu43mJqaAiAIApYvX069Xn+bk9bj8Xg8Ho/H4/F8VLjmuuv4i7/8v3np5Sbj4+d2OpYrCxkeCjkr+NCGrFl9B9dd+wPe6RhGBRF9tZVU62dfbm2VSy75RSZvuPTs2wuoVmHjFVVuumntB3tw80ClAmvWnH1sJIS7bO1ayfLlzgAThU7ctNblSua5y7YMCpWgry5IU4tSTuDsiogjI4LBQUG61sXDBYG7/IOa8sIQFi10YqnTyCxhOCu8ejwez5yIlxg3rt3NChHWFehoctc8nhvSTkIcxU7XMxahLNqCEYIQeqXbRrh2bdc0jsuLxBaOw8I1icX915AZzenx07SmmgitiUsx08EUYRhydPQY/YMDLF22hMGBPpAWifP+W9xouRSy52i01pLrnDzLEVJgtGsIDyM3Zm4Kl6WxBmM0GA1W9GRYrHOPClwzOUjanYQ339zD2NExYqUY6a9TKZUpVcqkmRN0s8wy00pptTrkeV40truxeYEbwUdarDBIKIKTnfPSGI3RTqjEOpeq0cYJf0KSa02Wd8jylDxLMUYjJZTiCG0MRkjSPMdqQxhLZFEgNF8YY/j+97/P/fffz8TEBABJknDgwAEOHDjAvn37iKIIgIGBAf74j/+YW265hVKpNI+r9ng8Ho/H4/F4PB+EoaEhtv3CLVy3xdLpnO+tFbCi+P1BUMDq4vfZdBuwy+UPuPmLECkpHs/sUZ8Q7rEGP6MOvNv4t1Jnb+fn4ow1+FHA+SBNU44ePcr09DQAnU6Hw4cPkyQJY2Nj7Nq1q+gIEVSrVZYvX947Nvd4LjRzIl4qlBPNuv5JKV1wstGARQr3ZyUFoii1cUZFJxzmxiBM0bItRTF27gRMIV3eI7jLXa6mGw031pDnmuZUE91J6SuXqcQxURQSBIqZTpuTx08yPTnNkmWLWbxohChUxSkhS6Y1SsrZ9GbrbNHaakplQbUWkSbSCZLCXdZzippuxmThIbVFi3ixCwyW3GTs2bOPo6PHqMQlhvv7qJZiquUyKgwRQBZGCCvQmXMZ2sJd2m0XB4l0m3dZnMaVCGmt0cY4AdJYtHHOTCMsWqdonRNEIbnOnGipc4QAo3PAEAQKRUCau6bybpbkxRBYPTAwwMqVKxkYGABgZmaG48ePU61WWbFiBZVKBYB6vU6tVvOuS4/H4/GcwYsvvsiDDz7IkSNH3DSC55wIIVi3bh333HMPGzZsmO/lfGxoNBo8/vjjPPbYY8zMzMz3ci5q4jjmxhtv5O6776a/v3++l+OZR8LQjTB7PJ754/jx43zta1/jySef7E17Tk1NcfToUf75n/+Zhx9+GHATkJs2beJP//RPWbHig544+HDJ85y33nqL++67j/Hx8flezkWNEILh4WG+/OUvs2nTpvleTo+5aRu3omgRd8KgNhZphRtFRpDrHKWco090m7KtAOE
kz0L7K/yUAmuL3mtJ0dRtUUq5/EhB4UZ0IlvS6ZBnOX2lMkN9NYJAUiqXCKKQWq3CVKPJqYlJDuxLODV+mpWrltPXV3cFP8KNtgeBIs9dFZoxFivBSoMmxRK4oOfU9JyY1oC2enZk3BQZl5Ze8ZDODEePjnFy7CT1UplapUoUhpRKJaJSTFB44K0RReitdePtOEdlT7W1RYO66XatF3dlDFrnaGvQ1iJ0jiiCO3OtnQhqLCZzDecSV46k89w9H0KiDegsR6mAWWl49n7mAykl27Zt45rN15BrF6p98OBB/vzP/5xly5bxe7/3eyxduhQApRSLFi3qFSl5PB6Px9Nqtfjud7/LN77xDVqt1nwv56JnYGCAKIpYu3at/zydI15//XX+4R/+gYcffpg8P7sgxDOLUordu3ezaNEitm3bNt/L8Xg8nk80YRiyaNEi1qxZgy0MU2NjY4yPjzMyMsKaNWsA9969ZMmSj9T3hqmpKe6//37+7u/+jiRJ5ns5Fz21Wo2pqSm+9rWvzfdSesxZ27gtJqe7rd2mcA5aY2gnHSphgEVirCvrQQgEoLq+SmFnW667TePa4iyZLuuxK4Z2tTVrLK1OB6s1gwv6qdVjjM7pegfjOKaUpPTX60w1Wpw+NUWr1WH1qhUMLxgkKpyJoQxcIVBRsiMJSdspSadN0tFYLUnTDuW4xODAUFH0MyvxGWuKdbnHrnPNifEpTo9PUi1V6KtWCaQkjiOqfXUq5QpplmE6FhUoyuUytVqNUqns0kAL5+msm1NjpARd/F0Uo+tWY40uBFWDRYNVGANSKPLMoHMnDotiDN+Y3O0jocgzTZZmEBnXqI4TUuf7nOfw8DDDw7Oh2EIIyuUyfX19rF69mtWrV8/f4jwej8dzUdNutxkdHeX48eMADAwMUa/3zfOqLj6mpiaZnp6k2Wxy7Ngxsiz/SB2EXMycPn2agwcPMjExQa3WR3//IFLO97eriwtjLM3mFJOTk2f8e/V4PB7P/DE8PMy99/43vvhLX8Ri6XQ6PProo4yPj3P33Xdz77339sbGy+XyGcfsFztpmrJv3z7Gx8dRStHfP0i1WpvvZV105HnOsWOjdDoddu/ePd/LOYM5ES9zNKbwTWIMsvibKCyVNjOEcYimEMjAOQp7YqRwI9iAMNKJgNZiMQjrxs6dAFfkShY5mBRj1Fio1auUayF5npNmhnaa0eoklEoRVWuxVtJozJDMtNm/dz9aZyxcNIIMBOSgZIDOM+IwIJCKNBM0ZzJmZjJq9T5qtYg4ipBKOYeidQKplLIoFCoawK1hutFgdPQoA9V+anFMpAQqkMTlGKVUr4gmUIooDFBKEgQB/X39SKGKXE2NFAYpi7F1W5QaSedY7WZcWqPBCDDatbNbMHlOliW9tnJjLFmWk2cZJtegNVoU+ZzWkqcJKElk6wg5NzGoHo/H4/HMB918aoCrr76Or3zlN9iw4cp5XtXFx3PPPcm//Ms32bNnd7HPzHwv6WODeVsczx13fJ5f/dXf9RE3P4Mxmh/+8Nv8/d9/zRVQaj3fS/J4PJ5PPGEYsmzZ0t7fW60Wu3fvJooiRkZGWL9+/Uf286zrJAVYunQF//2//zX9/YPzvKqLj1OnTvAXf/G/c+TIoYtuemRuxsaFcl5H282EBCtxOZh5ShS4bEshDUiFtSCFGw13LeXuz8IKFG6U2gqDkO4aLmNS96rnLBYhFVpbrAalJCoQFLPghGGAEBKtLUmSk+eGOAqxtTKt1gy5zjlwYJQ8N4wsHKJSkQhhkEFAHEeUS2XyNKPRnKFcahJEEWEoXT6mcCPjYFFSUavXybOULEtptdu0OwnHj59ECUt/vUwliFBKkmSpK/3R7nHk2pAkCVnSxpocBQQCsjTF2AApBFIK50sVAqEkTpmUSGGxxrWbF7PlbnTdCnTRHG60QUkwNkdgCZUiVDESd9bB2hydG2zuhGAZRE4XnYsXhMfj8Xg8FwGDgyNs3Hg1mzffMN9LuehoNKao133G4IVm8eJlbNny6Y/swd6FwhjNrl0vzPcyPB6Px/MJpFKpsmXLp714eQ6OHRulVrs4J5bmxmZXjCUL4dyIQkhM0T7e6XQol0oYAbJoD5dIpLVYKcAYp8HhtDhjjRuN7m4Yl4VJN1eyK48WY95SOhek1k6wS9IUYQXGQFq4LwOpQICSkiiKSLMMneUc2H+IVqvFsuVLiEsh/f39xGHM4OAQlXKVLE3Ye2AfSZogRFE89LYynXJcYnhoGInk1KlTtFsZWdKmWu5nZGAhtVJEJYzAWmp9fcRxjMk1Wuegc7I0pTUzQ6ed0G63sdYp26I7ky6K3FBjseROxBSukd1aU5zZt675nAydw+TUNNZqBvoqBCroNaD31aoIYVFKkucpaO0E0F5BkEYYgxQXn3y5ZMkS/uzP/oxarcaCBQvmezkej8fj8Xg8Ho/H4/F4PJ4PiTmbERbSFdmIQuBzo9SF+CglUqleHqOQRau10a7wxlqsEIU4WAhqxci5E9MEBoNE9rImu+3bpSikZV1pUByVaXcSsjwnDCOCCGJt6LRT185tdZGtacFYgiBg7OhxWq02tXqVlSsEA/0DWKuo1fopl2OmphtMTJ7GGN1bX5plCCGIwog8zSiVqvTVB4iCMrVqAykkM60WcaSoRjFZlqJUSH//MHmW0m63mMxOkqZutFtIRRCHyEC6x28MVkhyrZ0D00DRaY6xORbIc02eZ064NKB1xuiRMWZabRaODFKvxEipQFqAEcEAACAASURBVFuEhSgMwLrRcufSdA4Ai0UoiZWWi9V6Wa1Wue222xBCoJSa7+V4PB6Px+PxeDwej8fzsSaOY2666Sb+x//xP1i7bq2fIvDMK3M0Nk5RsuOck1gnRGZJQhSFrnzHOoefLop9pACkcxZ2R8ONsQRFC7k1GiWVy2wULkPTFpmXzpQoEFIQBgFSCpRUBCogjmNUoAiDkFoYoquWVqvD1PQUM60ZtDbEcYjONGEYU67U6SRthOigCInCEuVSBaUUYRSzYuVq+vv6ybIEMICl2WygdU4pLtFptUg7HeJSmZGRIQaHB1BK0Wp3yNOcUCm01pTiCpVqnenpKZrtNq00RQQKIUOSrEMYB6jQCXPa6EKkdPmg0oBFY4r9Z4scS22NE4ytJctSmo0GSIm0uPH7Yq+5J8kAxu1pQbHtQiwtTK26cL1ebAghfImAx+PxeDwej8fj8Xg8HxJKKZYtW8aiRYu8icgz78xN23hRWOPkMIrqHjfGXSrFCCkBgbFudFuKrnBmUcK5KLUzWGJcywzghFAppYt1NACyEOGceVIiCcOYQAVYaxBAoELaSYcsy5HtDtZCoCL6B/qp9lWZac8QBiF91X6MgUqtRlSKaDaa9PcPsmjREur1OsbmtNsNAgV53qHTbmFMRqVSoRRHNBsJeZ4hhUDrlE7HUqlUqNf7ictlhqQi7aSuxdtKQJFlGSpokxvD8MhCpBB0Wim5NkRxjDGGLM8Qsltq5ARfKaQLfhecIWB2G9tFkQnqfmTd/hVhMY7vxveFcLeXVmBxTk
thilxSa5G4NyMhuq3pHo/H4/F4PB6Px+PxeD6pyCJ6z+OZb+bGeWktFPmJ4Ma5Ta6L2e6inRuXp2gLAU4I29UhnQhnTJFfKXpCKEXzpSuFkgjjtmSlQAhBrl3uJUrQbDUplSNyNGmakqUpaLDaosKIcrXKwMAQURzTbrVBKtatX4+1lqGhQaamJqhUy0RRgBDQajXJ0w6nJ04zeugwjekGYRhQ7+ujVquhdc7Jk+NEMmBwaIgkaXPKnCSMIgaHhwjCEqI/REpFmrp8S9FuUyqXWbZ0Oa3mFI3JSUqlmHq9Rorl+PFxSlFIGCoEEiEKJ2ZRwuOyMIsW98LtaovLrQAVRqQ6IccQi96T040KdVmgFqyw2MLNKYRBGDcO79pZ7Vy8JDwej8fj8Xg8Ho/H4/F4PJ6fmzlzXhpjcREITjXL8pQgDHCOS9EbR7bGFo5Al4tprcAIC7K4DFC9wh4wxgISK7o/ccIbQiKkwBpLUI44fvo0udHEpZA8z122poUojqn3D1Cu1pFBwGWr1yKVYHJqinK1SrVapVarFLdLSfOU6eY0E6dPcmJsjKmpKTrtNo1GE6MNnY4mSXKM1pw4cYI4jLBCkec5pYobDV+wcDGlSkhcqpJpSyQlYRQQhm6EPI4COu02CxYsolSqUOqbIcPSmJlEClkUAonZ5vYi6LOX9SmKTNHuULiQSKGplEs0TzXJdO6yMKWkqFByje62ECmxSOn+j3bD5RQOz9k79Hg8Ho/H4/F4PB6Px+PxeOaXuSnsKUa/jbHFyLMly1MqlRpCqsJF2b1y4dKUxeizpSdmFm0/RaFMMSZtTCHYUWRn2kIIddcTAsrVCuPTY0TNgCQJqVQrhKWALMuIKxVUELJoyVIqtT7CMKR/oMbA0BA6N8Sl2I2nB5JAKJrNafIsZWxsjNHDh2k2Zxjo66MUV2i1OnTaKcY0ybKUqakZojgnCKcJAkkQxnQ6CUknISolqCAkzw1BGJJnGhUoavUaUxM5g8MjRFHE4PAiwpMnmJyZQgYGqQKEdPvCdvM/6e4v3L7rCsFFiY8bC7eU45gkSTC669S0vQIlWwiSThTVIBMwAovC4sb6pZAIW8zvezwej8fj8Xg8Hs8c8fJLP+Vb//iPNKab55z1CsKIKzbexB13feW8U6zSNOHll57mkYf+39nM/7dhrWHBwuVsvf3LrFlz2Xmv/eSJI2x/9Lu89dZPUersQ2ilFFdt3syvf/V/8Vn9Ho/HcwGYm7FxXCt4V0jTRrsGa0Rh4rPFr8I72C2aKeyExtVoo8CNjhuLOaM8xolvxro8R2ssSiqMdc0/calEEETMtFrYUhkVhCwcHGJkZJhT46foJAnT01NYKRkeHiJNOwgscSSRNifPDdZokqRNq9mkPdNi/MRJJqdnaE43KEUxtWqVIAhpNFvEqSbXOZ1Uk+oEGTSIQ0UYRsSTExw+eJD65AQjCxYSl6t0WhopJXEUIaShUqsQxyXiUomJ0xOoUNFJWkgBUkikfNtHrnCt7VaAbie0p5uU6jVEpHr7zhZj5GGgSJLMuVXt7Li4a0hyztYuxhrnzrTuJWCMRGuNlAbO+XXC4/F4PB6Px+PxeM4fay1vvvkWT//wH/n82nOLe6+N5/zDiy1OTf4yQpxfq3G7PcWunz7D1BsP8Lm1Z+fzHW/m7Hx+KQcPXcKGKy497/UfO7Kf15/+FuvjPaweOHv7b57K+O7BE3z61i9yxeUj5719j8fj8bw7cyZeCgAhMBiSJCWOYpCy124trAGCYmSZolyG3riy7WZcaosQkiLZsWgYp3B3Wjf6bHoyKBiBUJK4XGLy5GlCFZEkKafGx5FSMDQ8RBTFJFlG0pmh2VSUSiHSWiqlEkmaYQWkeUqeJuRpQqfTptNJaLfaWKDdbhOFAWmak2YZaZYXrlGXuznTbqNNyOnJU3Q6TSYnJhgYGGB6coJqvY9KpUoUBUxoTRSXCcMyURyR5hkGTZYnlKIArEDJwl0KxaN3LeHSgklSxvYcZOmalZRG+qFwuVpjEdr5J6225LlrFccaVPHBbwuR0xo3Lm6FRdsUKQxYiSUAYZFScDE2jns8HzbGGDqdDnmen+FcllISxzFBEJzz30qn0+Hf//3fee2119i6dSvXXHMN5XL5gq2z0+nw4x//mFdeeYXbbruN66677rzuL01T0jQliiLCMPT//i9SjDEkSUKWZWe9Ht/tuUuShMcee4wXXniBLVu2cNNNN1GtVi/YOpMk4fHHH+f555/n2muv5dOf/jS1Wu09b9doNNi/fz9jY2NIKVm2bBkrV668oGv1nB/WWpIkIU3T83oNOsHiTX7yk59QKpW44447WLVq1QVbZ5qmPLvzWXY8uYPLLruMbdu20dfX975uNzo6ysGDB0mShKGhIS655BKGh4f9++JFgLWWNE1JkuSs118YhkRRdM7nKcsyXnrpJR555BFWrVzFnXfdyeDg4AVbZ5ZlvPzyyzz88MOsWLGCz33uc+d9f9ZCnmckSYJSilKp9LF5DXaShLptcM+aJee8XOQJT+1vcuiwmY2zep+0WhlTp9qs6zPcsyY+6/IDk3DizRlGj0xTfe+3hLM4fixBdSa57ZKAjQvP3n5N5vzzyYTDh1OuuPz8t+/xeDyed2dOxEspBRKBLrIthTYEQrmanm42oygUy+ILh+zlWlpkz6Ep3Ci0deU+TrdUhWuT3v8RYDAUndyYLCOKQjJtmG7OEMYRuc6ZOD2BFJJytUrfwAD1ep1AKVRRX54kCTrPaLdbzoVYjKlLNZunqaRzM7bbHSde5galQjpJQhzF6DwjSVOiQNJsNsnzFGMMxhq00YzkmlZjmjzrEEYl+gaGqNUNFkWStJFK0Fer0m5NkWW5c5wW9+0eqnEZlzInKEkCLJ3pJqWhPqRSWKOLbneFISuKjCC3loCi5IeirbzIFLXWIgOBERlpmpGnbpRcLHQCqMfjgT179vKNb/xf7Nq1C+1awwB6B0oLFizgqquu4q677mL9+vW9EaEkSXjmmWf48Y9/zOLFi7nyyisvqHiZpik7d+7kwQcfZMGCBWzatOl93V+73ebxxx/ne9/7HsePH+crX/kKn/vc597XQb7nw+fQoUN885vfZOfOneR53vu5EIIwDBkeHmbjxo3ceeedXH755cSxO7BK05QXXniB+++/nyiK2Lx58wUVBLMs48UXX+SBBx5AKcU111zzruJlq9Vi+/btfPvb3+bVV1+l2WwCgr6+Ops3b+ZXfuVXuO6663qPxzN/HD16lPvuu49HHnmELMt6P+++BgcHB7nsssu4/fbbufrqq894Hzpy5AgPPvggtVqNq6666oKKl3me8+prr/Ltb3+bO+64gxtuuOFd39fyPOeVV17hgQce4Omnn+bUqVNorSmVSqxevZovfOEL3HPPPYyMeCfVfHLy5EkeeOABfvjDH5Kmae/nQgiCIKC/v59PfepTbNu2jS1btvTe5/I858033+Tb3/42W7Zs4ZZbb7mg4qXWmj179vCd73yHa665hltvvfW87+/YsaPcf//9PPLII9xwww189
atfZfHixRdoxR8+kRL0lc59xFGJJEsWw+fulEh1fkclExOSZ5BU9qlzbr+/pBjukwxuEHz2rvM/4tn9quT5Y4K++Nzbr8WucPVtL0+Px+PxzCFzIl5ON6ZIsxQZBBijiSLlchaLfEZtnUDpSmMEQkh00SSOsAhrEKjZwnJhMUX+ZfdEo+g6NYvG8q47URRnJ/M8RSlJkqVMNRukOiOMIicylmJMnmG1JgxDQuXuv9Nuk2cp1uqeo8A5qhSVSplypUSWaYSARrOFzjU5AhVYcq1ReY7WGmsgzzXoDCkFpmxdVqcx6KRDp+lEyVLJMJHntFtNgihGBTGlOKQTBURRiDEaISXGZEW2p8KV7VgsHaYbEyipaU5NUc+WYJQrStLaoI1Aa5BSkWQJ7UwSWgk6cGPxWpMXjh1jNVGQY0UTnUt0DmFYPmOs3OP5pNNoTPP888/z/PPPs3LlSgYGBlxmrLU0Gg1efPFFHn30UZ566in+4A/+gC1bthDHMdZaWq0WU1NTZzhELhTWnN/9WWvZs2cP//qv/8qPfvQjdu/ejTGGm2666YwDQs/FRaPR4OWXX+app55i6dKljIyM9F6P4+Pj7Nq1i8cff5ynnnqK3/md32Hr1q2Uy2WstbTbbaanp+l0Ohf+9WgtnU7Hld29x/1lWcZDDz3E3/zN3/D666+zdu1arr766p4A+p3vfIfTp0/zh3/4h1x77bVI6U+vzSczMzO89tprPPnkkyxcuJBFixYhi2mR06dP8+qrr/L444+zY8cOfuM3foN77rlnVkDKchqNhvtzrt/tbn5uut/npqenabfb7/oatNbyyiuv8Ld/+7f85Cc/YcGCBVx//fWUSiXeeustdu7cycGDB9Ha8F/+y5e9E3geabfbvPnmm+zYsYOhoSGWLVvWe/11Oh12797N9u3b2b59O7/2q7/Gl375S9TrdcCdxJmenqbVarmT+ReQrkN0enqamZnzv7+ZmRl27NjBN7/5Td566y1qtRrtdvsCrfbiZOFCwdatCnWe4uXx45KxA4JTB97ZpVoqCVZcKtm29fw/T0qh5PWffDwcsB6Px/NRZG4Ke0zC1PQ4UVxFSUmlWsbgDqqkdOPVxtkwnbhnQEgJWJd36dIx3Si4oVdO4zybRSu2EL3xdGHfdrmAcrmEEJZSJabdsmRaE1lDmmtMMXZtTE6StAgD5xA1JqeTtMmSlDzLEAhyrdG5RghBFIb09dWZnm6AtXSylCTNMEIQWOcc1Ua7YhzcenSui7IhCKKIMAppd5pknQQr3Fh8TUlMniBCRSBDpFIMDNSx5IyPj9PqtAEn+mY6J8s11miC0JB2OrRbDYzp0G7OEMgyWZ6Tp5os03SShHK5TKo7dLRhJjG0mxDJgEgFyOLzVkmFNpYkcXmaUagIotA1mp/HwWH3C+Pbxxjh/Y3WXmjarTbf/d532bdvH/fccw8bN24kis7Mp+mOBVtrKZfL/sDYc04WLFjAb//2b3PjjTf23JVpmrJnzx7+/u//nu3bt7Nu3TpWr17N8uXL53m1783+/fv567/+a5544gk2btxIpVLh9ddfn+9led4nQ0ND3HvvvXz2s5/tuRGzLGP//v1861vfYufOnSxfvpz169ezbt26eV7tuzM6OsqDDz7I7t27+cIXvsBXvvIVVqxYgTGGXbt28fWvf52dO3fy3HPPsWHDhp4Q4Zlf+vv7+eIXv8gv/dIvUSqVAOduO3z4MPfffz+PPfYYP/jBD9iwYQNXXnnlPK/23Wk0GjzxxBNs376dTZs28Vu/9VtcccUVBEHA2NgY9913Hw888ACPPfYoN910Ixs2bJjvJX/iqdfr3H333dx7771UKhXAvf7Gxsb4t3/7N370ox/xvX/7HldsvIItW7bM82rPD2MMu3fv5vvf/z7Hjx9/X5EbH0fCUNDXJ1Dq/I4fWi1BHL/7baSEUgn6+s7/2KRaESjpxUuPx+OZL+ZEvBwarHLyZEhjukGgYuJSCZTsqYxuclyCcmKltCCMxVpdBGKCRfdasx2iyG104qCm6940YAWBUgghCZSiXA7ctnAlP0IIskwDFq0zhLQoBaDppK1e1qbOcgSCQEVYA1maMtVuFMJbRBQElMslsjRFSkWmEzJtULnLYSnFscvulLi1KYVSiiAIqcZlTGaZaLbRWY4SklIcUqv2E5YUuc6ZmZ7CYoniiFqlwmkhsEaT5ZrpmRYznQSjLYFSBKEhjiPiwTKTR5uM7ttPaWQAoZw7EyHIjRtVN0BmNM2ZGerlBQQaQiUJlSrUX4HREh1UEEoShjFBGCGlOq/nvdFo8Fd/9Vc888wzJElyxmVCCKIoYunSpdxyyy3cddddLFy48EMTCDtJp+dC2rBhAxs2bDhDvDxy5Aj3338/27dvZ9GiRfz+7/8+l156KUqd3z7wfPyJ45g1a9Zw1VVXnTG6unHjRo4ePcq+fft44403mJycfFfx0lrLvn37ePjhh3nhhRcYHx8nCAJWrVrF1q1buemmmxgYGDjjNlNTUzz++ONs376d0dFRlFKsX7+ez3/+81x55ZU94eCdGB8f57HHHmN0dJQbb7yRTZs20Ww2qdVq/OZv/ibXX389//RP/8TevXt/vp3k+dCIooiVK1dy1VVXnTGWu3HjRiYnJ3nrrbfYu3cvJ0+eZO3ate+4HWsthw4d4rHHHuPZZ5/lxIkTSClZvnw5t956KzfffHPP3dml0Wjw9NNP8+ijj3LgwAEA1q5dy5133sm11177nq60iYkJduzYwd69e7n66qupVCrU63W2bt3GF7/4RW644YbeCYL+/n6effZZdu/ezejoKM1m04uXFwlhGLJs2TI2bdp0xnN+5ZVXkiQJu3fvZv/+/Rw7doyNGze+67aazSbPPfccjz76KHv37qXT6TAwMMBVV13F7bffzqc+9SmCYPar6szMDM8//zwPPfQQe/fuRWvNqlWruP3227npppve8zUyPT3Nzp07efXVV9mwYQOXXnoppVKJLVu2cMcdd/CZz3ymt40lS5Zw+PBhnnjiCY4ePcrJEye9eHkREAQBixYtZtOmTWc830mSIITg1Vdf5dChQxw6dIjrrrvuHbfTda3v2LGDHTt2MDo6ijGGRYsWcf3117Nt2zaWL19+xntgu93m5Zdf5j/+4z948803SdOUZcuWsXXrVm677bazPsN/lmazyQsvvMALL7zAmjVruO22284YKR8bG+PBBx9k3759bNmypfc+6/F4PsZYdwKmk3QIguA9v9ufL12zDkCpVPrAx+LWWmaaM+zbv4+xsTHyPKder7Ny5UqWL1/uj6E9HwpzIl62Wk2SpE2aJEy3mjRmWgwuGKJcKoEUSFwGjBRO1HJj3y7bESsLF2MOWCc+4sauhZTYwq0ZFpqmEAKpJDJQlOMylUoJgWbi1ClsIWxiBOW4RCWOKMcR0hqypI38/9k78yCpqjz7f+5bcs+sfa+iFvZFdlGwQUBRGwEXWlxbZXp0OrqnYyYmYiaiY2b+ml/ExGwdM71OtyO2Cg2K2CMoigtgUezFvoNCFRRF7ZVVlVm5
vOX+/niZKWWVtjjQbdt1DANN7rsv8+XN+94993zPkXbK3xFHHalo2KZFPGHS3dVDR2cnvZEepLRRNQ2fz4vb6wYEiqpiGCYJw0QIBV3TcOluLFvi0VQUVSCliqa68Pv8qIpOPCnoi7sQSoCcrFzCsTg5tk7A7aGr+RJXrrRgWkny8vLJLyzEreuEkybhnj7ONTZiKyoul4ug3w+agUeLYfkEeDX6+/txW1l4vC4UVcHCRrFtdJdGLB6HiEkibhDyKLg9Gi5NRUuV1yAE2Bq6pmAjURSXowJVlGvKGTcMg+PHj7Njxw5KSkrIy8vLTFyWZXH58mXq6+sz4Q0/+MEPqKmp+b1MblJKotEoPT09A8IF0uEVq1evZteuXTQ3NzN+/Hh6enpueDnlML5e8Pl85Obm4nK5UBQlU8I7FKSU1NfX88tf/pL6+noKCwspLS0lkUiwdetWamtreeKJJ3jooYcoLCwEoLW1lVWrVvHWW2/h9XopKysjEomwceNG9u3bx/e//33mz5//me8vHA6zZs0aXn31VSZNmsTtt9+Oy+WiurqaZ555hqysrEwQwDD++OH1esnJycHtdv9OtbuUkmPHjvH888+zY8cOsrOzqaiowDAMdu7cyY4dO1i+fDmPP/54hpDv7Oxk7dq1bNiwAYCKigoSiQRbtmxh7969mTLhzzp3b28vr7/+OqtXr6aiooKbZ97M6NGj+Yu/+Assy6KkpCRDXAKZAI70b+vrElbxdYbb7SY7Oxuv1/uFvrPu7u7MmIhEIlRVVREKhbh8+TIHDhxg//79PPPMM8yZMwdN0wh3h/nfN/6XNWvWkEgkGDFiBEIIamtr2bt3L0888QTfWv4tdNfQKcbRaJR33nmH559/nmAwyIQJEygsLOTee+9l7ty55OXlEfB/onRLz4+apqEo1+6/N4zfL1wuF9nZ2fj9fpLJ5Ocu0KWUNDY2snr1ajZu3IimaVRXV6OqKidOnGDnzp0cPnyYZ555hvHjxwPO5s0777zDCy+8QDgcpqqqCl3X2bdvH3v37uX8+fN8+9vfzqhBP41YLEZtbS2/+MUvsG2bZ599dsAGVH9/Pzt37mTbtm3cdNNNTJo0iZdffvn6XqRhDGMYXynE43EOHjzIunXraGpqYv78+TzzzDPXxSvfsiwuXrzIa6+9xs6dO6mqquKv/uqvqK6uvqZ+bNumpaWFt956i/fee49Lly4RjUaRUqLrOrm5udx0003cd999zJkzZ1Cl4zCGcT1xXcjLYHY2JaVFWIVgJCykUHB5PbjcLlRVpT9mEI0lATXl5YjjXcknCdgCsIVMhfFILCkdj0wnwgeQKEJF0zQ0XUPXNQJ+DwG/l95wmGTCQFU0FMVCUVQUIdF0BbdLwzYNEv0WVsJw0ggVFYSNiSQRjdIZNejrN4mZkIibTgiP0UdY78Xn9+H1eLBsG6TESBgIVUVJlV476ekqSSOOR/fg9XjxeHy4gkW0X+6ip6+bpJEEzUNeSTndMZuAH2KxOO0dHYBAVTQ0VSEvJ5v+SJRoNIZLd2GpKorqIpaw0G2NJB5Ul0qgOIiu+PAEgmhuFVTnOlq2JOj3E+vqQ0HF69Ixk3EUVwhVVZ0gonRmkpAIxY2UtqNpFc6DuvIlFohut5sVK1Zw9913Z1QYTkl5gqNHj/CrX/2KDRs2UF1dzdNPP31DjdI/D1JK3n77bX7yk58Qi8VYtGgRH3zwwfBO0TC+FFpbWzly5AiJRIIJEyaQk5PzmYv19vZ2XnnlFbZv385dd93FihUrKC8rxzBN9uzZzS9+8QvWrFnDyJEjWbBgAYqisGXLlszv5qmnnmLMmDEkk0neeOMN1qxZw7p166isrKSsrGzQ+aLRKBs2bGDNmjUUFhZy//33M3r0aDRNIxgMZtQqaQ+6YfzxI+172dfXx/z58ykuLv7M8djV1cUbb7zBli1buOWWW3jssceoqanBtm0OHDjAc889x/r166mqqmLZsmXouk5tbS2vvvoqoVCIp556ismTJ2NZFlu2bOHll1/m1VdfpaqqKrPQvxqxWIzNmzfz0ksv4XK5WLp0KZNumkRWVtZn3g8uX27mo48+wu12U1VVNay6/CNAd3eYEydO0NnZyZw5czKqtaE2dUzTZO/evfzmN78hFouxcuVK5s2bh8fjoa2tjbVr17JlyxYKCgoYMWIEFeUV7Nu/j3Xr1mHbdkY5LqWktraWX//616xfv57q6mpmzZo16HzxeJxt27bx/PPP09/fz6OPPsrMmTMJBAIEAgFKS0sHHdPS0kJ9fT1dXV3MnDlzyLl2GF8d9PX1cerUKa5cucLYsWOprq7+zDkwEomwbds2XnvtNYqLi3nmmWeYNGkSQgjOnTvHSy+9xObNmyktLaW8vByfz8exY8f4zW9+Qzgc5qmnnmLevHkoisL+/ft5/vnn2bBhAzU1NSxcuHDQ+ZLJJHv27OFXv/oVLS0tPP3009x2222ZzcN0ufimTZvwer088MADxOPxr+WmjRCCfc0m33urFSEYtO643Gsw+ht8qc+u6zomOhtP9nKmbbBPaF/CgpwAk/1frhxf0zWaYh7+X20HBYGBBI2U0NJnIYssdG2YvBnG58O2bZqbm9mwYQOvvfYap06dyii/rw4J/bLo6+tj27ZtrH55NXv37aWjo4Np06alQhG/OEzT5NzZc/z0Zz9ly5YtRKNRJk6cyLRp0/B4PLS3t3P06FHWrFnDoUOH+LM/+zMefvjh4ZDFYdwwXBfyMhQKYSTjmV1OIQQipZbQNJ2ucBTTMrGMdL64JG1freAka0sFROqftDLStiWKAsIGhEDRHG9Gv89DKODD53UTi/YT6e0jGUsgJPgDHoKBAKGgD7/Pg22b9CdNXJoLU035YeouVE2QMEx6wj1Y3kLcIY1owsLjjWPEYiiKSn8iTri7FyNgoSiKU5ZtWWDbqIqKYRhomoqCgmEaeD0Kbo+brJwi1EAZ/clupB0nHm3FsvLIyc+jr6udpuZ2dJcPfzAnlWyu0tsbIb+wiNKSUiL93TF7ZgAAIABJREFU/RTmZhM1TITiJtrXj2mZuHU3Hm8AzavgUl0gFBRVkI4IVxXwed3Yto2mqLjcblRboGsqmup8H4iUr2iKDHZ8xAWWtLEsm2usHHe+Q0WhoqKCyZMnD0r0nDRpIt3d3fzkJz9h+/btLFmyZEDwyZkzZ3j33Xc5evQo3d3duN1uampquPPOOwekRQKZ9u+9916mvcvlyrRPL0Q+C1JKOjs7mT59OgsWLEDXdQ4dOjSsuBzG56KnpydTxqWqKrZt09vby4kTJzh27Bjz589nyZIln5tEe/LkSfbs2UN+fv6gncn8/DwOHTrEG2+8wf79+5k2bRq2bVNbW0skEmHRokXMnTuXrKysTH8NDQ0ZZfGnE0jj8TjvvPMOL774Irm5ufz5n/85t91225+sd9bXDZFIhK1bt9LV1YWmaZkAqdOnT3P06FFmzJjBfffdR0lJyWf28dFHH7F79258Ph/33nsvt99+e2YRXVBQwOnTp1m9ejX79+9n9uzZeDwedu3
aRXt7O0uWLGHhwoXk5eUBTgnnpUuXaGpqIhwOD0hCB2fR/v777/PCCy+gKApPP/00d95554Dx/Gm0t3ewadNGDh48yKxZs5g1a9Z1USEM4/ogrRCzbRtd1zOVDufOnePIkSOMHTuW5cuXf26ieDjcw759+2hsbGTp0qUsWbKEESNGAFBVVUVvTy9Hjhxl//79nD17llAoxP79+2lsbOSBBx7grrvuysx9Xq+Xy5cvc+zYMbq6ugYFjyWTSXbt2sXzzz9PJBLhySefZMmSJeTm5g5oZxgGJ0+eZP/+/TQ3N3P69GnOnDnDzTffzEMPPTQkwTmM3z9isRgHDtTz3HPP4XK5MqFk58+f5/Dhw5SWlvLII48wevToz+zjypUr7N69m/7+fu68807uvvvuzD2ytLSU1tZWjh49yt69e/nmN79JeXk5hw4d4syZM8yfP5977703o0oPBoNcuXKF2tpaOjs7B40/0zQ5ePAgzz33HE1NTTz66KMsX76c/LxPnhlaW1t5++23uXDhAitWrGDWrFvYtWvnDbh6nwfh+GDdyDMIQXn5N5g2bw26ZjFpgmDKVHVQm4oRlV+qtDUUCrHsgeVU1YxiKO5TSkkwlMWUyZO/1PsfO34CP/x//05ra9uAMCFpw8fnbbZtl5RXlOLxDm+2/Wng04Psi68nW1pa+NWvfsUbb7xBRUUF99xzD1u3br0u7yoSifLuu+/yox/9CIC77rqL7du3X3M/UkquXLnC86ue5/XXX6ekpIS/+7u/49Zbb81Ub8ViMRoaGvjtb3/Lhg0beO655ygrK+OOO+64Lp9lGMP4NK4LeampCpoqULVUmZACQiioQkHTdFpb2wl395ObnYuu6xi25ZSGX/0jv6osXMgUqQkIKVEEqIqGy+XC5/eSHfIT8vtQFElXR5RwdxgpJX6/F5/fT052CI/bhZQWsXgM25Koqo7P4yErOwdd17EMG8uGltZuqkdO4WJjAy5dw1IVkkJQM2oCsWSci40NGEkToTjkno0EGyzTxDYtpBAkkiZer4qqaRhWkoQJBaF8PP4A0Y4EGhYhnxchJb29vbR2X6K6qpj8wlJisSi2ESeeMOjti1JWWkZ/PE7SSNDRE8aUNlrARSIZRUXi0t2oqlNqj1BQFUc1adsSaYOuu5BSQRVuvC4PbtWbIS4zN/J0+b4tURTN+UwCbClTvpfXb6c3FApx06SbCAQCtLa2EolEsG0bRVGoq6vjpz/9KSdPnmTEiBEUFRURiUR488032bZtO3/xF89y//33EwqFkFJSV1fHz372M06ePEl5eTnFxcWZ9tu3b+eZZ57h/vvv/8xFsaIo3H333SxatIjCwkJOnjyJoijXZYdrGF9fdHd3s2HDBjweT4Z0TyaTxONxqqurmTx5MqWlpQNKXq+GbducPXuW1tZWZs2aRWVl5YCSitzcXMaNG8fmzZv5+OOP6e3tpbe3l4aGBgKBANXV1QNI/FGjRvG3f/u3JBKJQR6bpmny/vvvZxRu3/nOd5g/f/6wau1rhHA4zFtvvcXWrVsz49EwDGKxGOXl5UyaNIkRI0Z85q63lJILFy5w+fJlqqqqqKmpGWAdkJWVxZgxY/D7/TQ0NGRI0gsXLqDrOtXV1QM2qSorK/n+9/+SaDRCSUkJqvrJY4VpmtTV1bF27Vri8ThPP/0099x9z+f6wqVDUtauXUt1dTWPPPIIY8aMGQ5U+wqhr6+PDz74gD179mTGoGmaxGIxCgsLmTBhwqBx9Wl0dXVy4cIFFEVhzJgxGbsMcAjxEZUjqKwcQX19PS0tLXR2dnL+/HkAqqurBxCPpaWlfOc73yEcDlNUVDRgLrYsiwMHDrBhwwZaW1t59NFHuf/++zPk+9VIJpPU19fz3//93zQ1NQGOl+w999zDpEk3DZfCfUXQ399PXV0dR44cyYw/y7KIxWJkZ2cza9YsRo0a9ZkbHlJKWltb+eijj8jNzWX8+PED7rE+n4+amhoKCwtpbm7m8uXLBINBzp8/TzKZpLq6moKCgkz7goICHnvsMRYtWkR+fv6AcWLbFidOnGDjxo2cO3eO5cuX88gjj1BUVJR51O7v72fXrl2ZcvHFixcTCv3+7tlSOsGmDm78PBsMllNTswyXy+aWOQp3LRpMXn7Ziihd1xk9ejQ1NTWfKUxQFGWAj+61ICcnl7nzbh+0brAsOHjQ4nyjTUG+OuA+eD1g29DTIzl7zubyZUkiAX6/oKJcMGqUIBi8tnVbur9z52ya0v35BBUVX7y/eByammwuXJB0hyWmCR435OcLqqoUysoEfxKFbSL9ISXIL76eTFuaPfzww9x+++3s37+fDz/8cFA7KSXt7e28//779PT0MHfu3AFe0vF4nGPHjrFr1y4qKyu56667ME2DeDzOtGnTWLZsGYZhcODAgWv+aLFYjPr6ejZu3EhWVhY/+MEPePDBBwets6uqqigsLCQej3Py5EmuXLmCaZpomoZhGHz00Uds3ryZEydO0NPTg9frZeTIkSxcuJDZs2dn5sy0z/+VK1eYM2cObW1tbN68mYsXL6Fpjuf/kiVLmDFjBm1tbbzzzjuYpsndd9+d2fxMI5FIcOTIEXbv3s3o0aNZtGjRZ67ThvHHhes0uyogVARKKmnaCZBRpYppQkdnH27NT19vH36/D7fHS9JKplK65VVUmQApnCTxVJS4IpwbjaIIXJqG3+0m4PUiFEF3Vzed7V0IqZBfkE8gGMDtcqEKAdgkYiaJWBLTMsEGTXGRm6vj8frx5gZxuXQ6unooLq+hs7uLnjYTaSbQhKC4vILLLS24vH5i0V6ElSpvz5S7Q3+sH2IQygLN5UVKhb6+Pro6usirUAll59HbGsBIRujtjWM3XKS56SIBt4EtwefxkkzESBoWuq4RiUTQNIXS0lLCnR3E4/30SxtVAVUDRVWxpUATmuMXikilnQtsiROGpKioQgfbhVv14dZczrwqHK9LJw1dfhKAxCdEMcJOtbs+oyKNWDyGZVm4XC50XUcIwZUrV/j1C79mz549PProo9x3330UFhaSSCT48MMP+fGPf8yqVasYO3Ys06dPp729nRdffJE9e/bw8MMPc/9991NQWIBhGGzfvp2f/OQnvPDCC4wdO5YZM2Z85nv5Y0iDHsZXC7m5udx3332MHTs280CdiCdovtLMsWPHWL9+PV1dXaxcuZKamppBx9u2TUdHB/F4nPz8/EF+WKqqkpubh8fjobOzk1gsRjgcpre3F5/PRyAQGPAg7/F4GDNmTOb/e8I9gDOX7t69m5aWFpqamvje977HvHnzyMrK+lqWnv2pIjs7m7vuuotp06ZlFmDJRJKW1hZOnDjB5s2bCYfDrFy5kokTJw463rIsuru76e/vJzc3dxCxraoq2dnZ+Hw+uru7iUQiKIpCT08Pbrd70Hh0u92MHPnJuE9bEUgpOXToEHv27OHjjz/OLO5z83KHHI9Syky55qZNm6ipqcmohodVl18tBINBbr/9dmbPnp1ZDBhJg7b2Nk6ePMn27dvp6elh5cqVzJgxY0jiORKJEA6Hcb
vd5OTkoOsDicFAIEBWVhaJRIJoNEo0EiUcDqPrOsFgcAD54HK5qKyszCg9o9Eo4IypkydP0tDQwPHjx1myZAn33nsvhYWFQ45Bt9vNzTffjBCC1tZWzp8/n1Ehm6bJAw888DsDWYZx4+Hz+bjllltYuHBhZtFrGAadnZ2cPn2affv20dvby8qVK/nGN74x6HhpO2r1np4ecnJyyM0dOCcpikIwECQrK4vGxka6u7uJxWJ0dXWhKAqhUAhdG+jPW1ZWlrEViMWccuV0SN9LL73EwYMHmTt3Lg8++CClpaWZ89m2zZkzZ9i4cSMej+d3KpZvDGSqJEsgbrDyEnDWdoqGqoKuKbhc15fhctaMN+5zqKlw1qthWaDrFqpiD6n4/L/AtuFys6T2Q4sz5yTRiPOapkmyQjBxomDePJW8vC92YtuG5mbJh7UWZ89KIlf1FwrBxAmCeber5H9Gf7YN7e2SPXttTp20CfdAMum8rqrg8UgKCiTTpgpuvlnlMyxgvyZQEKTGgrw28rKoqIiVK1cSCATQdZ2jR48O2S5N5nd3d/PCCy9w7tw5fvjDH1JQUIBt2zQ0NPCzn/2Mc+fO8f3vfz+11nZxxx13MGfOHIqLizl48OCX+nQ9PT3U1dXR1dXF0qVLWbx48ZACIV3XGT9+PD/84Q/p6uqivLwcRVFIJBLs27ePH/3oR5w7d47KykqKi4vp7e1l48aN1NbWsnLlSh555BFcLhfRaJRt27axbds2zp49y4ULFwgEAoRCzubR7t27OX36NP/wD/9AdnY2e/bs4cCBA/h8Ph577LEB83h3dzevvvoqH3zwAd/97neHN8C/Rrgu5KUUqXJjRX7iXyJAkYL2jh4UVAL+AJaVIB6PE4vFyMnLwbAMh0xTUhSa+IS0hNTLEmxsVEVBVQVet0OARSJ9tDS3YiZMcrKzCWaF8Hg9TjK5aWAkjdTOmECgYKZKxCUq/qCf0tIReH1uJkweT8KMkldQQFtTANXlxjJjNJw/S3dfDCkdFYBpGpimhbRtbAmmZZJIWkgJprQwTROvy4NHtzGSUdpbGsjOzYOaKbQ1BzFtQVPjORQS5OflYxoJEv39GMkkQhG4NBeKEMTjUXJz8yguKSYSj6FJ6IsYqKqBYRn09ETIDmYhXG7AxpZW6kIJbNvZZdRUHStpZcKQ0mX4MvVwoiCQ6VR327lCUjjl/NebuGxpaWHr1q2Ew2HuvPNOsrOzURSFw4cPs3ffXqqqqnjwwQeZPn16ZjGSm5ubMS2vr69n3LhxTvu9exkxYgQPPvggM2bMyLTPyclh165dbN26NdN+GMO4XggEAsyfP58FCxZkFkqWZRGNRjl16hT/9V//xeuvv05lZaVTBvap35BlWSQSCaSUmQCST8PlcgKzDMOZt9J/qqr6hW+4/f39mZIT0zQ5ffo0XV1dFBQUDJOXXyP4/X7mzJnD/fffP8AvLV22+8tf/pLNmzdTUlJCcXHxIAWmbdskEgksy0LTtCEVLrquo6oqpmliWRa2bWOaZmbR9kXGUywWY+fOnaiqY7Fy7tw5WltbByzcr35Phw4d4n/+53/YsWMHM2bMYOXKlb/TCmQYfxh4vV5mzJjBo48+miGWbdvOlO6++OKLbN26lby8PMrLy4e0MDAMg2Qyiao6Xuaf9r1Lvy6lE+BoWiaGYaAoSmrMfhFlUJwDBw6g6zrJZJILFy7Q1NREZWXlkONe0zTGjRtHdXU1yWSSrq4u3n//fV588cWMx/BQfobD+P3C4/Ew+abJPPLIIwN81mOxGBcvXuSVV15h06ZNBIPBjCLoatjSJplMpqyftCFVeKqmOlVaV92P0+NP07Qv9KycTCY5fvw4H330EYlEgsbGRi5cuMCoUaMy4y+tHmpoaGDFihXcfPPNqTWH+Tt6v55IKS+FAP4UpHJ/XOjrkxw5YnPosERVYdJEQV6+oLFBcu5jiXlUUlhoM3u2yhd5XOzrkxw5mupPccjP/HxBY6Pk3EeSI6n+5swZur9wWLJ7t83uPTbxBJSWwOjRgoBfEA47fZw/L4lGJZoGt9yi8iWFrn8EEAOVl3xx8tLr9WYEDz09PZ/bNisrizlz5rB9+3bee+89pk6dymOPPZaxtdq5cycLFixg7ty5mfksbavyZecS27bp7u7m2LFjmefOqxXnn4bL5Rpg1WHbNpcvX2bVqlUcOnSIhx9+mEceeYRQKEQ0GmXHjh38+Mc/5qWXXmLSpElMnz4d27bp6+vj3LlzBINBnnjiCb7xjW/gdru5cOECP/nJT9izZw+1tbU89thjjBkzho0bN1JXV8eyZcsym/GmaXLp0iXq6uoQQjBx4sThfIuvEa7LdKKqGkJRnfAdRTgJ1kJBqgqXL7fi0b3oLgXFdqGoGslknI6OToIhP6qqYFo2NgJNqNhSoioCYdkgQVEEmqbicml4vW7cHhdGMklPVxi3rhMs8uP2eNF1HU110sktIZGAoqkohuMpqQhBIpmgvb2V3qiPZNLANEoJZGfhFv0EfR6Ky6oIK4K+rg7C4T6SloVtGrjdOqaVxLRtTFs6RCDOgk4gMJIGkUiEZluSnxck0NNKoPM0ilWM3+8mOzuH/kg3eVka/oAXr18lFo+gCidEB+moEbFtDMMgGAoycdJkFNVFd3+M1s4IbS1NRCNR+voieF2+VDiRQDrR7SliUkERCh63m3gsiWlKTFUiJKipayIUZ58IqeBYiTpqWVs4/21b4losOwAyCbWJRCKzULZtm3A4zOHDh6mrq6OsrIx7772XvLw8bNvmxIkTdHV1MXfuXMrKygY8PObn5zN+/Hi2bt3K2bNniUajnDx5kq6uLm677bbf2b6/v384QXkY1w2KouDxeAgGgwOIoJycHPLz89m5cydHjhzh8OHDLFq0aFAAiaqqTlCYECSTyUHlRukydMuycLvcqKqK2+1G0zSSySSmaaaCzn53gvTYsWNZvHgxu3btora2ljFjxpCbm+uUqA3jawEhRGY8Xq1IzM7OJj8/n8OHD7Nv3z6OHDlCW1sbFRUVA45XFCVDohuGMejBNl2GblkWPp8vQ3Dquk5fXx+GYXyh8QhQU1PD4sWLOX36NAcPHuS3v/1tJoAlDdu2OXzoMD//+c/Zv38/ixYt4oknnmDixInD8/hXFEKIjAr36nLb7Oxs8vLyOHv2LHV1dRw/fpzLly8PSV6m0+TTpJAtbdSriBPTNDFNM6M6Sbc3TTM1Bm2+CNFSVlbGkiVLaGtrY8eOHbz22muUlpYyatSoIdu7XK7MJlVubh6WZbF//3527tzJ+fPnh8nLrwCEEOguF4FAYIByPCsri7y8PC5fvswHH3zAiRMnaGxsHEReCiGcNUOKJDQMY8Dfp20Q0uSmy+VC0zTcbjeWZWXKPX8XpJQUFhayZMkSDMNgy5YtvPLKK1RUVDBx4kQMw+Do0aO8/fbbXLx4kQ8++IBTp05lSuEvXrzIxYsX6enp4Z/+6Z+45ZZbWLJkyQ0IjrIz4oYb7Xk5jGtHPO4QjqEQVFUKFixQCYbgbJGko9MiEoErLZLmZslHH
9n0x6CsVDB+vILLBdGo5MxZSfNlSXY2FBQIp78gVI5w+svKgrNnJR0dFtEotLZKDAM+7T6TTMLFiw75GY/DmNGChXcolBQLdB0SCRg9WvJhrU1bm6StTRKPSwIBgWHAlSuSEydtWlskSQN8XiivEEwYr1BQ4DxTdHVJjh23SSSgolzgdsORozbdXRAIwvhxCuPGKZgmnD7jlNHn5MBNkxSyskTmmp0/b/PxeedzTpyokJ9/AzbxxUDyUkrzemuAAGdjbcyYMTz22GP88z//M+vXr2f8+PH09vby29/+loqKCp566qnrOjfYtk1PTw/t7e2ZjaBrUS8mk0lOnTrF7t27qays5NFHH2XKlCkO52DbBAIB9u3dx9ZtW9mxYwfTp08HSOWlaIwcOZIHHnggM3/n5ORwyy23sG/fPj7++GNcLhczZsygoKCA48eP09DQwE033QRAPOaU0jc3NzN//vwhgySH8ceL60NeKhoi5ZMiUBCK83u2bEm4pwdF9WLaNigKikvBo/lJJGL09UVxuVz4g34M0ymlFtIZ1GnPTImNpqoEfD5ysrPRNI3+SBRN1cjKycHldqGoqlPtLJ0fm0yRgIaRxLQcdaQtnTJpZyffpqurDTOZIBAMUFgcx+3JpmZEIT1+lb5wLm0dHXR1ddBn9NPf308kGsU0LSzL8YVRUkShmiL/pO3s+ra0JjAScfr7egnmXMQfzMEyTJLxKLpLx0h6iMdMwEaoDmGo66kEdU1zStNVlfyifMYJheOnTtPW2eMQHwkDTdWxLRtb2ghbpErGU8RkioD0eDx09/Rg2Y4yU0NzPC8dbSUoaV9RiRDSeQgTAkUIlC/x4GIYBps3b+bDDz8csKBNJpPYts306dN58skn+cY3voHH48E0TVpbWzEMg6KiokHKIE3TKCgoQNM02traiMViX7h9e3s7yWRyeNE7jN8LlFQwmWVZxOPxIf1TFUWhsLAQr9dLa2trpqQxDdu2M+O8sKgQv9+PlJKsrCxaWloyISjpBXUymeT06dP09PQwcuTIzFj3er0sWrSIxx57jIkTJ/Jv//ZvmQToxYsXDyvY/gSQHo9pdeVQO+6KopCXl0cgEKCzs5Pe3t4BZGTa5iASiVBdXU0wGETXdXJycmhqaqK7u3vAHJv2M+po76CyqjJDJrjdbubOncvjjz9OQ0MD//Ef/8GmTZuoqqziWw99K1N+e+bMGf7n+f/h4MGDLF26lG9/+9uMHDly2JvojxTpks30GPw0MZRGMBgkJyeHeDyeCdm5moyPRCJ0dXVlSNJgMEhubi7JZDITipJub5omDQ0NNDc3U1ZWlvGzdLvdzJo1i8cff5xwOEw4HOb999+nsrKSp59+mry8PBoaGnjzzbfo6Ghn8eLFmbJxcJ6p3G43Ho8noxQdxlcbjnWVktkUHOo7S5d+5+Tk0NvbS0dHx4A5UEpJT08PXV1dBIPBjN1LXl4epmnR2dlJPB7PEPemaXL58uUMUZom610uFxMmTOCJJ57I9FlXV8eIESP43ve+R3Z2Nr29vfT09NDX15fxkE0jkUgQiUTo7e3lvXffw7Is5s2bd/3JS2mBNHHkpF9bidwfLbKzBbfPU7lllsTvd1SSUoLP74TaKgq4dIc87OmF/fttiooEHg9UVyucPy/54AOL/n64eabilJnPVZl1s8TnFxSkSD2fLxWSK0DTxJCqy3hccqlJ0tUFWVkwZYpg9Cglo6z0+8HnE+TkQH8McnMEHo8gkYCzZ222bbe5ckUipUOMxuNw+oyksVEy/3aVigpBb6/k8GFJV5ekvNwhPc+fd3w5dR2am511eHW1oKvTUYHm5kJOjsiQl319kvp6m1OnJaNHC4Zw0LlOUECkn1UkyBt3j/D5fMyZM4fly5fz61//mv/8z/9EURR6e3v5q7/6K6ZPn35dS6PTc2gikcDj8QzYqPwiiMfjfPzxx4TDYW677TZGjhyZmd8URSEnJ4dJN01iy7tbOHv27KDPOnHixAHe1Lquk5eXh6IoRKNRhBBUV1czbdo0duzYwYEDB5g0aRICQbinh127dqEoCnPnzh22e/ma4brcpRxfyvSAdPwmk8kYZlJSWVlM46VO2to6yMnLxuNxg5D4vH6Sqk48nqCnJ0woKwCW49soU/0pmoqm63hcOh6vF01XkdJG01WycrMd9aGqpCoeLGzTImklnfRyaTsdpRZyhmmBDYYlUU0Dr9tNNNqPaUkSRgOa5hgsS6lg2xYlRdl4XJKP+sNE+vuJJRJIW+A4dDrl8bqqogiHrFWEQFdUbMumOxwhETMIRhOoahu6SycQ8KOozq6VaSbwuN14PCputwuRKul2FAgO4aq53BQWl1Lc3c3pcx/RG+4jEo3h8XgxDEeRoGgOKakIJy0cJEIR6C4dKZ2dVEVRQEqEooKwUztCAmyZ8e+07dS1AoQirrnEVNd17rzzTqZOnYrb7XGsAxQFv99PcXExFRUVmdCRNNHzu8po3W43iqJkSnu+aPt0u2EM40YjrcjZu3cvhmFQWVk5JEEohGDChAmUlJRw7tw5GhoaGDlyZIaE7+jo4MSJE1iWxbhx48jOziYUClFTU8O5c+c4deoUt9xySybNvKmpiV/96lc0NTXxl3/5l0xOpWaqqkpeXh45OTnMnz+fCxcu8Nxzz/HSSy9RWlrKLbfcMkwIfY1hWRZHjhxh165dRKNRysvLh3xgUxSFUaNGUVFRQWNjIx9//DE33XRThgjq7u7m9OnT9Pf3M3r0aAoKCnC73dTU1FBfX5+xI0gnL7e0tPDSSy9x/Phx/uzP/izjMaeqasZPrqSkhIceeohf/OIXrF23lrLyMhYuXEh/fz8bN25k586dGaJz1KhRXzpMYRh/WNi2zenTp6mrq6O7u5vbbrstM299Gvn5+YwaNYra2lpOnz5Na2srVVVVgEMGffzRx5w/f56ysjIqKirIy8tj5MiRGS+stra2jDdge3s769evZ8eOHTz88MMsXrwYcMZ6VlYW+fn51NTUsGLFCn784x+zYcMGRowYwdKlSzEMgxMnjvPBBx+gquqAMBbTNDl//jxnz57F4/EMCKoaxlcPtm1z4cIFPvzwQ1pbWxk1atSQVQdCCEpLSxk7dixbtmzhxIkTLFiwILPxEolEOHfuHG1tbcyaNYsRI0aQlZXNqFGj0HWNM2fO0NzcnCmRDIfDbNq0iTfffJN7772XRx55BEh5ZwZDFBQUUFBQwMMPP0xTUxMbN26ksrKSFStWcNttt/HTn/4045N59Wepr69n9erVlJWVZTyMb4hvu51AWjGEoiMjjHhRAAAgAElEQVTUayMohnHj4XZDaaljDxaNOiE7ra2OmrKvD0pLBGPGCHJzBePGCi5cEFy8KKk/YGPbcOCgTWsrjKwRTJyokJ3tkIsD+muTnD3j9FdSDGPGiCFLvWMx6OiQWBaEQlBWNridxwOVlSlBk1NUSEuLQzJeuCApK4W5c5330dAo2VFnc+KkJBS0yM9XkRJiMYcglVIydoxg2TKFixclBw9JLl+WnDlrU12tUlDghAJ1dMCl
i5JxY5330NkpaWh0QoRycwWh0I2xThJCQ6gp9bc0kVYYqPjcY778uQT5+fksXbqUU6dOsXnzZrKzs1myZAmLFy++Id7gaZuWdIXEtcAwDNrb27Ftm/z8gkHvT9d18vPzEUIQDocH/J2maYRCoQGl3ukqjPQGPUBuTi5z5sxh27Zt7N69mwcffBCv18ulSxc5fPgwFRUV3HrrrcMl418zXJcVgo2FSPldQhIjESMRj2ElITfLhddbTHNbL52dHQQDAXw+HwIVl+5GKCqmFSMaieJze3G73Q6PJgQul4bP4yYY8BMMBBCAaRrYUjolHLoL0r6YhuO9aVsW0rYQpEg4CUnDwLBsLMNRcfq8XgzDwjD6ifQn0CJ9aIrAMkw8Xh8ejx9sN7ZlYdsSoSgZ9aYQCtIGRai4PR7cuoaFBEvi93hQddUJqJGSRDyBUBVsBfy2Q04m4hJNU9GEiqkZTol8yofSsoVzTtOZIDxeHzUjR9Lc0kJDQzPRaALLFo4Zr+mQeGniWBEKEqc83OvWnQgeKRApAjeluUQRwuF6xSeZgkIRKCipcnhxrVXjaJrGrbfeyvLlyzMPf2nZt8fjGUQ2frqMNj0JpSGlJJFIYNs2brc7Uy4GIvP6Z7Uf6nzDGMb/BV1dXaxevZq6urrMDdA0zUw4wNmzZ5k1axYLFy4kNzeX/v7+Acenycu5c+eybt06Vq9ejdvtZvLkyfT09PD6669TV1fH+PHjmTVrFqFQCCEEd9xxB/X19fzv//4vRUVFLFiwgFgsxtq1a9myZQvTpk0jPz9/SKInKyuLBx54gPPnz7N582bWrl1LYWEho0aNYtfOXezctZOenp6M5UM4HObdd9/lypUreL1eCgsLWbp0KSNHjvy9XONhfHGEw2E2bNjA8ePHM999OoTnzJkznD17lkmTJrFo0SKKioqGVB6NGjWKefPmsWrVKl555RWCwSA333wz/f39vPXWWxl12uzZs8nLy0NVVebOncvu3bt59913KS8v55v3fBPLtvjtb3/LW2+9xYgRIygoKBiSIA+FQixevJiGhgZee+011q1bR3FxMX19fWzfvp2WlhYuXLjAqlWrhkx0njhhIgsWLsgQpsP4w6Kvr4/NmzfT1NSU+b4ty6Knp4dz585x5swZRo4cyd133/2ZZEsoFGLOnDnU1dXx4YcfUl5enkkxPXr0KOteWUd3dzf33HMP48aNIxAIcOutt1JbW0ttbS2VlZU88MADaJrG22+/zeuvv04gEKCoqAiXyz3ofH6/nwULFtDY2MiLL77IK+teoaysjAkTJjB16lRqa2tZu3YtHR0dzJw5E7/PT+PFRt5//31OnjzJvHnzBiS8DuMPByfUYSvRaCQz/mzbpre3l/Pnz3Pq1ClKSkq49957qa6uHrKPkpIS5s2bx759+3jrrbcoKyvjjjvuAKC2tpYNGzbg8/mYP38+ZWVluN1upk+fzpQpU6ivr+c3v/lNxnNz27ZtrFu3DsMwKC0tHbLyx+v1Mnv2bFasWMHPf/5zXn31VSorK5k/f/6Qtgpp24RNmzZRXFzMbbfd9pmf5f8EaSPtfkcxpvgQ6uAwjmF8NSAldHbBjh02jRedsu4RIwTz5ipUVSnoOlRUKMy6WbKlW3LihKStzSnfzs2FWbc4CeBpfYqU0NUNO+psGhtT/VUI5s5VqK5WBgUPSQmmCf39DinpcQt8vqFJwauPTZeLX2iQuN0wYYLC1KkqLhdkZTnl7gcPSRoaHRISUpaywiEeZ89WKCtTKC62aWmxuNQE4W4nJCm/QFBWJjh92uknEpHouuBKi6SnB7KzHdL2hu3bCxdCy3Wuj20gzc4bdCIHaZFCeXk5fX19uFyuAdUG1/tcwWCQYDBIW1sbra2t13S8bdvE43HgE1//qyGEyHABn64SSivofxe8Pi9Tp06ltLQ0UzpeVVXFkSNHaG9v54EHHhgySHUYf9y4PvIGIbFlEmlZWGaCZDxOV2cfbpeHgB/8fpWKsiABv4vmK53E4v1kBXNxuTy43C50FFQFLCOBaSbwBwJomorP6yPgd4hLj8uNYSYdwk0l4xEJn5SKIx1yTtN0dNvGsiW2tFNhAxYSgZkq73S5dBRVoCoCKylRdBWEwDQSGKqCYSToTaWmkvqBmaaFZad8OBUnSEOoKroiQHU8Nj0eN0JITNPAQuL1uBGaSjxpOCEImoq0bRKJOEKRuDSHmJOp9wmO8hNsNJeLUHYOU6dOpSvcT3t3L7FYHMO0sWwLwxZowinREqTSjoRE1xTUVJm4oz51JPoKMpUvLlOKTRVVVVA1DWGnPhPXntkjhMDr9RIKhb6QMkFVVUpKStB1nebLzZnJLQ3Lsrhy5UrmQdDn81FSUoLLpXPlypXPbJ9MJikpKRlUVj6MYfxfEA6Heeedd64iZQSK4swJpaWlPPXUU9x3331MmzYNl8s1iLwEh0x84okniMfjvPvuu/zd3/0d2dnZJJNJ2tvbGTlyJH/+53/O5MmTM+dZsGABbW1trF69mn//939n1apVmKZJW1sb48eP5+mnn2bUqFFY5uBSdSEEFRUVPPHEE1y8eJH333+fmpoaHn/8cY4cPcKaNWtobm7OhBwkk0n27NnDoUOHEEIwZswYZsyYMUxefgXR19fHtm3b2LlzZ+a1tI9lUVER3/rWt1i2bBmzZs3C4/EMSV6GQiEefPBBopEom97cxD/+4z+Sk5OTIeXT4/rWW2/NkImzZ8/m6aef5sUXX+SnP/0pa9euRUpJe3s7lZWVPPnkk9x0002f+cBZUuyoLxsbGzMeSHl5ebS2ttLd3U1dXR179+4ddJyiKHzzm9/kpsk3DZOXXxFEo1F27drFgQMHMq8pioKu6xQUFLB48WKWLVvGnDlz8Pl8Q1ZDqKrK9OnT+c53vsMLL7zAiy++yJtvvomu64TDYaSU3H///Tz00EPk5+ejqipTp07lqaee4oVVL7Bq1So2btyIEILOzk4KCgp48sknU4EnQ6ssCgoKWLZsGY2Njbz33nusX7+e73//+9xzzz1Eo1F+85vfsH79et5++200TSMSiWDbNrfddhsrV65kzJgxN+yaDuOLIxaLUV9fz/HjxweUIWqaRl5eHvPnz2fp0qXMmzePgD9ALB4b1IfX6+X222+ns7OTtWvX8i//8i+sWrUKIQRdXV34/X6efPJJR9Hk8YKAcePG8fTTT/Pcc8+xdu1a3nvvPTRNo6uri0AgwLe//e0BgRmfRk5ODnfffTeNjY2sX7+etWvXUlpa+gf1Y5MyjrT7nAot4Uaow+rirzJ0DYJBCAQgHIa2NsmJEzbZOVA5QsHrhbFjFdraHZLzwgXHK3PaNIVxYwerJD/dX2uqv5wcRz35aQLTsSdzohG+qNDFMCRd3ZL+fsjJgaJikfHS9PmgIN9Zr0Yikp4eSbqASdegIB+KU+2zsgQ52YJLlySG6Sx5Q0GorhKcOStp73D+zcpyVJi2DUVFYgBhe70hhAuhOuQl0kCaXTfmRCkkEglOnjxJXV0dxcXF+Hw+tm/bzsKFC5k2bdp1PZeiKGR
nZ2eqwA4cOMDSJUvx+oZWeEopiUQiGa91RVEcQdpVAqOrnw/TZelSyi9dFaaqGhUVFdx88828+eab1NfXk5WVxa5du/B4PMydOxff1zvu/k8S14W8TCb6iff3AjZI2wkU8PrQXRpSBduycKkKBXluvN5CWlsj9PR0kZOdh8flQXO7CXjdqIqgPxbD63ITDAbx+RzyUte1FOmmowkNUg6blm1iGSa2ZWEkDbAlqqLiculY0kY1LVRVcXwdbRPbdsw8pC0xDRNscOsp1SEaPq8PVXGMwLt7+uiPxZwydVXDVh3VpbCc8nOEpD8Wx5aCUNCPtCxsG0zTxufz0x+NoAiBW9fIysrC7fKgayqaCtI20TQFt8uDy+VCVZ2HflVR8PsDKChYpolQwO32kJOXx9Qpk2i61Myxk2cdpamUSNtGKoBMJ29K0npKoTihQkgnYEikrpoQArfmwuV2Q0qdqigKpmHh5I1bXHNizzVCURSmTp1Kfn4+x44fo6npMsXFxZkHvtbWVo4dO4amaUyaNIlAIMCUKVPIz8/n+PHjNDU1UVxcnJnsWltbMyqkiRMn4vf7r1nePoxhfBqjRo3in//5n+nq6hq0+E4v1EOhEKWlpRQWFmZIHr/fz3e/+10efPBBampq8Pv9KIrC6NGj+cEPfsCyZcuIRCKZPjVNo7i4mJqamgFl53l5eaxYsYKZM2fS3d2d8dPUNI3S0tKMFYNlWTz77LPcd999mdfAIQemTJnCP/3TP9HW1kZZWRlZWVl885vfZPy48UMu6NIIBAJMmDDhul7PYfzfUFVVxT/8wz/w7LPPDlKfp8djMBikuLiYoqKijPrH5/PxxBNPsGDBAioqKjLK3qqqKp559hnuXHQnfX19mT7THsI1NTVkZX2iwsnOzmbZsmVMmjSJjo5OLMvMtC8sLKS6uppQKIRlWTz22GPcfvvtmTEHoKgK48eP5+///u+5fLmZwkKnjGj69On09vZ+5ucWQlBUVJQpER7GHw5lZWX89V//NStWrBg0BtMhKIFAIDMG04sGIQTTZ0znX//1X1FVNVNuGwqFWLRoETU1NRw7dowLFxpIJhPk5+czbtw4Jk6cSElJSUbxHgqFuOuuu6mqquLo0aM0NjYipaS8vJzJkyczduxYcnNzsW2bZcuWMXnyZAoLC8nNdRaXiqIwcuRI/uZv/obly5eTk5NDYWEhfn+Axx9/nJtn3szJUye5dOkShmGQk5PDyJEjmTBhAlVVVcOLoD8wCgsLefbZZ7nnnnsG+Uunq30CgUDGdzJ9L3S5XNx5551UV1dngs3S88rDDz/M5MmTOXHiBJcvXwbIKHLHjh3rhEWkiI9AIMC8efMoLS3lyJEjXLhwIbPJPnnyZMaPH09+fj62bbNw4UJGjBhBdnZ2xoZAURQqKir47ne/y1133UUgEBhSdQnO/XvWrFn86Ec/wufz3bjQPTuONFNJx4ob1GFvuK8qhIC8PMHChSrRqFMWvWOHzdFjEq/XpiBf4Pc7asj8PFBVx3dSVSE7C7xeMai/3FzBwgVOf42pEu6jxyQer01BgdPf1e11Hfw+kLbjfxntl3xa8iKlo7a0baeE3LbBSDp/Ov6cn7RVFNBdqZwMyznu03+naakNCgHqp1gLr1dQXi7IzoLeXidMqLwMmi5LPB6oqrpxJeNASnmZCumUJtLsvmGnsm2bixcvsmbNGjo7O3n22WcxTZOXX36Zl19+mfLy8s9NA/8yyM7O5tZbb2XLli3s2LGD+gP1zJ07d8i2fX19bNy4kfXr13PXXXexfPlyioqKUBSF9vZ2+vv7BwicDMOgra0Ny7K+tHJUCEccMnv2bN5++2127tyZUWHW1NQwffr0a7bCG8ZXH9eFvDSScaLRCC5NQ9McotHrczk+topwyD5pI5BkBV14XblEIgb9/SaKMAkF/GQHAgQDfmwpQAp8Ph+6rmeUglLa6C6X42dpg7RtdC0VmqMojpoxnsC2LFBEyjReoCoquqqRwMC2LISqoipKKmXbMSfWNM1Z/GlO+qDuduFyJ7Bty9nN1VO7+EJgSwtp2STiSRJJE8sCn8eDKhxZtK6q2IaBx6Xj83mdB6W8XNxuH5ZloGrCKQuXFqqAeH8UVVVwZwfJyckiOzuE26UjpZNmrrncBIJZlJeXMWPGNC43t9IXiSAtBVu1QXECe9RU4plEIoTjFWraNqa00ISGqrnweN2ZayqEimlbSCGxbIklLSwBuqZww7aorsLUqVOZO3cumzZtYtWq5wHJmDFjaGtrY926dRw4cICZM2cyY8YMfD5fpv3GjRtZtWoVAGPHjqW9vZ21a9dSX1/PjBkzmDlzJj6fj56enkHnTKeyffzxxxkF26VLl5BS8stf/pI33ngDRVGZOXMGCxcuHLBwH8afHrKysrj11luv+Thd15k4cSITP+UQrqoqlZWVX5iEEUJkvLI+D5qmMWHChCHJRo/Hk/HETKOmpma4jOKPEMFgkBkzZlzzcemUyk+rxlRVpays7AuHPwghyM3NzRBBn3e+0aNHZwiqq+F2uxk/fvwApdFQ7Ybx1YTf72fKlClMmTLlmo/Nz88f0v8yFAoxdepUxo4dSyQSwbIsvF4vfr9/kBpDCEF2dhYzZ85kwoT/z957Bdl13eeev5X23id1bqC7ASJ1IxEgEUki0RQVrGzZHs216soeWzWWyxqPXq6DHkbzooepqTvzMCWVy1UqX7kk2Vcuq0oidSXKJhVIgTkAJEgEEiQyCAINdD5hp7XmYR00CTbAJERy/6paFM5ZZ/U6p/c5Z+9v/f/fdzMzM3XAUalUKJfLsyKnUoolS5bMemi+kSAIGBkZmZM2Pn/+fPr6+li/YT31en3WtqZarRbdHNcJpVLpkt91b4VSioULF86xMJBS0tfXx7Zt21i/fv1smN6bj6c3UqvVWLduHStWrJitzC2Xy1Sr1QuOv0t9thpjLnlsvhEhBPPmzZuTlH65cbaJy/3mkZBh0TZ+neH9H72HYxw7OjoE/f3+JwwdBw86nn/eceo1mJmBUglGRx379vs28FIE9TocOOBYutQxf76g1YKz5xxxy1GrCebN8/NFkePgyz4h/LX2fG/OaIki36qttG/LPnHcsXiRF0jPU284nnvW8tJLjpERwciIJAy9GJnnPqSnnROLtT6h3FrQem66+ZuvRt/8b6Wgr09w00LB3n2OI4cdNvdVpL29virzilpoC4mQZVARzmVXtPJyYmKCn/70pzz88MPcfffdfPGLX2RyYpJ9+/Zx//33s2bNGv7kT/7kotY775Vqtcr27du5/fbbefTRR/nWt76FMQG3337bBVWUk5OT/OxnP+Ob3/wmExMT/P7v/z61Wm12Q/HgwYO8/PLLs4ni1lrOnTvH008/Tblcnk0Jfy9EUcTatWtZsmQJzz77LD09PUxMTPC5z32u6NZ5n3JZ3tJZmuJyQSvLCIwiCCRK+ZZtgTxf9tf2VcyIAkWpt0LaKRFC09ndSa1cRiuNVAGgUMqgtPEPlRJL5j0ZrWtXSgJYcp2RxbEPnsktsbXgHIHR5F
lAajLyTBMag8tiH26TW4xSGKMJowglFUr6rR8nJEmW02y1sM6C8CciSimfxi19e3YSBExN13HWIXKL1AKjoNWawWhFKYooRSEdtSp9vb2EUQmXW9I0JrMZSjiMFLMBRN093fT191HpqGHCyJvSAiDQJqSrp5fFSxaxbt2tPLfnBdLUEgQKhPJenK6dGi59Cl0QhuTWoU1AuVwlKofoUCOFJE99anruwLrXW+6x4HLnv1WuMD09PbO7Rjt37uS5556jo6ODVqvF6dOn2bBhA1/5ylcYHh5GKUVPTw9f/vKXybKM3/zmN+zZs+eC8evXr+crX/kKI8MjlzTmnZmZ4b777mPnzp0kSUKe58zMzABwzz33+L+xlDQaX+SOO+4oxMuCgoKCgoIrjBB+w/qdVjZKKalWqxcNSPtteKPHV8EHh3f7dz8fSPlu03evS2wT2uIlIirEy+sM5+DMGcsvf2UZG3OsWCG5+25JtSKIY9+KDb4qUQiYnHTs3m155RXHwACsXCHZt99y8GVH/zOWD90lOXfO8YtfWsbOOZavkHz4bkm16udr6/ez7eFvJooEixcJ+vtgdBR2P+vo67OsXClRqp0evt/xm52Ws+d8peaqVb5itFLxlaAnX3XcmkAQwPQ0vPqqv+bs7PChQ63WO78GFQJqNcGSJYJ9+x1Hjzmmpr1IOjggGBi4ClV3IkCoGi4de8fiZZZlvPTSS9x7771MT0+TJAn79u1jamqKp556im984xsYY+jq6uIjH/kIN998M08++ST/9m//xsDAAH/8x3/MwoUL6e3t44/+6I948cUX+cEPfsDq1avZtm0bZ86c4Sc/+QmHDx/GWsupU6c4deoUxhi+9a1v0dfXRxRFrFu3js997nOXXOf5zegvf/nLjI2N8Ytf/ILTp0+zY8cObr75ZsrlMufOnWPXM7vY+fBOpqen+eIXv8jHP/5xyuUyq1at4qMf/Sg/+tGP+Pa3v81f/MVfsGTJEk6cOMEPf/hDnnnmGTZs2MBdd931nl9+pRQDAwPccccdfPvb3+anP/0pHR0d3HnnncXG4/uUyyJe5nmKw2KdIE5SHBCGAiUlvo3Zx8AooXw/s9BIFKVSiFSGUEnyJAFp0YEiiCKMCTFBiFS+KhLRzvlux5E753AuI00SlPDpUy73Ap61OQ6HMZaoFGGdJbd+Z8c530MtpULpAIHAtFO7TaDIraXRTEjSBKkkJghQaQp4Tw5jDKHRuMhSLZeolsq4NEcKR6VWZrqeo7QmMAZtFGGoqVTKdHf3IJxgpt5AKjBGImyKEIJqrUp3bzflSpmwVEYFIUJpvFrr/SnDqMTA0BALFg5x/MQJzo6N45xASQ3Oed9P4ZDOi56BMuSZw5gyQVBCSEmWOer1KaIwQrUT0m3my/+F0mjpg4TeqetlrVbja1/7Gn/+53/OypUr31VLlZSSNWvW8Ld/+3d86lOfYv/+/UxMTFCpVBgZGWHNmjWMjIzMznl+/N/8zd/yyU9+kgMHDjA+Pk6lUmF4eJi1a9f68aXy7Nr+5m/+hi996UusWrWKKPKC8F//9V/zp3/6p3Na3s5z3ivw7aqLCgoKCgoKCgquNtNTkzz33JO8/NLei94vhGD5yrVs2rydMIwYOzfKs7uf4NjRly86XmnNqtXr2LhpK0ppjhw5yDNPPcL01MRFxwdhxJq1G7l13W1FS94NjnNNXN729y8qL687hPCCoVZw/DiMj1tGRx21qhcPjx/37dGDgz6U5sCLll27LcbApo2SW27xVY+/ftCyZ49l/jzvAWk0HD8B4xOWs6OOWseF8w0NelHwzWgNC28SbN4s+fWD3lPz3p/kDA1ZymVfrXnihOPsOZg/D1av9qnieQ4rlgt2P+vYs8eiJHR1w5EjjkOHHLUarFzlxcvzYuY7JQx96nl3N7z2GkxOOaoVWLrswrb3K4WQIUJ34pJRXD7Zvqh+67CZLMs4fPgw3/ve9xgdHZ31f2w0Guzdu5cjR47MXo8ODg5SKpX4/ve/z8TEBF/96lfZuHEjUkrK5RJ33HEHf/AHf8B3vvMdvvvd77J48WLq9To//elPeeSRR3DOkWUZ9XodIQQ//OEPZzdsPv/5z7+leAmvh92VSiV+8IMf8Ktf/Yr9+/fT1dWFMYZWq0UcxwwPD/OXf/mXfOYzn2FwcBAhBAMDA/zZn/0ZzWaTX/7ylzz77LPUajUajQbj4+Ns3LiJv/qr/+1tK9HfjlqtxpYtW/j+97/PqVOn+OQnP1kE7L2PuSzipXM5zuWAD4ex1mEzjTC+5VsiEc4LmbmVpGmOyC2ZtCiTkKdJu2U7xOFFxXK5ShSVUDrACV9u7v0bxWwreZanSKHQUoFzuHZATxw3fctPEKKU8kJnDhKBQ6BNgFQaIWT7cYC0NBoNpmZmiJOs7e1hCAKFEJI4jsnzFJxFCkkQGfp7Ounr6salGa1WTFSOiCIvtuogpFarEAUarQXVWoVqtYP5QvqKVGdxeYpSilK5RFSKMIHBhBHaGEwQIKRsV0H6GJ1qrZM1t6yl0YzZufNRGo2YKAhB2NlqSedse+0BM/UmeeYT08mh3mxwdmyMnu5uOqsVFI4stwgE1gmsA/su/C6DIJgtAX8vaK0ZGRlm8eJF7NixgyRO0EZTq9UIw3DOSfG7GW+MYf369Rc8/ryHUEFBQUFBQUHBjcjRo6/w83u/z/TLO1nWM7fT5OUxy6N9H+Xo8YX0943wysvPsWvn39MRv8iCjrkX1S+dszw69PucfHWEanUeTzzxKAcf/7+4pc+iLnLtf3AcTp76z6xafSthODdZu+AGwra8eCkEQpYR8uJhHAXXhvO+lFu2SFqx5cUXHXv2OJTy1YWlEtx6i2D9esHoqOOJJxwzdbhlrWDdOklPj2DNGsmx444DBxxPP23p6pLcsUXSalkOvOjY87xDacgzP98tawWbNyuii7y1hYCOmuC2zZIggKeesrx6Ck695tDtNWkNK0b8mlev9inovb2CHTsk1lr2H/DemkHgW8YrFbhts2TTRj/nu0UpP//iRYKTrzpcCr09giWLJe8gsPq3RwSItlesszHOTs3++1IYE7Bp0ya++c1vEsfxJceVy2VGRkbo6Ojgr/7qr8jSjFWrV1Eq+ffpeXupL33pS2zbto1qtUpnZxednZ187WtfY+zcGO4S1/VaaxYuWHjR+y54ekLQ2dnJXXfdxfDwMF/4whc4ePAgp06dIsuy2VCfFStWsHTpUrq7u2cfGwQBt9xyC1/72tf49Kc/zYEDB5icnKRWqzEyMsLatWsZHh6etYjp6+vjq1/9Kl/4whfmWAqVSiU+/vGPs3TpUubNmzfr6w7ekmj79u1897vfJY5jhoaGrkgCe8H1weWpvMwEeeaTrqWwZFYgHGgESgkcEpwkS3KSuEWeWYT1zd8qUARG+8o4ndKME1qtJnErptrRQ7WjgzCKQKrZ1mYhBc76lnRtDFKcF/l85eX5tG9nQeUaKRWlsEQSp77FWkqUiTA6QApJmsW0mnWmp6ewuUVJQWAC79cpBFEQMDMtyHIFOGrVM
j2dHZQDQ61SoVVvEASGSkeNjlDB3oUAACAASURBVO4OnM1BSaqVMj093XR1dlKtVihVyyhlSJIUJSVhYNBaYoyvspRKEwQlTGgwJkBKg5Ca8yE8xgQMDg2yZesWjAnZvXs3adLAp4f7JHHnfHWpUoo0S4njJqVqCeX/Gkihmak3CJVi7Phx8laLWn8fsloldw4n3LuPG/8tMca8qw+Zdzu+oKCgoKCgoOD9wEx9BlV/lS3dU2xZOLd1+ddpg/949Sy7d9fp63ccPTQJY8e5a/EMN8+bq0j8rFHnl0fHeGZ3i1rNsW/fOL2tE3x2qBd9EfXyx9NTjI2fmxOaU3Dj4T0vZ0AYhKq9bcXY5cDmCfX6KPX6NCdOCF566UIB3hhDT0/Pe7ZuqtfrvPbaa5c8PiuVCvPnz79kKvxbkWUZExMTjI1d2B5sLRw7ZpmcElQqncDlC04JAli6VNLRIRi/09HOygPhw2+6uwXd3YJm0/GpT0qy3N/W2yuQ0qd1f/Yziu3bfFXl/PmCMBR0fFaw443ztX/X+cdeqqhaSi+o3nG7ZHiZoF73z/88WkNHh6Cv7/VUcWN8evmnPy3YutWRtX+nEF4w7e0VdHb63zkwIPn85yFJoKvLV5QK4f//Rz6q2Hybr6ysVl9fYGen4KMfVaxf74W6atU/z6uBEAHCnA/tSXDpubcVL5WSDAwMMDAw8I5/z7Zt2y56u9b6op6+W7dufcdzvxOiKGJkZISlS5eybds26vU6zjmCIJgtIroYQRCwfPlylixZwu/8zu8Qx/ElH3Mxn/7zaK1ZtGgRixYtmnOflJKenh7uvvvu3/6JFlz3XJ7AnjgniTMcPpRHCEFCThA6dABKSuJWk6ThQ3O0Uu28cFA48tySplm7tTcljWOyxJJlljhpYsIAbQKCIMDoAC19NSRtr0YhJToMiIRDaYUyijRNSJMUZS3lSgWjvRCYW0eapaRpSrPRIokTsswLpkoKtApIsxzrLMr5EJ9SOaRaimjUpwmjkFq1Sm93F0r4F3DiXAOhFFEUUO2oEQYBmc3bKVhd1Do6qFQrROUyQmpKpapvc3c5QnjX4jAKCcIS2oQoLZHK+4EiFKB8e3ugEFIwMDCf9RvWEccJ+/Y/Txo3sDZDCrACbPs1yfOMOI3bbf0BURgSmJDpmUkCmzFz9hwqzWiFAaVaGSfg6mxTFRQUFBQUFBQUvFucg0jBYM2wsMPMuX9e1VCelgjhvfCkgEogWNBx8fF9FU3QkLNjhYBaoFjYadByrgDQU9JMFO3iNz4ux2XjuGwaocoIfXWKAs6cfoZHH/p/0Exx4oDmoZ9deN3RSC3rbr+Tr3/96+/almBqaop7772X//69f6K/NreMbybOKHfN50t/9qd85GO/+67X/uKB/fz9N/8/jh09TE/l9fkd3m/ylaM5pxdv4Hc/9n8Aly+5PQh8a/jg4KVfD2MunqwdBDA0JBgauvC+t5vvrZDSi4dvFBDfDmO8oPh2omK5DCMjc69FgwBuWujDeS4298DV8rh8MzJCGi9COhtjk2PIaPjqr+MqoZSio6PjguTwd0JReFRwubgs4mWrlRK3EhAOrSWBCdEq8CE4uaZRT2m1MmxmUdAWyQRKadr5N6SpxWifIG6znJnpKR8ogyPMApRWxFL59HDtqy2F9KnZQkikdDgBygSUpCZyzs+NxTlHq5lgrUO4nDzLSJoN0laMEIJAKzo7O4jjhCROkFLiJEghCY1BGUVUCogihVaS7u5uqpUKwlrqk5PgHFkSk6UxYdBNb18PJvDt35VqjXK1itYa6wQSSRCESKVQyrfZCwFKKoQyIDUOiXXSp5g5ia8pBRBtr05NT18v6zasRyrBoUMv0WpMk8QtrLVY64vEnXAkWUKcxJggJDQRHZUKjZkpsjSn2teHdJZSTxdKB/5529lfVlBQUFBQUFBQcAMhpfe1u+vDkpHlkscelrx0/6Uv6qX0gRef/IRk3nyJs5KJh67igguuCS6fwqYnweUIXUMGb99CejloNUfpyx/hP632QvubBcpHTs+wb28P1tpLBnBeimazyenjh+gY3cXv9c+tSj4ZJ+x+tZujR95bQMj42DlGX3ycbcEpVkQXtti70PFM1uKhqYg4aXI5xcuC6xchy8hgcTs+vYmNDwJFBWBBwZXisoiXwimkUAjpMNoQhSWsk2SpTyKvz8TeB0MpwtAgnEM6L8xJLLl1vspSZUShQymJQKOUJjQR5XIVpXU7Fdsra1IpwqjU9q70X3xSgkRhrSNPM5I0Jo4bxHGLVrNJlqbkeU6Wpigh6eioYdthN7m1lCpV0laLZrOBxfp286hEGEQoKXFEaCmplMtoo2nO1BFSMW/eAEma4KwjTVK01nR0dhKEZUq1GkFUalekKkrlKlIa0iwhtxlStr+YhcQhyJ3DWl+tKvFp4s45HNL7WTqJlJowKjN/YIA8zyhFAceOHGb07GnipOnHIWZfqzhJqLRr+ivliI5aFS0Fpd5upBQIrcmdQCIQ7nWptKCgoKCgoKCg4MZBCCiXBAsWCpYtlRw6KDgyt+DygvG1qmDRIsnQkKS3Fyav3nILrhEuO4eLjwEgVCcyWHK1fjM9Jdg8VGKgOvcy9LWpmNfe68zOoSQs6jLcNji3jXVeSfDaGYG179HywEGHcaybH7F2/tz5G4nlkRPSh8MWfDAQBmHmIXQ3Lpskb73ir7+L6vSCgivCZREvVaCRymCdJUkcrVadLBNEQUTSSsgyh5IGrRVSCLRS5Pa8R6UX1SSOPM9I85zABERh2Yf2lCqEYRltDD5925LnOQ7ILNTKJbRup5jjyOKU3GY+eEYIhNSEYYSWhjzLyPPce3NYR5qnWJvh8KJjmqUgoBRFWOdLo4NAUyqFaGUAgZKQWUvcaBIEEd2dvZSjMgiIkxbO5TQbMeWqJawEKFNCqhDpCymxWKzNQEi0CZFSELcrJslzlA4wYQktfcu4E23fEQdC+B1xqSTK+hb3SrWDnv4BWnGCFYLx8bO0GnXyLCHPHc5a0jgmzTKUztBK0dPdjZAglUAI5ysunfBBQ28QgwsKCgoKCgoKCgoK3l+4fAybnPTXSroXEQxd6yUVFNyQCFVDRovIp5/FpaPY7AzSzL/WyyooeF9yWcTLiYkZTp+ZxObtqkgpCYMIl0mfzG0kRgcYrTHGoJVE40NllNSkSUIqHFmakmUZkZSEUYgOJFJ70U9IiZICJRVIkEKBgDTzj5FCYp1P20Z4Y1epBEpL30ade39Ma51vU49j0iwhTWPiVhMpJUpKtNHkuSWQGqMNRiuk9AnhWeoDh6JySGd3Dx0dndRKVcqlMkEYoJUiSVrErRgnJE4oH7qjNFJ5g15rLVnWwuY+GR1nvWl0u9LSBGWMCZmuNwiMIQwj7+/Zbn93SNLMkiQJNs/JM4u1EKcO6zSlqIZNUzJnEUKRZw6b58RJTBCWsFjC0KC0RBvfui4EBCYidQqpFFc9saegoKCgoKCgoKCg4CpgselZbDKKkCVEsKBIGi8oeI8IWUNGy714mdexrZcL8bKg4ApxWcTL
6emM6cnUVzQKR6kUUo40Nocsy+moRVQrZcB7O2opcTgfTOMsVkjS3CGVRGtNGPgqyySNkXGTsFzGhBGlKEJLhW1XX1qbkVtfvekQSKlRMkRriXM5ee6rKnHMCqP5+Yiz8+aOziFKkjRNEUmC1IYkTQlNQJ7lZLkjt74FXeBTsyqVDsrlKmGpTLmjgzAIKJfLhGEAQvp2BKUoVzsIo7IXBF3uhcvct6m3Wk2SJAYsQRihdUhndyfdPfOQ0pDno9TrE9g8JQzLCKVACLIsp9FoMHrmDCdPnCBJUyYmJslzy8T4tE8ftw4shCbEOp/AnsQxrpYRBCVKYUAQanAO53LKpRJpZmfT3woKCgoKCgoKCq4/pJQcnhTsOzzNb45nKCkuyFp8cTSmtjLzm/3t8QfGJMeOTrG4J5kz374zMQOVDNmexFrHU6cy/t8npmYDe1x78z+3sO9Mi5EFdnZ8wY2Hy2dw6UmwCSLsRoaLr/WSCgpuWISqosIRUgDbwLZehNr2a72sgoL3JZdFvHQIbLtaTyKQwgfrxM2YKCwRGENgFNaBRKKUwmGxedb2sYQwCHFSIlDYHLQKCMIK1WoXpVKNIAxRKkAphZaCNE2wua/E9HP4dG6lFAjvm6mNwTmLtRYpBFIKrPIp3MJpH5SjFWma+jXrgNxmlIXwLexJCk7gbI60ChMYOqo1SqUySimiMERqidQKZTQyMGgTYEyIkBIThGgTYG1Oq5mQpTmtuIXNU6xNvbjqLA5J7jTalCiVaxhtUFqQZS3yPCPLMmyWkuU5AoWzjjhucW5sjFdfPUWj3mBmpk6z2STPE6yNsXkKwtGIG5QqAR0dVWrVEtVqhTDwBtk+YKhEmmbYHAQKbE7heVlQUFBQUFBQcP2xdOlK1m/5Sx5s3MXpbsXSpYL+/tc7Zvpyx+JlN7Nw4SIAVt+8gf/pT/+W068eIzRzBcd5TjCych3d3T4Jdtu2u9Hy/4asNTsmSeDoUcuxY9B/q2bb9s0Ewdw054IbA5edw573u9RdyHDJtV1QwQeSVgtOnLC8esoRRYIliwXz5r3+WeacT3E/eNBx4qQjjh3lkmDhQsHIyMXT1a8JwiDMfITp9JWX8UF/KX2dLK+g4P3E5fG81P4H60NfjNIEOqTSXaJcCilFEQLQWqBFgBSCNPMCm80szoI2AcoYpDIobQiCgCgqgRO0Wi2yNCfWMQKBUAKwSCFxNkMgkQKU9qKbVr613GFxucPh0IFPDc/TDJspcm2QOiVNM4RSGGPIshyLb1FPkxZh0K6WzDLCKCQIQsIoxLocl+fgLEI4cpeRO+vDg6RGaoMQkjhOmJ6pk8QtGtMzbU9MX8mptMQYQxiEWCtIUl+BqZREGUlZVZk/cBMT42P++WcpcStmeqpOK/Yp6V3dPYyeOcfE+Ks06g2SLGVyZoowElRCRSkyzL9piAUDg5SjEkZpdOgFYCF8lWu9PgMOkiTHKkEpcMWH7UX4l3/5F5IkYfPmzaxYsYIwnGvUXVBwo/LEE0/w/PPPs2TJEjZu3Eh3d3fhfVtwzXjuuefYtWsXAwMDbNq0if7+/uJ4LLiqTE9P02w26ejoIIqia72cC+ju7mXlqo9x4uQ25s0TbN0iWb789feHEIIoKlEu+7Tl+QNDfOx3f49Wq3XR+aSURKUSYeif5/Llq1i4cBFpms6OaTTgySct1lgWLdKsXFl5PXCy4Ipw+PBhnn/+eUZGRli+fDnGvEXq0rvEpmew8SEAhO5GBjddtrnf9nc7y+nplMeP24sG9hw828JV3nsRRT3O2X+6wePH5wr1xyZTDtcjht/j3M45ztZznjhRZyaZG/rzwukm9WRudfONgLVw8qTjyadyuroE27YqSlfISSDPYXTU8fTTlr37LNPT0NsriCI5K15aC2fOOH7zm5x9+x0zM/5xWjs6OmD1KsFdd6kLNm6uHQKhOpDhIvKZ57HJa9jkODK8eu+r9xNZljE+Po5Sip6enmu9nILrjMsiXtZqIVEoSBPXroBUhFFIKTSUIo2SXixTyguLeZaDc973se1BiZS04pQgsiRpwsz0FK1WC21KmDBCSoUJorZAKNFGtv0zNYFRBCZEG+3FTSG896UDKQRCCv97rEMgUdogXA5CoLRue0fmvnVcabI8Q0pfcWmdI899erfW2t+e59CeM44TDJIkz4isw2U5edYkSTOazQbT01OkaQzO4hworZHKJ63nVpDlAqV9QnoQhEilsDiElFRqHTQaTR5++GEmxiYohRFJkhG3Ys6eO0dU9lWt3V3dvuoyywiDgEZ9BpcKertLDPZ3Ua0EBEYh26FGRmuazRZpGiOlJrM50mgc/v6CuezcuZOdO3fS1dXFypUrueOOO9ixYwcjIyOFkFlww3Po0CG+973vMT09zaJFi9i4cSPbtm1j06ZNhZBZcNU5fvw4//qv/8prr73G4sWLWb9+Pdu3b2fTpk309vYWx2PBFefIkSP88z//M2fOnOGOO+5g+/btLF++/LoQMoWQBKZCqVSmXIauLklv76VbuKVUVCo1KpXaO5rfmABjLqyqjCJHrcNSKjlKJQiComX8SnP48GH+/u//nlarxcqVK9myZQtbt25lZGTktxIyXT6Jbe71fpe6igyHEbJ8GVf+1tRqN5F3foz/cfIo/d2Srq4LP89bAyG337b5PdkSVKtVVqxazTM3refBxM65v6kdvSuGWXXzmve09vmDg9y85SO8sn8Xo8mFFYL1uuPwmGLewvWUonf2XrueiGM4csSye7djZASy265M6WCW+SruX/3acuSIo9mCuAVKOZL49XHNpuPAAcszuxxZBiuWC4YWCE6edBx40fHcHkd/v2XHDoW6DvZRhO5GldeRzzyPS8+RTT9IEP7JtV7WDUmj0eDf//3f+eEPf8iGDRvYvn07mzdvLoTMAuAyiZe9fRVmZjqZmY5p1FNQGZVamchojBJIpA+dEdZXQ55PGFcGREaWW6zzid8T4xO0Wk2q1arfCY6EFw+lTxTXSLSQaB0QBCFR6CsitdKodpq5tQ4paHtjxuS53z0W0iCNFzdt24PSOYHNc7IkJbcWlCDLM5JWTJqkCCWRUiHw4p+UAmszlBJkzmGTlNz5KHGbCZyDPM9J05wkTcnzFCF8OJHWBq01QkmElID0HpjCok2IMgakAIRvsZeSjs4Oenp6efKxJzlz+gyVUgWlNBNTkyij26+j8iKrtbSaMVhBbqFULqMkpEkLl2cYExCYEjbPUFogZECWW7CCNEtRxiCcLPTLi9BsNjl69Cj79u3j2Wef5cEHH+T73//+7Anljh07GB4eLtqoCm5IsizjzJkzvPjii7zwwgs8/vjj/PjHP2bx4sWvC5kbN9HdUwiZBVeePM85e/Yse/bsYd++fTz++OPce++9LFmyZPZEdsOGDYWQWXDFmJmZ4bnnnmPnzp08+OCDLFiwYHbjcvv27dd043J8bJQXnn+A3U8+Q7UqOHJQ0Nv7xveB4Oa1m7jr7k9SqdR47dRxHt75AIde3o+Uc98vUirWbdjKnXd9giAI2L/vWR769c+
[base64 image data truncated]\">\n \n<br>\n<center><figcaption><b>Figure 4.</b> DCNN with Atrous Convolutions & Atrous Spatial Pyramid Pooling</figcaption></center>\n</figure>", "_____no_output_____" ], [ "## Building a Deeplab V3 model\n", "_____no_output_____" ], [ "The step-by-step process of building a DeepLab network is given below (a minimal ASPP sketch follows this record):\n\n- Features are extracted from the backbone network (VGG, DenseNet, ResNet)\n- To control the size of the feature map, atrous convolution is used in the last few blocks of the backbone\n- On top of the features extracted from the backbone, the ASPP network is added to classify each pixel into its class\n- The output of the ASPP network is passed through a 1 x 1 convolution and upsampled to the original image size, giving the final segmentation mask\n", "_____no_output_____" ], [ "### References:", "_____no_output_____" ], [ "[1] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, A. Yuille. Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs. arXiv:1412.7062, 2016.\n\n[2] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, A. Yuille. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. arXiv:1606.00915, 2017.\n\n[3] L. Chen, G. Papandreou, F. Schroff, H. Adam. Rethinking Atrous Convolution for Semantic Image Segmentation. arXiv:1706.05587, 2017.\n\n[4] Sik-Ho Tsang. Review: DilatedNet — Dilated Convolution (Semantic Segmentation). https://towardsdatascience.com/review-dilated-convolution-semantic-segmentation-9d5a5bd768f5. Accessed 10 November 2019.\n\n[5] Beeren Sahu. The Evolution of Deeplab for Semantic Segmentation. https://towardsdatascience.com/the-evolution-of-deeplab-for-semantic-segmentation-95082b025571. Accessed 21 February 2020.\n\n[6] Sik-Ho Tsang. Review: DeepLabv3 — Atrous Convolution (Semantic Segmentation). https://towardsdatascience.com/review-deeplabv3-atrous-convolution-semantic-segmentation-6d818bfd1d74. Accessed 21 February 2020.\n\n[7] Saurabh Pal. Semantic Segmentation: Introduction to the Deep Learning Technique Behind Google Pixel’s Camera!. https://www.analyticsvidhya.com/blog/2019/02/tutorial-semantic-segmentation-google-deeplab/. Accessed 21 February 2020.\n", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
cb1308719811187047cfe1472775fbcb70169bd8
1,807
ipynb
Jupyter Notebook
HackerRank/Contest(Asia Specific)/IV/Untitled.ipynb
Dhaneshgupta1027/Python
12193d689cc49d3198ea6fee3f7f7d37b8e59175
[ "MIT" ]
37
2019-04-03T07:19:57.000Z
2022-01-09T06:18:41.000Z
HackerRank/Contest(Asia Specific)/IV/Untitled.ipynb
Dhaneshgupta1027/Python
12193d689cc49d3198ea6fee3f7f7d37b8e59175
[ "MIT" ]
16
2020-08-11T08:09:42.000Z
2021-10-30T17:40:48.000Z
HackerRank/Contest(Asia Specific)/IV/Untitled.ipynb
Dhaneshgupta1027/Python
12193d689cc49d3198ea6fee3f7f7d37b8e59175
[ "MIT" ]
130
2019-10-02T14:40:20.000Z
2022-01-26T17:38:26.000Z
15.577586
36
0.442723
[ [ [ "n=b", "_____no_output_____" ] ], [ [ "# Valid Binary String", "_____no_output_____" ] ], [ [ "s=input()\nn=int(input())\nprint(s.count('0'*(n)))", " 00100\n 2\n" ], [ "'100100000010'.find('000',7)", "_____no_output_____" ], [ "17%3", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
cb131cb718e9d7af918ceae9ef99fef954d86338
19,036
ipynb
Jupyter Notebook
docs/validation/MDT Validation Notebook.ipynb
coderxio/medication-diversification
770b827892fc05ab9a78a575e7d98124d118bd5b
[ "Apache-2.0" ]
2
2021-09-08T00:27:46.000Z
2021-11-08T12:26:18.000Z
docs/validation/MDT Validation Notebook.ipynb
coderxio/medication-diversification
770b827892fc05ab9a78a575e7d98124d118bd5b
[ "Apache-2.0" ]
null
null
null
docs/validation/MDT Validation Notebook.ipynb
coderxio/medication-diversification
770b827892fc05ab9a78a575e7d98124d118bd5b
[ "Apache-2.0" ]
1
2021-10-12T04:21:07.000Z
2021-10-12T04:21:07.000Z
47.118812
177
0.34088
[ [ [ "# MDT Validation Notebook\r\n\r\nValidated on Synthea +MDT population vs MEPS for Pediatric Asthma", "_____no_output_____" ] ], [ [ "import pandas as pd\r\nimport datetime as dt\r\nimport numpy as np\r\nfrom scipy.stats import chi2_contingency", "_____no_output_____" ] ], [ [ "# Grab medication RXCUI of interest\r\n\r\nGrabs the MEPS product RXCUI lists for filtering of Synthea to medications of interest. \r\nPath to this will be MDT module - log - rxcui_ndc_df_output.csv", "_____no_output_____" ] ], [ [ "rxcui_df = pd.read_csv(r\"\") # MDT produced medication list\r\nrxcui_df = rxcui_df[['medication_product_name','medication_product_rxcui']].drop_duplicates()\r\nrxcui_df['medication_product_rxcui'] = rxcui_df['medication_product_rxcui'].astype(int)", "_____no_output_____" ] ], [ [ "# Read Synthea Population\r\nReads Synthea Medication file and filters on medications of interest\r\n\r\nThe path for this will be synthea -> output -> csv -> medications.csv ", "_____no_output_____" ] ], [ [ "col_list = ['START','PATIENT','CODE']\r\n\r\nsyn_med_df = pd.DataFrame(columns = ['START','PATIENT','CODE','medication_product_rxcui','medication_product_name'])\r\n\r\nfor x in pd.read_csv(r\"\", usecols=col_list, chunksize=100000):\r\n x['CODE'] = x['CODE'].astype(int)\r\n temp_df = x.merge(rxcui_df, how=\"inner\", left_on='CODE', right_on='medication_product_rxcui')\r\n syn_med_df = syn_med_df.append(temp_df)", "_____no_output_____" ] ], [ [ "# Synthea Patient Population Filtering\r\n\r\nReads and merges Synthea patient data to allow for patient management.\r\nThe path for this will be synthea -> output -> csv -> patients.csv\r\n\r\nThis step can be skipped if not filtering by patient. For the pediatic use case we limited to patients who received medications when they were < 6 years of age", "_____no_output_____" ] ], [ [ "syn_pat_df = pd.read_csv(r\"\")\r\nsyn_pat_df = syn_pat_df.merge(syn_med_df, how='inner', left_on='Id', right_on='PATIENT')\r\n\r\nsyn_pat_df['START'] = pd.to_datetime(syn_pat_df['START']).dt.date\r\nsyn_pat_df['BIRTHDATE'] = pd.to_datetime(syn_pat_df['BIRTHDATE']).dt.date\r\nsyn_pat_df['age_in_days'] = (syn_pat_df['START'] - syn_pat_df['BIRTHDATE']).dt.days\r\n\r\nsyn_med_df = syn_pat_df[syn_pat_df['age_in_days'] < 2191]", "_____no_output_____" ] ], [ [ "# Synthea distributions\r\nGets total patient counts and medication distributions from Synthea population", "_____no_output_____" ] ], [ [ "syn_med_df = syn_med_df.groupby(['medication_product_name']).agg(patient_count=('CODE','count')).reset_index()\r\ntotal_patients = syn_med_df['patient_count'].sum()\r\nsyn_med_df['percent'] = syn_med_df['patient_count']/total_patients\r\nsyn_med_df", "_____no_output_____" ] ], [ [ "# MEPS Expected\r\n\r\ngenerates the expected MEPS patient counts for chi squared goodness of fit test\r\n\r\nPath to file will be in you MDT module - log - validation_df.csv", "_____no_output_____" ] ], [ [ "meps_df = pd.read_csv(r\"\")\r\nmeps_df = meps_df[meps_df['age'] == '0-5'][['medication_product_name','validation_percent_product_patients']]\r\nmeps_df['patient_count'] = meps_df['validation_percent_product_patients'] * total_patients\r\nmeps_df['patient_count'] = meps_df['patient_count'].round(0)\r\nmeps_df", "_____no_output_____" ] ], [ [ "# Run Chi Squared\r\n\r\nRuns chi squared test for two different populations\r\nTake the values for patient count from syn_med_df and meps_df for this.\r\n\r\nNumbers used are for the pediatric asthma use case of Synthea +MDT vs MEPS", "_____no_output_____" ] ], [ [ 
"obs = np.array([[203, 216],\r\n [977, 979],\r\n [513, 489],\r\n [1819, 1836],\r\n [1, 0],\r\n [2378, 2332],\r\n [1070, 1093]])\r\n\r\n\r\nchi2, p, df, ob = chi2_contingency(obs)\r\nprint(f\"\"\"X2 = {chi2}\r\np-value = {p}\r\ndegrees of freedom = {df}\r\nobservatrions = {ob}\"\"\")", "X2 = 2.7347252762386036\np-value = 0.8413287112519282\ndegrees of freedom = 6\nobservatrions = [[2.09741047e+02 2.09258953e+02]\n [9.79125270e+02 9.76874730e+02]\n [5.01576442e+02 5.00423558e+02]\n [1.82960269e+03 1.82539731e+03]\n [5.00575291e-01 4.99424709e-01]\n [2.35770962e+03 2.35229038e+03]\n [1.08274435e+03 1.08025565e+03]]\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb133927555a7467d6b675669a513932ffad6db0
8,998
ipynb
Jupyter Notebook
classes/pt/operators.ipynb
vauxgomes/python-course
a7c4eadfd50b1b03f3004a2d25dc9a7e8b82c307
[ "MIT" ]
null
null
null
classes/pt/operators.ipynb
vauxgomes/python-course
a7c4eadfd50b1b03f3004a2d25dc9a7e8b82c307
[ "MIT" ]
null
null
null
classes/pt/operators.ipynb
vauxgomes/python-course
a7c4eadfd50b1b03f3004a2d25dc9a7e8b82c307
[ "MIT" ]
null
null
null
20.403628
250
0.473216
[ [ [ "---\n\n![Header](img/operators-header.png)\n\n---\n\n# Operadores\n\nOs operadores são usados para realizar operações sobre valores e variáveis. Os operadores manipulam e retornam valores de acordo com sua funcionalidade. Eles podem ser representados por palavras reservadas ou caracteres especiais (símbolos).\n\n - [Operadores Aritiméticos](#Operadores-Aritméticos)\n - [Operadores de Comparação](#Operadores-de-Comparação)\n - [Operadores de Atribuição](#Operadores-de-Atribuição)\n - [Operadores Lógicos](#Operadores-Lógicos)\n\n## Operadores Aritméticos\nEstes operadores são representados por símbolos e realizam calculos aritméticos como adição, subtração, multiplicação, etc. Segue a lista de operadores aritméticos:\n \n | Operador | Símbolo |\n |--|--|\n | Adição | + |\n | Subtração | - |\n | Multiplicação | * |\n | Divisão | / |\n | Divisão inteira | // |\n | Módulo | % |\n | Exponenciação | ** |\n \n [← Operadores](#Operadores)\n \n #### Exemplos", "_____no_output_____" ] ], [ [ "# Variáveis\na, b = 10, 3", "_____no_output_____" ], [ "# Adição\nprint(a + b)", "_____no_output_____" ], [ "# Subtração\nprint(a - b)", "_____no_output_____" ], [ "# Multiplicação\nprint(a * b)", "_____no_output_____" ], [ "# Divisão\nprint(a / b)", "_____no_output_____" ], [ "# Divisão inteira\nprint(a // b)", "_____no_output_____" ], [ "# Módulo\nprint(a % 2)\nprint(a % 3)", "_____no_output_____" ], [ "# Exponenciação\nprint(a**2)\nprint(a**4)\nprint(2**4)", "_____no_output_____" ] ], [ [ "## Operadores de Comparação\n\nEstes operadores comparam os valores em cada lado do operando e determinam a relação entre eles. Eles também são conhecidos como operadores relacionais. Este operadores também são descritos por símbolos. Segue a lista de operadores:\n\n| Operador | Símbolo |\n|--|--|\n| Igualdade | == |\n| Diferença | != |\n| Maior que | > |\n| Maior ou igual a | >= |\n| Menor que | < |\n| Menor ou igual a | <= |\n\nIndependente do operador utilizado, os resultados possíveis são `True` (Verdadeiro) ou `False` (Falso).\n\n[← Operadores](#Operadores)\n\n#### Exemples", "_____no_output_____" ] ], [ [ "# Variáveis\nx, y = 5, 4", "_____no_output_____" ], [ "# Igualdade\nprint('x == y:', x == y)", "_____no_output_____" ], [ "# Diferença\nprint('x != y:', x != y)", "_____no_output_____" ], [ "# Maior\nprint('x > y: ', x > y)\nprint('x >= y:', x >= y)", "_____no_output_____" ], [ "# Menor\nprint('x < y: ', x < y)\nprint('x <= y:', x <= y)", "_____no_output_____" ] ], [ [ "## Operadores de Atribuição\nRealizam calculos aritméticos e atribuições de forma simplificada. Segue a lista de operadores\n\n| Operator | Symbol |\n|--|--|\n| Atribuição | = |\n| Adição | += |\n| Subtração | -= |\n| Multiplicação | *= |\n| Divisão | //= |\n| Divisão Inteira | /= |\n| Módulo | %= |\n| Exponenciação | **= |\n\n\n[← Operadores](#Operadores)\n\n#### Exemplos", "_____no_output_____" ] ], [ [ "# Variáveis\na, b, c = 3, 2, 0", "_____no_output_____" ], [ "# Adição\nc += a\nprint('C:', c)", "_____no_output_____" ], [ "# Multiplicação\nc *= a\nprint('C:', c) ", "_____no_output_____" ], [ "# Divisão\nc /= a \nprint('C:', c)", "_____no_output_____" ], [ "# Módulo\nc = 10\nc %= a\nprint('C:', c)", "_____no_output_____" ], [ "# Exponenciação\nc = 2\nc **= a\nprint('C:', c)", "_____no_output_____" ], [ "# Divisão inteira\nc //= a\nprint('C:', c)", "_____no_output_____" ] ], [ [ "## Operadores Lógicos\n\nUtilizado para realizar calculos booleanos entre valores. Funcionam como nas tabelas-verdade. 
They are represented by reserved words.\n\n| Operator | Reserved word |\n| -- | -- | \n| And | `and` |\n| Or | `or` |\n| Negation | `not` |\n \n[← Operators](#Operators)\n\n#### Examples", "_____no_output_____" ] ], [ [ "# Variables\na, b = True, False", "_____no_output_____" ], [ "print('a AND b:', a and b)", "_____no_output_____" ], [ "print('a OR b:', a or b)", "_____no_output_____" ], [ "print('NOT a:', not a)", "_____no_output_____" ], [ "print('NOT (a AND b):', not(a and b))", "_____no_output_____" ] ], [ [ "## Operator Precedence\n\n| Order | Operator | \n| -- |--|\n| 1st | ** |\n| 2nd | *, /, //, % |\n| 3rd | +, - |\n| 4th | <=, <, >, >= |\n| 5th | =, %=, /=, //=, -=, +=, *=, **= |\n| 6th | is, is not |\n| 7th | in, not in |\n| 8th | not, and, or |\n\n[← Operators](#Operators)", "_____no_output_____" ], [ "## Exercises\n\nWrite a program that reads a number, adds 2 to it, and prints twice this new value (a sketch of possible solutions follows this record).", "_____no_output_____" ], [ "Write a program that reads two numbers and shows whether the first number is greater than the second. The output must be a boolean.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ] ]
cb133b9a290b7f89bd4b0ae26fb9fe814aeebbfd
307,742
ipynb
Jupyter Notebook
GAN_face_generation.ipynb
kaustav1987/Human-Face-Generation-using-GAN
89a7518034ae17648f6ed3b9be81800f51ade33f
[ "MIT" ]
null
null
null
GAN_face_generation.ipynb
kaustav1987/Human-Face-Generation-using-GAN
89a7518034ae17648f6ed3b9be81800f51ade33f
[ "MIT" ]
null
null
null
GAN_face_generation.ipynb
kaustav1987/Human-Face-Generation-using-GAN
89a7518034ae17648f6ed3b9be81800f51ade33f
[ "MIT" ]
null
null
null
155.190116
96,176
0.832428
[ [ [ "# Face Generation\n\nIn this project, we will define and train a DCGAN on a dataset of faces. Our goal is to get a generator network to generate *new* images of faces that look as realistic as possible!\n\n\n### Get the Data\n\nYou'll be using the [CelebFaces Attributes Dataset (CelebA)](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) to train your adversarial networks.\n\nThis dataset is more complex than the number datasets (like MNIST or SVHN)\n\n### Pre-processed Data\n\nEach of the CelebA images is of size 64x64x3 NumPy images. Some sample data is show below.\n", "_____no_output_____" ] ], [ [ "data_dir = 'processed_celeba_small/'\n\nimport pickle as pkl\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport problem_unittests as tests\n#import helper\n\n%matplotlib inline", "_____no_output_____" ] ], [ [ "## Visualize the CelebA Data\n", "_____no_output_____" ] ], [ [ "# necessary imports\nimport torch\nfrom torchvision import datasets\nfrom torchvision import transforms\nfrom torch.utils.data import DataLoader", "_____no_output_____" ], [ "def get_dataloader(batch_size, image_size, data_dir='processed_celeba_small/'):\n \"\"\"\n Batch the neural network data using DataLoader\n :param batch_size: The size of each batch; the number of images in a batch\n :param img_size: The square size of the image data (x, y)\n :param data_dir: Directory where image data is located\n :return: DataLoader with batched data\n \"\"\"\n transform= transforms.Compose([transforms.Resize(image_size),\n transforms.ToTensor()])\n \n \n data = datasets.ImageFolder(root = data_dir,transform = transform )\n data_loader = DataLoader(data , batch_size = batch_size, shuffle = True)\n \n \n return data_loader\n", "_____no_output_____" ] ], [ [ "## Create a DataLoader\n\n#### Create a DataLoader `celeba_train_loader` with appropriate hyperparameters.\n\nCall the above function and create a dataloader to view images. \n* You can decide on any reasonable `batch_size` parameter\n* Your `image_size` **must be** `32`. Resizing the data to a smaller size will make for faster training, while still creating convincing images of faces!", "_____no_output_____" ] ], [ [ "# Define function hyperparameters\nbatch_size = 256 ## TRy with different size later\nimg_size = 32\n\n\n# Call your function and get a dataloader\nceleba_train_loader = get_dataloader(batch_size, img_size)\n", "_____no_output_____" ] ], [ [ "Next, you can view some images! You should seen square images of somewhat-centered faces.\n\nNote: You'll need to convert the Tensor images into a NumPy type and transpose the dimensions to correctly display an image, suggested `imshow` code is below, but it may not be perfect.", "_____no_output_____" ] ], [ [ "# helper display function\ndef imshow(img):\n npimg = img.numpy()\n plt.imshow(np.transpose(npimg, (1, 2, 0)))\n\n# obtain one batch of training images\ndataiter = iter(celeba_train_loader)\nimages, _ = dataiter.next() # _ for no labels\n\n# plot the images in the batch, along with the corresponding labels\nfig = plt.figure(figsize=(20, 4))\nplot_size=20\nfor idx in np.arange(plot_size):\n ax = fig.add_subplot(2, plot_size/2, idx+1, xticks=[], yticks=[])\n imshow(images[idx])", "_____no_output_____" ] ], [ [ "#### Pre-process your image data and scale it to a pixel range of -1 to 1\n\nYou need to do a bit of pre-processing; you know that the output of a `tanh` activated generator will contain pixel values in a range from -1 to 1, and so, we need to rescale our training images to a range of -1 to 1. 
(Right now, they are in a range from 0-1.)", "_____no_output_____" ] ], [ [ "# TODO: Complete the scale function\ndef scale(x, feature_range=(-1, 1)):\n ''' Scale takes in an image x and returns that image, scaled\n with a feature_range of pixel values from -1 to 1. \n This function assumes that the input x is already scaled from 0-1.'''\n # assume x is scaled to (0, 1)\n # scale to feature_range and return scaled x\n min_val,max_val = feature_range\n x = x*(max_val - min_val) + min_val\n \n return x\n", "_____no_output_____" ], [ "\n# check scaled range\n# should be close to -1 to 1\nimg = images[0]\nscaled_img = scale(img)\n\nprint('Min: ', scaled_img.min())\nprint('Max: ', scaled_img.max())", "Min: tensor(-1.)\nMax: tensor(0.6471)\n" ] ], [ [ "---\n# Define the Model\n\nA GAN is comprised of two adversarial networks, a discriminator and a generator.\n\n## Discriminator\n\nYour first task will be to define the discriminator. This is a convolutional classifier like you've built before, only without any maxpooling layers. To deal with this complex data, it's suggested you use a deep network with **normalization**. You are also allowed to create any helper functions that may be useful.\n\n#### Complete the Discriminator class\n* The inputs to the discriminator are 32x32x3 tensor images\n* The output should be a single value that will indicate whether a given image is real or fake\n", "_____no_output_____" ] ], [ [ "import torch.nn as nn\nimport torch.nn.functional as F", "_____no_output_____" ], [ "##Helper function for Conv and Batch Layer of Discriminator##\n\ndef conv(in_channels,out_channels,kernel_size=4,stride=2,padding=1,batch_norm = True):\n layers = []\n conv_layer = nn.Conv2d(in_channels, out_channels,kernel_size,stride,padding,bias = False)\n \n layers.append(conv_layer)\n \n if batch_norm:\n batch_layer = nn.BatchNorm2d(num_features = out_channels)\n layers.append(batch_layer)\n return nn.Sequential(*layers)", "_____no_output_____" ], [ "class Discriminator(nn.Module):\n\n def __init__(self, conv_dim):\n \"\"\"\n Initialize the Discriminator Module\n :param conv_dim: The depth of the first convolutional layer\n \"\"\"\n super(Discriminator, self).__init__()\n\n # complete init function\n \n ## class variables\n self.conv_dim = conv_dim \n \n ## class layers\n ##self.conv1 = nn.Conv2d(in_channels =3, out_channels=self.conv_dim,kernel_size=4,stride=2,padding=1)##16*16*32\n ##batchNorm1 is not required for the first conv layer\n ##self.conv2 = nn.Conv2d(in_channels = self.conv_dim, out_channels = self.conv_dim*2,kernel_size =4, stride=2,padding =1 )##8*8*64\n ##self.batchnorm2 = nn.BatchNorm2d(num_features = self.conv_dim*2)\n \n ##self.conv3 = nn.Conv2d(in_channels =self.conv_dim*2,out_channels=self.conv_dim*4, kernel_size = 4, stride =2,padding =1)##4*4*128\n ##self.batchnorm3 = nn.BatchNorm2d(self.conv_dim*4)\n \n ##self.conv4 = nn.Conv2d(in_channels =self.conv_dim*4,out_channels=self.conv_dim*8, kernel_size = 4, stride =2,padding =1)##2*2*256\n ##self.batchnorm4 = nn.BatchNorm2d(self.conv_dim*8) \n \n ## class layers\n self.layer1 = conv(in_channels =3, out_channels=self.conv_dim,kernel_size=4,stride=2,padding=1,batch_norm=False)\n self.layer2 = conv(in_channels = self.conv_dim, out_channels = self.conv_dim*2,kernel_size=4,stride=2,padding=1)\n self.layer3 = conv(in_channels =self.conv_dim*2,out_channels=self.conv_dim*4, kernel_size = 4, stride =2,padding =1)\n self.layer4 = conv(in_channels =self.conv_dim*4,out_channels=self.conv_dim*8, kernel_size = 4, stride =2,padding =1)\n 
\n self.FC1 = nn.Linear(2*2*self.conv_dim*8,1)\n \n \n\n def forward(self, x):\n \"\"\"\n Forward propagation of the neural network\n :param x: The input to the neural network \n :return: Discriminator logits; the output of the neural network\n \"\"\"\n # define feedforward behavior\n ##x = self.conv1(x)\n ##x = F.leaky_relu(x)\n ##\n ##x = self.conv2(x)\n ##x = self.batchnorm2(x)\n ##x = F.leaky_relu(x)\n ##\n ##x = self.conv3(x)\n ##x = self.batchnorm3(x)\n ##x = F.leaky_relu(x)\n ##\n ##x = self.conv4(x)\n ##x = self.batchnorm4(x)\n ##x = F.leaky_relu(x)\n ##\n#################\n x = self.layer1(x)\n x = F.leaky_relu(x)\n ##\n x = self.layer2(x)\n x = F.leaky_relu(x)\n ##\n x = self.layer3(x)\n x = F.leaky_relu(x)\n ##\n x = self.layer4(x)\n x = F.leaky_relu(x)\n\n x = x.reshape(-1,2*2*self.conv_dim*8)\n x = self.FC1(x)\n \n return x\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_discriminator(Discriminator)", "Tests Passed\n" ] ], [ [ "## Generator\n\nThe generator should upsample an input and generate a *new* image of the same size as our training data `32x32x3`. This should be mostly transpose convolutional layers with normalization applied to the outputs.\n\n#### Complete the Generator class\n* The inputs to the generator are vectors of some length `z_size`\n* The output should be a image of shape `32x32x3`", "_____no_output_____" ] ], [ [ "def deconv(in_channels,out_channels,kernel_size, stride,padding,batch_norm):\n layers = []\n\n conv_trans_layer = nn.ConvTranspose2d(in_channels,out_channels,kernel_size, stride,padding,bias=False)\n layers.append(conv_trans_layer)\n if batch_norm:\n batch_norm_layer = nn.BatchNorm2d(num_features= out_channels)\n layers.append(batch_norm_layer)\n return nn.Sequential(*layers)", "_____no_output_____" ], [ "class Generator(nn.Module):\n \n def __init__(self, z_size, conv_dim):\n \"\"\"\n Initialize the Generator Module\n :param z_size: The length of the input latent vector, z\n :param conv_dim: The depth of the inputs to the *last* transpose convolutional layer\n \"\"\"\n super(Generator, self).__init__()\n\n # complete init function\n self.z_size = z_size\n self.conv_dim = conv_dim\n \n self.FC1 = nn.Linear(z_size, conv_dim*8*2*2)\n\n self.layer1 = deconv(in_channels= conv_dim*8,out_channels= conv_dim*4,kernel_size=4, stride=2,padding=1,batch_norm=True)\n self.layer2 = deconv(in_channels= conv_dim*4,out_channels= conv_dim*2,kernel_size=4, stride=2,padding=1,batch_norm=True)\n self.layer3 = deconv(in_channels= conv_dim*2,out_channels= conv_dim,kernel_size=4, stride=2,padding=1,batch_norm=True)\n self.layer4 = deconv(in_channels= conv_dim,out_channels= 3,kernel_size=4, stride=2,padding=1,batch_norm=False)\n self.tan = nn.Tanh()\n \n ##self.dropout = nn.Dropout(0.3)\n \n\n \n \n \n\n def forward(self, x):\n \"\"\"\n Forward propagation of the neural network\n :param x: The input to the neural network \n :return: A 32x32x3 Tensor image as output\n \"\"\"\n # define feedforward behavior\n batch_size = x.shape[0]\n x = self.FC1(x) ##Here I am generating enough dimension to feed to the con2d Layers \n x = x.reshape(-1,self.conv_dim*8,2,2) ## Here I am reshaping into accurate dimension\n assert (x.shape[0] == batch_size)\n \n x = self.layer1(x)\n ##x = F.relu(x)\n x = F.leaky_relu(x)\n ##x = self.dropout(x)\n x = self.layer2(x)\n ##x = F.relu(x)\n x = F.leaky_relu(x)\n ##x = self.dropout(x)\n x = self.layer3(x)\n ##x = F.relu(x)\n x = F.leaky_relu(x)\n ##x = self.dropout(x)\n x = self.layer4(x)\n x = 
self.tan(x)\n \n return x\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_generator(Generator)", "Tests Passed\n" ] ], [ [ "## Initialize the weights of your networks\n\nTo help your models converge, you should initialize the weights of the convolutional and linear layers in your model. From reading the [original DCGAN paper](https://arxiv.org/pdf/1511.06434.pdf), they say:\n> All weights were initialized from a zero-centered Normal distribution with standard deviation 0.02.\n\nSo, your next task will be to define a weight initialization function that does just this!\n\n\n#### Complete the weight initialization function\n\n* This should initialize only **convolutional** and **linear** layers\n* Initialize the weights to a normal distribution, centered around 0, with a standard deviation of 0.02.\n* The bias terms, if they exist, may be left alone or set to 0.", "_____no_output_____" ] ], [ [ "from torch.nn import init\n\ndef weights_init_normal(m):\n \"\"\"\n Applies initial weights to certain layers in a model .\n The weights are taken from a normal distribution \n with mean = 0, std dev = 0.02.\n :param m: A module or layer in a network \n \"\"\"\n # classname will be something like:\n # `Conv`, `BatchNorm2d`, `Linear`, etc.\n classname = m.__class__.__name__\n \n # TODO: Apply initial weights to convolutional and linear layers\n if (classname.find('Conv') != -1 or classname.find('Linear') != -1):\n ##m.weight.data.normal_(0,0.02)\n init.normal_(m.weight.data,0.0,0.02)\n \n ##init.kaiming_normal_(m.weight.data, a=0, mode='fan_in')\n \n ##init.xavier_normal_(m.weight.data,gain = 0.02)\n \n ", "_____no_output_____" ] ], [ [ "## Build complete network\n\nDefine your models' hyperparameters and instantiate the discriminator and generator from the classes defined above. 
Make sure you've passed in the correct input arguments.", "_____no_output_____" ] ], [ [ "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ndef build_network(d_conv_dim, g_conv_dim, z_size):\n # define discriminator and generator\n D = Discriminator(d_conv_dim)\n G = Generator(z_size=z_size, conv_dim=g_conv_dim)\n\n # initialize model weights\n D.apply(weights_init_normal)\n G.apply(weights_init_normal)\n\n print(D)\n print()\n print(G)\n \n return D, G\n", "_____no_output_____" ] ], [ [ "#### Define model hyperparameters", "_____no_output_____" ] ], [ [ "# Define model hyperparams\nd_conv_dim = 32\ng_conv_dim = 32\nz_size = 100\n\n\nD, G = build_network(d_conv_dim, g_conv_dim, z_size)\n\ntrain_on_gpu = torch.cuda.is_available()\nif train_on_gpu:\n D,G = D.cuda(), G.cuda()", "Discriminator(\n (layer1): Sequential(\n (0): Conv2d(3, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)\n )\n (layer2): Sequential(\n (0): Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)\n (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n )\n (layer3): Sequential(\n (0): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)\n (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n )\n (layer4): Sequential(\n (0): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)\n (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n )\n (FC1): Linear(in_features=1024, out_features=1, bias=True)\n)\n\nGenerator(\n (FC1): Linear(in_features=100, out_features=1024, bias=True)\n (layer1): Sequential(\n (0): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)\n (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n )\n (layer2): Sequential(\n (0): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)\n (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n )\n (layer3): Sequential(\n (0): ConvTranspose2d(64, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)\n (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n )\n (layer4): Sequential(\n (0): ConvTranspose2d(32, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)\n )\n (tan): Tanh()\n)\n" ] ], [ [ "### Training on GPU\n\nCheck if you can train on GPU. Here, we'll set this as a boolean variable `train_on_gpu`. Later, you'll be responsible for making sure that \n>* Models,\n* Model inputs, and\n* Loss function arguments\n\nAre moved to GPU, where appropriate.", "_____no_output_____" ] ], [ [ "\nimport torch\n\n# Check for a GPU\ntrain_on_gpu = torch.cuda.is_available()\nif not train_on_gpu:\n print('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Training on GPU!')", "Training on GPU!\n" ] ], [ [ "---\n## Discriminator and Generator Losses\n\nNow we need to calculate the losses for both types of adversarial networks.\n\n### Discriminator Losses\n\n> * For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_real_loss + d_fake_loss`. \n* Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.\n\n\n### Generator Loss\n\nThe generator loss will look similar only with flipped labels. 
The generator's goal is to get the discriminator to *think* its generated images are *real*.\n\n#### Complete real and fake loss functions\n\n**We may choose to use either cross entropy or a least squares error loss to complete the following `real_loss` and `fake_loss` functions.**", "_____no_output_____" ] ], [ [ "from scipy.stats import truncnorm\ndef get_truncated_normal(mean=0, sd=1, low=0, upp=10):\n return truncnorm(\n (low - mean) / sd, (upp - mean) / sd, loc=mean, scale=sd)\n\nsmooth_factor_for_real_loss = get_truncated_normal(mean=.85, sd=.05, low=.8, upp=.95)\nprint(smooth_factor_for_real_loss.rvs())\n\nsmooth_factor_for_fake_loss = get_truncated_normal(mean=.1, sd=0.05, low=0.0, upp=.15)\nprint(smooth_factor_for_fake_loss.rvs())", "0.9217683030449823\n0.0958442736836869\n" ], [ "def real_loss(D_out,smooth =False):\n '''Calculates how close discriminator outputs are to being real.\n param, D_out: discriminator logits\n return: real loss'''\n label_smooth = smooth_factor_for_real_loss.rvs()\n batch_size = D_out.shape[0]\n labels = torch.ones(batch_size)\n if smooth:\n labels = labels*label_smooth\n labels = labels.cuda()\n \n criterion = nn.BCEWithLogitsLoss()\n loss = criterion(D_out.squeeze(), labels)\n return loss\n\n\n##Fake loss smoothing is not used since it is not giving good result\ndef fake_loss(D_out,smooth = False):\n '''Calculates how close discriminator outputs are to being fake.\n param, D_out: discriminator logits\n return: fake loss'''\n batch_size = D_out.shape[0]\n label_smooth = smooth_factor_for_fake_loss.rvs()\n if smooth:\n labels = torch.ones(batch_size) ## Using ones and multiplying with a number in range 0.0 to 0.3\n labels = labels*label_smooth\n else:\n labels = torch.zeros(batch_size)\n labels = labels.cuda()\n \n criterion = nn.BCEWithLogitsLoss()\n loss = criterion(D_out.squeeze(), labels)\n return loss", "_____no_output_____" ] ], [ [ "## Optimizers\n\n#### Define optimizers for your Discriminator (D) and Generator (G)\n\nDefine optimizers for your models with appropriate hyperparameters.", "_____no_output_____" ] ], [ [ "import torch.optim as optim\n\nlr = .0002 ##.0001 \nbeta1=0.5\nbeta2=0.999 # default value\n\n## Create optimizers for the discriminator D and generator G\nd_optimizer = optim.Adam(D.parameters(),lr, [beta1, beta2]) ##,lr =lr)\n#d_optimizer = optim.SGD(D.parameters(),lr, momentum = 0.9)\n##d_optimizer = optim.RMSprop(D.parameters(), lr = lr, alpha = 0.9)\n\ng_optimizer = optim.Adam(G.parameters(),lr, [beta1, beta2]) ## ,lr =lr)", "_____no_output_____" ] ], [ [ "---\n## Training\n\nTraining will involve alternating between training the discriminator and the generator. 
You'll use your functions `real_loss` and `fake_loss` to help you calculate the discriminator losses.\n\n* You should train the discriminator by alternating on real and fake images\n* Then the generator, which tries to trick the discriminator and should have an opposing loss function\n\n\n#### Saving Samples\n\nYou've been given some code to print out some loss statistics and save some generated \"fake\" samples.", "_____no_output_____" ], [ "#### Complete the training function\n\nKeep in mind that, if you've moved your models to GPU, you'll also have to move any model inputs to GPU.", "_____no_output_____" ] ], [ [ "\nX = get_truncated_normal(mean=0, sd=1, low=-1, upp=1)\n#X.rvs()", "_____no_output_____" ], [ "def train(D, G, n_epochs, print_every=50):\n '''Trains adversarial networks for some number of epochs\n param, D: the discriminator network\n param, G: the generator network\n param, n_epochs: number of epochs to train for\n param, print_every: when to print and record the models' losses\n return: D and G losses'''\n \n # move models to GPU\n if train_on_gpu:\n D.cuda()\n G.cuda()\n\n # keep track of loss and generated, \"fake\" samples\n samples = []\n losses = []\n\n # Get some fixed data for sampling. These are images that are held\n # constant throughout training, and allow us to inspect the model's performance\n sample_size=16\n fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))\n ##fixed_z = np.random.normal(0.0,0.33, size=(sample_size, z_size))\n \n ## a small mean so that most of the values are not zeros\n ##fixed_z = np.random.normal(0.00000,0.33, size=(sample_size, z_size)) \n \n ##fixed_z = X.rvs(size =(sample_size, z_size))\n fixed_z = torch.from_numpy(fixed_z).float()\n # move z to GPU if available\n if train_on_gpu:\n fixed_z = fixed_z.cuda()\n print('Cuda enabled')\n\n # epoch training loop\n for epoch in range(n_epochs):\n\n # batch training loop\n for batch_i, (real_images, _) in enumerate(celeba_train_loader):\n\n batch_size = real_images.size(0)\n real_images = scale(real_images)\n\n\n \n # 1. Train the discriminator on real and fake images\n if train_on_gpu:\n real_images = real_images.cuda()\n \n real_output = D(real_images)\n real_output_loss = real_loss(real_output,smooth=True)\n \n z = np.random.uniform(-1,1,size=(batch_size,z_size))\n ##z = np.random.normal(0.00000,0.33, size=(sample_size, z_size)) \n \n ## a small mean so that most of the values are not zeros\n ##z = np.random.normal(0.0001,0.33, size=(sample_size, z_size)) \n \n ##z = X.rvs(size =(sample_size, z_size))\n z = torch.from_numpy(z).float()\n if train_on_gpu:\n z = z.cuda()\n fake_images = G(z)\n fake_output = D(fake_images)\n fake_output_loss = fake_loss(fake_output,smooth =False)\n \n d_loss = real_output_loss + fake_output_loss\n d_optimizer.zero_grad()\n d_loss.backward()\n ##nn.utils.clip_grad_norm_(d.parameters(), clip=5)\n d_optimizer.step()\n\n # 2. 
Train the generator with an adversarial loss\n z = np.random.uniform(-1,1,size=(batch_size,z_size))\n ##z = np.random.normal(0.0,0.33,size=(batch_size,z_size))\n \n ## a small mean so that most of the values are not zeros\n ##z = np.random.normal(0.00000,0.33, size=(sample_size, z_size))\n \n ##z = X.rvs(size =(sample_size, z_size))\n z = torch.from_numpy(z).float()\n if train_on_gpu:\n z = z.cuda()\n fake_images = G(z)\n fake_output_for_g = D(fake_images)\n g_optimizer.zero_grad()\n g_loss = real_loss(fake_output_for_g,smooth =False)\n g_loss.backward()\n ##nn.utils.clip_grad_norm_(g.parameters(), clip=5)\n g_optimizer.step() \n \n \n\n\n # Print some loss stats\n if batch_i % print_every == 0:\n # append discriminator loss and generator loss\n losses.append((d_loss.item(), g_loss.item()))\n # print discriminator and generator loss\n print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(\n epoch+1, n_epochs, d_loss.item(), g_loss.item()))\n\n\n ## AFTER EACH EPOCH## \n # this code assumes your generator is named G, feel free to change the name\n # generate and save sample, fake images\n G.eval() # for generating samples\n samples_z = G(fixed_z)\n samples.append(samples_z)\n G.train() # back to training mode\n\n # Save training generator samples\n with open('train_samples.pkl', 'wb') as f:\n pkl.dump(samples, f)\n \n # finally return losses\n return losses", "_____no_output_____" ] ], [ [ "Set your number of training epochs and train your GAN!", "_____no_output_____" ] ], [ [ "# set number of epochs \nn_epochs = 100 ## Best is 50 epochs and lr = .0002\n\nprint(train_on_gpu)\n\n# call training function\nlosses = train(D, G, n_epochs=n_epochs)", "True\nCuda enabled\nEpoch [ 1/ 100] | d_loss: 1.4278 | g_loss: 0.8709\nEpoch [ 1/ 100] | d_loss: 0.5805 | g_loss: 4.0982\nEpoch [ 1/ 100] | d_loss: 0.5396 | g_loss: 4.0090\nEpoch [ 1/ 100] | d_loss: 0.7216 | g_loss: 2.0877\nEpoch [ 1/ 100] | d_loss: 0.6655 | g_loss: 2.8216\nEpoch [ 1/ 100] | d_loss: 0.7069 | g_loss: 2.4014\nEpoch [ 1/ 100] | d_loss: 0.7271 | g_loss: 2.9758\nEpoch [ 1/ 100] | d_loss: 1.0868 | g_loss: 3.5416\nEpoch [ 2/ 100] | d_loss: 1.0068 | g_loss: 3.0794\nEpoch [ 2/ 100] | d_loss: 0.6756 | g_loss: 2.6991\nEpoch [ 2/ 100] | d_loss: 0.7821 | g_loss: 2.6465\nEpoch [ 2/ 100] | d_loss: 0.8005 | g_loss: 1.8306\nEpoch [ 2/ 100] | d_loss: 1.0428 | g_loss: 1.5380\nEpoch [ 2/ 100] | d_loss: 0.8008 | g_loss: 2.5660\nEpoch [ 2/ 100] | d_loss: 1.1557 | g_loss: 3.2566\nEpoch [ 2/ 100] | d_loss: 0.7149 | g_loss: 1.8093\nEpoch [ 3/ 100] | d_loss: 0.9140 | g_loss: 2.1841\nEpoch [ 3/ 100] | d_loss: 0.9548 | g_loss: 3.2802\nEpoch [ 3/ 100] | d_loss: 0.8741 | g_loss: 2.7865\nEpoch [ 3/ 100] | d_loss: 0.8622 | g_loss: 2.5154\nEpoch [ 3/ 100] | d_loss: 0.9018 | g_loss: 2.1714\nEpoch [ 3/ 100] | d_loss: 0.9247 | g_loss: 2.1989\nEpoch [ 3/ 100] | d_loss: 0.8720 | g_loss: 2.1390\nEpoch [ 3/ 100] | d_loss: 0.8896 | g_loss: 1.9973\nEpoch [ 4/ 100] | d_loss: 0.8212 | g_loss: 3.1224\nEpoch [ 4/ 100] | d_loss: 0.9138 | g_loss: 2.2573\nEpoch [ 4/ 100] | d_loss: 0.8645 | g_loss: 2.2468\nEpoch [ 4/ 100] | d_loss: 0.8748 | g_loss: 1.6835\nEpoch [ 4/ 100] | d_loss: 0.9136 | g_loss: 2.3167\nEpoch [ 4/ 100] | d_loss: 0.8760 | g_loss: 1.4862\nEpoch [ 4/ 100] | d_loss: 0.9970 | g_loss: 3.2271\nEpoch [ 4/ 100] | d_loss: 0.8735 | g_loss: 2.2334\nEpoch [ 5/ 100] | d_loss: 0.9271 | g_loss: 1.6593\nEpoch [ 5/ 100] | d_loss: 0.8819 | g_loss: 2.3294\nEpoch [ 5/ 100] | d_loss: 0.9559 | g_loss: 1.5076\nEpoch [ 5/ 100] | d_loss: 0.8745 | g_loss: 2.9868\nEpoch [ 
5/ 100] | d_loss: 0.9243 | g_loss: 2.0652\nEpoch [ 5/ 100] | d_loss: 0.8596 | g_loss: 1.3605\nEpoch [ 5/ 100] | d_loss: 0.9818 | g_loss: 1.8215\nEpoch [ 5/ 100] | d_loss: 1.0069 | g_loss: 1.3668\nEpoch [ 6/ 100] | d_loss: 1.0215 | g_loss: 1.7138\nEpoch [ 6/ 100] | d_loss: 0.8800 | g_loss: 1.5564\nEpoch [ 6/ 100] | d_loss: 1.0397 | g_loss: 1.0241\nEpoch [ 6/ 100] | d_loss: 0.9304 | g_loss: 1.8177\nEpoch [ 6/ 100] | d_loss: 0.9039 | g_loss: 1.8072\nEpoch [ 6/ 100] | d_loss: 1.2990 | g_loss: 3.8139\nEpoch [ 6/ 100] | d_loss: 1.0837 | g_loss: 2.6703\nEpoch [ 6/ 100] | d_loss: 0.9353 | g_loss: 2.4463\nEpoch [ 7/ 100] | d_loss: 1.1091 | g_loss: 2.6426\nEpoch [ 7/ 100] | d_loss: 0.8608 | g_loss: 2.2380\nEpoch [ 7/ 100] | d_loss: 0.9608 | g_loss: 2.2848\nEpoch [ 7/ 100] | d_loss: 0.9693 | g_loss: 2.0434\nEpoch [ 7/ 100] | d_loss: 0.9079 | g_loss: 1.9699\nEpoch [ 7/ 100] | d_loss: 0.9057 | g_loss: 2.8275\nEpoch [ 7/ 100] | d_loss: 0.8764 | g_loss: 1.8179\nEpoch [ 7/ 100] | d_loss: 0.8551 | g_loss: 1.8667\nEpoch [ 8/ 100] | d_loss: 1.0335 | g_loss: 3.2167\nEpoch [ 8/ 100] | d_loss: 0.8552 | g_loss: 2.2454\nEpoch [ 8/ 100] | d_loss: 0.8538 | g_loss: 1.8368\nEpoch [ 8/ 100] | d_loss: 0.8708 | g_loss: 1.6545\nEpoch [ 8/ 100] | d_loss: 0.8049 | g_loss: 2.1527\nEpoch [ 8/ 100] | d_loss: 0.9407 | g_loss: 2.0973\nEpoch [ 8/ 100] | d_loss: 0.8269 | g_loss: 1.3266\nEpoch [ 8/ 100] | d_loss: 0.9744 | g_loss: 3.0961\nEpoch [ 9/ 100] | d_loss: 1.8624 | g_loss: 3.5193\nEpoch [ 9/ 100] | d_loss: 0.9513 | g_loss: 2.1394\nEpoch [ 9/ 100] | d_loss: 0.8559 | g_loss: 2.2666\nEpoch [ 9/ 100] | d_loss: 0.8717 | g_loss: 1.7718\nEpoch [ 9/ 100] | d_loss: 1.0404 | g_loss: 1.2653\nEpoch [ 9/ 100] | d_loss: 0.9240 | g_loss: 1.6809\nEpoch [ 9/ 100] | d_loss: 0.9627 | g_loss: 1.7120\nEpoch [ 9/ 100] | d_loss: 0.9720 | g_loss: 2.6331\nEpoch [ 10/ 100] | d_loss: 1.0208 | g_loss: 2.9996\nEpoch [ 10/ 100] | d_loss: 0.8075 | g_loss: 2.2455\nEpoch [ 10/ 100] | d_loss: 1.0163 | g_loss: 1.3041\nEpoch [ 10/ 100] | d_loss: 0.8394 | g_loss: 2.3324\nEpoch [ 10/ 100] | d_loss: 0.9676 | g_loss: 1.6124\nEpoch [ 10/ 100] | d_loss: 0.9222 | g_loss: 1.7520\nEpoch [ 10/ 100] | d_loss: 0.7291 | g_loss: 2.2502\nEpoch [ 10/ 100] | d_loss: 0.8590 | g_loss: 2.3098\nEpoch [ 11/ 100] | d_loss: 0.8639 | g_loss: 2.0210\nEpoch [ 11/ 100] | d_loss: 0.8443 | g_loss: 2.3292\nEpoch [ 11/ 100] | d_loss: 0.8753 | g_loss: 2.6223\nEpoch [ 11/ 100] | d_loss: 0.8040 | g_loss: 1.5014\nEpoch [ 11/ 100] | d_loss: 0.7596 | g_loss: 2.2594\nEpoch [ 11/ 100] | d_loss: 1.5534 | g_loss: 5.0451\nEpoch [ 11/ 100] | d_loss: 0.8508 | g_loss: 1.4388\nEpoch [ 11/ 100] | d_loss: 0.7838 | g_loss: 2.0007\nEpoch [ 12/ 100] | d_loss: 0.8534 | g_loss: 3.0100\nEpoch [ 12/ 100] | d_loss: 0.9921 | g_loss: 1.3306\nEpoch [ 12/ 100] | d_loss: 1.0130 | g_loss: 3.3160\nEpoch [ 12/ 100] | d_loss: 0.8304 | g_loss: 1.8670\nEpoch [ 12/ 100] | d_loss: 1.1563 | g_loss: 3.1172\nEpoch [ 12/ 100] | d_loss: 0.7973 | g_loss: 2.8295\nEpoch [ 12/ 100] | d_loss: 1.4811 | g_loss: 1.1483\nEpoch [ 12/ 100] | d_loss: 0.7208 | g_loss: 2.4554\nEpoch [ 13/ 100] | d_loss: 1.0553 | g_loss: 3.5541\nEpoch [ 13/ 100] | d_loss: 0.6769 | g_loss: 1.6743\nEpoch [ 13/ 100] | d_loss: 0.7394 | g_loss: 1.0699\nEpoch [ 13/ 100] | d_loss: 0.8966 | g_loss: 2.2005\nEpoch [ 13/ 100] | d_loss: 0.7730 | g_loss: 2.2200\nEpoch [ 13/ 100] | d_loss: 0.7816 | g_loss: 2.0016\nEpoch [ 13/ 100] | d_loss: 1.1670 | g_loss: 1.2902\nEpoch [ 13/ 100] | d_loss: 0.9301 | g_loss: 1.2198\nEpoch [ 14/ 100] | d_loss: 0.8066 | g_loss: 1.9790\nEpoch [ 
14/ 100] | d_loss: 1.0410 | g_loss: 0.9705\nEpoch [ 14/ 100] | d_loss: 0.7885 | g_loss: 2.4306\nEpoch [ 14/ 100] | d_loss: 1.2423 | g_loss: 4.6672\nEpoch [ 14/ 100] | d_loss: 0.9376 | g_loss: 1.1583\nEpoch [ 14/ 100] | d_loss: 0.8650 | g_loss: 2.4110\nEpoch [ 14/ 100] | d_loss: 0.8283 | g_loss: 2.0802\nEpoch [ 14/ 100] | d_loss: 1.1199 | g_loss: 2.6807\nEpoch [ 15/ 100] | d_loss: 0.7314 | g_loss: 2.2809\nEpoch [ 15/ 100] | d_loss: 0.9671 | g_loss: 2.3718\nEpoch [ 15/ 100] | d_loss: 0.8663 | g_loss: 1.3040\nEpoch [ 15/ 100] | d_loss: 0.8299 | g_loss: 2.3465\nEpoch [ 15/ 100] | d_loss: 0.8668 | g_loss: 2.4473\nEpoch [ 15/ 100] | d_loss: 0.8677 | g_loss: 2.1311\nEpoch [ 15/ 100] | d_loss: 0.8178 | g_loss: 2.1423\nEpoch [ 15/ 100] | d_loss: 0.6753 | g_loss: 2.0870\nEpoch [ 16/ 100] | d_loss: 0.7005 | g_loss: 2.9391\nEpoch [ 16/ 100] | d_loss: 0.7979 | g_loss: 1.8650\nEpoch [ 16/ 100] | d_loss: 0.7017 | g_loss: 1.8451\nEpoch [ 16/ 100] | d_loss: 0.9721 | g_loss: 1.7540\nEpoch [ 16/ 100] | d_loss: 0.9482 | g_loss: 1.3547\nEpoch [ 16/ 100] | d_loss: 0.7541 | g_loss: 2.5975\nEpoch [ 16/ 100] | d_loss: 0.9248 | g_loss: 1.8133\nEpoch [ 16/ 100] | d_loss: 1.0139 | g_loss: 2.7262\nEpoch [ 17/ 100] | d_loss: 0.7886 | g_loss: 3.0468\nEpoch [ 17/ 100] | d_loss: 1.0323 | g_loss: 0.9314\nEpoch [ 17/ 100] | d_loss: 0.7902 | g_loss: 2.5912\nEpoch [ 17/ 100] | d_loss: 0.9005 | g_loss: 2.1728\nEpoch [ 17/ 100] | d_loss: 0.6823 | g_loss: 1.8933\nEpoch [ 17/ 100] | d_loss: 0.6614 | g_loss: 3.0192\nEpoch [ 17/ 100] | d_loss: 0.7313 | g_loss: 1.5503\nEpoch [ 17/ 100] | d_loss: 0.7894 | g_loss: 2.1844\nEpoch [ 18/ 100] | d_loss: 0.6079 | g_loss: 2.3303\nEpoch [ 18/ 100] | d_loss: 0.7607 | g_loss: 2.7964\nEpoch [ 18/ 100] | d_loss: 0.7689 | g_loss: 1.5454\nEpoch [ 18/ 100] | d_loss: 0.7016 | g_loss: 2.8176\nEpoch [ 18/ 100] | d_loss: 0.6516 | g_loss: 2.1424\nEpoch [ 18/ 100] | d_loss: 0.6820 | g_loss: 3.0028\nEpoch [ 18/ 100] | d_loss: 0.7445 | g_loss: 2.1031\nEpoch [ 18/ 100] | d_loss: 0.9159 | g_loss: 4.0453\nEpoch [ 19/ 100] | d_loss: 1.2255 | g_loss: 3.2340\nEpoch [ 19/ 100] | d_loss: 0.7028 | g_loss: 3.4613\nEpoch [ 19/ 100] | d_loss: 0.8160 | g_loss: 2.9611\nEpoch [ 19/ 100] | d_loss: 0.8341 | g_loss: 2.8222\nEpoch [ 19/ 100] | d_loss: 0.7934 | g_loss: 2.4822\nEpoch [ 19/ 100] | d_loss: 1.4148 | g_loss: 1.0844\nEpoch [ 19/ 100] | d_loss: 0.8208 | g_loss: 3.1485\nEpoch [ 19/ 100] | d_loss: 1.2415 | g_loss: 2.8327\nEpoch [ 20/ 100] | d_loss: 0.9197 | g_loss: 1.7247\nEpoch [ 20/ 100] | d_loss: 0.6615 | g_loss: 2.0628\nEpoch [ 20/ 100] | d_loss: 1.6722 | g_loss: 4.3004\nEpoch [ 20/ 100] | d_loss: 0.6898 | g_loss: 2.0927\nEpoch [ 20/ 100] | d_loss: 0.8385 | g_loss: 2.2523\nEpoch [ 20/ 100] | d_loss: 0.9961 | g_loss: 3.2511\nEpoch [ 20/ 100] | d_loss: 0.8601 | g_loss: 2.5522\nEpoch [ 20/ 100] | d_loss: 0.8451 | g_loss: 1.1731\nEpoch [ 21/ 100] | d_loss: 0.6775 | g_loss: 4.4872\nEpoch [ 21/ 100] | d_loss: 1.9785 | g_loss: 4.7576\nEpoch [ 21/ 100] | d_loss: 0.8559 | g_loss: 3.2711\nEpoch [ 21/ 100] | d_loss: 0.7872 | g_loss: 2.7769\nEpoch [ 21/ 100] | d_loss: 1.2317 | g_loss: 1.1731\nEpoch [ 21/ 100] | d_loss: 0.7472 | g_loss: 1.6300\nEpoch [ 21/ 100] | d_loss: 1.2030 | g_loss: 3.8353\nEpoch [ 21/ 100] | d_loss: 0.6639 | g_loss: 3.1853\nEpoch [ 22/ 100] | d_loss: 0.8420 | g_loss: 3.3317\nEpoch [ 22/ 100] | d_loss: 0.7078 | g_loss: 2.5888\nEpoch [ 22/ 100] | d_loss: 0.6810 | g_loss: 2.1208\nEpoch [ 22/ 100] | d_loss: 0.8708 | g_loss: 0.6183\nEpoch [ 22/ 100] | d_loss: 0.7740 | g_loss: 2.9467\nEpoch [ 22/ 100] | d_loss: 
0.8456 | g_loss: 1.7540\nEpoch [ 22/ 100] | d_loss: 0.7856 | g_loss: 1.8636\nEpoch [ 22/ 100] | d_loss: 1.4801 | g_loss: 3.4455\nEpoch [ 23/ 100] | d_loss: 0.9328 | g_loss: 1.0001\nEpoch [ 23/ 100] | d_loss: 0.7189 | g_loss: 2.4467\nEpoch [ 23/ 100] | d_loss: 0.9044 | g_loss: 2.2218\nEpoch [ 23/ 100] | d_loss: 0.7843 | g_loss: 1.2918\nEpoch [ 23/ 100] | d_loss: 1.4461 | g_loss: 5.6432\nEpoch [ 23/ 100] | d_loss: 0.7732 | g_loss: 2.5444\nEpoch [ 23/ 100] | d_loss: 0.6777 | g_loss: 2.7685\nEpoch [ 23/ 100] | d_loss: 0.5283 | g_loss: 3.1684\nEpoch [ 24/ 100] | d_loss: 1.1691 | g_loss: 3.8000\nEpoch [ 24/ 100] | d_loss: 0.6507 | g_loss: 1.6265\nEpoch [ 24/ 100] | d_loss: 1.1340 | g_loss: 0.9344\nEpoch [ 24/ 100] | d_loss: 0.6229 | g_loss: 2.3075\nEpoch [ 24/ 100] | d_loss: 0.7252 | g_loss: 1.1908\nEpoch [ 24/ 100] | d_loss: 0.7786 | g_loss: 1.8975\nEpoch [ 24/ 100] | d_loss: 1.1052 | g_loss: 4.2217\nEpoch [ 24/ 100] | d_loss: 0.6431 | g_loss: 1.7538\nEpoch [ 25/ 100] | d_loss: 0.7926 | g_loss: 2.2411\nEpoch [ 25/ 100] | d_loss: 0.5248 | g_loss: 3.6362\nEpoch [ 25/ 100] | d_loss: 0.4761 | g_loss: 2.9056\nEpoch [ 25/ 100] | d_loss: 0.7465 | g_loss: 1.3302\nEpoch [ 25/ 100] | d_loss: 0.6736 | g_loss: 2.7736\nEpoch [ 25/ 100] | d_loss: 0.7203 | g_loss: 2.5443\nEpoch [ 25/ 100] | d_loss: 0.5337 | g_loss: 2.3930\nEpoch [ 25/ 100] | d_loss: 1.7407 | g_loss: 2.0599\nEpoch [ 26/ 100] | d_loss: 0.6743 | g_loss: 2.8604\nEpoch [ 26/ 100] | d_loss: 0.7445 | g_loss: 2.6597\nEpoch [ 26/ 100] | d_loss: 0.9803 | g_loss: 2.9350\nEpoch [ 26/ 100] | d_loss: 0.6940 | g_loss: 3.1868\nEpoch [ 26/ 100] | d_loss: 1.4901 | g_loss: 5.8962\nEpoch [ 26/ 100] | d_loss: 0.6461 | g_loss: 2.1893\nEpoch [ 26/ 100] | d_loss: 0.7795 | g_loss: 2.5889\nEpoch [ 26/ 100] | d_loss: 1.2659 | g_loss: 1.8695\nEpoch [ 27/ 100] | d_loss: 1.0986 | g_loss: 4.2985\nEpoch [ 27/ 100] | d_loss: 0.6699 | g_loss: 2.6032\nEpoch [ 27/ 100] | d_loss: 0.7154 | g_loss: 1.4026\nEpoch [ 27/ 100] | d_loss: 0.7830 | g_loss: 2.3060\nEpoch [ 27/ 100] | d_loss: 0.8474 | g_loss: 2.6408\nEpoch [ 27/ 100] | d_loss: 0.4468 | g_loss: 2.6257\nEpoch [ 27/ 100] | d_loss: 0.6795 | g_loss: 3.1292\nEpoch [ 27/ 100] | d_loss: 0.8276 | g_loss: 2.3770\nEpoch [ 28/ 100] | d_loss: 0.5753 | g_loss: 2.9385\nEpoch [ 28/ 100] | d_loss: 0.7003 | g_loss: 2.9945\nEpoch [ 28/ 100] | d_loss: 0.9017 | g_loss: 0.5978\nEpoch [ 28/ 100] | d_loss: 0.8976 | g_loss: 1.8411\nEpoch [ 28/ 100] | d_loss: 0.8454 | g_loss: 1.3229\nEpoch [ 28/ 100] | d_loss: 0.8607 | g_loss: 3.7207\nEpoch [ 28/ 100] | d_loss: 0.7145 | g_loss: 2.9144\nEpoch [ 28/ 100] | d_loss: 0.5342 | g_loss: 4.0933\nEpoch [ 29/ 100] | d_loss: 1.0499 | g_loss: 3.8346\nEpoch [ 29/ 100] | d_loss: 0.5810 | g_loss: 2.4961\nEpoch [ 29/ 100] | d_loss: 0.9280 | g_loss: 3.2855\nEpoch [ 29/ 100] | d_loss: 0.7339 | g_loss: 1.9494\nEpoch [ 29/ 100] | d_loss: 0.7914 | g_loss: 3.1308\nEpoch [ 29/ 100] | d_loss: 0.5189 | g_loss: 3.2504\nEpoch [ 29/ 100] | d_loss: 0.5515 | g_loss: 4.0236\nEpoch [ 29/ 100] | d_loss: 0.6275 | g_loss: 2.6256\nEpoch [ 30/ 100] | d_loss: 0.7268 | g_loss: 3.7944\nEpoch [ 30/ 100] | d_loss: 0.7350 | g_loss: 2.0956\nEpoch [ 30/ 100] | d_loss: 0.5883 | g_loss: 2.1213\nEpoch [ 30/ 100] | d_loss: 0.4695 | g_loss: 2.0846\nEpoch [ 30/ 100] | d_loss: 0.7773 | g_loss: 1.7549\nEpoch [ 30/ 100] | d_loss: 0.5580 | g_loss: 2.8378\nEpoch [ 30/ 100] | d_loss: 0.6378 | g_loss: 1.6266\nEpoch [ 30/ 100] | d_loss: 0.7665 | g_loss: 2.5207\nEpoch [ 31/ 100] | d_loss: 0.7688 | g_loss: 3.3291\nEpoch [ 31/ 100] | d_loss: 0.6964 | g_loss: 
2.7183\nEpoch [ 31/ 100] | d_loss: 0.5760 | g_loss: 2.3373\nEpoch [ 31/ 100] | d_loss: 0.7788 | g_loss: 3.7130\nEpoch [ 31/ 100] | d_loss: 0.7892 | g_loss: 1.2918\nEpoch [ 31/ 100] | d_loss: 0.7605 | g_loss: 2.7386\nEpoch [ 31/ 100] | d_loss: 0.6048 | g_loss: 2.2129\nEpoch [ 31/ 100] | d_loss: 0.4834 | g_loss: 2.5786\nEpoch [ 32/ 100] | d_loss: 0.6517 | g_loss: 3.5936\nEpoch [ 32/ 100] | d_loss: 0.6602 | g_loss: 3.3212\nEpoch [ 32/ 100] | d_loss: 0.8100 | g_loss: 2.4736\nEpoch [ 32/ 100] | d_loss: 0.6287 | g_loss: 3.3519\nEpoch [ 32/ 100] | d_loss: 0.6203 | g_loss: 3.3367\nEpoch [ 32/ 100] | d_loss: 0.5627 | g_loss: 3.3530\nEpoch [ 32/ 100] | d_loss: 0.4079 | g_loss: 3.4741\nEpoch [ 32/ 100] | d_loss: 1.0471 | g_loss: 4.3946\nEpoch [ 33/ 100] | d_loss: 0.9498 | g_loss: 1.6855\nEpoch [ 33/ 100] | d_loss: 1.1707 | g_loss: 3.7877\nEpoch [ 33/ 100] | d_loss: 0.5929 | g_loss: 3.8787\nEpoch [ 33/ 100] | d_loss: 0.4633 | g_loss: 3.9853\nEpoch [ 33/ 100] | d_loss: 0.8217 | g_loss: 2.6542\nEpoch [ 33/ 100] | d_loss: 0.5206 | g_loss: 2.6493\nEpoch [ 33/ 100] | d_loss: 1.1836 | g_loss: 1.6206\nEpoch [ 33/ 100] | d_loss: 0.6428 | g_loss: 3.0745\nEpoch [ 34/ 100] | d_loss: 0.7010 | g_loss: 2.0322\nEpoch [ 34/ 100] | d_loss: 0.5657 | g_loss: 2.5987\nEpoch [ 34/ 100] | d_loss: 1.1150 | g_loss: 4.2527\nEpoch [ 34/ 100] | d_loss: 0.7176 | g_loss: 1.7638\nEpoch [ 34/ 100] | d_loss: 0.5377 | g_loss: 2.5817\nEpoch [ 34/ 100] | d_loss: 0.5396 | g_loss: 3.0407\nEpoch [ 34/ 100] | d_loss: 0.6728 | g_loss: 3.6853\nEpoch [ 34/ 100] | d_loss: 0.6007 | g_loss: 2.2401\nEpoch [ 35/ 100] | d_loss: 0.6846 | g_loss: 4.2512\nEpoch [ 35/ 100] | d_loss: 0.9338 | g_loss: 4.2750\nEpoch [ 35/ 100] | d_loss: 0.6243 | g_loss: 2.4721\nEpoch [ 35/ 100] | d_loss: 0.6186 | g_loss: 1.4230\nEpoch [ 35/ 100] | d_loss: 0.4896 | g_loss: 4.4878\nEpoch [ 35/ 100] | d_loss: 0.8483 | g_loss: 3.7066\nEpoch [ 35/ 100] | d_loss: 0.6616 | g_loss: 2.9033\nEpoch [ 35/ 100] | d_loss: 0.7619 | g_loss: 4.2555\nEpoch [ 36/ 100] | d_loss: 1.1080 | g_loss: 0.5789\nEpoch [ 36/ 100] | d_loss: 0.5768 | g_loss: 3.3836\nEpoch [ 36/ 100] | d_loss: 0.9562 | g_loss: 3.1375\nEpoch [ 36/ 100] | d_loss: 0.8302 | g_loss: 3.6674\nEpoch [ 36/ 100] | d_loss: 0.6436 | g_loss: 2.7879\nEpoch [ 36/ 100] | d_loss: 1.4434 | g_loss: 1.9588\nEpoch [ 36/ 100] | d_loss: 0.8503 | g_loss: 2.8939\nEpoch [ 36/ 100] | d_loss: 0.5522 | g_loss: 3.5983\nEpoch [ 37/ 100] | d_loss: 0.8375 | g_loss: 3.5998\nEpoch [ 37/ 100] | d_loss: 0.7228 | g_loss: 3.6671\nEpoch [ 37/ 100] | d_loss: 0.7510 | g_loss: 2.2657\nEpoch [ 37/ 100] | d_loss: 0.6107 | g_loss: 4.1886\nEpoch [ 37/ 100] | d_loss: 0.5356 | g_loss: 3.1983\nEpoch [ 37/ 100] | d_loss: 0.5428 | g_loss: 2.7806\nEpoch [ 37/ 100] | d_loss: 0.5742 | g_loss: 1.7804\nEpoch [ 37/ 100] | d_loss: 1.1772 | g_loss: 1.1096\nEpoch [ 38/ 100] | d_loss: 0.8285 | g_loss: 2.4738\nEpoch [ 38/ 100] | d_loss: 0.5994 | g_loss: 2.9443\nEpoch [ 38/ 100] | d_loss: 0.5056 | g_loss: 2.6355\nEpoch [ 38/ 100] | d_loss: 0.4746 | g_loss: 2.7436\nEpoch [ 38/ 100] | d_loss: 0.8079 | g_loss: 3.0725\nEpoch [ 38/ 100] | d_loss: 0.4944 | g_loss: 1.8732\n" ], [ "##Saving the Generator and the Discrminator\n\ndef save_generator(model):\n model_name = 'trained_generator.pt'\n checkpoint = {'z_size' : model.z_size,\n 'conv_dim' : model.conv_dim,\n 'state_dict': model.state_dict()}\n \n with open(model_name, 'wb') as f:\n torch.save(checkpoint,f)\n \ndef save_discriminator(model):\n model_name = 'trained_discrimiator.pt'\n checkpoint = {'conv_dim' : model.conv_dim,\n 
'state_dict': model.state_dict()}\n \n with open(model_name, 'wb') as f:\n torch.save(checkpoint,f)\n \nsave_generator(G)\nsave_discriminator(D)", "_____no_output_____" ], [ "## Functions for loading the models\n\ndef load_generator():\n with open('trained_generator.pt', 'rb') as f:\n checkpoint = torch.load(f)\n G = Generator(checkpoint['z_size'], checkpoint['conv_dim'])\n G.load_state_dict(checkpoint['state_dict'])\n return G\n\ndef load_discriminator():\n # filename must match the one written by save_discriminator above\n with open('trained_discrimiator.pt', 'rb') as f:\n checkpoint = torch.load(f)\n\n D = Discriminator(checkpoint['conv_dim'])\n D.load_state_dict(checkpoint['state_dict'])\n return D\n\n#G = load_generator()\n#D = load_discriminator()", "_____no_output_____" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
cb133e1e69919c1a5fe3cdc2b90c24a74d1e6d18
152,477
ipynb
Jupyter Notebook
dev/04_transform.ipynb
dejanbatanjac/fastai_dev
a923fa0d148513e0bc77aa269bf678df69a4e2dd
[ "Apache-2.0" ]
null
null
null
dev/04_transform.ipynb
dejanbatanjac/fastai_dev
a923fa0d148513e0bc77aa269bf678df69a4e2dd
[ "Apache-2.0" ]
null
null
null
dev/04_transform.ipynb
dejanbatanjac/fastai_dev
a923fa0d148513e0bc77aa269bf678df69a4e2dd
[ "Apache-2.0" ]
null
null
null
82.242179
67,388
0.817979
[ [ [ "#default_exp transform", "_____no_output_____" ], [ "#export\nfrom local.torch_basics import *\nfrom local.test import *\nfrom local.notebook.showdoc import show_doc", "_____no_output_____" ], [ "from PIL import Image", "_____no_output_____" ] ], [ [ "# Transforms\n\n> Definition of `Transform` and `Pipeline`", "_____no_output_____" ], [ "The classes here provide functionality for creating a composition of *partially reversible functions*. By \"partially reversible\" we mean that a transform can be `decode`d, creating a form suitable for display. This is not necessarily identical to the original form (e.g. a transform that changes a byte tensor to a float tensor does not recreate a byte tensor when decoded, since that may lose precision, and a float tensor can be displayed already).\n\nClasses are also provided and for composing transforms, and mapping them over collections. `Pipeline` is a transform which composes several `Transform`, knowing how to decode them or show an encoded item.", "_____no_output_____" ], [ "## Helpers", "_____no_output_____" ] ], [ [ "#exports\ndef type_hints(f):\n \"Same as `typing.get_type_hints` but returns `{}` if not allowed type\"\n return typing.get_type_hints(f) if isinstance(f, typing._allowed_types) else {}", "_____no_output_____" ], [ "#export\ndef anno_ret(func):\n \"Get the return annotation of `func`\"\n if not func: return None\n ann = type_hints(func)\n if not ann: return None\n return ann.get('return')", "_____no_output_____" ], [ "#hide\ndef f(x) -> float: return x\ntest_eq(anno_ret(f), float)\ndef f(x) -> typing.Tuple[float,float]: return x\ntest_eq(anno_ret(f), typing.Tuple[float,float])\ndef f(x) -> None: return x\ntest_eq(anno_ret(f), NoneType)\ndef f(x): return x\ntest_eq(anno_ret(f), None)\ntest_eq(anno_ret(None), None)", "_____no_output_____" ], [ "#export\ncmp_instance = functools.cmp_to_key(lambda a,b: 0 if a==b else 1 if issubclass(a,b) else -1)", "_____no_output_____" ], [ "td = {int:1, numbers.Number:2, numbers.Integral:3}\ntest_eq(sorted(td, key=cmp_instance), [numbers.Number, numbers.Integral, int])", "_____no_output_____" ], [ "#export\ndef _p1_anno(f):\n \"Get the annotation of first param of `f`\"\n hints = type_hints(f)\n ann = [o for n,o in hints.items() if n!='return']\n return ann[0] if ann else object", "_____no_output_____" ], [ "def _f(a, b): pass\ntest_eq(_p1_anno(_f), object)\ndef _f(a, b)->str: pass\ntest_eq(_p1_anno(_f), object)\ndef _f(a, b:str)->float: pass\ntest_eq(_p1_anno(_f), str)\ndef _f(a:int, b:int)->float: pass\ntest_eq(_p1_anno(_f), int)\ndef _f(a:int, b:str)->float: pass\ntest_eq(_p1_anno(_f), int)\ntest_eq(_p1_anno(attrgetter('foo')), object)", "_____no_output_____" ] ], [ [ "## Types", "_____no_output_____" ], [ "`TensorImage`, `TensorImageBW` and `TensorMask` are subclasses of `torch.Tensor` that know how to show themselves.", "_____no_output_____" ] ], [ [ "#export\n@delegates(plt.subplots, keep=True)\ndef subplots(nrows=1, ncols=1, **kwargs):\n fig,ax = plt.subplots(nrows,ncols,**kwargs)\n if nrows*ncols==1: ax = array([ax])\n return fig,ax", "_____no_output_____" ], [ "#export\nclass TensorImageBase(TensorBase):\n _show_args = {'cmap':'viridis'}\n def show(self, ctx=None, **kwargs):\n return show_image(self, ctx=ctx, **{**self._show_args, **kwargs})\n\n def get_ctxs(self, max_n=10, rows=None, cols=None, figsize=None, **kwargs):\n n_samples = min(self.shape[0], max_n)\n rows = rows or int(np.ceil(math.sqrt(n_samples)))\n cols = cols or int(np.ceil(math.sqrt(n_samples)))\n figsize = (cols*3, 
rows*3) if figsize is None else figsize\n _,axs = subplots(rows, cols, figsize=figsize)\n return axs.flatten()", "_____no_output_____" ], [ "#export\nclass TensorImage(TensorImageBase): pass", "_____no_output_____" ], [ "#export\nclass TensorImageBW(TensorImage): _show_args = {'cmap':'Greys'}", "_____no_output_____" ], [ "#export\nclass TensorMask(TensorImageBase): _show_args = {'alpha':0.5, 'cmap':'tab20'}", "_____no_output_____" ], [ "im = Image.open(TEST_IMAGE)", "_____no_output_____" ], [ "im_t = TensorImage(array(im))\ntest_eq(type(im_t), TensorImage)", "_____no_output_____" ], [ "im_t2 = TensorMask(tensor(1))\ntest_eq(type(im_t2), TensorMask)\ntest_eq(im_t2, tensor(1))", "_____no_output_____" ], [ "ax = im_t.show(figsize=(2,2))", "_____no_output_____" ], [ "test_fig_exists(ax)", "_____no_output_____" ], [ "#hide\naxes = im_t.get_ctxs(1)\ntest_eq(axes.shape,[1])\nplt.close()\naxes = im_t.get_ctxs(4)\ntest_eq(axes.shape,[4])\nplt.close()", "_____no_output_____" ] ], [ [ "## TypeDispatch -", "_____no_output_____" ], [ "The following class is the basis that allows us to do type dispatch with type annotations. It contains a dictionary type -> functions and ensures that the proper function is called when passed an object (depending on its type).", "_____no_output_____" ] ], [ [ "#export\nclass TypeDispatch:\n \"Dictionary-like object; `__getitem__` matches keys of types using `issubclass`\"\n def __init__(self, *funcs):\n self.funcs,self.cache = {},{}\n for f in funcs: self.add(f)\n self.inst = None\n\n def _reset(self):\n self.funcs = {k:self.funcs[k] for k in sorted(self.funcs, key=cmp_instance, reverse=True)}\n self.cache = {**self.funcs}\n\n def add(self, f):\n \"Add type `t` and function `f`\"\n self.funcs[_p1_anno(f) or object] = f\n self._reset()\n\n def returns(self, x): return anno_ret(self[type(x)])\n def returns_none(self, x):\n r = anno_ret(self[type(x)])\n return r if r == NoneType else None\n\n def __repr__(self): return str({getattr(k,'__name__',str(k)):v.__name__ for k,v in self.funcs.items()})\n\n def __call__(self, x, *args, **kwargs):\n f = self[type(x)]\n if not f: return x\n if self.inst is not None: f = types.MethodType(f, self.inst)\n return f(x, *args, **kwargs)\n\n def __get__(self, inst, owner):\n self.inst = inst\n return self\n\n def __getitem__(self, k):\n \"Find first matching type that is a super-class of `k`\"\n if k in self.cache: return self.cache[k]\n types = [f for f in self.funcs if issubclass(k,f)]\n res = self.funcs[types[0]] if types else None\n self.cache[k] = res\n return res", "_____no_output_____" ], [ "def f_col(x:typing.Collection): return x\ndef f_nin(x:numbers.Integral)->int: return x+1\ndef f_bti(x:TensorMask): return x\ndef f_fti(x:TensorImage): return x\ndef f_bll(x:bool): return x\ndef f_num(x:numbers.Number): return x\nt = TypeDispatch(f_nin,f_fti,f_num,f_bti,f_bll)\n\ntest_eq(t[int], f_nin)\ntest_eq(t[str], None)\ntest_eq(t[TensorImage], f_fti)\ntest_eq(t[float], f_num)\nt.add(f_col)\ntest_eq(t[str], f_col)\ntest_eq(t[int], f_nin)\ntest_eq(t(1), 2)\ntest_eq(t.returns(1), int)\nt", "_____no_output_____" ], [ "def m_nin(self, x:numbers.Integral): return x+1\ndef m_bll(self, x:bool): self.foo='a'\ndef m_num(self, x:numbers.Number): return x\n\nt = TypeDispatch(m_nin,m_num,m_bll)\nclass A: f = t\na = A()\ntest_eq(a.f(1), 2)\ntest_eq(a.f(1.), 1.)\na.f(False)\ntest_eq(a.foo, 'a')", "_____no_output_____" ] ], [ [ "## Transform -", "_____no_output_____" ] ], [ [ "#export\n_tfm_methods = 'encodes','decodes','setups'\n\nclass _TfmDict(dict):\n def 
__setitem__(self,k,v):\n if k not in _tfm_methods or not callable(v): return super().__setitem__(k,v)\n if k not in self: super().__setitem__(k,TypeDispatch())\n res = self[k]\n res.add(v)", "_____no_output_____" ], [ "#export\nclass _TfmMeta(type):\n def __new__(cls, name, bases, dict):\n res = super().__new__(cls, name, bases, dict)\n res.__signature__ = inspect.signature(res.__init__)\n return res\n\n def __call__(cls, *args, **kwargs):\n f = args[0] if args else None\n n = getattr(f,'__name__',None)\n for nm in _tfm_methods:\n if not hasattr(cls,nm): setattr(cls, nm, TypeDispatch())\n if callable(f) and n in _tfm_methods:\n getattr(cls,n).add(f)\n return f\n return super().__call__(*args, **kwargs)\n\n @classmethod\n def __prepare__(cls, name, bases): return _TfmDict()", "_____no_output_____" ], [ "#export\nclass Transform(metaclass=_TfmMeta):\n \"Delegates (`__call__`,`decode`,`setup`) to (`encodes`,`decodes`,`setups`) if `filt` matches\"\n filt,init_enc,as_item_force,as_item,order = None,False,None,True,0\n def __init__(self, enc=None, dec=None, filt=None, as_item=False):\n self.filt,self.as_item = ifnone(filt, self.filt),as_item\n self.init_enc = enc or dec\n if not self.init_enc: return\n\n # Passing enc/dec, so need to remove (base) class level enc/dec\n del(self.__class__.encodes,self.__class__.decodes,self.__class__.setups)\n self.encodes,self.decodes,self.setups = TypeDispatch(),TypeDispatch(),TypeDispatch()\n if enc:\n self.encodes.add(enc)\n self.order = getattr(self.encodes,'order',self.order)\n if dec: self.decodes.add(dec)\n\n @property\n def use_as_item(self): return ifnone(self.as_item_force, self.as_item)\n def __call__(self, x, **kwargs): return self._call('encodes', x, **kwargs)\n def decode (self, x, **kwargs): return self._call('decodes', x, **kwargs)\n def setup(self, items=None): return self.setups(items)\n def __repr__(self): return f'{self.__class__.__name__}: {self.use_as_item} {self.encodes} {self.decodes}'\n\n def _call(self, fn, x, filt=None, **kwargs):\n if filt!=self.filt and self.filt is not None: return x\n f = getattr(self, fn)\n if self.use_as_item or not is_listy(x): return self._do_call(f, x, **kwargs)\n res = tuple(self._do_call(f, x_, **kwargs) for x_ in x)\n return retain_type(res, x)\n\n def _do_call(self, f, x, **kwargs):\n return x if f is None else retain_type(f(x, **kwargs), x, f.returns_none(x))\n\nadd_docs(Transform, decode=\"Delegate to `decodes` to undo transform\", setup=\"Delegate to `setups` to set up transform\")", "_____no_output_____" ], [ "show_doc(Transform)", "_____no_output_____" ] ], [ [ "A `Transform` is the main building block of the fastai data pipelines. In the most general terms a transform can be any function you want to apply to your data, however the `Transform` class provides several mechanisms that make the process of building them easy and flexible.\n\n### The main `Transform` features:\n\n- **Type dispatch** - Type annotations are used to determine if a transform should be applied to the given argument. It also gives an option to provide several implementations and it chooses the one to run based on the type. This is useful for example when running both independent and dependent variables through the pipeline where some transforms only make sense for one and not the other. Another use case is designing a transform that handles different data formats. Note that if a transform takes multiple arguments only the type of the first one is used for dispatch. 
\n- **Handling of tuples** - When a tuple (or another collection satisfying `is_listy`) of data is passed to a transform it will get applied to each element separately. Most commonly it will be a *(x,y)* tuple, but it can be anything for example a list of images. You can opt out of this behavior by setting the flag `as_item=True`. For transforms that must always operate on the tuple level you can set `as_item_force=True` which takes precedence over `as_item`, an example of that is `PointScaler`.\n- **Reversibility** - A transform can be made reversible by implementing the `decodes` method. This is mainly used to turn something like a category which is encoded as a number back into a label understandable by humans for showing purposes.\n- **Type propagation** - Whenever possible a transform tries to return data of the same type it received. Mainly used to maintain semantics of things like `TensorImage` which is a thin wrapper of PyTorch's `Tensor`. You can opt out of this behavior by adding `->None` return type annotation.\n- **Preprocessing** - The `setup` method can be used to perform any one-time calculations to be later used by the transform, for example generating a vocabulary to encode categorical data.\n- **Filtering based on the dataset type** - By setting the `filt` flag you can make the transform be used only in a specific `DataSource` subset like in training, but not validation.\n- **Ordering** - You can set the `order` attribute which the `Pipeline` uses when it needs to merge two lists of transforms.\n- **Appending new behavior with decorators** - You can easily extend an existing `Transform` by creating `encodes` or `decodes` methods for new data types. You can put those new methods outside the original transform definition and decorate them with the class you wish them patched into. This can be used by the fastai library users to add their own behavior, or multiple modules contributing to the same transform.\n\n### Defining a `Transform`\nThere are a few ways to create a transform with different ratios of simplicity to flexibility.\n- **Extending the `Transform` class** - Use inheritance to implement the methods you want.\n- **Passing methods to the constructor** - Instantiate the `Transform` class and pass your functions as `enc` and `dec` arguments.\n- **@Transform decorator** - Turn any function into a `Transform` by just adding a decorator - very straightforward if all you need is a single `encodes` implementation.\n- **Passing a function to fastai APIs** - Same as above, but when passing a function to other transform aware classes like `Pipeline` or `TfmdDS` you don't even need a decorator. 
Your function will get converted to a `Transform` automatically.", "_____no_output_____" ] ], [ [ "class A(Transform): pass\n@A\ndef encodes(self, x): return x+1\nf1 = A()\ntest_eq(f1(1), 2)\n\nclass B(A): pass\nf2 = B()\ntest_eq(f2(1), 2)\n\nclass A(Transform): pass\nf3 = A()\ntest_eq_type(f3(2), 2)\ntest_eq_type(f3.decode(2.0), 2.0)", "_____no_output_____" ] ], [ [ "`Transform` can be used as a decorator, to turn a function into a `Transform`.", "_____no_output_____" ] ], [ [ "f = Transform(lambda o:o//2)", "_____no_output_____" ], [ "test_eq_type(f(2), 1)\ntest_eq_type(f.decode(2.0), 2.0)", "_____no_output_____" ], [ "@Transform\ndef f(x): return x//2\ntest_eq_type(f(2), 1)\ntest_eq_type(f.decode(2.0), 2.0)", "_____no_output_____" ] ], [ [ "You can derive from `Transform` and use `encodes` for your encoding function.", "_____no_output_____" ] ], [ [ "class A(Transform):\n def encodes(self, x:TensorImage): return -x\n def decodes(self, x:TensorImage): return x+1\n def setups (self, x:TensorImage): x.foo = 'a'\nf = A()\nt = f(im_t)\ntest_eq(t, -im_t)\ntest_eq(f(1), 1)\ntest_eq(type(t), TensorImage)\ntest_eq(f.decode(t), -im_t+1)\ntest_eq(f.decode(1), 1)\nf.setup(im_t)\ntest_eq(im_t.foo, 'a')\nt2 = tensor(1)\nf.setup(t2)\nassert not hasattr(f2,'foo')\nf", "_____no_output_____" ] ], [ [ "Without return annotation we get an `Int` back since that's what was passed.", "_____no_output_____" ] ], [ [ "class A(Transform): pass\n@A\ndef encodes(self, x:Int): return x//2\n@A\ndef encodes(self, x:float): return x+1\n\nf = A()\ntest_eq_type(f(Int(2)), Int(1))\ntest_eq_type(f(2), 2)\ntest_eq_type(f(2.), 3.)", "_____no_output_____" ] ], [ [ "Without return annotation we don't cast if we're not a subclass of the input type.", "_____no_output_____" ] ], [ [ "class A(Transform):\n def encodes(self, x:Int): return x/2\n def encodes(self, x:float): return x+1\n\nf = A()\ntest_eq_type(f(Int(2)), 1.)\ntest_eq_type(f(2), 2)\ntest_eq_type(f(Float(2.)), Float(3.))", "_____no_output_____" ] ], [ [ "With return annotation `None` we get back whatever Python creates usually.", "_____no_output_____" ] ], [ [ "def func(x)->None: return x/2\nf = Transform(func)\ntest_eq_type(f(2), 1.)\ntest_eq_type(f(2.), 1.)", "_____no_output_____" ] ], [ [ "Since `decodes` has no return annotation, but `encodes` created an `Int` and we pass that result here to `decode`, we end up with an `Int`.", "_____no_output_____" ] ], [ [ "def func(x): return Int(x+1)\ndef dec (x): return x-1\nf = Transform(func,dec)\nt = f(1)\ntest_eq_type(t, Int(2))\ntest_eq_type(f.decode(t), Int(1))", "_____no_output_____" ] ], [ [ "If the transform has `filt` then it's only applied if `filt` param matches.", "_____no_output_____" ] ], [ [ "f.filt = 1\ntest_eq(f(1, filt=1),2)\ntest_eq_type(f(1, filt=0), 1)", "_____no_output_____" ] ], [ [ "If `as_item=True` the transform takes tuples as a whole and is applied to them.", "_____no_output_____" ] ], [ [ "class A(Transform): \n def encodes(self, xy): x,y=xy; return (x+y,y)\n def decodes(self, xy): x,y=xy; return (x-y,y)\n\nf = A(as_item=True)\nt = f((1,2))\ntest_eq(t, (3,2))\ntest_eq(f.decode(t), (1,2))\nf.filt = 1\ntest_eq(f((1,2), filt=1), (3,2))\ntest_eq(f((1,2), filt=0), (1,2))", "_____no_output_____" ], [ "class AL(Transform): pass\n@AL\ndef encodes(self, x): return L(x_+1 for x_ in x)\n@AL\ndef decodes(self, x): return L(x_-1 for x_ in x)\n\nf = AL(as_item=True)\nt = f([1,2])\ntest_eq(t, [2,3])\ntest_eq(f.decode(t), [1,2])", "_____no_output_____" ] ], [ [ "If `as_item=False` the transform is applied to each 
element of a listy input.", "_____no_output_____" ] ], [ [ "def neg_int(x:numbers.Integral): return -x\n\nf = Transform(neg_int, as_item=False)\ntest_eq(f([1]), (-1,))\ntest_eq(f([1.]), (1.,))\ntest_eq(f([1.,2,3.]), (1.,-2,3.))\ntest_eq(f.decode([1,2]), (1,2))", "_____no_output_____" ], [ "#export\nclass InplaceTransform(Transform):\n \"A `Transform` that modifies in-place and just returns whatever it's passed\"\n def _call(self, fn, x, filt=None, **kwargs):\n super()._call(fn,x,filt,**kwargs)\n return x", "_____no_output_____" ] ], [ [ "## TupleTransform", "_____no_output_____" ] ], [ [ "#export\nclass TupleTransform(Transform):\n \"`Transform` that always treats `as_item` as `False`\"\n as_item_force=False", "_____no_output_____" ], [ "#export\nclass ItemTransform (Transform):\n \"`Transform` that always treats `as_item` as `True`\"\n as_item_force=True", "_____no_output_____" ], [ "def float_to_int(x:(float,int)): return Int(x)\n\nf = TupleTransform(float_to_int)\ntest_eq_type(f([1.]), (Int(1),))\ntest_eq_type(f([1]), (Int(1),))\ntest_eq_type(f(['1']), ('1',))\ntest_eq_type(f([1,'1']), (Int(1),'1'))\ntest_eq(f.decode([1]), [1])\n\ntest_eq_type(f(TupleBase(1.)), TupleBase(Int(1)))", "_____no_output_____" ], [ "class B(TupleTransform): pass\nclass C(TupleTransform): pass\nf = B()\ntest_eq(f([1]), [1])", "_____no_output_____" ], [ "@B\ndef encodes(self, x:int): return x+1\n@B\ndef encodes(self, x:str): return x+'1'\n@B\ndef encodes(self, x)->None: return str(x)+'!'\n\nb,c = B(),C()\ntest_eq(b([1]), [2])\ntest_eq(b(['1']), ('11',))\ntest_eq(b([1.0]), ('1.0!',))\ntest_eq(c([1]), [1])\ntest_eq(b([1,2]), (2,3))\ntest_eq(b.decode([2]), [2])\nassert pickle.loads(pickle.dumps(b))", "_____no_output_____" ], [ "@B\ndef decodes(self, x:int): return x-1\ntest_eq(b.decode([2]), [1])\ntest_eq(b.decode(('2',)), ('2',))", "_____no_output_____" ] ], [ [ "Non-type-constrained functions are applied to all elements of a tuple.", "_____no_output_____" ] ], [ [ "class A(TupleTransform): pass\n@A\ndef encodes(self, x): return x+1\n@A\ndef decodes(self, x): return x-1\n\nf = A()\nt = f((1,2.0))\ntest_eq_type(t, (2,3.0))\ntest_eq_type(f.decode(t), (1,2.0))", "_____no_output_____" ] ], [ [ "Type-constrained functions are applied to only matching elements of a tuple, and return annotations are only applied where matching.", "_____no_output_____" ] ], [ [ "class B(TupleTransform):\n def encodes(self, x:int): return Int(x+1)\n def encodes(self, x:str): return x+'1'\n def decodes(self, x:Int): return x//2\n\nf = B()\nstart = (1.,2,'3')\nt = f(start)\ntest_eq_type(t, (1.,Int(3),'31'))\ntest_eq(f.decode(t), (1.,Int(1),'31'))", "_____no_output_____" ] ], [ [ "The same behavior also works with `typing` module type classes.", "_____no_output_____" ] ], [ [ "class A(Transform): pass\n@A\ndef encodes(self, x:numbers.Integral): return x+1\n@A\ndef encodes(self, x:float): return x*3\n@A\ndef decodes(self, x:int): return x-1\n\nf = A()\nstart = 1.0\nt = f(start)\ntest_eq(t, 3.)\ntest_eq(f.decode(t), 3)\n\nf = A(as_item=False)\nstart = (1.,2,3.)\nt = f(start)\ntest_eq(t, (3.,3,9.))\ntest_eq(f.decode(t), (3.,2,9.))", "_____no_output_____" ] ], [ [ "Transform accepts lists", "_____no_output_____" ] ], [ [ "def a(x): return L(x_+1 for x_ in x)\ndef b(x): return L(x_-1 for x_ in x)\nf = TupleTransform(a,b)\n\nt = f((L(1,2),))\ntest_eq(t, (L(2,3),))\ntest_eq(f.decode(t), (L(1,2),))", "_____no_output_____" ] ], [ [ "## Func -", "_____no_output_____" ] ], [ [ "#export\ndef get_func(t, name, *args, **kwargs):\n \"Get the `t.name` 
(potentially partial-ized with `args` and `kwargs`) or `noop` if not defined\"\n f = getattr(t, name, noop)\n return f if not (args or kwargs) else partial(f, *args, **kwargs)", "_____no_output_____" ] ], [ [ "This works for any kind of `t` supporting `getattr`, so a class or a module.", "_____no_output_____" ] ], [ [ "test_eq(get_func(operator, 'neg', 2)(), -2)\ntest_eq(get_func(operator.neg, '__call__')(2), -2)\ntest_eq(get_func(list, 'foobar')([2]), [2])\nt = get_func(torch, 'zeros', dtype=torch.int64)(5)\ntest_eq(t.dtype, torch.int64)\na = [2,1]\nget_func(list, 'sort')(a)\ntest_eq(a, [1,2])", "_____no_output_____" ] ], [ [ "Transforms are built with multiple-dispatch: a given function can have several methods depending on the type of the object received. This is done directly with the `TypeDispatch` module and type-annotation in `Transform`, but you can also use the following class.", "_____no_output_____" ] ], [ [ "#export\nclass Func():\n \"Basic wrapper around a `name` with `args` and `kwargs` to call on a given type\"\n def __init__(self, name, *args, **kwargs): self.name,self.args,self.kwargs = name,args,kwargs\n def __repr__(self): return f'sig: {self.name}({self.args}, {self.kwargs})'\n def _get(self, t): return get_func(t, self.name, *self.args, **self.kwargs)\n def __call__(self,t): return mapped(self._get, t)", "_____no_output_____" ] ], [ [ "You can call the `Func` object on any module name or type, even a list of types. It will return the corresponding function (with a default to `noop` if nothing is found) or list of functions.", "_____no_output_____" ] ], [ [ "test_eq(Func('sqrt')(math), math.sqrt)\ntest_eq(Func('sqrt')(torch), torch.sqrt)\n\n@patch\ndef powx(x:math, a): return math.pow(x,a)\n@patch\ndef powx(x:torch, a): return torch.pow(x,a)\ntst = Func('powx',a=2)([math, torch])\ntest_eq([f.func for f in tst], [math.powx, torch.powx])\nfor t in tst: test_eq(t.keywords, {'a': 2})", "_____no_output_____" ], [ "#export\nclass _Sig():\n def __getattr__(self,k):\n def _inner(*args, **kwargs): return Func(k, *args, **kwargs)\n return _inner\n\nSig = _Sig()", "_____no_output_____" ], [ "show_doc(Sig, name=\"Sig\")", "_____no_output_____" ] ], [ [ "`Sig` is just sugar-syntax to create a `Func` object more easily with the syntax `Sig.name(*args, **kwargs)`.", "_____no_output_____" ] ], [ [ "f = Sig.sqrt()\ntest_eq(f(math), math.sqrt)\ntest_eq(f(torch), torch.sqrt)", "_____no_output_____" ] ], [ [ "## Pipeline -", "_____no_output_____" ] ], [ [ "#export\ndef compose_tfms(x, tfms, is_enc=True, reverse=False, **kwargs):\n \"Apply all `func_nm` attribute of `tfms` on `x`, maybe in `reverse` order\"\n if reverse: tfms = reversed(tfms)\n for f in tfms:\n if not is_enc: f = f.decode\n x = f(x, **kwargs)\n return x", "_____no_output_____" ], [ "def to_int (x): return Int(x)\ndef to_float(x): return Float(x)\ndef double (x): return x*2\ndef half(x)->None: return x/2", "_____no_output_____" ], [ "def test_compose(a, b, *fs): test_eq_type(compose_tfms(a, tfms=map(Transform,fs)), b)\n\ntest_compose(1, Int(1), to_int)\ntest_compose(1, Float(1), to_int,to_float)\ntest_compose(1, Float(2), to_int,to_float,double)\ntest_compose(2.0, 2.0, to_int,double,half)", "_____no_output_____" ], [ "class A(Transform):\n def encodes(self, x:float): return Float(x+1)\n def decodes(self, x): return x-1\n \ntfms = [A(), Transform(math.sqrt)]\nt = compose_tfms(3., tfms=tfms)\ntest_eq_type(t, Float(2.))\ntest_eq(compose_tfms(t, tfms=tfms, is_enc=False), 1.)\ntest_eq(compose_tfms(4., tfms=tfms, reverse=True), 
3.)", "_____no_output_____" ], [ "tfms = [A(as_item=False), Transform(math.sqrt, as_item=False)]\ntest_eq(compose_tfms((9,3.), tfms=tfms), (3,2.))", "_____no_output_____" ], [ "#export\ndef mk_transform(f, as_item=True):\n \"Convert function `f` to `Transform` if it isn't already one\"\n f = instantiate(f)\n return f if isinstance(f,Transform) else Transform(f, as_item=as_item)", "_____no_output_____" ], [ "def neg(x): return -x\ntest_eq(type(mk_transform(neg)), Transform)\ntest_eq(type(mk_transform(math.sqrt)), Transform)\ntest_eq(type(mk_transform(lambda a:a*2)), Transform)", "_____no_output_____" ], [ "#export\ndef gather_attrs(o, k, nm):\n \"Used in __getattr__ to collect all attrs `k` from `self.{nm}`\"\n if k.startswith('_') or k==nm: raise AttributeError(k)\n att = getattr(o,nm)\n res = [t for t in att.attrgot(k) if t is not None]\n if not res: raise AttributeError(k)\n return res[0] if len(res)==1 else L(res)", "_____no_output_____" ], [ "#export\nclass Pipeline:\n \"A pipeline of composed (for encode/decode) transforms, setup with types\"\n def __init__(self, funcs=None, as_item=False, filt=None):\n self.filt,self.default = filt,None\n if isinstance(funcs, Pipeline): self.fs = funcs.fs\n else:\n if isinstance(funcs, Transform): funcs = [funcs]\n self.fs = L(ifnone(funcs,[noop])).mapped(mk_transform).sorted(key='order')\n for f in self.fs:\n name = camel2snake(type(f).__name__)\n a = getattr(self,name,None)\n if a is not None: f = L(a)+f\n setattr(self, name, f)\n self.set_as_item(as_item)\n\n def set_as_item(self, as_item):\n self.as_item = as_item\n for f in self.fs: f.as_item = as_item\n\n def setup(self, items=None):\n tfms = self.fs[:]\n self.fs.clear()\n for t in tfms: self.add(t,items)\n\n def add(self,t, items=None):\n t.setup(items)\n self.fs.append(t)\n\n def __call__(self, o): return compose_tfms(o, tfms=self.fs, filt=self.filt)\n def decode (self, o): return compose_tfms(o, tfms=self.fs, is_enc=False, reverse=True, filt=self.filt)\n def __repr__(self): return f\"Pipeline: {self.fs}\"\n def __getitem__(self,i): return self.fs[i]\n def decode_batch(self, b, max_n=10): return batch_to_samples(b, max_n=max_n).mapped(self.decode)\n def __setstate__(self,data): self.__dict__.update(data)\n def __getattr__(self,k): return gather_attrs(self, k, 'fs')\n \n def show(self, o, ctx=None, **kwargs):\n for f in reversed(self.fs):\n res = self._show(o, ctx, **kwargs)\n if res is not None: return res\n o = f.decode(o, filt=self.filt)\n return self._show(o, ctx, **kwargs)\n\n def _show(self, o, ctx, **kwargs):\n o1 = [o] if self.as_item or not is_listy(o) else o\n if not all(hasattr(o_, 'show') for o_ in o1): return\n for o_ in o1: ctx = o_.show(ctx=ctx, **kwargs)\n return ifnone(ctx,1)", "_____no_output_____" ], [ "add_docs(Pipeline,\n __call__=\"Compose `__call__` of all `fs` on `o`\",\n decode=\"Compose `decode` of all `fs` on `o`\",\n show=\"Show `o`, a single item from a tuple, decoding as needed\",\n add=\"Add transform `t`\",\n decode_batch=\"`decode` all sample in a the batch `b`\",\n set_as_item=\"Set value of `as_item` for all transforms\",\n setup=\"Call each tfm's `setup` in order\")", "_____no_output_____" ] ], [ [ "`Pipeline` is a wrapper for `compose_tfm`. You can pass instances of `Transform` or regular functions in `funcs`, the `Pipeline` will wrap them all in `Transform` (and instantiate them if needed) during the initialization. 
It handles the transform `setup` by adding them one at a time and calling setup on each, goes through them in order in `__call__` or `decode` and can `show` an object by decoding through the transforms up until the point it gets an object that knows how to show itself.", "_____no_output_____" ] ], [ [ "# Empty pipeline is noop\npipe = Pipeline()\ntest_eq(pipe(1), 1)\npipe.set_as_item(False)\ntest_eq(pipe((1,)), (1,))\n# Check pickle works\nassert pickle.loads(pickle.dumps(pipe))", "_____no_output_____" ], [ "class IntFloatTfm(Transform):\n def encodes(self, x): return Int(x)\n def decodes(self, x): return Float(x)\n foo=1\n\nint_tfm=IntFloatTfm()\n\ndef neg(x): return -x\nneg_tfm = Transform(neg, neg)", "_____no_output_____" ], [ "pipe = Pipeline([neg_tfm, int_tfm])\n\nstart = 2.0\nt = pipe(start)\ntest_eq_type(t, Int(-2))\ntest_eq_type(pipe.decode(t), Float(start))\ntest_stdout(lambda:pipe.show(t), '-2')\n\npipe.set_as_item(False)\ntest_stdout(lambda:pipe.show(pipe((1.,2.))), '-1\\n-2')", "_____no_output_____" ] ], [ [ "Transforms are available as attributes named with the snake_case version of the names of their types. Attributes in transforms can be directly accessed as attributes of the pipeline.", "_____no_output_____" ] ], [ [ "test_eq(pipe.int_float_tfm, int_tfm)\ntest_eq(pipe.foo, 1)\n\npipe = Pipeline([int_tfm, int_tfm])\npipe.int_float_tfm\ntest_eq(pipe.int_float_tfm[0], int_tfm)\ntest_eq(pipe.foo, [1,1])", "_____no_output_____" ], [ "# Check opposite order\npipe = Pipeline([int_tfm,neg_tfm])\nt = pipe(start)\ntest_eq(t, -2)\ntest_stdout(lambda:pipe.show(t), '-2')", "_____no_output_____" ], [ "class A(Transform):\n def encodes(self, x): return int(x)\n def decodes(self, x): return Float(x)\n \npipe = Pipeline([neg_tfm, A])\nt = pipe(start)\ntest_eq_type(t, -2)\ntest_eq_type(pipe.decode(t), Float(start))\ntest_stdout(lambda:pipe.show(t), '-2.0')", "_____no_output_____" ], [ "s2 = (1,2)\npipe.set_as_item(False)\nt = pipe(s2)\ntest_eq_type(t, (-1,-2))\ntest_eq_type(pipe.decode(t), (Float(1.),Float(2.)))\ntest_stdout(lambda:pipe.show(t), '-1.0\\n-2.0')", "_____no_output_____" ], [ "class B(Transform):\n def encodes(self, x): return x+1\n def decodes(self, x): return x-1", "_____no_output_____" ], [ "from PIL import Image\n\ndef f1(x:TensorImage): return -x\ndef f2(x): return Image.open(x).resize((128,128))\ndef f3(x:Image.Image): return(TensorImage(array(x)))", "_____no_output_____" ], [ "pipe = Pipeline([f2,f3,f1])\nt = pipe(TEST_IMAGE)\ntest_eq(type(t), TensorImage)\ntest_eq(t, -tensor(f3(f2(TEST_IMAGE))))", "_____no_output_____" ], [ "pipe = Pipeline([f2,f3])\nt = pipe(TEST_IMAGE)\nax = pipe.show(t)", "_____no_output_____" ], [ "test_fig_exists(ax)", "_____no_output_____" ], [ "#Check filtering is properly applied\nadd1 = B()\nadd1.filt = 1\npipe = Pipeline([neg_tfm, A(), add1])\ntest_eq(pipe(start), -2)\npipe.filt=1\ntest_eq(pipe(start), -1)\npipe.filt=0\ntest_eq(pipe(start), -2)\nfor t in [None, 0, 1]:\n pipe.filt=t\n test_eq(pipe.decode(pipe(start)), start)\n test_stdout(lambda: pipe.show(pipe(start)), \"-2.0\")", "_____no_output_____" ] ], [ [ "### Methods", "_____no_output_____" ] ], [ [ "#TODO: method examples", "_____no_output_____" ], [ "show_doc(Pipeline.__call__)", "_____no_output_____" ], [ "show_doc(Pipeline.decode)", "_____no_output_____" ], [ "show_doc(Pipeline.decode_batch)", "_____no_output_____" ], [ "pipe.set_as_item(False)\nt = tensor([1,2,3])\npipe.filt=1\ntest_eq(pipe.decode_batch([t,t+1], max_n=2), [(0,-1),(-1,-2)])", "_____no_output_____" ], [ 
"show_doc(Pipeline.setup)", "_____no_output_____" ] ], [ [ "During the setup, the `Pipeline` starts with no transform and adds them one at a time, so that during its setup, each transform gets the items processed up to its point and not after.", "_____no_output_____" ] ], [ [ "#hide\n#Test is with TfmdList", "_____no_output_____" ] ], [ [ "## Export -", "_____no_output_____" ] ], [ [ "#hide\nfrom local.notebook.export import notebook2script\nnotebook2script(all_fs=True)", "Converted 00_test.ipynb.\nConverted 01_core.ipynb.\nConverted 01a_torch_core.ipynb.\nConverted 02_script.ipynb.\nConverted 03_dataloader.ipynb.\nConverted 04_transform.ipynb.\nConverted 05_data_core.ipynb.\nConverted 06_data_transforms.ipynb.\nConverted 07_vision_core.ipynb.\nConverted 08_pets_tutorial.ipynb.\nConverted 09_vision_augment.ipynb.\nConverted 11_layers.ipynb.\nConverted 11a_vision_models_xresnet.ipynb.\nConverted 12_optimizer.ipynb.\nConverted 13_learner.ipynb.\nConverted 14_callback_schedule.ipynb.\nConverted 15_callback_hook.ipynb.\nConverted 16_callback_progress.ipynb.\nConverted 17_callback_tracker.ipynb.\nConverted 18_callback_fp16.ipynb.\nConverted 19_callback_mixup.ipynb.\nConverted 20_metrics.ipynb.\nConverted 21_tutorial_imagenette.ipynb.\nConverted 22_vision_learner.ipynb.\nConverted 23_tutorial_transfer_learning.ipynb.\nConverted 30_text_core.ipynb.\nConverted 31_text_data.ipynb.\nConverted 32_text_models_awdlstm.ipynb.\nConverted 33_text_models_core.ipynb.\nConverted 34_callback_rnn.ipynb.\nConverted 35_tutorial_wikitext.ipynb.\nConverted 36_text_models_qrnn.ipynb.\nConverted 40_tabular_core.ipynb.\nConverted 41_tabular_model.ipynb.\nConverted 42_tabular_rapids.ipynb.\nConverted 50_data_block.ipynb.\nConverted 90_notebook_core.ipynb.\nConverted 91_notebook_export.ipynb.\nConverted 92_notebook_showdoc.ipynb.\nConverted 93_notebook_export2html.ipynb.\nConverted 94_index.ipynb.\nConverted 95_utils_test.ipynb.\nConverted 96_data_external.ipynb.\nConverted notebook2jekyll.ipynb.\nConverted tmp.ipynb.\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb13539c675e2e5e010eca6e6b03e5038187ec3a
17,628
ipynb
Jupyter Notebook
Homework notebooks/(HW notebooks) netology Machine learning/30. Hybrid Recommender Systems/hw5rs.ipynb
Alex110117/data_analysis
3cac3aac63d617b9fbd862788c778c2858445622
[ "MIT" ]
2
2020-07-22T07:33:13.000Z
2020-07-26T16:46:18.000Z
Homework notebooks/(HW notebooks) netology Machine learning/30. Hybrid Recommender Systems/hw5rs.ipynb
sibalex/data_analysis
3cac3aac63d617b9fbd862788c778c2858445622
[ "MIT" ]
null
null
null
Homework notebooks/(HW notebooks) netology Machine learning/30. Hybrid Recommender Systems/hw5rs.ipynb
sibalex/data_analysis
3cac3aac63d617b9fbd862788c778c2858445622
[ "MIT" ]
null
null
null
34.362573
247
0.463184
[ [ [ "import numpy as np\nimport pandas as pd\nfrom tqdm import tqdm_notebook, tqdm\nfrom scipy.spatial.distance import jaccard\n\nfrom surprise import Dataset, Reader, KNNBasic, KNNWithMeans, SVD, SVDpp, accuracy\nfrom surprise.model_selection import KFold, train_test_split, cross_validate, GridSearchCV\n\nimport warnings\nwarnings.simplefilter('ignore')", "_____no_output_____" ], [ "# !find * -iname 'movies.c*' -or -iname 'ratings.csv' -print -or -iname 'Library' -prune -or -iname 'Dropbox' -prune\n# !find * -iname 'movies.c*' -or -iname 'ratings.csv' -print -or -iname 'Library' -prune", "_____no_output_____" ], [ "movies = pd.read_csv('movies.csv') # Подгружаем данные\nratings = pd.read_csv('ratings.csv')", "_____no_output_____" ], [ "movies_with_ratings = movies.join(ratings.set_index('movieId'), on='movieId').reset_index(drop=True) # Объеденяем 'фильмы' и 'Оценки'\nmovies_with_ratings.dropna(inplace=True) # Удаляем пропуски\nmovies_with_ratings.head()", "_____no_output_____" ], [ "num_movies = movies_with_ratings.movieId.unique().shape[0] # len() Получаем колличество уникальных ID фильмов\nuniques = movies_with_ratings.movieId.unique() # Список уникальных ID фильмов <class 'numpy.ndarray'>\n\n\nuser_vector = {} # Формируем словарь (векторов), где {key=ID_юзера: values=array([Рейтинга])}\nfor user, group in movies_with_ratings.groupby('userId'):\n user_vector[user] = np.zeros(num_movies)\n \n for i in range(len(group.movieId.values)):\n m = np.argwhere(uniques==group.movieId.values[i])[0][0]\n r = group.rating.values[i]\n user_vector[user][m] = r\n\n \ndataset = pd.DataFrame({\n 'uid': movies_with_ratings.userId, \n 'iid': movies_with_ratings.title, \n 'rating': movies_with_ratings.rating\n}) # Формируем новый 'dataset' который будет учавствовать в нашей модели из библиотеки 'surprise'\n\n\ndataset.head()", "_____no_output_____" ], [ "reader = Reader(rating_scale=(0.5, 5.0)) # Указываем рейтинг где 0.5 минимальный, а 5.0 максимальный\ndata = Dataset.load_from_df(dataset, reader) # Преобразовываем 'dataset' в необходимый формат библиотеки 'surprise'\n\ntrainset, testset = train_test_split(data, test_size=.15, random_state=42) # Делим на train и test выборку\n\nalgo = SVDpp(n_factors=20, n_epochs=20) # Наша модель SVD++ (https://surprise.readthedocs.io/en/stable/matrix_factorization.html#surprise.prediction_algorithms.matrix_factorization.SVDpp)\nalgo.fit(trainset) # Обучаем модель на 'train'\n\ntest_pred = algo.test(testset) # Проверяем на 'test'\naccuracy.rmse(test_pred, verbose=True) # Смотрим на 'Среднюю Квадратическую Ошибку' (Root Mean Square Error)\n\n# Root Mean Square Error (RMSE) is the standard deviation of the residuals (prediction errors). \\\n# Residuals are a measure of how far from the regression line data points are; \\\n# RMSE is a measure of how spread out these residuals are. \\\n# In other words, it tells you how concentrated the data is around the line of best fit. 
", "RMSE: 0.8565\n" ], [ "def recommendation(uid=2.0, neighbors=5, ratin=4.5, films=5, top=5):\n \n '''\n uid - идентификационный номер пользователя, который запросил рекомендации\n neighbors - указываем необходимое количество похожих пользователей на 'uid' для поиска\n ratin - рейтинг фильмов похожих пользователей на 'uid'\n films - количество фильмов для предсказания оценки и сортировки\n top - количество рекомендованных фильмов пользователю 'uid'\n '''\n \n titles = [key for key in user_vector.keys() if key != uid] # только те ключи где != ID чтобы не брать фильмы пользователя которые он посмотрел и оценил\n distances = [jaccard(user_vector[uid], user_vector[key]) for key in user_vector.keys() if key != uid] # Джаккард (https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.jaccard.html#scipy.spatial.distance.jaccard)\n\n best_indexes = np.argsort(distances)[:neighbors] # Сортировка\n similar_users = np.array([(titles[i], distances[i]) for i in best_indexes])[:, 0]\n\n movies_with_ratings.sort_values('timestamp', inplace=True) # Сортировка по времени\n \n movies = np.array(list(set([]))) # Конструкция list(set()) для исключения дублей\n for user in similar_users:\n a = np.array(movies_with_ratings[movies_with_ratings.rating >= ratin][movies_with_ratings.userId == user][-films:].title)\n movies = np.concatenate([a, movies])\n\n user_movies = movies_with_ratings[movies_with_ratings.userId == uid].title.unique()\n\n scores = list(set([algo.predict(uid=uid, iid=movie).est for movie in movies]))\n titles_s = list(set([movie for movie in movies]))\n\n best_indexes = np.argsort(scores)[-top:] # Сортировка\n \n scores_r = [scores[i] for i in reversed(best_indexes)] #list(reversed([1, 2, 3, 4])) -> [4, 3, 2, 1]\n titles_r = [titles_s[i] for i in reversed(best_indexes)]\n \n # Объеденяем в один dataframe для вывода рекомендаций\n df1, df2 = pd.DataFrame(data=titles_r).reset_index(), pd.DataFrame(data=scores_r).reset_index()\n df1.columns, df2.columns = ['index','films'], ['index','scores']\n df = pd.merge(df1, df2, on='index')\n df['rank'] = df.scores.rank(ascending=False).astype('int')\n data = df[['rank', 'films', 'scores']]\n \n return data", "_____no_output_____" ], [ "''' Пользователь 2\n Похожих пользователей 10\n Из фильмов с минимальным рейтингом 4.5 у похожих пользователей\n По топ 10 фильмам похожих пользователей\n Топ рекомендаций 10 фильмов для пользователя '''\n\ndata = recommendation(uid=2.0, neighbors=10, ratin=4.5, films=10, top=10)", "_____no_output_____" ], [ "data.head(10)", "_____no_output_____" ], [ "pass", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb1360c2f67a66a357f604d1998b87df1dedff34
104,330
ipynb
Jupyter Notebook
notebooks/archive/0.2.0-hef-beacon-personal-analysis.ipynb
intelligent-environments-lab/utx000
af60a6162d21e38f8cfa5cdebc0f14e717205f12
[ "MIT" ]
null
null
null
notebooks/archive/0.2.0-hef-beacon-personal-analysis.ipynb
intelligent-environments-lab/utx000
af60a6162d21e38f8cfa5cdebc0f14e717205f12
[ "MIT" ]
95
2020-06-08T17:29:13.000Z
2021-11-04T02:03:22.000Z
notebooks/archive/0.2.0-hef-beacon-personal-analysis.ipynb
intelligent-environments-lab/utx000
af60a6162d21e38f8cfa5cdebc0f14e717205f12
[ "MIT" ]
1
2022-02-17T17:14:03.000Z
2022-02-17T17:14:03.000Z
869.416667
101,828
0.957481
[ [ [ "# Personal Beacon Analysis\nThis notebook is dedicated to exploring the data captured at my own apartment using Beacon 7", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport os\nfrom datetime import datetime", "_____no_output_____" ] ], [ [ "## Data Import", "_____no_output_____" ] ], [ [ "b7 = pd.DataFrame()\nfor file in os.listdir('/Users/hagenfritz/Projects/utx000/data/raw/bpeace2/beacon/B07/sensirion/'):\n temp = pd.read_csv(f'/Users/hagenfritz/Projects/utx000/data/raw/bpeace2/beacon/B07/sensirion/{file}',\n index_col=0,parse_dates=True,infer_datetime_format=True)\n b7 = pd.concat([b7,temp])\n \nb7 = b7.resample('5T').mean()", "_____no_output_____" ] ], [ [ "## Visualizing", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nimport matplotlib.dates as mdates", "_____no_output_____" ], [ "fig, ax = plt.subplots(figsize=(16,8))\n#b7_short = b7[datetime(2020,6,10):datetime(2020,6,11)]\nax.plot(b7.index,b7['CO2'])\nax.plot(b7.index,b7['PM_C_2p5']*10)\n\nplt.show()\nplt.close()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
cb13619b1d3109d705a788c6ed89d3886fc48de2
2,597
ipynb
Jupyter Notebook
logger.ipynb
psmathur/tabular_ml_toolkit
46bd88abc398aeb5c7024c5b24c7a7ea9b3b5fb1
[ "Apache-2.0" ]
1
2021-11-07T04:50:26.000Z
2021-11-07T04:50:26.000Z
logger.ipynb
psmathur/tabular_ml_toolkit
46bd88abc398aeb5c7024c5b24c7a7ea9b3b5fb1
[ "Apache-2.0" ]
null
null
null
logger.ipynb
psmathur/tabular_ml_toolkit
46bd88abc398aeb5c7024c5b24c7a7ea9b3b5fb1
[ "Apache-2.0" ]
null
null
null
20.611111
91
0.539084
[ [ [ "%load_ext autoreload\n%autoreload 2", "_____no_output_____" ], [ "# default_exp logger", "_____no_output_____" ], [ "#hide\nfrom nbdev.showdoc import *\nfrom nbdev import *", "_____no_output_____" ], [ "#export\nimport logging", "_____no_output_____" ], [ "#export\nhandler = logging.StreamHandler()\n# set custom formatting settings for info\nhandler.setFormatter(logging.Formatter(\"%(asctime)s %(levelname)s %(message)s\"))\n\n# now create logger\nlogger = logging.getLogger(\"tmlt\")\nlogger.addHandler(handler)\nlogger.setLevel(logging.INFO)", "_____no_output_____" ], [ "# hide\n# run the script to build \n\nfrom nbdev.export import notebook2script; notebook2script()", "Converted 00_dataframeloader.ipynb.\nConverted 01_preprocessor.ipynb.\nConverted 02_tmlt.ipynb.\nConverted 04_xgb_optuna_objective.ipynb.\nConverted Kaggle_TPS_Dec_Tutorial.ipynb.\nConverted Kaggle_TPS_Nov_Tutorial.ipynb.\nConverted index.ipynb.\nConverted logger.ipynb.\nConverted utility.ipynb.\nConverted xgb_tabular_ml_toolkit.ipynb.\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]
cb13759ab622546249eb8ec76f07c77cc70202e9
749,292
ipynb
Jupyter Notebook
Exploratory_Data_Analysis.ipynb
ArpitaChatterjee/Comedian-transcript-Analysis
26c03acfd735556af2d3f5ce220a5af206ce4ce7
[ "MIT" ]
1
2021-06-15T15:02:33.000Z
2021-06-15T15:02:33.000Z
Exploratory_Data_Analysis.ipynb
ArpitaChatterjee/Routine-Analysis-of-a-Comedian
26c03acfd735556af2d3f5ce220a5af206ce4ce7
[ "MIT" ]
null
null
null
Exploratory_Data_Analysis.ipynb
ArpitaChatterjee/Routine-Analysis-of-a-Comedian
26c03acfd735556af2d3f5ce220a5af206ce4ce7
[ "MIT" ]
null
null
null
352.442145
613,770
0.903506
[ [ [ "<a href=\"https://colab.research.google.com/github/ArpitaChatterjee/Comedian-transcript-Analysis/blob/main/Exploratory_Data_Analysis.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "#To find the pattern of each comedian and find the reason of the likable\n1. Most common words\n2. size of vocab\n3. Amt. of profanity used", "_____no_output_____" ], [ "##Most common words", "_____no_output_____" ] ], [ [ "#read dmt \nimport pandas as pd\n\ndata=pd.read_pickle('/content/drive/MyDrive/Colab Notebooks/NLP/dtm.pkl')\ndata= data.transpose()\ndata.head()", "_____no_output_____" ], [ "#find the top 30 words said by each comedian\ntop_dict={}\nfor c in data.columns:\n top= data[c].sort_values(ascending=False).head(30)\n top_dict[c]=list(zip(top.index, top.values))\n\ntop_dict", "_____no_output_____" ], [ "#print top 15 words by each comedian\nfor comedian, top_words in top_dict.items():\n print(comedian)\n print(', '.join([word for word, count in top_words[0:14]]))\n print('---')\n", "ali\nlike, im, know, just, dont, shit, thats, youre, gonna, ok, lot, wanna, gotta, oh\n---\nanthony\nim, like, know, dont, joke, got, thats, said, anthony, day, say, just, guys, people\n---\nbill\nlike, just, right, im, know, dont, gonna, got, fucking, yeah, shit, youre, thats, dude\n---\nbo\nknow, like, think, im, love, bo, just, stuff, repeat, dont, yeah, want, right, cos\n---\ndave\nlike, know, said, just, im, shit, people, didnt, ahah, dont, time, thats, fuck, fucking\n---\nhasan\nlike, im, know, dont, dad, youre, just, going, thats, want, got, love, shes, hasan\n---\njim\nlike, im, dont, right, fucking, know, just, went, youre, people, thats, day, oh, think\n---\njoe\nlike, people, just, dont, fucking, im, fuck, thats, gonna, theyre, know, youre, think, shit\n---\njohn\nlike, know, just, dont, said, clinton, im, thats, right, youre, little, hey, got, time\n---\nlouis\nlike, just, know, dont, thats, im, youre, life, people, thing, gonna, hes, cause, theres\n---\nmike\nlike, im, know, said, just, dont, think, thats, says, cause, right, jenny, goes, id\n---\nricky\nright, like, just, im, dont, know, said, yeah, fucking, got, say, youre, went, id\n---\n" ] ], [ [ "**NOTE:** At this point, we could go on and create word clouds. 
However, by looking at these top words, you can see that some of them have very little meaning and could be added to a stop words list.", "_____no_output_____" ] ], [ [ "#look at most common top words and add them to the stop word list\nfrom collections import Counter\n\n#pull out top 30 words\nwords=[]\nfor comedian in data.columns:\n top = [word for (word, count) in top_dict[comedian]]\n for t in top:\n words.append(t)\n\nwords", "_____no_output_____" ], [ "#aggregate the list and identify the most common words\nCounter(words).most_common()", "_____no_output_____" ], [ "#if more than half the comedians have same top words, remove 'em as stop words\nadd_stop_words= [word for word, count in Counter(words).most_common() if count>6]\nadd_stop_words", "_____no_output_____" ], [ "#update the DTM with the new list of stop words\nfrom sklearn.feature_extraction import text\nfrom sklearn.feature_extraction.text import CountVectorizer\n\n#read the clean data\ndata_clean= pd.read_pickle('/content/drive/MyDrive/Colab Notebooks/NLP/data_clean.pkl')\n\n#add new stop words\nstop_words= text.ENGLISH_STOP_WORDS.union(add_stop_words)\n\n#recreate the dtm\ncv= CountVectorizer(stop_words=stop_words)\ndata_cv= cv.fit_transform(data_clean.transcript)\ndata_stop =pd.DataFrame(data_cv.toarray(), columns=cv.get_feature_names())\ndata_stop.index = data_clean.index\n\n#pickle for later use\nimport pickle\npickle.dump(cv, open(\"/content/drive/MyDrive/Colab Notebooks/NLP/cv.pkl\", \"wb\"))\ndata_stop.to_pickle(\"/content/drive/MyDrive/Colab Notebooks/NLP/dtm_stop.pkl\")", "_____no_output_____" ], [ "!pip install wordcloud", "Requirement already satisfied: wordcloud in /usr/local/lib/python3.7/dist-packages (1.5.0)\nRequirement already satisfied: numpy>=1.6.1 in /usr/local/lib/python3.7/dist-packages (from wordcloud) (1.19.5)\nRequirement already satisfied: pillow in /usr/local/lib/python3.7/dist-packages (from wordcloud) (7.1.2)\n" ], [ "from wordcloud import WordCloud\n\nwc= WordCloud(stopwords=stop_words, background_color='white', colormap='Dark2', max_font_size=150, random_state=42 )", "_____no_output_____" ], [ "#reset output dimension\nimport matplotlib.pyplot as plt\n\nplt.rcParams['figure.figsize']=[16, 6]\n\nfull_names=['Ali Wong', 'Anthony Jeselnik', 'Bill Burr', 'Bo Burnham', 'Dave Chappelle', 'Hasan Minhaj',\n 'Jim Jefferies', 'Joe Rogan', 'John Mulaney', 'Louis C.K.', 'Mike Birbiglia', 'Ricky Gervais']\n\n#create subplots for each comedian\nfor index, comedian in enumerate(data.columns):\n wc.generate(data_clean.transcript[comedian])\n\n plt.subplot(3, 4, index+1)\n plt.imshow(wc, interpolation=\"bilinear\")\n plt.axis(\"off\")\n plt.title(full_names[index])\n\nplt.show()", "_____no_output_____" ] ], [ [ "### **Finding**\n* Ali Wong says the s-word a lot and talks about being Asian. I guess that's funny to me.\n* A lot of people use the F-word. 
Let's dig into that later.", "_____no_output_____" ] ], [ [ "# **Number of words**", "_____no_output_____" ] ], [ [ "#find the no. of unique words each of them used\n\n#identify the nonzero items in the dtm, meaning that the word appears at least once\nunique_list=[]\nfor comedian in data.columns:\n uniques = data[comedian].to_numpy().nonzero()[0].size\n unique_list.append(uniques)\n\n#create a new dataframe that contains this unique word count\ndata_words = pd.DataFrame(list(zip(full_names, unique_list)),columns=['comedian', 'unique_words'])\ndata_unique_sort= data_words.sort_values(by='unique_words')\ndata_unique_sort", "_____no_output_____" ], [ "#calculate the words per min of each comedian\n#total no. of words each comedian uses\ntotal_list=[]\nfor comedian in data.columns:\n totals= sum(data[comedian])\n total_list.append(totals)\n\n#comedy special runtimes from IMDB, in mins\nrun_times= [60, 59, 80, 60, 67, 73, 77, 63, 62, 58, 76, 79]\n\n#add some more columns to the dataframe\ndata_words['total_words'] = total_list\ndata_words['run_times']= run_times\ndata_words['words_per_min']= data_words['total_words']/ data_words['run_times']\n\n#sort the df to check the slowest and fastest\ndata_wpm_sort= data_words.sort_values(by='words_per_min')\ndata_wpm_sort", "_____no_output_____" ], [ "#plot the findings\nimport numpy as np\n\ny_pos= np.arange(len(data_words))\n\nplt.subplot(1, 2, 1)\nplt.barh(y_pos, data_unique_sort.unique_words, align='center')\nplt.yticks(y_pos, data_unique_sort.comedian)\nplt.title('Number of Unique Words', fontsize=20)\n\nplt.subplot(1, 2, 2)\nplt.barh(y_pos, data_wpm_sort.words_per_min, align='center')\nplt.yticks(y_pos, data_wpm_sort.comedian)\nplt.title('Number of Words Per Minute', fontsize=20)\n\nplt.tight_layout()\nplt.show()", "_____no_output_____" ] ], [ [ "## **Finding**\n* **Vocabulary**\n * Ricky Gervais (British comedy) and Bill Burr (podcast host) use a lot of words in their comedy\n * Louis C.K. (self-deprecating comedy) and Anthony Jeselnik (dark humor) have a smaller vocabulary\n\n\n* **Talking Speed**\n * Joe Rogan (blue comedy) and Bill Burr (podcast host) talk fast\n * Bo Burnham (musical comedy) and Anthony Jeselnik (dark humor) talk slow\n \nAli Wong is somewhere in the middle in both cases. Nothing too interesting here.", "_____no_output_____" ], [ "## **Amount of Profanity**", "_____no_output_____" ] ], [ [ "Counter(words).most_common()", "_____no_output_____" ], [ "#isolate just these bad words\ndata_bad_words = data.transpose()[['fucking', 'fuck', 'shit']]\ndata_profanity = pd.concat([data_bad_words.fucking+ data_bad_words.fuck, data_bad_words.shit], axis=1)\ndata_profanity.columns = ['f_words', 's_words']\ndata_profanity", "_____no_output_____" ], [ "#let's create a scatter plot of our findings\nplt.rcParams['figure.figsize']=[10, 8]\n\nfor i, comedian in enumerate(data_profanity.index):\n x= data_profanity.f_words.loc[comedian]\n y= data_profanity.s_words.loc[comedian]\n plt.scatter(x, y, color='blue')\n plt.text(x+1.5, y+0.5, full_names[i], fontsize=10)\n plt.xlim(-5, 155)\n\nplt.title('No. of Bad-words used in Routine', fontsize=20)\nplt.xlabel('No of F words', fontsize=15)\nplt.ylabel('No of S words', fontsize=15)\n\nplt.show()", "_____no_output_____" ] ], [ [ "## **Finding**\n* **Averaging 2 F-Bombs Per Minute!** - I don't like too much swearing, especially the f-word, which is probably why I've never heard of Bill Burr, Joe Rogan and Jim Jefferies.\n* **Clean Humor** - It looks like profanity might be a good predictor of the type of comedy I like. 
Besides Ali Wong, my two other favorite comedians in this group are John Mulaney and Mike Birbiglia.\n\nMy conclusion - yes, it does, for a first pass. There are definitely some things that could be better cleaned up, such as adding more stop words or including bi-grams. But we can save that for another day. The results, especially the profanity findings, are interesting and make general sense.
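\n\nAs a possible next step, here is a minimal, hedged sketch of the bi-gram idea mentioned above (an editorial addition, not part of the original analysis; it reuses the data_clean frame and stop_words set built earlier):\n\n```python\nfrom sklearn.feature_extraction.text import CountVectorizer\n\n# count unigrams and bi-grams together, keeping the same custom stop words\ncv_ngram = CountVectorizer(stop_words=stop_words, ngram_range=(1, 2))\ndata_cv_ngram = cv_ngram.fit_transform(data_clean.transcript)\n```", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ] ] ]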
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
cb13854bbd22410280d6763efc127e762a19488f
16,328
ipynb
Jupyter Notebook
8 Selecting the Right Model/7. Implementing Hold-Out Validation/Hold-Out + Stratify.ipynb
IamVaibhavsar/Machine_Learning_Files
1831166269f81cd738366382fd59bcc1857eb16e
[ "MIT" ]
null
null
null
8 Selecting the Right Model/7. Implementing Hold-Out Validation/Hold-Out + Stratify.ipynb
IamVaibhavsar/Machine_Learning_Files
1831166269f81cd738366382fd59bcc1857eb16e
[ "MIT" ]
null
null
null
8 Selecting the Right Model/7. Implementing Hold-Out Validation/Hold-Out + Stratify.ipynb
IamVaibhavsar/Machine_Learning_Files
1831166269f81cd738366382fd59bcc1857eb16e
[ "MIT" ]
null
null
null
25.794629
123
0.402621
[ [ [ "# Hold_out Validation", "_____no_output_____" ] ], [ [ "#importing libraries \nimport pandas as pd \nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "_____no_output_____" ] ], [ [ "### Importing the data", "_____no_output_____" ] ], [ [ "data = pd.read_csv('data_cleaned.csv')", "_____no_output_____" ], [ "data.head()", "_____no_output_____" ], [ "data.isnull().sum()", "_____no_output_____" ] ], [ [ "## Splitting", "_____no_output_____" ], [ "### Separating Dependent and Independent Variables", "_____no_output_____" ] ], [ [ "# For train set\ndata_x = data.drop(['Survived'], axis=1)\ndata_y = data['Survived']", "_____no_output_____" ] ], [ [ "### Creating Validation and test set", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import train_test_split as tts\ntrain1_x, test_x , train1_y, test_y = tts( data_x, data_y , test_size = 0.2 , random_state = 50, stratify = data_y)", "_____no_output_____" ] ], [ [ "<img src=\"Image 1.png\" style=\"width:600px;\" align=\"left\">", "_____no_output_____" ] ], [ [ "train_x, val_x, train_y, val_y = tts(train1_x, train1_y, test_size = 0.2 , random_state = 51, stratify = train1_y)\n\nprint('training data ',train_x.shape,train_y.shape)\nprint('validation data ',val_x.shape,val_y.shape)\nprint('test data ',test_x.shape,test_y.shape)", "training data (569, 24) (569,)\nvalidation data (143, 24) (143,)\ntest data (179, 24) (179,)\n" ] ], [ [ "<img src=\"Image 1a.png\" style=\"width:600px;\" align=\"left\">", "_____no_output_____" ], [ "### Checking Distribution of target class in train, test and validation set", "_____no_output_____" ] ], [ [ "train_y.value_counts()/len(train_y)", "_____no_output_____" ], [ "val_y.value_counts()/len(val_y)", "_____no_output_____" ], [ "test_y.value_counts()/len(test_y)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ] ]
cb138f5ebd8e830e6a66a100702197258e3ded33
382,322
ipynb
Jupyter Notebook
Chapter 5 - Model Selection; Overfitting and Generalization.ipynb
hanifmahboobi/pydata-nyc-advanced-sklearn
4e8e22d521bac3bf373484898184d07db0b198b9
[ "CC0-1.0" ]
66
2015-02-18T10:50:40.000Z
2021-07-15T18:57:51.000Z
Chapter 5 - Model Selection; Overfitting and Generalization.ipynb
aaxwaz/pydata-nyc-advanced-sklearn
93038f783c78d12078a70e5fc598166c1e0fa45d
[ "CC0-1.0" ]
null
null
null
Chapter 5 - Model Selection; Overfitting and Generalization.ipynb
aaxwaz/pydata-nyc-advanced-sklearn
93038f783c78d12078a70e5fc598166c1e0fa45d
[ "CC0-1.0" ]
45
2015-03-09T01:04:16.000Z
2021-03-20T14:16:25.000Z
2,303.144578
327,461
0.949284
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
cb139f1a7fd47161b47039e0c16a37c23da68ef9
17,161
ipynb
Jupyter Notebook
WoS_data_cleaning_brazil_wide.ipynb
btiv/DSS_Final_Proj
4647a5de905732241dbada5448aa3dbb6f297d6d
[ "MIT" ]
null
null
null
WoS_data_cleaning_brazil_wide.ipynb
btiv/DSS_Final_Proj
4647a5de905732241dbada5448aa3dbb6f297d6d
[ "MIT" ]
null
null
null
WoS_data_cleaning_brazil_wide.ipynb
btiv/DSS_Final_Proj
4647a5de905732241dbada5448aa3dbb6f297d6d
[ "MIT" ]
null
null
null
29.639033
769
0.518967
[ [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "deforestation_df = pd.read_excel('data/Brazil_research/raw_data/savedrecs 1_1000.xls')", "_____no_output_____" ], [ "for i in range(1000, 43000, 1000):\n temp_df = pd.read_excel(f'data/Brazil_research/raw_data/savedrecs {i+1}_{i+1000}.xls')\n deforestation_df = pd.concat([deforestation_df, temp_df])", "_____no_output_____" ], [ "temp_df = pd.read_excel('data/Brazil_research/raw_data/savedrecs 43001_43248.xls')\ndeforestation_df = pd.concat([deforestation_df, temp_df])", "_____no_output_____" ], [ "len(deforestation_df)", "_____no_output_____" ], [ "deforestation_df.head(2)", "_____no_output_____" ], [ "deforestation_df.iloc[4]['Addresses']", "_____no_output_____" ], [ "deforestation_df.iloc[5]['Article Title']", "_____no_output_____" ], [ "deforestation_df = deforestation_df.drop(columns=['Hot Paper Status', 'Date of Export', 'Pubmed Id', 'Highly Cited Status', 'Special Issue'])", "_____no_output_____" ], [ "# deforestation_df = deforestation_df.drop(columns=['Unnamed: 69', 'Hot Paper Status', 'Date of Export', 'Pubmed Id', 'Highly Cited Status', 'Special Issue'])", "_____no_output_____" ], [ "# drop rows without a title, year, authors, locations, citations\n# deforestation_df = deforestation_df.dropna(subset=['Authors', 'Addresses', 'Publication Year', 'Article Title', 'Times Cited, All Databases'])\ndeforestation_df = deforestation_df.dropna(subset=['Addresses', 'Publication Year', 'Times Cited, WoS Core'])", "_____no_output_____" ], [ "len(deforestation_df)", "_____no_output_____" ], [ "deforestation_df.to_csv('data/Brazil_research/cleaned_data/Brazil_focused_research.csv')", "_____no_output_____" ], [ "# reasearch location assignment\ncountry_name_df = pd.read_csv('data/country_names.csv')\ncountry_names = np.array(country_name_df['name'].unique())\ncountry_names = np.append(country_names, ['USA', 'England', 'Ireland', 'Korea', 'Moldova', 'Micronesia', \n 'Saint Martin', 'Sint Maarten', 'Tanzania', 'United Kingdom', 'UK',\n 'United States', 'Virgin Islands'])\n\ncountry_brazil = np.array(['Brazil'])\ncountry_names_no_brazil = np.setdiff1d(country_names, country_brazil)\n\ndeforestation_df = pd.read_csv('data/Brazil_research/cleaned_data/Brazil_focused_research.csv')\n\ndomestic_brazil_df = deforestation_df[deforestation_df['Addresses'].str.contains('|'.join(country_names_no_brazil), case=False) == False]\ndomestic_brazil_df = domestic_brazil_df[domestic_brazil_df['Addresses'].str.contains('Brazil', case=False) == True]\ndomestic_brazil_df = domestic_brazil_df.reset_index(drop=True)\n\ninternational_brazil_df = deforestation_df[deforestation_df['Addresses'].str.contains('Brazil', case=False) == False]\n\n# make sure that addresses at least contain some country\ninternational_brazil_df = international_brazil_df[international_brazil_df['Addresses'].str.contains('|'.join(country_names_no_brazil), case=False) == True]\ninternational_brazil_df = international_brazil_df.reset_index(drop=True)", "C:\\Users\\freja\\Anaconda3\\lib\\site-packages\\IPython\\core\\interactiveshell.py:3058: DtypeWarning: Columns (12,19,47,56,57) have mixed types. Specify dtype option on import or set low_memory=False.\n interactivity=interactivity, compiler=compiler, result=result)\nC:\\Users\\freja\\Anaconda3\\lib\\site-packages\\pandas\\core\\strings.py:1843: UserWarning: This pattern has match groups. 
To actually get the groups, use str.extract.\n return func(self, *args, **kwargs)\n" ], [ "collaboration_brazil_df = pd.concat([deforestation_df, domestic_brazil_df, international_brazil_df]).drop_duplicates(keep=False)\n\n# make sure that addresses at least contain some country\ncollaboration_brazil_df = collaboration_brazil_df[collaboration_brazil_df['Addresses'].str.contains('|'.join(country_names), case=False) == True]\ncollaboration_brazil_df = collaboration_brazil_df.reset_index(drop=True)", "_____no_output_____" ], [ "collaboration_brazil_df['Publication Year'].min()", "_____no_output_____" ], [ "domestic_brazil_df.to_csv('data/Brazil_research/cleaned_data/domestic_brazil_research.csv')\ninternational_brazil_df.to_csv('data/Brazil_research/cleaned_data/international_brazil_research.csv')\ncollaboration_brazil_df.to_csv('data/Brazil_research/cleaned_data/collaboration_brazil_research.csv')", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb13a2a2d2bf3820f662e53d2c7f860f4c663c38
25,720
ipynb
Jupyter Notebook
00_intro_nn/Tensorflow_intro.ipynb
fogside/DeepNLP
9d5f896eadb4d864c41dd199a21d30aaa17bb19d
[ "MIT" ]
3
2017-09-20T18:44:14.000Z
2019-03-16T08:37:13.000Z
00_intro_nn/Tensorflow_intro.ipynb
fogside/DeepNLP
9d5f896eadb4d864c41dd199a21d30aaa17bb19d
[ "MIT" ]
null
null
null
00_intro_nn/Tensorflow_intro.ipynb
fogside/DeepNLP
9d5f896eadb4d864c41dd199a21d30aaa17bb19d
[ "MIT" ]
null
null
null
28.834081
347
0.577061
[ [ [ "# Going deeper with Tensorflow\n\nВ этом семинаре мы начнем изучать [Tensorflow](https://www.tensorflow.org/) для построения deep learning моделей.\n\nДля установки tf на свою машину\n\n* `pip install tensorflow` версия с поддержкой **cpu-only** для Linux & Mac OS\n* для автомагической поддержки GPU смотрите документацию [TF install page](https://www.tensorflow.org/install/)", "_____no_output_____" ] ], [ [ "import tensorflow as tf\ngpu_options = tf.GPUOptions(allow_growth=True, per_process_gpu_memory_fraction=0.1)\ns = tf.InteractiveSession(config=tf.ConfigProto(gpu_options=gpu_options))", "_____no_output_____" ] ], [ [ "# Приступим\n\nДля начала, давайте имплементируем простую функцию на numpy просто для сравнения. Напишите подсчет суммы квадратов чисел от 0 до N-1.\n\n**Подсказка:**\n* Массив чисел от 0 до N-1 включительно - numpy.arange(N)", "_____no_output_____" ] ], [ [ "import numpy as np\ndef sum_squares(N):\n return <student.Implement_me()>", "_____no_output_____" ], [ "%%time\nsum_squares(10**8)", "_____no_output_____" ] ], [ [ "# Tensoflow teaser\n\nDoing the very same thing", "_____no_output_____" ] ], [ [ "#I gonna be your function parameter\nN = tf.placeholder('int64', name=\"input_to_your_function\")\n\n#i am a recipe on how to produce sum of squares of arange of N given N\nresult = tf.reduce_sum((tf.range(N)**2))", "_____no_output_____" ], [ "%%time\n#example of computing the same as sum_squares\nprint(result.eval({N:10**8}))", "662921401752298880\nCPU times: user 1.24 s, sys: 2.78 s, total: 4.02 s\nWall time: 3.19 s\n" ] ], [ [ "# How does it work?\n1. define placeholders where you'll send inputs;\n2. make symbolic graph: a recipe for mathematical transformation of those placeholders;\n3. compute outputs of your graph with particular values for each placeholder\n * output.eval({placeholder:value}) \n * s.run(output, {placeholder:value})\n\n* So far there are two main entities: \"placeholder\" and \"transformation\"\n* Both can be numbers, vectors, matrices, tensors, etc.\n* Both can be int32/64, floats of booleans (uint8) of various size.\n\n* You can define new transformations as an arbitrary operation on placeholders and other transformations\n * tf.reduce_sum(tf.arange(N)\\**2) are 3 sequential transformations of placeholder N\n * There's a tensorflow symbolic version for every numpy function\n * `a+b, a/b, a**b, ...` behave just like in numpy\n * np.mean -> tf.reduce_mean\n * np.arange -> tf.range\n * np.cumsum -> tf.cumsum\n * If if you can't find the op you need, see the [docs](https://www.tensorflow.org/api_docs/python).\n \n \nStill confused? We gonna fix that.", "_____no_output_____" ] ], [ [ "#Default placeholder that can be arbitrary float32 scalar, vertor, matrix, etc.\narbitrary_input = tf.placeholder('float32')\n\n#Input vector of arbitrary length\ninput_vector = tf.placeholder('float32',shape=(None,))\n\n#Input vector that _must_ have 10 elements and integer type\nfixed_vector = tf.placeholder('int32',shape=(10,))\n\n#Matrix of arbitrary n_rows and 15 columns (e.g. 
", "_____no_output_____" ] ], [ [ "#Default placeholder that can be an arbitrary float32 scalar, vector, matrix, etc.\narbitrary_input = tf.placeholder('float32')\n\n#Input vector of arbitrary length\ninput_vector = tf.placeholder('float32',shape=(None,))\n\n#Input vector that _must_ have 10 elements and integer type\nfixed_vector = tf.placeholder('int32',shape=(10,))\n\n#Matrix of arbitrary n_rows and 15 columns (e.g. a minibatch of your data table)\ninput_matrix = tf.placeholder('float32',shape=(None,15))\n\n#You can generally use None whenever you don't need a specific shape\ninput1 = tf.placeholder('float64',shape=(None,100,None))\ninput2 = tf.placeholder('int32',shape=(None,None,3,224,224))", "_____no_output_____" ], [ "#elementwise multiplication\ndouble_the_vector = input_vector*2\n\n#elementwise cosine\nelementwise_cosine = tf.cos(input_vector)\n\n#difference between squared vector and vector itself\nvector_squares = input_vector**2 - input_vector\n", "_____no_output_____" ], [ "#Practice time: create two vectors of type float32\nmy_vector = <student.init_float32_vector()>\nmy_vector2 = <student.init_one_more_such_vector()>", "_____no_output_____" ], [ "#Write a transformation(recipe):\n#(vec1)*(vec2) / (sin(vec1) +1)\nmy_transformation = <student.implementwhatwaswrittenabove()>", "_____no_output_____" ], [ "print(my_transformation)\n#it's okay, it's a symbolic graph", "_____no_output_____" ], [ "#\ndummy = np.arange(5).astype('float32')\n\nmy_transformation.eval({my_vector:dummy,my_vector2:dummy[::-1]})", "_____no_output_____" ] ], [ [ "### Visualizing graphs\n\nIt's often useful to visualize the computation graph when debugging or optimizing. \nInteractive visualization is where tensorflow really shines as compared to other frameworks. \n\nThere's a special instrument for that, called Tensorboard. You can launch it from the console:\n\n```tensorboard --logdir=/tmp/tboard --port=7007```\n\nIf you're pathologically afraid of consoles, try this:\n\n```os.system(\"tensorboard --logdir=/tmp/tboard --port=7007 &\")```\n\n_(but don't tell anyone we taught you that)_", "_____no_output_____" ] ], [ [ "# launch tensorboard the ugly way, uncomment if you need that\nimport os\nport = 6000 + os.getuid()\nprint(\"Port: %d\" % port)\n#!killall tensorboard\nos.system(\"tensorboard --logdir=./tboard --port=%d &\" % port)\n\n# show the graph to tensorboard\nwriter = tf.summary.FileWriter(\"./tboard\", graph=tf.get_default_graph())\nwriter.close()", "_____no_output_____" ] ], [ [ "One basic functionality of tensorboard is drawing graphs. Once you've run the cell above, go to `localhost:7007` in your browser and switch to the _graphs_ tab in the topbar. \n\nHere's what you should see:\n\n<img src=\"https://s12.postimg.org/a374bmffx/tensorboard.png\" width=480>\n\nTensorboard also allows you to draw graphs (e.g. learning curves), record images & audio ~~and play flash games~~. 
This is useful when monitoring learning progress and catching some training issues.\n\nOne researcher said:\n```\nIf you spent the last four hours of your worktime watching as your algorithm prints numbers and draws figures, you're probably doing deep learning wrong.\n```", "_____no_output_____" ] ], [ [ "You can read more on tensorboard usage [here](https://www.tensorflow.org/get_started/graph_viz)", "_____no_output_____" ], [ "# Do It Yourself\n\n__[2 points max]__", "_____no_output_____" ] ], [ [ "# Quest #1 - implement a function that computes the mean squared error of two input vectors\n# Your function has to take 2 vectors and return a single number\n\n<student.define_inputs_and_transformations()>\n\nmse = <student.define_transformation()>\n\ncompute_mse = lambda vector1, vector2: <how to run your graph?>", "_____no_output_____" ], [ "# Tests\nfrom sklearn.metrics import mean_squared_error\n\nfor n in [1,5,10,10**3]:\n    \n    elems = [np.arange(n),np.arange(n,0,-1), np.zeros(n),\n             np.ones(n),np.random.random(n),np.random.randint(100,size=n)]\n    \n    for el in elems:\n        for el_2 in elems:\n            true_mse = np.array(mean_squared_error(el,el_2))\n            my_mse = compute_mse(el,el_2)\n            if not np.allclose(true_mse,my_mse):\n                print('Wrong result:')\n                print('mse(%s,%s)' % (el,el_2))\n                print(\"should be: %f, but your function returned %f\" % (true_mse,my_mse))\n                raise ValueError(\"Something is wrong\")\n\nprint(\"All tests passed\")    ", "_____no_output_____" ] ], [ [ "# variables\n\nThe inputs and transformations have no value outside a function call. This isn't too comfortable if you want your model to have parameters (e.g. network weights) that are always present, but can change their value over time.\n\nTensorflow solves this with `tf.Variable` objects.\n* You can assign a variable a value at any time in your graph\n* Unlike placeholders, there's no need to explicitly pass values to variables when `s.run(...)`-ing\n* You can use variables the same way you use transformations \n    ", "_____no_output_____" ] ], [ [ "#creating shared variable\nshared_vector_1 = tf.Variable(initial_value=np.ones(5))", "_____no_output_____" ], [ "#initialize variable(s) with initial values\ns.run(tf.global_variables_initializer())\n\n#evaluating shared variable (outside the symbolic graph)\nprint(\"initial value\", s.run(shared_vector_1))\n\n# within the symbolic graph you use them just as any other input or transformation, no \"get value\" needed", "_____no_output_____" ], [ "#setting new value\ns.run(shared_vector_1.assign(np.arange(5)))\n\n#getting that new value\nprint(\"new value\", s.run(shared_vector_1))\n", "_____no_output_____" ] ], [ [ "# tf.gradients - why graphs matter\n* Tensorflow can compute derivatives and gradients automatically using the computation graph\n* Gradients are computed as a product of elementary derivatives via the chain rule:\n\n$$ {\\partial f(g(x)) \\over \\partial x} = {\\partial f(g(x)) \\over \\partial g(x)}\\cdot {\\partial g(x) \\over \\partial x} $$\n\nIt can get you the derivative of any graph as long as it knows how to differentiate elementary operations", "_____no_output_____" ] ], [ [ "my_scalar = tf.placeholder('float32')\n\nscalar_squared = my_scalar**2\n\n#a derivative of scalar_squared by my_scalar\nderivative = tf.gradients(scalar_squared, my_scalar)[0]", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\n%matplotlib inline\n\nx = np.linspace(-3,3)\nx_squared, x_squared_der = s.run([scalar_squared,derivative],\n                                 {my_scalar:x})\n\nplt.plot(x, x_squared,label=\"x^2\")\nplt.plot(x, x_squared_der, 
label=\"derivative\")\nplt.legend();", "_____no_output_____" ] ], [ [ "# Why that rocks", "_____no_output_____" ] ], [ [ "my_vector = tf.placeholder('float32',[None])\n\n#Compute the gradient of the next weird function over my_scalar and my_vector\n#warning! Trying to understand the meaning of that function may result in permanent brain damage\n\nweird_psychotic_function = tf.reduce_mean((my_vector+my_scalar)**(1+tf.nn.moments(my_vector,[0])[1]) + 1./ tf.atan(my_scalar))/(my_scalar**2 + 1) + 0.01*tf.sin(2*my_scalar**1.5)*(tf.reduce_sum(my_vector)* my_scalar**2)*tf.exp((my_scalar-4)**2)/(1+tf.exp((my_scalar-4)**2))*(1.-(tf.exp(-(my_scalar-4)**2))/(1+tf.exp(-(my_scalar-4)**2)))**2\n\nder_by_scalar = <student.compute_grad_over_scalar()>\nder_by_vector = <student.compute_grad_over_vector()>", "_____no_output_____" ], [ "#Plotting your derivative\nscalar_space = np.linspace(1, 7, 100)\n\ny = [s.run(weird_psychotic_function, {my_scalar:x, my_vector:[1, 2, 3]})\n for x in scalar_space]\n\nplt.plot(scalar_space, y, label='function')\n\ny_der_by_scalar = [s.run(der_by_scalar, {my_scalar:x, my_vector:[1, 2, 3]})\n for x in scalar_space]\n\nplt.plot(scalar_space, y_der_by_scalar, label='derivative')\nplt.grid()\nplt.legend();", "_____no_output_____" ] ], [ [ "# Almost done - optimizers\n\nWhile you can perform gradient descent by hand with automatic grads from above, tensorflow also has some optimization methods implemented for you. Recall momentum & rmsprop?", "_____no_output_____" ] ], [ [ "y_guess = tf.Variable(np.zeros(2,dtype='float32'))\ny_true = tf.range(1,3,dtype='float32')\n\nloss = tf.reduce_mean((y_guess - y_true + tf.random_normal([2]))**2) \n\noptimizer = tf.train.MomentumOptimizer(0.01,0.9).minimize(loss,var_list=y_guess)\n\n#same, but more detailed:\n#updates = [[tf.gradients(loss,y_guess)[0], y_guess]]\n#optimizer = tf.train.MomentumOptimizer(0.01,0.9).apply_gradients(updates)", "_____no_output_____" ], [ "from IPython.display import clear_output\n\ns.run(tf.global_variables_initializer())\n\nguesses = [s.run(y_guess)]\n\nfor _ in range(100):\n s.run(optimizer)\n guesses.append(s.run(y_guess))\n \n clear_output(True)\n plt.plot(*zip(*guesses),marker='.')\n plt.scatter(*s.run(y_true),c='red')\n plt.show()", "_____no_output_____" ] ], [ [ "# Logistic regression example\nImplement the regular logistic regression training algorithm\n\nTips:\n* Use a shared variable for weights\n* X and y are potential inputs\n* Compile 2 functions:\n * `train_function(X, y)` - returns error and computes weights' new values __(through updates)__\n * `predict_fun(X)` - just computes probabilities (\"y\") given data\n \n \nWe shall train on a two-class MNIST dataset\n* please note that target `y` are `{0,1}` and not `{-1,1}` as in some formulae", "_____no_output_____" ] ], [ [ "from sklearn.datasets import load_digits\nmnist = load_digits(2)\n\nX,y = mnist.data, mnist.target\n\nprint(\"y [shape - %s]:\" % (str(y.shape)), y[:10])\nprint(\"X [shape - %s]:\" % (str(X.shape)))", "_____no_output_____" ], [ "print('X:\\n',X[:3,:10])\nprint('y:\\n',y[:10])\nplt.imshow(X[0].reshape([8,8]))", "_____no_output_____" ], [ "# inputs and shareds\nweights = <student.code_variable()>\ninput_X = <student.code_placeholder()>\ninput_y = <student.code_placeholder()>", "_____no_output_____" ], [ "predicted_y = <predicted probabilities for input_X>\nloss = <logistic loss (scalar, mean over sample)>\n\noptimizer = <optimizer that minimizes loss>", "_____no_output_____" ], [ "train_function = <compile function that takes X and y, 
returns log loss and updates weights>\npredict_function = <compile function that takes X and computes probabilities of y>", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y)", "_____no_output_____" ], [ "from sklearn.metrics import roc_auc_score\n\nfor i in range(5):\n    <run optimizer operation>\n    loss_i = <compute loss at iteration i>\n    \n    print(\"loss at iter %i:%.4f\" % (i, loss_i))\n    \n    print(\"train auc:\",roc_auc_score(y_train, predict_function(X_train)))\n    print(\"test auc:\",roc_auc_score(y_test, predict_function(X_test)))\n\n    \nprint (\"resulting weights:\")\nplt.imshow(s.run(weights).reshape(8, -1))\nplt.colorbar();", "_____no_output_____" ] ], [ [ "# Bonus: my1stNN\nYour ultimate task for this week is to build your first neural network [almost] from scratch and pure tensorflow.\n\nThis time you will solve the same digit recognition problem, but at a larger scale\n* images are now 28x28\n* 10 different digits\n* 50k samples\n\nNote that you are not required to build 152-layer monsters here. A 2-layer (one hidden, one output) NN should already give you an edge over logistic regression.\n\n__[bonus score]__\nIf you've already beaten logistic regression with a two-layer net, but enthusiasm still ain't gone, you can try improving the test accuracy even further! The milestones would be 95%/97.5%/98.5% accuracy on the test set.\n\n__SPOILER!__\nAt the end of the notebook you will find a few tips and frequently made mistakes. If you feel enough might to shoot yourself in the foot without external assistance, we encourage you to do so, but if you encounter any unsurpassable issues, please do look there before mailing us.", "_____no_output_____" ] ], [ [ "from mnist import load_dataset\n\n#[down]loading the original MNIST dataset.\n#Please note that you should only train your NN on the _train sample,\n# _val can be used to evaluate out-of-sample error, compare models or perform early-stopping\n# _test should be hidden under a rock until final evaluation... But we both know it is near impossible to catch you evaluating on it.\nX_train,y_train,X_val,y_val,X_test,y_test = load_dataset()\n\nprint (X_train.shape,y_train.shape)", "_____no_output_____" ], [ "plt.imshow(X_train[0,0])", "_____no_output_____" ], [ "<here you could just as well create the computation graph>", "_____no_output_____" ], [ "<this may or may not be a good place to define your loss and optimizer>", "_____no_output_____" ], [ "<this may be a perfect cell to write a training&evaluation loop in>", "_____no_output_____" ], [ "<predict & evaluate on test here, right? No cheating pls.>", "_____no_output_____" ] ], [ [ "```\n\n```\n\n```\n\n```\n\n```\n\n```\n\n```\n\n```\n\n```\n\n```\n\n```\n\n```\n\n```\n\n```\n\n```\n\n```\n\n\n# SPOILERS!\n\nRecommended pipeline\n\n* Adapt logistic regression from the previous assignment to classify some number against the others (e.g. zero vs nonzero)\n* Generalize it to multiclass logistic regression.\n  - Either try to remember lecture 0 or google it.\n  - Instead of a weight vector you'll have to use a matrix (feature_id x class_id)\n  - softmax (exp over sum of exps) can be implemented manually or as tf.nn.softmax (stable)\n  - probably better to use STOCHASTIC gradient descent (minibatch)\n  - in which case the sample should probably be shuffled (or use random subsamples on each iteration)\n* Add a hidden layer. 
Now your logistic regression uses hidden neurons instead of inputs.\n  - The hidden layer uses the same math as the output layer (ex-logistic regression), but uses some nonlinearity (sigmoid) instead of softmax\n  - You need to train both layers, not just the output layer :)\n  - Do not initialize layers with zeros (due to symmetry effects). Gaussian noise with a small sigma will do.\n  - 50 hidden neurons and a sigmoid nonlinearity will do for a start. Many ways to improve. \n  - In the ideal case this totals to 2 .dot's, 1 softmax and 1 sigmoid\n  - __make sure this neural network works better than logistic regression__\n  \n* Now's the time to try improving the network. Consider layers (size, neuron count), nonlinearities, optimization methods, initialization - whatever you want, but please avoid convolutions for now.
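\n\nIf you're completely stuck, here is a minimal, hedged sketch of such a two-layer network (an editorial addition, not the official solution; the layer size, initialization scale and learning rate are arbitrary choices, and it assumes X is flattened to (n_samples, 784) with y one-hot over 10 classes):\n\n```python\nX_ph = tf.placeholder('float32', [None, 784])\ny_ph = tf.placeholder('float32', [None, 10])\n\n# small random init to break symmetry\nW1 = tf.Variable(tf.random_normal([784, 50], stddev=0.05))\nb1 = tf.Variable(tf.zeros([50]))\nW2 = tf.Variable(tf.random_normal([50, 10], stddev=0.05))\nb2 = tf.Variable(tf.zeros([10]))\n\nhidden = tf.nn.sigmoid(tf.matmul(X_ph, W1) + b1)  # hidden layer\nlogits = tf.matmul(hidden, W2) + b2               # output layer\n\nloss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_ph, logits=logits))\ntrain_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n```", "_____no_output_____" ] ] ]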
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
cb13a7e8974d3402574c9badf2faa99bc747f526
30,751
ipynb
Jupyter Notebook
cord19download2text.ipynb
iued-uni-heidelberg/DAAD-Training-2021
ff3f33a273f813fb5586edf6e44e9b7f21b6f787
[ "MIT" ]
null
null
null
cord19download2text.ipynb
iued-uni-heidelberg/DAAD-Training-2021
ff3f33a273f813fb5586edf6e44e9b7f21b6f787
[ "MIT" ]
null
null
null
cord19download2text.ipynb
iued-uni-heidelberg/DAAD-Training-2021
ff3f33a273f813fb5586edf6e44e9b7f21b6f787
[ "MIT" ]
null
null
null
43.867332
254
0.533804
[ [ [ "<a href=\"https://colab.research.google.com/github/iued-uni-heidelberg/DAAD-Training-2021/blob/main/cord19download2text.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Downloading and reading CORD19 corpus\nThis notebook downloads and reads the free cord19 corpus into one file. The notebook is hosted at IÜD, Heidelberg University github repository https://github.com/iued-uni-heidelberg/cord19\n\nCORD19 (covid-19) open-source corpus is available from https://www.semanticscholar.org/cord19/download. \n\nDocumentation is available at https://github.com/allenai/cord19\n\nThe original files are in json format. The output file is in plain text format; documents are separated (by default) by \\<doc id=\"doc1000001\"> ... \\</doc> tags\n\nThe purpose of the plain text file is for further processing, e.g., generating linguistic annotation using the TreeTagger or the Standford parser for part-of-speech annotation or dependency / constituency parsing.\n\n", "_____no_output_____" ], [ "## Downloading CORD19 corpus\n\nThe corpus is downloaded and extracted from https://www.semanticscholar.org/cord19/download\n\nPlease check the link above: if you need the latest release of the corpus or if you would like to choose another release. Currently the 2021-08-30 release is downloaded.\n\nFile size is ~11GB\nexpected download time ~5 min\n", "_____no_output_____" ] ], [ [ "!wget https://ai2-semanticscholar-cord-19.s3-us-west-2.amazonaws.com/historical_releases/cord-19_2021-08-30.tar.gz", "--2021-09-05 10:29:02-- https://ai2-semanticscholar-cord-19.s3-us-west-2.amazonaws.com/historical_releases/cord-19_2021-08-30.tar.gz\nResolving ai2-semanticscholar-cord-19.s3-us-west-2.amazonaws.com (ai2-semanticscholar-cord-19.s3-us-west-2.amazonaws.com)... 52.92.128.170\nConnecting to ai2-semanticscholar-cord-19.s3-us-west-2.amazonaws.com (ai2-semanticscholar-cord-19.s3-us-west-2.amazonaws.com)|52.92.128.170|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 12069361365 (11G) [binary/octet-stream]\nSaving to: ‘cord-19_2021-08-30.tar.gz’\n\ncord-19_2021-08-30. 100%[===================>] 11.24G 34.3MB/s in 5m 10s \n\n2021-09-05 10:34:12 (37.2 MB/s) - ‘cord-19_2021-08-30.tar.gz’ saved [12069361365/12069361365]\n\n" ] ], [ [ "Extracting cord-19 corpus, approximate time ~ 4 min", "_____no_output_____" ] ], [ [ "!tar -xvzf cord-19_2021-08-30.tar.gz", "2021-08-30/changelog\n2021-08-30/cord_19_embeddings.tar.gz\n2021-08-30/document_parses.tar.gz\n2021-08-30/metadata.csv\n" ] ], [ [ "Removing initial archive to free some disk space", "_____no_output_____" ] ], [ [ "!rm cord-19_2021-08-30.tar.gz", "_____no_output_____" ] ], [ [ "Extracting document parsers, which contain individual articles in separate json files. 
This is expected to take ~ 12 min.", "_____no_output_____" ] ], [ [ "!tar -xvzf 2021-08-30/document_parses.tar.gz", "_____no_output_____" ] ], [ [ "Removing more files to save space: ~ 9 seconds", "_____no_output_____" ] ], [ [ "# removing more files to save space\n!rm --recursive 2021-08-30\n!rm --recursive document_parses/pdf_json", "_____no_output_____" ] ], [ [ "## Reading json directory and merging into text file(s)\n\nRun this cell to create the class; then run the next cell to execute it on the directory \"document_parses/pmc_json\"\n\nThis is a class for reading a directory with json files and writing them to a single file, or splitting them into several text files with \"split_by_docs=N\", N documents in each file.
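\n\nFor orientation, the generated plain-text output looks roughly like this (a hedged sketch inferred from the class below, which defaults to tag='doc' and id=1000000, so ids start at doc1000001):\n\n```\n<doc id=\"doc1000001\">\nTitle of the first article\n\nFirst paragraph of the body text ...\n</doc>\n```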
", "_____no_output_____" ] ], [ [ "# -*- coding: utf-8 -*-\n# Python script to open each file, read json input and copy to one text file for subsequent processing\nimport os, re, sys\nimport json\n\nclass clJsonDir2txt(object):\n    '''\n    @author Bogdan Babych, IÜD, Heidelberg University\n    @email bogdan [dot] babych [at] iued [dot] uni-heidelberg [dot] de\n    a script for processing covid-19 corpus:\n        @url https://www.semanticscholar.org/cord19 @url https://www.semanticscholar.org/cord19/download\n    recursively reads files from a directory, and glues them together into a single corpus file\n\n    @todo:\n        working with sections - collect titles of all sections; frequent sections; select argumentative sections (e.g., discussion, analysis...)\n            - to compare descriptive and argumentative parts of the corpus\n\n        experimenting with different annotations (pos, parsing... ); MT quality evaluation...\n    '''\n    def __init__(self, SDirName, output_file = 'corpus_out.txt', textfilter=None, include_title = True, include_refs = True, include_authors = True, tag='doc', id=1000000, split_by_docs = 0): # initialising by opening the directories\n        self.SOutput_file = output_file\n        self.STextFilter = textfilter\n        self.RFilter = re.compile(textfilter, re.IGNORECASE | re.MULTILINE) if textfilter else None\n        self.BInclTitle = include_title # implemented\n        self.BInclRefs = include_refs # not implemented yet\n        self.BInclAuth = include_authors # not implemented yet\n        self.STag = tag\n        self.ID = id\n        self.ISplitByDocs = int(split_by_docs)\n        # print(self.ISplitByDocs)\n        self.openDir(SDirName)\n        return\n\n\n    def openDir(self, path): # recursively opens directories from a given root directory and reads each file into a string\n        i = 0\n        if self.ISplitByDocs:\n            SPartFile = \"part1000000\" + self.SOutput_file\n            FOut = open(SPartFile, 'w')\n        else:\n            FOut = open(self.SOutput_file, 'w')\n\n        for root,d_names,f_names in os.walk(path):\n            for f in f_names:\n                i+=1\n                if i%1000==0: print(str(i) + '. Processing: ' + f)\n                fullpath = os.path.join(root, f)\n                # print(fullpath)\n                try:\n                    FIn = open(fullpath,'r')\n                    SIn = FIn.read()\n                    # apply text filter, if not None\n                    if self.STextFilter and (re.search(self.RFilter, SIn) == None): continue\n                    SText2Write = self.procFile(SIn,f,i)\n                    if SText2Write: FOut.write(SText2Write) # if the string is not empty then write to file\n                    FIn.close()\n                except:\n                    print(f'file {f} cannot be read or processed')\n                finally:\n                    # splitting output into chunks of \"split_by_docs\" size\n                    if self.ISplitByDocs and (i % self.ISplitByDocs == 0): # if self.ISplitByDocs == 0 then everything goes into one file; if this > 0 then a new part file is started every split_by_docs documents\n                        SPartFile = \"part\" + str(1000000 + i) + self.SOutput_file # generate new file name\n                        FOut.flush()\n                        FOut.close()\n                        FOut = open(SPartFile, 'w')\n        FOut.flush()\n        FOut.close()\n\n        return\n\n\n    def procFile(self, SIn,SFNameIn,i): # sends each json string for text extraction and attaches the correct tags to each output string\n        STagOpen = '<' + self.STag + ' id=\"' + self.STag + str(self.ID + i) + '\">\\n'\n        STagClose = '\\n</' + self.STag + '>\\n\\n'\n        SText4Corpus = self.getJson(SIn, SFNameIn)\n        if SText4Corpus:\n            return STagOpen + SText4Corpus + STagClose\n        else:\n            print('\\tNo data read from: ' + SFNameIn)\n            return None\n\n\n    def getJson(self, SIn, SFNameIn): # for each file-level string read from a file: managing internal structure of the covid-19 json file\n        LOut = [] # collecting a list of strings\n        try:\n            DDoc = json.loads(SIn)\n        except:\n            print('\\t\\t' + SFNameIn + ' => error reading json2dictionary')\n            return None\n        # metadata:\n        try:\n            DMetaData = DDoc['metadata']\n            if DMetaData:\n                SMetaData = self.getJson_Metadata(DMetaData)\n                if SMetaData: LOut.append(SMetaData)\n        except:\n            print('\\t\\t\\t' + SFNameIn + ' ====> no metadata')\n            DMetaData = None\n        # body text\n        try:\n            LBodyText = DDoc['body_text']\n            if LBodyText:\n                SBodyText = self.getJson_BodyText(LBodyText)\n                LOut.append(SBodyText)\n        except:\n            print('\\t\\t\\t' + SFNameIn + ' ====> no body_text')\n            LBodyText = None\n        # further: to implement references\n\n        SText = '\\n\\n'.join(LOut)\n        return SText\n\n\n    def getJson_Metadata(self, DIn): # converts interesting parts of metadata into a string\n        SMetadata = ''\n        LMetadata = []\n        try: STitle = DIn[\"title\"]\n        except: STitle = None\n        if STitle and self.BInclTitle:\n            LMetadata.append(STitle)\n\n        # to implement reading of authors' names\n\n        if LMetadata: SMetadata = '\\n\\n'.join(LMetadata)\n        return SMetadata\n\n\n    def getJson_BodyText(self, LIn): # converts interesting parts of the body texts into a string\n        SBodyText = ''\n        LBodyText = []\n        for DParagraph in LIn:\n            try:\n                ## DParagraphs[section] ## -- later on >> distinction between different sections....\n                SParagraph = DParagraph[\"text\"]\n                LBodyText.append(SParagraph)\n            except:\n                print('!',)\n                continue\n\n        SBodyText = '\\n\\n'.join(LBodyText)\n        return SBodyText\n\n# arguments:\n'''\n    sys.argv[1], # obligatory: input directory name;\n    other arguments optional:\n        output_file = 'covid19corpus.txt',\n        textfilter = None, # if this is a string, only texts containing it are collected, e.g., covid\n        include_title = True, # include or exclude title\n        include_refs = False, # not implemented yet: include or exclude references\n        split_by_docs=0 # split by groups of n documents; if 0 then write to one file\n\n'''\n\n'''if __name__ == '__main__':\n    OJsonDir2txt = clJsonDir2txt(sys.argv[1], output_file = 'covid19corpus.txt', textfilter=None, include_title = True, include_refs = False, 
split_by_docs=0)\n'''\n", "_____no_output_____" ] ], [ [ "This cell will execute the reading of json files into a single file (or multiple files)\n\nChange the value of \"split_by_docs=0\" to \"split_by_docs=10000\" (or any other number); this will create several corpus files with 10000 (or any required number of) documents per file.\n\n\nApproximate execution time ~10 min\n\nFile size to download ~4.3 GB\n\nIt contains ~198.000 documents,\n\n~ 671.578.587 words\n\n~ 19.381.647 paragraphs (including empty lines, i.e., ~10M real paragraphs)\n\n~ 4.619.100.883 characters\n\nDownload time can take up to 1 hour depending on your connection speed.\n\nTo split into ~BNC size chunks (100MW), split into groups of ~40000 documents (in the following cell set \"split_by_docs=40000\")\n", "_____no_output_____" ] ], [ [ "# remove the parameter textfilter='covid' to return all documents\nOJsonDir2txt = clJsonDir2txt(\"document_parses/pmc_json\", output_file = 'covid19corpusFilterCOVID.txt', textfilter='covid', include_title = True, include_refs = False, split_by_docs=40000)\n", "1000. Processing: PMC8328126.xml.json\n2000. Processing: PMC8378831.xml.json\n3000. Processing: PMC7885334.xml.json\n4000. Processing: PMC7123675.xml.json\n5000. Processing: PMC8237894.xml.json\n6000. Processing: PMC4629194.xml.json\n7000. Processing: PMC7423848.xml.json\n8000. Processing: PMC8284347.xml.json\n9000. Processing: PMC8278374.xml.json\n10000. Processing: PMC1570461.xml.json\n11000. Processing: PMC7744420.xml.json\n12000. Processing: PMC7158367.xml.json\n13000. Processing: PMC7869758.xml.json\n14000. Processing: PMC7585733.xml.json\n15000. Processing: PMC7871039.xml.json\n16000. Processing: PMC7119026.xml.json\n17000. Processing: PMC7705351.xml.json\n18000. Processing: PMC3657891.xml.json\n19000. Processing: PMC7443920.xml.json\n20000. Processing: PMC7152080.xml.json\n21000. Processing: PMC8012999.xml.json\n22000. Processing: PMC8204832.xml.json\n23000. Processing: PMC8330914.xml.json\n24000. Processing: PMC7156231.xml.json\n25000. Processing: PMC7195042.xml.json\n26000. Processing: PMC8341344.xml.json\n27000. Processing: PMC7962418.xml.json\n28000. Processing: PMC7972699.xml.json\n29000. Processing: PMC7516084.xml.json\n30000. Processing: PMC8065505.xml.json\n31000. Processing: PMC8139434.xml.json\n32000. Processing: PMC8310257.xml.json\n33000. Processing: PMC7114247.xml.json\n34000. Processing: PMC5707224.xml.json\n35000. Processing: PMC7107964.xml.json\n36000. Processing: PMC8216795.xml.json\n37000. Processing: PMC8242644.xml.json\n38000. Processing: PMC7688298.xml.json\n39000. Processing: PMC8110188.xml.json\n40000. Processing: PMC7495188.xml.json\n41000. Processing: PMC6270622.xml.json\n42000. Processing: PMC7547551.xml.json\n43000. Processing: PMC7162744.xml.json\n44000. Processing: PMC7218365.xml.json\n45000. Processing: PMC7498113.xml.json\n46000. Processing: PMC7167173.xml.json\n47000. Processing: PMC8122205.xml.json\n48000. Processing: PMC8379572.xml.json\n49000. Processing: PMC7596573.xml.json\n50000. Processing: PMC8393523.xml.json\n51000. Processing: PMC8185154.xml.json\n52000. Processing: PMC8233587.xml.json\n53000. Processing: PMC7957463.xml.json\n54000. Processing: PMC7650208.xml.json\n55000. Processing: PMC7817669.xml.json\n56000. Processing: PMC8122682.xml.json\n57000. Processing: PMC7971905.xml.json\n58000. Processing: PMC7167020.xml.json\n59000. Processing: PMC8090526.xml.json\n60000. Processing: PMC5856488.xml.json\n61000. Processing: PMC7117251.xml.json\n62000. 
Processing: PMC7169139.xml.json\n63000. Processing: PMC7521072.xml.json\n64000. Processing: PMC7491024.xml.json\n65000. Processing: PMC7548063.xml.json\n66000. Processing: PMC7095898.xml.json\n67000. Processing: PMC7974840.xml.json\n68000. Processing: PMC7978783.xml.json\n69000. Processing: PMC8001669.xml.json\n70000. Processing: PMC5097846.xml.json\n71000. Processing: PMC7340084.xml.json\n72000. Processing: PMC7923186.xml.json\n73000. Processing: PMC7523158.xml.json\n74000. Processing: PMC7920223.xml.json\n75000. Processing: PMC7565235.xml.json\n76000. Processing: PMC8395041.xml.json\n77000. Processing: PMC8043565.xml.json\n78000. Processing: PMC7404464.xml.json\n79000. Processing: PMC7901760.xml.json\n80000. Processing: PMC8103302.xml.json\n81000. Processing: PMC7132659.xml.json\n82000. Processing: PMC8007320.xml.json\n83000. Processing: PMC7368899.xml.json\n84000. Processing: PMC8193012.xml.json\n85000. Processing: PMC7682772.xml.json\n86000. Processing: PMC7801046.xml.json\n87000. Processing: PMC7094105.xml.json\n88000. Processing: PMC7872935.xml.json\n89000. Processing: PMC7158175.xml.json\n90000. Processing: PMC7912416.xml.json\n91000. Processing: PMC7251350.xml.json\n92000. Processing: PMC8276696.xml.json\n93000. Processing: PMC7503345.xml.json\n94000. Processing: PMC7575419.xml.json\n95000. Processing: PMC7303610.xml.json\n96000. Processing: PMC7561703.xml.json\n97000. Processing: PMC6942430.xml.json\n98000. Processing: PMC7089399.xml.json\n99000. Processing: PMC7669911.xml.json\n100000. Processing: PMC7149989.xml.json\n101000. Processing: PMC8215884.xml.json\n102000. Processing: PMC8007413.xml.json\n103000. Processing: PMC7267673.xml.json\n104000. Processing: PMC4768256.xml.json\n105000. Processing: PMC8107953.xml.json\n106000. Processing: PMC7983986.xml.json\n107000. Processing: PMC8192105.xml.json\n108000. Processing: PMC7172400.xml.json\n109000. Processing: PMC4322817.xml.json\n110000. Processing: PMC8006198.xml.json\n111000. Processing: PMC7203719.xml.json\n112000. Processing: PMC7601200.xml.json\n113000. Processing: PMC7727606.xml.json\n114000. Processing: PMC7529057.xml.json\n115000. Processing: PMC7485637.xml.json\n116000. Processing: PMC8280662.xml.json\n117000. Processing: PMC8358439.xml.json\n118000. Processing: PMC7543786.xml.json\n119000. Processing: PMC7746989.xml.json\n120000. Processing: PMC8006669.xml.json\n121000. Processing: PMC7858040.xml.json\n122000. Processing: PMC7498226.xml.json\n123000. Processing: PMC8092326.xml.json\n124000. Processing: PMC7416023.xml.json\n125000. Processing: PMC7105060.xml.json\n126000. Processing: PMC8186798.xml.json\n127000. Processing: PMC7834269.xml.json\n128000. Processing: PMC8382194.xml.json\n129000. Processing: PMC7912123.xml.json\n130000. Processing: PMC8219317.xml.json\n131000. Processing: PMC7432467.xml.json\n132000. Processing: PMC7088171.xml.json\n133000. Processing: PMC7903365.xml.json\n134000. Processing: PMC7508231.xml.json\n135000. Processing: PMC7802986.xml.json\n136000. Processing: PMC8169340.xml.json\n137000. Processing: PMC7991370.xml.json\n138000. Processing: PMC7514286.xml.json\n139000. Processing: PMC7382241.xml.json\n140000. Processing: PMC5017074.xml.json\n141000. Processing: PMC7173099.xml.json\n142000. Processing: PMC7199785.xml.json\n143000. Processing: PMC7334984.xml.json\n144000. Processing: PMC8035613.xml.json\n145000. Processing: PMC3083680.xml.json\n146000. Processing: PMC8043830.xml.json\n147000. Processing: PMC8398060.xml.json\n148000. Processing: PMC8202008.xml.json\n149000. 
Processing: PMC8395413.xml.json\n150000. Processing: PMC2582734.xml.json\n151000. Processing: PMC8239833.xml.json\n152000. Processing: PMC7606072.xml.json\n153000. Processing: PMC8237057.xml.json\n154000. Processing: PMC7706120.xml.json\n155000. Processing: PMC4456770.xml.json\n156000. Processing: PMC7293761.xml.json\n157000. Processing: PMC5789847.xml.json\n158000. Processing: PMC7240265.xml.json\n159000. Processing: PMC7123743.xml.json\n160000. Processing: PMC7194622.xml.json\n161000. Processing: PMC7500538.xml.json\n162000. Processing: PMC7662584.xml.json\n163000. Processing: PMC7724668.xml.json\n164000. Processing: PMC8356220.xml.json\n165000. Processing: PMC3651868.xml.json\n166000. Processing: PMC7435286.xml.json\n167000. Processing: PMC7115809.xml.json\n168000. Processing: PMC7440090.xml.json\n169000. Processing: PMC7605181.xml.json\n170000. Processing: PMC7972461.xml.json\n171000. Processing: PMC8150955.xml.json\n172000. Processing: PMC7206655.xml.json\n173000. Processing: PMC8121884.xml.json\n174000. Processing: PMC7887714.xml.json\n175000. Processing: PMC7172456.xml.json\n176000. Processing: PMC8344578.xml.json\n177000. Processing: PMC8101992.xml.json\n178000. Processing: PMC7128819.xml.json\n179000. Processing: PMC7423874.xml.json\n180000. Processing: PMC7119551.xml.json\n181000. Processing: PMC8312222.xml.json\n182000. Processing: PMC7731629.xml.json\n183000. Processing: PMC7273261.xml.json\n184000. Processing: PMC8079127.xml.json\n185000. Processing: PMC7832731.xml.json\n186000. Processing: PMC7972658.xml.json\n187000. Processing: PMC4429008.xml.json\n188000. Processing: PMC7875171.xml.json\n189000. Processing: PMC8219292.xml.json\n190000. Processing: PMC2958806.xml.json\n191000. Processing: PMC7510183.xml.json\n192000. Processing: PMC8270108.xml.json\n193000. Processing: PMC7654320.xml.json\n194000. Processing: PMC7795268.xml.json\n195000. Processing: PMC6677726.xml.json\n196000. Processing: PMC7405317.xml.json\n197000. Processing: PMC7115285.xml.json\n" ] ], [ [ "To see the number of words, paragraphs in your corpus you can use this command:", "_____no_output_____" ] ], [ [ "!wc covid19corpus.txt", "_____no_output_____" ] ], [ [ "If you have split the text into parts, you can see the number of words in each part using this command:", "_____no_output_____" ] ], [ [ "!wc part*", " 3899868 136307720 936593762 part1000000covid19corpus.txt\n 3861089 135312676 930249471 part1040000covid19corpus.txt\n 3966522 136553801 939515657 part1080000covid19corpus.txt\n 3941044 136273170 937283286 part1120000covid19corpus.txt\n 3713124 127131220 875458707 part1160000covid19corpus.txt\n 19381647 671578587 4619100883 total\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb13b159eb94285c3a7d10fe7832cb9b68b9b992
871,325
ipynb
Jupyter Notebook
P1.ipynb
CollazzoD/CarND-LaneLines-P1
f330626a3fb5ac1faf11a3e8177f009dcfb5cf5f
[ "MIT" ]
null
null
null
P1.ipynb
CollazzoD/CarND-LaneLines-P1
f330626a3fb5ac1faf11a3e8177f009dcfb5cf5f
[ "MIT" ]
null
null
null
P1.ipynb
CollazzoD/CarND-LaneLines-P1
f330626a3fb5ac1faf11a3e8177f009dcfb5cf5f
[ "MIT" ]
null
null
null
897.348095
132,752
0.950384
[ [ [ "# Self-Driving Car Engineer Nanodegree\n\n\n## Project: **Finding Lane Lines on the Road** \n***\nIn this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip \"raw-lines-example.mp4\" (also contained in this repository) to see what the output should look like after using the helper functions below. \n\nOnce you have a result that looks roughly like \"raw-lines-example.mp4\", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video \"P1_example.mp4\". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.\n\nIn addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.\n\n---\nLet's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the \"play\" button above) to display the image.\n\n**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the \"Kernel\" menu above and selecting \"Restart & Clear Output\".**\n\n---", "_____no_output_____" ], [ "**The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**\n\n---\n\n<figure>\n <img src=\"examples/line-segments-example.jpg\" width=\"380\" alt=\"Combined Image\" />\n <figcaption>\n <p></p> \n <p style=\"text-align: center;\"> Your output should look something like this (above) after detecting line segments using the helper functions below </p> \n </figcaption>\n</figure>\n <p></p> \n<figure>\n <img src=\"examples/laneLines_thirdPass.jpg\" width=\"380\" alt=\"Combined Image\" />\n <figcaption>\n <p></p> \n <p style=\"text-align: center;\"> Your goal is to connect/average/extrapolate line segments to get output like this</p> \n </figcaption>\n</figure>", "_____no_output_____" ], [ "**Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. 
Also, consult the forums for more troubleshooting tips.** ", "_____no_output_____" ], [ "## Import Packages", "_____no_output_____" ] ], [ [ "#importing some useful packages\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\nimport cv2\n%matplotlib inline", "_____no_output_____" ] ], [ [ "## Read in an Image", "_____no_output_____" ] ], [ [ "#reading in an image\nimage = mpimg.imread('test_images/solidWhiteRight.jpg')\n\n#printing out some stats and plotting\nprint('This image is:', type(image), 'with dimensions:', image.shape)\nplt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')", "This image is: <class 'numpy.ndarray'> with dimensions: (540, 960, 3)\n" ] ], [ [ "## Ideas for Lane Detection Pipeline", "_____no_output_____" ], [ "**Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**\n\n`cv2.inRange()` for color selection \n`cv2.fillPoly()` for regions selection \n`cv2.line()` to draw lines on an image given endpoints \n`cv2.addWeighted()` to coadd / overlay two images \n`cv2.cvtColor()` to grayscale or change color \n`cv2.imwrite()` to output images to file \n`cv2.bitwise_and()` to apply a mask to an image\n\n**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**", "_____no_output_____" ], [ "## Helper Functions", "_____no_output_____" ], [ "Below are some helper functions to help get you started. They should look familiar from the lesson!", "_____no_output_____" ] ], [ [ "import math\n\ndef grayscale(img):\n \"\"\"Applies the Grayscale transform\n This will return an image with only one color channel\n but NOTE: to see the returned image as grayscale\n (assuming your grayscaled image is called 'gray')\n you should call plt.imshow(gray, cmap='gray')\"\"\"\n # return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n # Or use BGR2GRAY if you read an image with cv2.imread()\n return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n\ndef canny(img, low_threshold, high_threshold):\n \"\"\"Applies the Canny transform\"\"\"\n return cv2.Canny(img, low_threshold, high_threshold)\n\ndef gaussian_blur(img, kernel_size):\n \"\"\"Applies a Gaussian Noise kernel\"\"\"\n return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)\n\ndef region_of_interest(img, vertices):\n \"\"\"\n Applies an image mask.\n \n Only keeps the region of the image defined by the polygon\n formed from `vertices`. The rest of the image is set to black.\n `vertices` should be a numpy array of integer points.\n \"\"\"\n \n #defining a blank mask to start with\n mask = np.zeros_like(img) \n \n #defining a 3 channel or 1 channel color to fill the mask with depending on the input image\n if len(img.shape) > 2:\n channel_count = img.shape[2] # i.e. 
3 or 4 depending on your image\n        ignore_mask_color = (255,) * channel_count\n    else:\n        ignore_mask_color = 255\n        \n    #filling pixels inside the polygon defined by \"vertices\" with the fill color    \n    cv2.fillPoly(mask, vertices, ignore_mask_color)\n    \n    #returning the image only where mask pixels are nonzero\n    masked_image = cv2.bitwise_and(img, mask)\n    return masked_image\n\n# -------------- MY CODE -------------------#\n# Convert an image from RGB to HSV\ndef to_hsv(img):\n    return cv2.cvtColor(img, cv2.COLOR_RGB2HSV)\n\n# Takes as input an image, a lower HSV threshold and a higher HSV threshold\n# and returns a mask used to filter out all the colors that are not in range\ndef mask_color(hsv, low, high):\n    mask = cv2.inRange(hsv, low, high)\n    return mask\n\n# Utility function to check if a value is not +/-inf\ndef check_inf(value):\n    return abs(value) < math.inf\n\n# Utility function used to filter out all the slopes that are not in the\n# range [MIN_SLOPE, MAX_SLOPE]\ndef slope_in_range(slope):\n    MIN_SLOPE = 0.3\n    MAX_SLOPE = 1.0\n    return MIN_SLOPE <= abs(slope) <= MAX_SLOPE\n\ndef draw_lines(img, lines, color=[255, 0, 0], thickness=12):\n    \"\"\"\n    NOTE: this is the function you might want to use as a starting point once you want to \n    average/extrapolate the line segments you detect to map out the full\n    extent of the lane (going from the result shown in raw-lines-example.mp4\n    to that shown in P1_example.mp4). \n    \n    Think about things like separating line segments by their \n    slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left\n    line vs. the right line. Then, you can average the position of each of \n    the lines and extrapolate to the top and bottom of the lane.\n    \n    This function draws `lines` with `color` and `thickness`. \n    Lines are drawn on the image inplace (mutates the image).\n    If you want to make the lines semi-transparent, think about combining\n    this function with the weighted_img() function below\n    \n    \n    First, I tried to follow https://knowledge.udacity.com/questions/18578\n    but the result obtained wasn't satisfying. So, I decided to take the slope\n    and coefficient of the line and average them instead of the \"lower_x_values\" as suggested\n    in point 4 of https://knowledge.udacity.com/questions/18578.\n    \n    I also filtered the slopes to keep them in a range so as to avoid strange behaviours.\n    Following point 1 of https://knowledge.udacity.com/questions/18578, I used the sign of the slope\n    to divide them into left_line_slopes and right_line_slopes.\n    \n    Then, I proceeded with averaging slopes and coefficients and used the means to find\n    a lower_x and an upper_x. 
In order to find the x, I used two y points:\n - the y max dimension of the image\n - the minimum y point found in all the lines that are in slope_range\n \n Finally, I use all these information to draw the lines\n \n Some maths:\n if we have a line defined as y = slope*x + coeff, we have that:\n - slope = ((y2-y1)/(x2-x1))\n - coeff = y1 - slope*x1 (could also be y2 and x2)\n \n \"\"\"\n # In order to find the minimum, start with the max value and\n # update it accordingly\n min_y = img.shape[0]\n max_y = img.shape[0]\n \n left_line_slopes = []\n left_line_coeff = []\n right_line_slopes = []\n right_line_coeff = []\n \n for line in lines:\n for x1,y1,x2,y2 in line:\n slope = ((y2-y1)/(x2-x1))\n coeff = y1 - slope*x1\n \n if slope_in_range(slope) and check_inf(slope):\n if slope > 0:\n # Right line case (y axis is inverted in the image)\n right_line_slopes.append(slope)\n right_line_coeff.append(coeff)\n elif slope < 0:\n # Left line case (y axis is inverted in the image)\n left_line_slopes.append(slope)\n left_line_coeff.append(coeff)\n \n min_y = min(y1, y2, min_y)\n \n # if len(right_line_slopes) > 0 I also have a coeff, so \n # it's no use to check len(right_line_coeff)\n if len(right_line_slopes) > 0:\n right_line_slope_mean = np.mean(right_line_slopes, dtype=np.float64)\n right_line_coeff_mean = np.mean(right_line_coeff, dtype=np.float64)\n \n # due to precision, np.mean can return inf or -inf: check if values are correct before using them\n if check_inf(right_line_slope_mean) and check_inf(right_line_coeff_mean):\n max_x = int((max_y - right_line_coeff_mean)/right_line_slope_mean)\n min_x = int((min_y - right_line_coeff_mean)/right_line_slope_mean)\n cv2.line(img, (min_x, min_y), (max_x, max_y), color, thickness) \n \n # if len(left_line_slopes) > 0 I also have a coeff, so \n # it's no use to check len(left_line_coeff)\n if len(left_line_slopes) > 0:\n left_line_slope_mean = np.mean(left_line_slopes, dtype=np.float64)\n left_line_coeff_mean = np.mean(left_line_coeff, dtype=np.float64)\n \n # due to precision, np.mean can return inf or -inf: check if values are correct before using them\n if check_inf(left_line_slope_mean) and check_inf(left_line_coeff_mean):\n max_x = int((max_y - left_line_coeff_mean)/left_line_slope_mean)\n min_x = int((min_y - left_line_coeff_mean)/left_line_slope_mean)\n cv2.line(img, (min_x, min_y), (max_x, max_y), color, thickness) \n \n# ------------- END OF MY CODE -------------#\n\ndef hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):\n \"\"\"\n `img` should be the output of a Canny transform.\n \n Returns an image with hough lines drawn.\n \"\"\"\n lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)\n line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)\n draw_lines(line_img, lines)\n return line_img\n\n# Python 3 has support for cool math symbols.\n\ndef weighted_img(img, initial_img, α=0.8, β=1., γ=0.):\n \"\"\"\n `img` is the output of the hough_lines(), An image with lines drawn on it.\n Should be a blank image (all black) with lines drawn on it.\n \n `initial_img` should be the image before any processing.\n \n The result image is computed as follows:\n \n initial_img * α + img * β + γ\n NOTE: initial_img and img must be the same shape!\n \"\"\"\n return cv2.addWeighted(initial_img, α, img, β, γ)", "_____no_output_____" ] ], [ [ "## Test Images\n\nBuild your pipeline to work on the images in the directory \"test_images\" \n**You should make sure your 
pipeline works well on these images before you try the videos.**", "_____no_output_____" ] ], [ [ "import os\nos.listdir(\"test_images/\")", "_____no_output_____" ] ], [ [ "## Build a Lane Finding Pipeline\n\n", "_____no_output_____" ], [ "Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.\n\nTry tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.", "_____no_output_____" ] ], [ [ "# TODO: Build your pipeline that will draw lane lines on the test_images\n# then save them to the test_images_output directory.\n\n# Define Gaussian Filter parameters\nGAUSSIAN_KERNEL_SIZE = 3\n# End Gaussian Filter parameters\n\n# Define Canny parameters\nCANNY_LOW_THRESHOLD = 75\nCANNY_HIGH_THRESHOLD = 150\n# End Canny parameters\n\n# Do not use with challenge (image's size is different)\n# Define polygon\n# imshape = image.shape\n# LEFT_BOTTOM = (100, imshape[0])\n# RIGHT_BOTTOM = (930, imshape[0])\n# Y_HORIZON = 320\n# LEFT_UP = (400, Y_HORIZON)\n# RIGHT_UP = (590, Y_HORIZON)\n# VERTICES = np.array([[LEFT_BOTTOM , LEFT_UP, RIGHT_UP, RIGHT_BOTTOM]], dtype=np.int32)\n\n# Challenge images have a different size, so I have to define the polygon using percentages.\n# For the percentages I'm using the polygon found above.\nLEFT_BOTTOM_PERC = 100.0/960.0\nRIGHT_BOTTOM_PERC = 930.0/960.0\nY_HORIZON_PERC = 320.0/540.0\nLEFT_UP_PERC = 400.0/960.0\nRIGHT_UP_PERC = 590.0/960.0\n\ndef get_vertices(img):\n    left_bottom = (img.shape[1] * LEFT_BOTTOM_PERC, img.shape[0])\n    right_bottom = (img.shape[1] * RIGHT_BOTTOM_PERC, img.shape[0])\n    left_up = (img.shape[1] * LEFT_UP_PERC, img.shape[0] * Y_HORIZON_PERC)\n    right_up = (img.shape[1] * RIGHT_UP_PERC, img.shape[0] * Y_HORIZON_PERC)\n    vertices = np.array([[left_bottom , left_up, right_up, right_bottom]], dtype=np.int32)\n    return vertices\n\n# End polygon\n\n# Define Hough transform parameters\nRHO = 3.5 # distance resolution in pixels of the Hough grid\nTHETA = np.pi/180 # angular resolution in radians of the Hough grid\nHOUGH_THRESHOLD = 30 # minimum number of votes (intersections in Hough grid cell)\nHOUGH_MIN_LINE_LENGTH = 5 # minimum number of pixels making up a line\nHOUGH_MAX_LINE_GAP = 25 # maximum gap in pixels between connectable line segments\n# End Hough transform parameters\n\n# Define Hough transform parameters suggested by reviewer\nHOUGH_THRESHOLD = 50 # minimum number of votes (intersections in Hough grid cell)\nHOUGH_MIN_LINE_LENGTH = 100 # minimum number of pixels making up a line\nHOUGH_MAX_LINE_GAP = 160 # maximum gap in pixels between connectable line segments\n# End Hough transform parameters suggested by reviewer\n\n# In order to improve the quality of filtering, I switched from grayscale to HSV, which is more robust \n# in situations where illumination changes. 
Aside from converting to HSV and filtering by color, \n# the pipeline resembles the one seen in the course material.\ndef my_pipeline(image):\n    hsv = to_hsv(image) # Convert the image to HSV\n    blur_gray = gaussian_blur(hsv, GAUSSIAN_KERNEL_SIZE)\n    \n    # Define color ranges and apply color masks, so as to filter by color\n    yellow_hsv_low = np.array([ 0, 100, 100])\n    yellow_hsv_high = np.array([ 50, 255, 255])\n\n    white_hsv_low = np.array([ 20, 0, 180])\n    white_hsv_high = np.array([ 255, 80, 255])\n    \n    mask_yellow = mask_color(blur_gray, yellow_hsv_low, yellow_hsv_high)\n    mask_white = mask_color(blur_gray, white_hsv_low, white_hsv_high)\n    color_masked_img = cv2.bitwise_or(mask_yellow, mask_white)    \n    \n    edges = canny(color_masked_img, CANNY_LOW_THRESHOLD, CANNY_HIGH_THRESHOLD)\n    vertices = get_vertices(image)\n    masked_edges = region_of_interest(edges, vertices)\n    line_image = hough_lines(masked_edges, RHO, THETA, HOUGH_THRESHOLD, HOUGH_MIN_LINE_LENGTH, HOUGH_MAX_LINE_GAP)\n    lines_edges = weighted_img(image, line_image)\n    \n    return lines_edges", "_____no_output_____" ], [ "# Try the pipeline with the test_images provided\nfor file in os.listdir(\"test_images/\"):\n    image = mpimg.imread(\"test_images/\" + file)\n    lines_edges = my_pipeline(image)\n    vertices = get_vertices(image)\n    x = [vertices[0][0][0], vertices[0][1][0], vertices[0][2][0], vertices[0][3][0]]\n    y = [vertices[0][0][1], vertices[0][1][1], vertices[0][2][1], vertices[0][3][1]]\n    plt.plot(x, y, 'w--', lw=2)\n    plt.imshow(lines_edges)\n    plt.show()\n    cv2.imwrite(\"test_images_output/\" + file, lines_edges)", "_____no_output_____" ] ], [ [ "## Test on Videos\n\nYou know what's cooler than drawing lanes over images? Drawing lanes over video!\n\nWe can test our solution on two provided videos:\n\n`solidWhiteRight.mp4`\n\n`solidYellowLeft.mp4`\n\n**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**\n\n**If you get an error that looks like this:**\n```\nNeedDownloadError: Need ffmpeg exe. \nYou can download it by calling: \nimageio.plugins.ffmpeg.download()\n```\n**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**", "_____no_output_____" ] ], [ [ "# Import everything needed to edit/save/watch video clips\nfrom moviepy.editor import VideoFileClip\nfrom IPython.display import HTML", "Imageio: 'ffmpeg.linux64' was not found on your computer; downloading it now.\nTry 1. 
Download from https://github.com/imageio/imageio-binaries/raw/master/ffmpeg/ffmpeg.linux64 (27.2 MB)\nDownloading: 28549024/28549024 bytes (100.0%)\n  Done\nFile saved as /root/.imageio/ffmpeg/ffmpeg.linux64.\n" ], [ "# Utility function used to find out that the challenge video has a different image size\ndef print_image_param(image):\n    print('This image is:', type(image), 'with dimensions:', image.shape)\n\ndef process_image(image):\n    # NOTE: The output you return should be a color image (3 channel) for processing video below\n    # TODO: put your pipeline here,\n    # print_image_param(image)\n    result = my_pipeline(image)\n    # you should return the final output (image where lines are drawn on lanes)\n    return result", "_____no_output_____" ] ], [ [ "Let's try the one with the solid white lane on the right first ...", "_____no_output_____" ] ], [ [ "white_output = 'test_videos_output/solidWhiteRight.mp4'\n## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video\n## To do so add .subclip(start_second,end_second) to the end of the line below\n## Where start_second and end_second are integer values representing the start and end of the subclip\n## You may also uncomment the following line for a subclip of the first 5 seconds\n##clip1 = VideoFileClip(\"test_videos/solidWhiteRight.mp4\").subclip(0,5)\nclip1 = VideoFileClip(\"test_videos/solidWhiteRight.mp4\")\nwhite_clip = clip1.fl_image(process_image) #NOTE: this function expects 
color images!!\n%time white_clip.write_videofile(white_output, audio=False)", "[MoviePy] >>>> Building video test_videos_output/solidWhiteRight.mp4\n[MoviePy] Writing video test_videos_output/solidWhiteRight.mp4\n" ] ], [ [ "Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.", "_____no_output_____" ] ], [ [ "HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(white_output))", "_____no_output_____" ] ], [ [ "## Improve the draw_lines() function\n\n**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video \"P1_example.mp4\".**\n\n**Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**", "_____no_output_____" ], [ "Now for the one with the solid yellow lane on the left. This one's more tricky!", "_____no_output_____" ] ], [ [ "yellow_output = 'test_videos_output/solidYellowLeft.mp4'\n## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video\n## To do so add .subclip(start_second,end_second) to the end of the line below\n## Where start_second and end_second are integer values representing the start and end of the subclip\n## You may also uncomment the following line for a subclip of the first 5 seconds\n##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)\nclip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')\nyellow_clip = clip2.fl_image(process_image)\n%time yellow_clip.write_videofile(yellow_output, audio=False)", "[MoviePy] >>>> Building video test_videos_output/solidYellowLeft.mp4\n[MoviePy] Writing video test_videos_output/solidYellowLeft.mp4\n" ], [ "HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(yellow_output))", "_____no_output_____" ] ], [ [ "## Writeup and Submission\n\nIf you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.\n", "_____no_output_____" ], [ "## Optional Challenge\n\nTry your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? 
If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project! One idea for extra robustness is sketched in the cells appended below.", "_____no_output_____" ] ], [ [ "challenge_output = 'test_videos_output/challenge.mp4'\n## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video\n## To do so add .subclip(start_second,end_second) to the end of the line below\n## Where start_second and end_second are integer values representing the start and end of the subclip\n## You may also uncomment the following line for a subclip of the first 5 seconds\n##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)\nclip3 = VideoFileClip('test_videos/challenge.mp4')\nchallenge_clip = clip3.fl_image(process_image)\n%time challenge_clip.write_videofile(challenge_output, audio=False)", "[MoviePy] >>>> Building video test_videos_output/challenge.mp4\n[MoviePy] Writing video test_videos_output/challenge.mp4\n" ], [ "HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n  <source src=\"{0}\">\n</video>\n\"\"\".format(challenge_output))", "_____no_output_____" ] ]
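, [ [ "A possible refinement for the challenge video, shown only as a minimal sketch: smooth the fitted (slope, coeff) pair of each lane across frames with an exponential moving average, so a single noisy frame cannot make the overlay jump. The class name, the alpha value and the usage below are illustrative assumptions, not part of the original submission.", "_____no_output_____" ] ], [ [ "import numpy as np\n\nclass LaneSmoother:\n    # Keeps an exponential moving average of the (slope, coeff) pair of one lane line.\n    def __init__(self, alpha=0.2):\n        self.alpha = alpha  # weight given to the newest frame\n        self.state = None   # (slope, coeff); None before the first frame arrives\n\n    def update(self, slope, coeff):\n        new = np.array([slope, coeff], dtype=np.float64)\n        if self.state is None:\n            self.state = new\n        else:\n            # Blend the new estimate with the running average.\n            self.state = self.alpha * new + (1 - self.alpha) * self.state\n        return tuple(self.state)\n\n# Usage sketch: one smoother per lane, fed with the per-frame means computed in draw_lines().\nleft_smoother = LaneSmoother()\nprint(left_smoother.update(-0.7, 650.0))\nprint(left_smoother.update(-0.9, 640.0))  # pulled only partway towards the new frame", "_____no_output_____" ] ] ]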
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ] ]
cb13c0df0bac5a4fa86ad7bcdde3ee34805a8c1f
21,311
ipynb
Jupyter Notebook
biosignalsnotebooks_notebooks/Categories/Train_And_Classify/classification_game_volume_2.ipynb
csavur/biosignalsnotebooks
c99596741a854c58bdefb429906023ac48ddc3b7
[ "MIT" ]
1
2020-06-26T05:05:11.000Z
2020-06-26T05:05:11.000Z
biosignalsnotebooks_notebooks/Categories/Train_And_Classify/classification_game_volume_2.ipynb
csavur/biosignalsnotebooks
c99596741a854c58bdefb429906023ac48ddc3b7
[ "MIT" ]
null
null
null
biosignalsnotebooks_notebooks/Categories/Train_And_Classify/classification_game_volume_2.ipynb
csavur/biosignalsnotebooks
c99596741a854c58bdefb429906023ac48ddc3b7
[ "MIT" ]
null
null
null
41.786275
443
0.550748
[ [ [ "<link rel=\"stylesheet\" href=\"../../styles/theme_style.css\">\n<!--link rel=\"stylesheet\" href=\"../../styles/header_style.css\"-->\n<link rel=\"stylesheet\" href=\"https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css\">\n\n<table width=\"100%\">\n <tr>\n <td id=\"image_td\" width=\"15%\" class=\"header_image_color_7\"><div id=\"image_img\"\n class=\"header_image_7\"></div></td>\n <td class=\"header_text\"> Rock, Paper or Scissor Game - Train and Classify [Volume 2] </td>\n </tr>\n</table>", "_____no_output_____" ], [ "<div id=\"flex-container\">\n <div id=\"diff_level\" class=\"flex-item\">\n <strong>Difficulty Level:</strong> <span class=\"fa fa-star checked\"></span>\n <span class=\"fa fa-star checked\"></span>\n <span class=\"fa fa-star checked\"></span>\n <span class=\"fa fa-star checked\"></span>\n <span class=\"fa fa-star\"></span>\n </div>\n <div id=\"tag\" class=\"flex-item-tag\">\n <span id=\"tag_list\">\n <table id=\"tag_list_table\">\n <tr>\n <td class=\"shield_left\">Tags</td>\n <td class=\"shield_right\" id=\"tags\">train_and_classify&#9729;machine-learning&#9729;features&#9729;extraction</td>\n </tr>\n </table>\n </span>\n <!-- [OR] Visit https://img.shields.io in order to create a tag badge-->\n </div>\n</div>", "_____no_output_____" ], [ "<span class=\"color4\"><strong>Previous Notebooks that are part of \"Rock, Paper or Scissor Game - Train and Classify\" module</strong></span>\n<ul>\n <li><a href=\"classification_game_volume_1.ipynb\"><strong>Rock, Paper or Scissor Game - Train and Classify [Volume 1] | Experimental Setup <img src=\"../../images/icons/link.png\" width=\"10px\" height=\"10px\" style=\"display:inline\"></strong></a></li>\n</ul>\n\n<span class=\"color7\"><strong>Following Notebooks that are part of \"Rock, Paper or Scissor Game - Train and Classify\" module</strong></span>\n<ul>\n <li><a href=\"classification_game_volume_3.ipynb\"><strong>Rock, Paper or Scissor Game - Train and Classify [Volume 3] | Training a Classifier <img src=\"../../images/icons/link.png\" width=\"10px\" height=\"10px\" style=\"display:inline\"></strong></a></li>\n <li><a href=\"../Evaluate/classification_game_volume_4.ipynb\"><strong>Rock, Paper or Scissor Game - Train and Classify [Volume 4] | Performance Evaluation <img src=\"../../images/icons/link.png\" width=\"10px\" height=\"10px\" style=\"display:inline\"></strong></a></li>\n</ul>\n\n<table width=\"100%\">\n <tr>\n <td style=\"text-align:left;font-size:12pt;border-top:dotted 2px #62C3EE\">\n <span class=\"color1\">&#9740;</span> After the presentation of data acquisition conditions on the previous <a href=\"classification_game_volume_1.ipynb\">Jupyter Notebook <img src=\"../../images/icons/link.png\" width=\"10px\" height=\"10px\" style=\"display:inline\"></a>, we will follow our Machine Learning Journey by specifying which features will be extracted.\n <br>\n \"Features\" are numerical parameters extracted from the training data (in our case physiological signals acquired when executing gestures of \"Rock, Paper or Scissor\" game), characterizing objectively the training example.\n A good feature is a parameter that has the ability to separate the different classes of our classification system, i.e, a parameter with a characteristic range of values for each available class.\n </td>\n </tr>\n</table>\n<hr>", "_____no_output_____" ], [ "<p style=\"font-size:20pt;color:#62C3EE;padding-bottom:5pt\">Starting Point (Setup)</p>\n<strong>List of Available Classes:</strong>\n<br>\n<ol 
start=\"0\">\n <li><span class=\"color1\"><strong>\"No Action\"</strong></span> [When the hand is relaxed]</li>\n <li><span class=\"color4\"><strong>\"Paper\"</strong></span> [All fingers are extended]</li>\n <li><span class=\"color7\"><strong>\"Rock\"</strong></span> [All fingers are flexed]</li>\n <li><span class=\"color13\"><strong>\"Scissor\"</strong></span> [Forefinger and middle finger are extended and the remaining ones are flexed]</li>\n</ol>\n<table align=\"center\">\n <tr>\n <td height=\"200px\">\n <img src=\"../../images/train_and_classify/classification_game_volume_2/classification_game_paper.png\" style=\"display:block;height:100%\">\n </td>\n <td height=\"200px\">\n <img src=\"../../images/train_and_classify/classification_game_volume_2/classification_game_stone.png\" style=\"display:block;height:100%\">\n </td>\n <td height=\"200px\">\n <img src=\"../../images/train_and_classify/classification_game_volume_2/classification_game_scissor.png\" style=\"display:block;height:100%\">\n </td>\n </tr>\n <tr>\n <td style=\"text-align:center\">\n <strong>Paper</strong>\n </td>\n <td style=\"text-align:center\">\n <strong>Rock</strong>\n </td>\n <td style=\"text-align:center\">\n <strong>Scissor</strong>\n </td>\n </tr>\n</table>\n\n<strong>Acquired Data:</strong>\n<br>\n<ul>\n <li>Electromyography (EMG) | 2 muscles | Adductor pollicis and Flexor digitorum superficialis</li>\n <li>Accelerometer (ACC) | 1 axis | Sensor parallel to the thumb nail (Axis perpendicular)</li>\n</ul>", "_____no_output_____" ], [ "<p style=\"font-size:20pt;color:#62C3EE;padding-bottom:5pt\">Protocol/Feature Extraction</p>\n<strong>Extracted Features</strong>\n<ul>\n <li><span style=\"color:#E84D0E\"><strong>[From] EMG signal</strong></span></li>\n <ul>\n <li>Standard Deviation &#9734;</li>\n <li>Maximum sampled value &#9757;</li>\n <li><a href=\"https://en.wikipedia.org/wiki/Zero-crossing_rate\">Zero-Crossing Rate</a> &#9740;</li>\n <li>Standard Deviation of the absolute signal &#9735;</li>\n </ul>\n <li><span style=\"color:#FDC400\"><strong>[From] ACC signal</strong></span></li>\n <ul>\n <li>Average Value &#9737;</li>\n <li>Standard Deviation &#9734;</li>\n <li>Maximum sampled value &#9757;</li>\n <li><a href=\"https://en.wikipedia.org/wiki/Zero-crossing_rate\">Zero-Crossing Rate</a> &#9740;</li>\n <li><a href=\"https://en.wikipedia.org/wiki/Slope\">Slope of the regression curve</a> &#9741;</li>\n </ul>\n</ul>\n\n<strong>Formal definition of parameters</strong>\n<br>\n&#9757; | Maximum Sample Value of a set of elements is equal to the last element of the sorted set\n\n&#9737; | $\\mu = \\frac{1}{N}\\sum_{i=1}^N (sample_i)$\n\n&#9734; | $\\sigma = \\sqrt{\\frac{1}{N}\\sum_{i=1}^N(sample_i - \\mu_{signal})^2}$\n\n&#9740; | $zcr = \\frac{1}{N - 1}\\sum_{i=1}^{N-1}bin(i)$ \n\n&#9735; | $\\sigma_{abs} = \\sqrt{\\frac{1}{N}\\sum_{i=1}^N(|sample_i| - \\mu_{signal_{abs}})^2}$\n\n&#9741; | $m = \\frac{\\Delta signal}{\\Delta t}$\n\n... being $N$ the number of acquired samples (that are part of the signal), $sample_i$ the value of the sample number $i$, $signal_{abs}$ the absolute signal, $\\Delta signal$ is the difference between the y coordinate of two points of the regression curve and $\\Delta t$ the difference between the x (time) coordinate of the same two points of the regression curve.\n\n... 
and\n\n$bin(i)$ is a binary function defined as:\n\n$bin(i) = \\begin{cases} 1, & \\mbox{if } signal_i \\times signal_{i-1} \\leq 0 \\\\ 0, & \\mbox{if } signal_i \\times signal_{i-1}>0 \\end{cases}$\n<hr>", "_____no_output_____" ], [ "<p class=\"steps\">0 - Import of the needed packages for a correct execution of the current <span class=\"color4\">Jupyter Notebook</span></p>", "_____no_output_____" ] ], [ [ "# Package that ensures programmatic interaction with the operating system folder hierarchy.\nfrom os import listdir\n\n# Package used to clone a dictionary.\nfrom copy import deepcopy\n\n# Functions intended to extract some statistical parameters.\nfrom numpy import max, std, average, sum, absolute\n\n# With the following import we will be able to extract the linear regression parameters after \n# fitting experimental points to the model.\nfrom scipy.stats import linregress\n\n# biosignalsnotebooks own package that supports some functionalities used on the Jupyter Notebooks.\nimport biosignalsnotebooks as bsnb", "_____no_output_____" ] ], [ [ "<p class=\"steps\">1 - Loading of all signals that integrate our training samples (storing them inside a dictionary)</p>\nThe acquired signals are stored inside a folder which can be accessed through the relative path <span class=\"color7\">\"../../signal_samples/classification_game/data\"</span>", "_____no_output_____" ], [ "<p class=\"steps\">1.1 - Identification of the list of files/examples</p>", "_____no_output_____" ] ], [ [ "# Transposition of data from signal files to a Python dictionary.\nrelative_path = \"../../signal_samples/classification_game\"\ndata_folder = \"data\"\n\n# List of files (each file is a training example).\nlist_examples = listdir(relative_path + \"/\" + data_folder)", "_____no_output_____" ], [ "print(list_examples)", "_____no_output_____" ] ], [ [ "The first digit of the filename identifies the class to which the training example belongs and the second digit is the trial number <span class=\"color1\">(<i>&lt;class&gt;_&lt;trial&gt;.txt</i>)</span>", "_____no_output_____" ], [ "<p class=\"steps\">1.2 - Access the content of each file and store it on the respective dictionary entry</p>", "_____no_output_____" ] ], [ [ "# Initialization of dictionary.\nsignal_dict = {}\n\n# Scrolling through each entry in the list.\nfor example in list_examples:\n    if \".txt\" in example: # Read only .txt files.\n        # Get the class to which the training example under analysis belongs.\n        example_class = example.split(\"_\")[0]\n\n        # Get the trial number of the training example under analysis.\n        example_trial = example.split(\"_\")[1].split(\".\")[0]\n\n        # Creation of a new \"class\" entry if it does not exist.\n        if example_class not in signal_dict.keys():\n            signal_dict[example_class] = {}\n\n        # Load data.\n        complete_data = bsnb.load(relative_path + \"/\" + data_folder + \"/\" + example)\n\n        # Store data in the dictionary.\n        signal_dict[example_class][example_trial] = complete_data", "_____no_output_____" ] ], [ [ "<p class=\"steps\">1.3 - Definition of the content of each channel</p>", "_____no_output_____" ] ], [ [ "# Channels (CH1 Flexor digitorum superficialis | CH2 Adductor pollicis | CH3 Accelerometer axis Z).\nemg_flexor = \"CH1\"\nemg_adductor = \"CH2\"\nacc_z = \"CH3\"", "_____no_output_____" ] ], [ [ "<p class=\"steps\">2 - Extraction of features according to the signal under analysis</p>\nThe extracted values of each feature will be stored in a dictionary with the same hierarchical structure as \"signal_dict\"", "_____no_output_____" ] 
], [ [ "# Clone \"signal_dict\".\nfeatures_dict = deepcopy(signal_dict)\n\n# Navigate through \"signal_dict\" hierarchy.\nlist_classes = signal_dict.keys()\nfor class_i in list_classes:\n list_trials = signal_dict[class_i].keys()\n for trial in list_trials:\n # Initialise \"features_dict\" entry content.\n features_dict[class_i][trial] = []\n \n for chn in [emg_flexor, emg_adductor, acc_z]:\n # Temporary storage of signal inside a reusable variable.\n signal = signal_dict[class_i][trial][chn]\n \n # Start the feature extraction procedure accordingly to the channel under analysis.\n if chn == emg_flexor or chn == emg_adductor: # EMG Features.\n # Converted signal (taking into consideration that our device is a \"biosignalsplux\", the resolution is\n # equal to 16 bits and the output unit should be in \"mV\").\n signal = bsnb.raw_to_phy(\"EMG\", device=\"biosignalsplux\", raw_signal=signal, resolution=16, option=\"mV\")\n \n # Standard Deviation.\n features_dict[class_i][trial] += [std(signal)]\n # Maximum Value.\n features_dict[class_i][trial] += [max(signal)]\n # Zero-Crossing Rate.\n features_dict[class_i][trial] += [sum([1 for i in range(1, len(signal)) \n if signal[i]*signal[i-1] <= 0]) / (len(signal) - 1)]\n # Standard Deviation of the absolute signal.\n features_dict[class_i][trial] += [std(absolute(signal))]\n else: # ACC Features.\n # Converted signal (taking into consideration that our device is a \"biosignalsplux\", the resolution is\n # equal to 16 bits and the output unit should be in \"g\").\n signal = bsnb.raw_to_phy(\"ACC\", device=\"biosignalsplux\", raw_signal=signal, resolution=16, option=\"g\")\n \n # Average value.\n features_dict[class_i][trial] += [average(signal)]\n # Standard Deviation.\n features_dict[class_i][trial] += [std(signal)]\n # Maximum Value.\n features_dict[class_i][trial] += [max(signal)]\n # Zero-Crossing Rate.\n features_dict[class_i][trial] += [sum([1 for i in range(1, len(signal)) \n if signal[i]*signal[i-1] <= 0]) / (len(signal) - 1)]\n # Slope of the regression curve.\n x_axis = range(0, len(signal))\n features_dict[class_i][trial] += [linregress(x_axis, signal)[0]]", "_____no_output_____" ] ], [ [ "Each training array has the following structure/content:\n<br>\n\\[$\\sigma_{emg\\,flexor}$, $max_{emg\\,flexor}$, $zcr_{emg\\,flexor}$, $\\sigma_{emg\\,flexor}^{abs}$, $\\sigma_{emg\\,adductor}$, $max_{emg\\,adductor}$, $zcr_{emg\\,adductor}$, $\\sigma_{emg\\,adductor}^{abs}$, $\\mu_{acc\\,z}$, $\\sigma_{acc\\,z}$, $max_{acc\\,z}$, $zcr_{acc\\,z}$, $m_{acc\\,z}$\\] ", "_____no_output_____" ], [ "<p class=\"steps\">3 - Storage of the content inside the filled \"features_dict\" to an external file (<a href=\"https://fileinfo.com/extension/json\">.json <img src=\"../../images/icons/link.png\" width=\"10px\" height=\"10px\" style=\"display:inline\"></a>)</p>\nWith this procedure it is possible to ensure a \"permanent\" memory of the results produced during feature extraction, reusable in the future by simple reading the file (without the need to reprocess again).", "_____no_output_____" ] ], [ [ "# Package dedicated to the manipulation of json files.\nfrom json import dump\n\nfilename = \"classification_game_features.json\"\n\n# Generation of .json file in our previously mentioned \"relative_path\".\n# [Generation of new file]\nwith open(relative_path + \"/features/\" + filename, 'w') as file:\n dump(features_dict, file)", "_____no_output_____" ] ], [ [ "We reach the end of the \"Classification Game\" second volume. 
Now all the features of the training examples are in our possession.\nIf you feel your interest increasing, please jump to the next <a href=\"../Train_and_Classify/classification_game_volume_3.ipynb\">volume <img src=\"../../images/icons/link.png\" width=\"10px\" height=\"10px\" style=\"display:inline\"></a>\n\n<strong><span class=\"color7\">We hope that you have enjoyed this guide. </span><span class=\"color2\">biosignalsnotebooks</span><span class=\"color4\"> is an environment in continuous expansion, so don't stop your journey and learn more with the remaining <a href=\"../MainFiles/biosignalsnotebooks.ipynb\">Notebooks <img src=\"../../images/icons/link.png\" width=\"10px\" height=\"10px\" style=\"display:inline\"></a></span></strong> !", "_____no_output_____" ], [ "<span class=\"color6\">**Auxiliary Code Segment (should not be replicated by\nthe user)**</span>", "_____no_output_____" ] ], [ [ "from biosignalsnotebooks.__notebook_support__ import css_style_apply\ncss_style_apply()", "_____no_output_____" ], [ "%%html\n<script>\n    // AUTORUN ALL CELLS ON NOTEBOOK-LOAD!\n    require(\n        ['base/js/namespace', 'jquery'],\n        function(jupyter, $) {\n            $(jupyter.events).on(\"kernel_ready.Kernel\", function () {\n                console.log(\"Auto-running all cells-below...\");\n                jupyter.actions.call('jupyter-notebook:run-all-cells-below');\n                jupyter.actions.call('jupyter-notebook:save-notebook');\n            });\n        }\n    );\n</script>", "_____no_output_____" ] ]
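, [ [ "As a quick sanity check of the zero-crossing rate defined above — a minimal sketch on a synthetic signal, not part of the acquisition pipeline — the same $zcr$ can be computed with a vectorised NumPy expression and compared with the loop-style formula used in the feature extraction cell.", "_____no_output_____" ] ], [ [ "from numpy import array, sum\n\nsynthetic_signal = array([0.3, -0.1, -0.4, 0.2, 0.5, -0.2])\n\n# Loop-style formula, exactly as used during feature extraction.\nzcr_loop = sum([1 for i in range(1, len(synthetic_signal))\n                if synthetic_signal[i] * synthetic_signal[i - 1] <= 0]) / (len(synthetic_signal) - 1)\n\n# Vectorised equivalent: a crossing happens wherever two consecutive samples have a non-positive product.\nzcr_vec = sum(synthetic_signal[1:] * synthetic_signal[:-1] <= 0) / (len(synthetic_signal) - 1)\n\nprint(zcr_loop, zcr_vec)  # both evaluate to 0.6 for this signal", "_____no_output_____" ] ] ]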
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ] ]
cb13c525e7a2ee32ee9cd29f157ef61c1221f1dc
86,251
ipynb
Jupyter Notebook
page_ana/links_cluster.ipynb
ICEJM1020/DarkPatterns-StockInforPage
62f7d9e7b0b235f6630ecfdb50745330c18f29ab
[ "MIT" ]
1
2021-06-08T11:27:17.000Z
2021-06-08T11:27:17.000Z
page_ana/links_cluster.ipynb
ICEJM1020/DarkPatterns-StockInforPage
62f7d9e7b0b235f6630ecfdb50745330c18f29ab
[ "MIT" ]
null
null
null
page_ana/links_cluster.ipynb
ICEJM1020/DarkPatterns-StockInforPage
62f7d9e7b0b235f6630ecfdb50745330c18f29ab
[ "MIT" ]
null
null
null
91.464475
56,272
0.800698
[ [ [ "import pandas as pd\nimport os\nimport time\nimport re\nimport numpy as np\nimport json\nfrom urllib.parse import urlparse, urljoin\nrun_root = \"/home/icejm/Code/OpenWPM/stockdp/page_ana/\"", "_____no_output_____" ], [ "# gather all potent/black links\ncount = 0\nfor root, dirs, files in os.walk(os.path.abspath('.')):\n if len(dirs)==0:\n for i in files:\n if i.endswith(\".json\") and i.startswith(\"potent\"): \n count += 1\n file = ((root+'/'+i).split(run_root))[1]\n web_name = root.split('/')[-1]\n with open(file,\"r\") as f:\n text = f.read() \n if i.startswith(\"potent\"):\n tmp_data = json.loads(text)\n for each_page in tmp_data:\n with open(\"potentlist.csv\", \"a+\") as potent_f:\n for j in tmp_data[each_page]:\n j = j.replace(\",\", \"/\").replace(\"http://\", \"\").replace(\"https://\", \"\")\n write_data = j+','+web_name+'\\n'\n potent_f.writelines(write_data)\nprint(count)", "83\n" ], [ "potent_dp_links = pd.read_csv(\"potentlist.csv\", names=[\"url\", \"website\"])\nprint(potent_dp_links.shape)\n\npotent_dp_links.head()", "(2552, 2)\n" ], [ "def getlistnum(li):\n li = list(li)\n set1 = set(li)\n dict1 = {}\n for item in set1:\n dict1.update({item:li.count(item)})\n return dict1\n\ngetlistnum(potent_dp_links['website'])", "_____no_output_____" ], [ "x = \"data.eastmoney.com/report/zw_stock.jshtml?encodeUrl=zXF5Zl6XRyYdSx1spWVTCqDhUpdvWCPeqRcR2Jjm0qE=\"\npath = urlparse(x).path + urlparse(x).params + urlparse(x).query + urlparse(x).fragment\nprint(path)\nprint(len(re.findall(\"([a-z])\",path)))\nprint(len(re.findall(\"([A-Z])\",path)))\nprint(len(re.findall(\"([/_\\.\\%&#\\-\\?])\",x)))", "data.eastmoney.com/report/zw_stock.jshtmlencodeUrl=zXF5Zl6XRyYdSx1spWVTCqDhUpdvWCPeqRcR2Jjm0qE=\n61\n21\n7\n" ] ], [ [ "# Build Features", "_____no_output_____" ], [ "## 1.Basic Features\nlength, num of (signs, upper characters, lower character, number)", "_____no_output_____" ] ], [ [ "def build_features(df):\n processed_features = df[[\"url\"]].copy()\n processed_features[\"path\"] = processed_features[\"url\"].map(\n lambda x: urlparse(x).path + urlparse(x).params + urlparse(x).query + urlparse(x).fragment)\n processed_features[\"path_len\"] = processed_features[\"path\"].map(\n lambda x: len(x))\n processed_features[\"num_sign\"] = processed_features[\"url\"].map(\n lambda x: len(re.findall(\"([/_\\.\\%&#\\-\\?])\",x)))\n processed_features[\"num_upper_char\"] = processed_features[\"path\"].map(\n lambda x: len(re.findall(\"([A-Z])\",x)))\n processed_features[\"num_lower_char\"] = processed_features[\"path\"].map(\n lambda x: len(re.findall(\"([a-z])\",x)))\n processed_features[\"num_number\"] = processed_features[\"path\"].map(\n lambda x: len(re.findall(\"(\\d)\",x)))\n\n processed_features.drop(['url', 'path'], axis=1, inplace=True)\n return processed_features", "_____no_output_____" ], [ "feature = build_features(potent_dp_links)\ndata = pd.concat([potent_dp_links, feature], axis = 1, ignore_index = False)\ndata.head()", "_____no_output_____" ] ], [ [ "# 2. Levenshtein\nbuuild a series distance between url and the website url\n\n 1. Edit Distance\n 2. Levenshtein Ratio\n 3. 
Jaro/Jaro-Winkler Distance (the answers are actually the same)", "_____no_output_____" ] ], [ [ "import Levenshtein", "_____no_output_____" ], [ "def build_leven_features(df):\n    processed_features = []\n    for index, row in df.iterrows():\n        str1 = row['url']\n        str2 = row['website']\n        row['edit-dis'] = Levenshtein.distance(str1, str2)\n        row['leven-ratio'] = Levenshtein.ratio(str1, str2)\n        row['jw-dis'] = Levenshtein.jaro_winkler(str1, str2)\n        processed_features.append(row)\n    back_data = pd.DataFrame(processed_features).drop(['url', 'website'], axis=1)\n    return back_data", "_____no_output_____" ], [ "leven_features = build_leven_features(potent_dp_links)\ndata = pd.concat([data, leven_features], axis = 1, ignore_index = False)\ndata.to_csv(\"featured_data.csv\", index=False)\ndata_features = data.drop(['url', 'website'], axis=1)", "_____no_output_____" ], [ "data = pd.read_csv(\"featured_data.csv\", )\npotent_dp_links = pd.read_csv(\"potentlist.csv\", names=[\"url\", \"website\"])\ndata_features = data.drop(['url', 'website'], axis=1)", "_____no_output_____" ], [ "import seaborn as sns\nfrom sklearn.metrics import confusion_matrix\nimport matplotlib.pyplot as plt\nfrom sklearn.preprocessing import normalize\nfrom hdbscan import HDBSCAN\n\n%matplotlib inline", "_____no_output_____" ], [ "data_features.describe()\ndata_features = data_features[['path_len','num_sign','num_upper_char','num_lower_char','num_number','edit-dis','leven-ratio','jw-dis']]", "_____no_output_____" ], [ "dfData = abs(pd.DataFrame(data_features).corr())\nplt.subplots(figsize=(12, 9)) # set the figure size\nsns.heatmap(dfData, annot=True, vmax=1, square=True, cmap=\"Blues\")", "_____no_output_____" ], [ "data_features = normalize(data_features, axis=1)\nfrom sklearn.decomposition import PCA\npca = PCA(n_components=3)\n# pca = PCA(tol=10)\n\npca_data = pca.fit_transform(data_features)\nprint('Matrix of PCs: %s' % str(pca_data.shape))\nprint('Data matrix: %s' % str(data_features.shape))\nprint('%d singular values: %s' % (pca.singular_values_.shape[0], str(pca.singular_values_)))", "Matrix of PCs: (2552, 3)\nData matrix: (2552, 8)\n3 singular values: [7.08981557 3.05313714 2.22082397]\n" ] ], [ [ "# Clustering", "_____no_output_____" ], [ "## DBSCAN", "_____no_output_____" ] ], [ [ "# test dbscan\nfrom sklearn.cluster import DBSCAN\nfrom sklearn.utils import parallel_backend\n\nwith parallel_backend('threading'):\n    clusterer = DBSCAN(eps=0.005, min_samples=5, n_jobs=10, metric='euclidean')\n    cluster_labels = clusterer.fit(data_features)\npotent_dp_links['cluster_dbscan'] = pd.Series(cluster_labels.labels_).values\nprint('Number of clusters: %d' % len(set(cluster_labels.labels_)))\n\nwith parallel_backend('threading'):\n    clusterer = DBSCAN(eps=0.005, min_samples=5, n_jobs=10, metric='euclidean')\n    cluster_labels = clusterer.fit(pca_data)\npotent_dp_links['pca_cluster_dbscan'] = pd.Series(cluster_labels.labels_).values\nprint('Number of clusters: %d' % len(set(cluster_labels.labels_)))\n", "Number of clusters: 178\nNumber of clusters: 163\n" ] ], [ [ "## HDBSCAN", "_____no_output_____" ] ], [ [ "clusterer = HDBSCAN(min_cluster_size=5, metric='euclidean')\ncluster_labels = clusterer.fit_predict(data_features)\npca_cluster_labels = clusterer.fit_predict(pca_data)\n\npotent_dp_links['cluster_hdbscan'] = pd.Series(cluster_labels).values\npotent_dp_links['pca_cluster_hdbscan'] = pd.Series(pca_cluster_labels).values", "_____no_output_____" ], [ "print('HDBSCAN without PCA: \\n Number of clusters: %s' % 
len(potent_dp_links['cluster_hdbscan'].value_counts()))\n# print('cluster_hdbscan.value_counts(): \\n %s' % potent_dp_links['cluster_hdbscan'].value_counts().to_string())\n\nprint('HDBSCAN with PCA: \\n Number of clusters: %s' % len(potent_dp_links['pca_cluster_hdbscan'].value_counts()))\n# print('cluster_hdbscan.value_counts(): \\n %s' % potent_dp_links['cluster_hdbscan'].value_counts().to_string())", "HDBSCAN without PCA: \n Number of clusters: 177\nHDBSCAN with PCA: \n Number of clusters: 175\n" ], [ "hdbscan = HDBSCAN(min_cluster_size=5, min_samples=4, cluster_selection_epsilon=0.001, metric='euclidean')\ndbscan = DBSCAN(eps=0.001, min_samples=4, metric='euclidean')\n\nhdbscan_labels = hdbscan.fit_predict(data_features)\npca_hdbscan_labels = hdbscan.fit_predict(pca_data)\ndbscan_labels = dbscan.fit_predict(data_features)\npca_dbscan_labels = dbscan.fit_predict(pca_data)\n\npotent_dp_links['cluster_hdbscan'] = pd.Series(hdbscan_labels).values\npotent_dp_links['pca_cluster_hdbscan'] = pd.Series(pca_hdbscan_labels).values\npotent_dp_links['cluster_dbscan'] = pd.Series(dbscan_labels).values\npotent_dp_links['pca_cluster_dbscan'] = pd.Series(pca_dbscan_labels).values\n\nprint('HDBSCAN without PCA: \\n Number of clusters: %s' % len(potent_dp_links['cluster_hdbscan'].value_counts()))\nprint('HDBSCAN with PCA: \\n Number of clusters: %s' % len(potent_dp_links['pca_cluster_hdbscan'].value_counts()))\nprint('DBSCAN without PCA: \\n Number of clusters: %s' % len(potent_dp_links['cluster_dbscan'].value_counts()))\nprint('DBSCAN with PCA: \\n Number of clusters: %s' % len(potent_dp_links['pca_cluster_dbscan'].value_counts()))", "HDBSCAN without PCA: \n Number of clusters: 238\nHDBSCAN with PCA: \n Number of clusters: 213\nDBSCAN without PCA: \n Number of clusters: 240\nDBSCAN with PCA: \n Number of clusters: 223\n" ], [ "potent_dp_links.head()", "_____no_output_____" ], [ "# Silhouette Coefficient\n\nfrom sklearn import metrics\n\ns1 = metrics.silhouette_score(data_features, potent_dp_links['cluster_hdbscan'], metric='euclidean')\ns2 = metrics.silhouette_score(pca_data, potent_dp_links['pca_cluster_hdbscan'], metric='euclidean')\ns3 = metrics.silhouette_score(data_features, potent_dp_links['cluster_dbscan'], metric='euclidean')\ns4 = metrics.silhouette_score(pca_data, potent_dp_links['pca_cluster_dbscan'], metric='euclidean')\nprint('Silhouette score: %.5f' % s1)\nprint('Silhouette score: %.5f' % s2)\nprint('Silhouette score: %.5f' % s3)\nprint('Silhouette score: %.5f' % s4)", "Silhouette score: 0.74536\nSilhouette score: 0.74862\nSilhouette score: 0.63687\nSilhouette score: 0.66851\n" ], [ "# Calinski-Harabasz Index\n\nfrom sklearn import metrics\n \nchi1 = metrics.calinski_harabasz_score(data_features, potent_dp_links['cluster_hdbscan'])\nchi2 = metrics.calinski_harabasz_score(pca_data, potent_dp_links['pca_cluster_hdbscan'])\nchi3 = metrics.calinski_harabasz_score(data_features, potent_dp_links['cluster_dbscan'])\nchi4 = metrics.calinski_harabasz_score(pca_data, potent_dp_links['pca_cluster_dbscan'])\n\nprint('Calinski-Harabasz Index: %.3f' % chi1)\nprint('Calinski-Harabasz Index: %.3f' % chi2)\nprint('Calinski-Harabasz Index: %.3f' % chi3)\nprint('Calinski-Harabasz Index: %.3f' % chi4)", "Calinski-Harabasz Index: 100.756\nCalinski-Harabasz Index: 118.342\nCalinski-Harabasz Index: 42.331\nCalinski-Harabasz Index: 56.435\n" ], [ "# Davies-Bouldin Index\n\nfrom sklearn.metrics import davies_bouldin_score\n\ndbi1 = davies_bouldin_score(data_features, potent_dp_links['cluster_hdbscan'])\nprint('Davies-Bouldin 
Index: %.5f' % dbi1)\ndbi2 = davies_bouldin_score(pca_data, potent_dp_links['pca_cluster_hdbscan'])\nprint('Davies-Bouldin Index: %.5f' % dbi2)\ndbi3 = davies_bouldin_score(data_features, potent_dp_links['cluster_dbscan'])\nprint('Davies-Bouldin Index: %.5f' % dbi3)\ndbi4 = davies_bouldin_score(pca_data, potent_dp_links['pca_cluster_dbscan'])\nprint('Davies-Bouldin Index: %.5f' % dbi4)", "Davies-Bouldin Index: 1.06182\nDavies-Bouldin Index: 0.99177\nDavies-Bouldin Index: 0.89275\nDavies-Bouldin Index: 0.87477\n" ], [ "para_min_cluster_size = [2,3,4,5,6,7,8,9,10]\npara_min_samples = [3,4,5]\ncluster_selection_epsilon = [0.1,0.01,0.001, 0.0001, 0]\n# Sweep the epsilon values with the HDBSCAN class imported above;\n# each iteration must use the current value i, otherwise all runs would be identical.\nfor i in cluster_selection_epsilon:\n    clusterer = HDBSCAN(min_cluster_size=5, min_samples=4, cluster_selection_epsilon=i, metric='euclidean')\n    pca_cluster_labels = clusterer.fit_predict(pca_data)\n    s4 = metrics.silhouette_score(pca_data, pca_cluster_labels, metric='euclidean')\n    print('Number of clusters: %s, Silhouette score: %.5f' % \n          (len(pd.DataFrame(pca_cluster_labels).value_counts()), s4))", "Number of clusters: 3, Silhouette score: 0.27955\nNumber of clusters: 160, Silhouette score: 0.57830\nNumber of clusters: 237, Silhouette score: 0.75305\nNumber of clusters: 247, Silhouette score: 0.75713\nNumber of clusters: 248, Silhouette score: 0.75125\n" ], [ "potent_dp_links.to_csv(\"potent_dp_links_cluster.csv\")", "_____no_output_____" ] ]
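, [ [ "The sweep above varies only the epsilon value; the `para_min_cluster_size` and `para_min_samples` grids defined there are never used. Below is a minimal, self-contained sketch of how they could be combined into a full grid search — synthetic blobs stand in for the URL feature matrix, so the exact scores are illustrative only.", "_____no_output_____" ] ], [ [ "from hdbscan import HDBSCAN\nfrom sklearn.datasets import make_blobs\nfrom sklearn import metrics\n\nX, _ = make_blobs(n_samples=500, centers=4, random_state=0)  # stand-in for pca_data\n\nbest_params, best_score = None, -1.0\nfor mcs in [2, 3, 4, 5, 6, 7, 8, 9, 10]:\n    for ms in [3, 4, 5]:\n        labels = HDBSCAN(min_cluster_size=mcs, min_samples=ms, metric='euclidean').fit_predict(X)\n        if len(set(labels)) > 1:  # silhouette needs at least two distinct labels\n            score = metrics.silhouette_score(X, labels, metric='euclidean')\n            if score > best_score:\n                best_params, best_score = (mcs, ms), score\n\nprint('best (min_cluster_size, min_samples): %s, silhouette: %.5f' % (best_params, best_score))", "_____no_output_____" ] ] ]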
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb13d2059858f0673dcf3a4155b72c0e861dfd37
15,940
ipynb
Jupyter Notebook
_beginning level/3d_tic_tac_toe/3d_tic_tac_toe.ipynb
ZedTDean/modeling-examples
3c79966cb2b7225ae2405b7f10fadb924d6ef099
[ "Apache-2.0" ]
null
null
null
_beginning level/3d_tic_tac_toe/3d_tic_tac_toe.ipynb
ZedTDean/modeling-examples
3c79966cb2b7225ae2405b7f10fadb924d6ef099
[ "Apache-2.0" ]
null
null
null
_beginning level/3d_tic_tac_toe/3d_tic_tac_toe.ipynb
ZedTDean/modeling-examples
3c79966cb2b7225ae2405b7f10fadb924d6ef099
[ "Apache-2.0" ]
null
null
null
43.791209
2,848
0.613927
[ [ [ "# 3D Tic-Tac-Toe\n\n## Objective and Prerequisites\n\nTry this logic programming example to learn how to solve the problem of arranging X’s and O’s on a three-dimensional Tic-Tac-Toe board so as to minimize the number of completed lines or diagonals. This example will show you how a binary programming model can be used to capture simple logical constraints.\n\nThis is example 17 from the fifth edition of Model Building in Mathematical Programming by H. Paul Williams on pages 272 and 327 – 328.\n\nThis modeling example is at the beginning level. We assume that you have some familiarity with Python and the Gurobi Python API, but you can hopefully pick up any missing concepts from the example.\n\n**Download the Repository** <br /> \nYou can download the repository containing this and other examples by clicking [here](https://github.com/Gurobi/modeling-examples/archive/master.zip). \n\n\n**Gurobi License** <br /> \nIn order to run this Jupyter Notebook properly, you must have a Gurobi license. If you do not have one, you can request an \n[evaluation license](https://www.gurobi.com/downloads/request-an-evaluation-license/?utm_source=3PW&utm_medium=OT&utm_campaign=WW-MU-EDU-OR-O_LEA-PR_NO-Q3_FY20_WW_JPME_3D-Tic-Tac-Toe_COM_EVAL_GITHUB_&utm_term=logic-programing&utm_content=C_JPM)\nas a *commercial user*, or download a \n[free license](https://www.gurobi.com/academia/academic-program-and-licenses/?utm_source=3PW&utm_medium=OT&utm_campaign=WW-MU-EDU-OR-O_LEA-PR_NO-Q3_FY20_WW_JPME_3D-Tic-Tac-Toe_ACADEMIC_EVAL_GITHUB_&utm_term=logic-programing&utm_content=C_JPM)\nas an *academic user*.\n\n---\n## Problem Description\n\nGiven a 3-D tic-tac-toe board, where players take turns placing $X$'s and $O$'s, the game typically ends when one player completes a line or diagonal; that is, when they manage to place their symbols in three cells that form a line or diagonal in the grid. The twist that is tackled here is that the game continues until every cell contains a symbol, and the goal is to arrange the symbols to minimize the number of completed lines or diagonals.\n\n---\n## Model Formulation\n\n\n### Decision Variables\n\n$\\text{isX}_{ijk} \\in [0,1]$: Does cell $(i,j,k)$ contain an $X$ ($isX=1$) or an $O$ ($isX=0$)?\n\n$\\text{isLine}_{l} \\in [0,1]$: Does line/diagonal $l$ contain 3 of the same symbol?\n\n### Objective Function\n\n- **Lines**: Minimize the number of completed lines or diagonals\n\n\\begin{equation}\n\\text{Minimize} \\quad Z = \\sum_{l \\in \\text{Lines}}\\text{isLine}_l\n\\end{equation}\n\n### Constraints\n\n- **Take turns**: The board must contain 14 $X$'s and 13 $O$'s ($X$ goes first).\n\n\\begin{equation}\n\\sum_{ijk} \\text{isX}_{ijk} = 14\n\\end{equation}\n\n- **Lines**: For a line to not be complete, one cell must have a different value. The simple observation here is that the sum of the corresponding 3 binary variables would be 3 if they are all $X$ and 0 if they were all $O$. We need to forbid those outcomes whenever $isLine_l == 0$. 
Note that $l_0$ is the first cell in line $l$, $l_1$ is the second, and $l_2$ is the third.\n\n\\begin{equation}\n\\text{isLine}_l == 0 \\implies isX[l_0] + isX[l_1] + isX[l_2] >= 1 \\quad \\forall l \\in \\text{Lines}\n\\end{equation}\n\n\\begin{equation}\n\\text{isLine}_l == 0 \\implies isX[l_0] + isX[l_1] + isX[l_2] <= 2 \\quad \\forall l \\in \\text{Lines}\n\\end{equation}\n\n---\n## Python Implementation\n\nWe import the Gurobi Python Module.", "_____no_output_____" ] ], [ [ "import gurobipy as gp\nfrom gurobipy import GRB\n\n# tested with Python 3.7.0 & Gurobi 9.0", "_____no_output_____" ] ], [ [ "## Model Deployment\n\nWe first create a list of all possible lines and diagonals in a 3-D tic-tac-toe board. Each is represented as a Python tuple with 3 entries, where each entry gives the (i,j,k) position of the corresponding cell. There are 49 in total.", "_____no_output_____" ] ], [ [ "lines = []\nsize = 3\n\nfor i in range(size):\n for j in range(size):\n for k in range(size):\n if i == 0:\n lines.append(((0,j,k), (1,j,k), (2,j,k)))\n if j == 0:\n lines.append(((i,0,k), (i,1,k), (i,2,k)))\n if k == 0:\n lines.append(((i,j,0), (i,j,1), (i,j,2)))\n if i == 0 and j == 0:\n lines.append(((0,0,k), (1,1,k), (2,2,k)))\n if i == 0 and j == 2:\n lines.append(((0,2,k), (1,1,k), (2,0,k)))\n if i == 0 and k == 0:\n lines.append(((0,j,0), (1,j,1), (2,j,2)))\n if i == 0 and k == 2:\n lines.append(((0,j,2), (1,j,1), (2,j,0)))\n if j == 0 and k == 0:\n lines.append(((i,0,0), (i,1,1), (i,2,2)))\n if j == 0 and k == 2:\n lines.append(((i,0,2), (i,1,1), (i,2,0)))\nlines.append(((0,0,0), (1,1,1), (2,2,2)))\nlines.append(((2,0,0), (1,1,1), (0,2,2)))\nlines.append(((0,2,0), (1,1,1), (2,0,2)))\nlines.append(((0,0,2), (1,1,1), (2,2,0)))", "_____no_output_____" ] ], [ [ "Next we create our model and our decision variables.", "_____no_output_____" ] ], [ [ "model = gp.Model('Tic_Tac_Toe')\nisX = model.addVars(size, size, size, vtype=GRB.BINARY, name=\"isX\")\nisLine = model.addVars(lines, vtype=GRB.BINARY, name=\"isLine\")", "Using license file c:\\gurobi\\gurobi.lic\n" ] ], [ [ "Now we create the constraints. The first states the board will contain 14 X's (and 13 O's):", "_____no_output_____" ] ], [ [ "x14 = model.addConstr(isX.sum() == 14)", "_____no_output_____" ] ], [ [ "The remaining constraints establish the relationship between the $isLine[]$ and $isX[]$ variables. A line is complete if all three cells contain the same symbol. In our model, this would correspond to three associated $isX[]$ variables summing to either 3 (all $X$) or 0 (all $O$). 
For our purposes, it is enough to enforce the condition that if $isLine[] = 0$, the sum must be strictly between these two values.", "_____no_output_____" ] ], [ [ "for line in lines:\n    model.addGenConstrIndicator(isLine[line], False, isX[line[0]] + isX[line[1]] + isX[line[2]] >= 1)\n    model.addGenConstrIndicator(isLine[line], False, isX[line[0]] + isX[line[1]] + isX[line[2]] <= 2)", "_____no_output_____" ] ], [ [ "Finally, we set the optimization objective, which is to minimize the number of completed lines.", "_____no_output_____" ] ], [ [ "model.setObjective(isLine.sum())", "_____no_output_____" ] ], [ [ "Now we perform the optimization.", "_____no_output_____" ] ], [ [ "model.optimize()", "Gurobi Optimizer version 9.1.0 build v9.1.0rc0 (win64)\nThread count: 4 physical cores, 8 logical processors, using up to 8 threads\nOptimize a model with 1 rows, 76 columns and 27 nonzeros\nModel fingerprint: 0xcbf7569f\nModel has 98 general constraints\nVariable types: 0 continuous, 76 integer (76 binary)\nCoefficient statistics:\n  Matrix range     [1e+00, 1e+00]\n  Objective range  [1e+00, 1e+00]\n  Bounds range     [1e+00, 1e+00]\n  RHS range        [1e+01, 1e+01]\nPresolve added 98 rows and 0 columns\nPresolve time: 0.00s\nPresolved: 99 rows, 76 columns, 419 nonzeros\nVariable types: 0 continuous, 76 integer (76 binary)\nFound heuristic solution: objective 7.0000000\n\nRoot relaxation: objective 0.000000e+00, 46 iterations, 0.00 seconds\n\n    Nodes    |    Current Node    |     Objective Bounds      |     Work\n Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time\n\n     0     0    0.00000    0    8    7.00000    0.00000   100%     -    0s\nH    0     0                       5.0000000    0.00000   100%     -    0s\n     0     0    0.00000    0   18    5.00000    0.00000   100%     -    0s\nH    0     0                       4.0000000    0.00000   100%     -    0s\n     0     0    0.00000    0   19    4.00000    0.00000   100%     -    0s\n     0     0    0.00000    0    8    4.00000    0.00000   100%     -    0s\n     0     0    0.00000    0   21    4.00000    0.00000   100%     -    0s\n     0     0    0.50000    0    8    4.00000    0.50000  87.5%     -    0s\n     0     0    0.66667    0   18    4.00000    0.66667  83.3%     -    0s\n     0     0    1.00000    0    8    4.00000    1.00000  75.0%     -    0s\n     0     0    1.00000    0    8    4.00000    1.00000  75.0%     -    0s\n     0     2    1.00000    0    8    4.00000    1.00000  75.0%     -    0s\n\nCutting planes:\n  MIR: 4\n  Inf proof: 9\n  Zero half: 14\n\nExplored 2139 nodes (14255 simplex iterations) in 0.26 seconds\nThread count was 8 (of 8 available processors)\n\nSolution count 3: 4 5 7 \n\nOptimal solution found (tolerance 1.00e-04)\nBest objective 4.000000000000e+00, best bound 4.000000000000e+00, gap 0.0000%\n" ] ], [ [ "---\n## Result\n\nThe optimal solution completes only 4 lines or diagonals. We can visualize the result using matplotlib (we've peeled off the third dimension of the 3-D tic-tac-toe board).", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\n%matplotlib inline\n\nfig, ax = plt.subplots(1, 3, figsize=(10,5))\nfor i in range(3):\n    ax[i].grid()\n    ax[i].set_xticks(range(4))\n    ax[i].set_yticks(range(4))\n    ax[i].tick_params(labelleft=False, labelbottom=False)\n    \nfor cell in isX.keys():\n    if isX[cell].x > 0.5:\n        ax[cell[0]].add_patch(plt.Rectangle((cell[1],cell[2]), 1, 1))\n\nplt.show()", "_____no_output_____" ] ], [ [ "---\n## References\n\nH. Paul Williams, Model Building in Mathematical Programming, fifth edition.\n\nCopyright © 2020 Gurobi Optimization, LLC", "_____no_output_____" ] ]
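, [ [ "As a quick standalone check of the formulation — independent of the model code above — the claim that there are 49 lines and diagonals can be re-derived by brute force over all cell triples. The `collinear` helper below is an illustrative sketch, not part of the original example.", "_____no_output_____" ] ], [ [ "from itertools import combinations\n\ncells = [(i, j, k) for i in range(3) for j in range(3) for k in range(3)]\n\ndef collinear(a, b, c):\n    # Three distinct cells form a winning line exactly when the middle cell is the\n    # coordinate-wise midpoint of the other two (each coordinate then steps by -1, 0 or +1).\n    return all(2 * b[d] == a[d] + c[d] for d in range(3))\n\n# combinations() preserves the lexicographic order of `cells`, so the middle element\n# of each candidate triple is the geometric midpoint whenever the triple is a line.\nall_lines = [t for t in combinations(cells, 3) if collinear(*t)]\nprint(len(all_lines))  # 49, matching the count used above", "_____no_output_____" ] ] ]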
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cb13d845d3df0fe5a11f08b377b7b66f6a968c25
168,401
ipynb
Jupyter Notebook
varimaxspss_example.ipynb
trhgu/Varimax_Rotation_and_Thereafter_module
785d423fa245a428ded293cb1b37cf8ea905960d
[ "MIT" ]
1
2020-08-26T03:39:44.000Z
2020-08-26T03:39:44.000Z
varimaxspss_example.ipynb
trhgu/Varimax_Rotation_and_Thereafter_module
785d423fa245a428ded293cb1b37cf8ea905960d
[ "MIT" ]
null
null
null
varimaxspss_example.ipynb
trhgu/Varimax_Rotation_and_Thereafter_module
785d423fa245a428ded293cb1b37cf8ea905960d
[ "MIT" ]
null
null
null
400.954762
156,528
0.923047
[ [ [ "filepath = 'http://bestelon.com/pca/lipset.csv'\ndf = pd.read_csv(filepath)\nx = df.iloc[:, 1:10]", "_____no_output_____" ], [ "import varimaxspss", "_____no_output_____" ], [ "result = varimaxspss.PcaVarimax(x)", "_____no_output_____" ], [ "result.pca()", " PC# Eigenvalue % of Varian Exp Cumulative %\n0 1 2.384 26.486 26.486%\n1 2 2.015 22.393 48.879%\n2 3 1.341 14.898 63.777%\n3 4 1.127 12.526 76.303%\n4 5 0.168 1.865 78.169%\n5 6 0.732 8.136 86.305%\n6 7 0.320 3.551 89.855%\n7 8 0.401 4.459 94.315%\n8 9 0.512 5.685 100.000%\n" ], [ "result.loadings(3)", " PC1 PC2 PC3\nv1 -0.481 0.607 0.228\nv2 -0.501 0.392 0.542\nv3 0.388 0.667 -0.265\nv4 0.416 0.725 0.171\nv5 -0.525 -0.600 0.289\nv6 0.691 -0.107 0.591\nv7 0.711 -0.182 0.386\nv8 -0.565 0.320 0.234\nv9 -0.027 0.128 -0.509\n" ], [ "result.varimax()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]
cb13df2521341ac63411039f5925fd1e2348aa65
85,456
ipynb
Jupyter Notebook
examples/trustscore_mnist.ipynb
SeptumCapital/alibi
2f24229c17cbab475c0d7b31c07af96bfb9fbf2f
[ "Apache-2.0" ]
null
null
null
examples/trustscore_mnist.ipynb
SeptumCapital/alibi
2f24229c17cbab475c0d7b31c07af96bfb9fbf2f
[ "Apache-2.0" ]
null
null
null
examples/trustscore_mnist.ipynb
SeptumCapital/alibi
2f24229c17cbab475c0d7b31c07af96bfb9fbf2f
[ "Apache-2.0" ]
1
2020-03-05T06:47:13.000Z
2020-03-05T06:47:13.000Z
117.223594
31,320
0.848273
[ [ [ "# Trust Scores applied to MNIST", "_____no_output_____" ], [ "It is important to know when a machine learning classifier's predictions can be trusted. Relying on the classifier's (uncalibrated) prediction probabilities is not optimal and can be improved upon. *Trust scores* measure the agreement between the classifier and a modified nearest neighbor classifier on the test set. The trust score is the ratio between the distance of the test instance to the nearest class different from the predicted class and the distance to the predicted class. Higher scores correspond to more trustworthy predictions. A score of 1 would mean that the distance to the predicted class is the same as to another class.\n\nThe original paper on which the algorithm is based is called [To Trust Or Not To Trust A Classifier](https://arxiv.org/abs/1805.11783). Our implementation borrows heavily from https://github.com/google/TrustScore, as does the example notebook.\n\nTrust scores work best for low to medium dimensional feature spaces. This notebook illustrates how you can **apply trust scores to high dimensional** data like images by adding an additional pre-processing step in the form of an [auto-encoder](https://en.wikipedia.org/wiki/Autoencoder) to reduce the dimensionality. Other dimension reduction techniques like PCA can be used as well.", "_____no_output_____" ] ], [ [ "import keras\nfrom keras import backend as K\nfrom keras.layers import Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Input, UpSampling2D\nfrom keras.models import Model\nfrom keras.utils import to_categorical\nimport matplotlib\n%matplotlib inline\nimport matplotlib.cm as cm\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom sklearn.model_selection import StratifiedShuffleSplit\nfrom alibi.confidence import TrustScore", "Using TensorFlow backend.\n" ], [ "(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()\nprint('x_train shape:', x_train.shape, 'y_train shape:', y_train.shape)\nplt.gray()\nplt.imshow(x_test[0])", "x_train shape: (60000, 28, 28) y_train shape: (60000,)\n" ] ], [ [ "Prepare data: scale, reshape and categorize", "_____no_output_____" ] ], [ [ "x_train = x_train.astype('float32') / 255\nx_test = x_test.astype('float32') / 255\nx_train = np.reshape(x_train, x_train.shape + (1,))\nx_test = np.reshape(x_test, x_test.shape + (1,))\nprint('x_train shape:', x_train.shape, 'x_test shape:', x_test.shape)\ny_train = to_categorical(y_train)\ny_test = to_categorical(y_test)\nprint('y_train shape:', y_train.shape, 'y_test shape:', y_test.shape)", "x_train shape: (60000, 28, 28, 1) x_test shape: (10000, 28, 28, 1)\ny_train shape: (60000, 10) y_test shape: (10000, 10)\n" ], [ "xmin, xmax = -.5, .5\nx_train = ((x_train - x_train.min()) / (x_train.max() - x_train.min())) * (xmax - xmin) + xmin\nx_test = ((x_test - x_test.min()) / (x_test.max() - x_test.min())) * (xmax - xmin) + xmin", "_____no_output_____" ] ], [ [ "## Define and train model", "_____no_output_____" ], [ "For this example we are not interested in optimizing model performance so a simple softmax classifier will do:", "_____no_output_____" ] ], [ [ "def sc_model():\n x_in = Input(shape=(28, 28, 1))\n x = Flatten()(x_in)\n x_out = Dense(10, activation='softmax')(x)\n sc = Model(inputs=x_in, outputs=x_out)\n sc.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])\n return sc", "_____no_output_____" ], [ "sc = sc_model()\nsc.summary()\nsc.fit(x_train, y_train, batch_size=128, epochs=5, verbose=0)", 
"_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_1 (InputLayer) (None, 28, 28, 1) 0 \n_________________________________________________________________\nflatten_1 (Flatten) (None, 784) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 10) 7850 \n=================================================================\nTotal params: 7,850\nTrainable params: 7,850\nNon-trainable params: 0\n_________________________________________________________________\n" ] ], [ [ "Evaluate the model on the test set:", "_____no_output_____" ] ], [ [ "score = sc.evaluate(x_test, y_test, verbose=0)\nprint('Test accuracy: ', score[1])", "Test accuracy: 0.8862\n" ] ], [ [ "## Define and train auto-encoder", "_____no_output_____" ] ], [ [ "def ae_model():\n # encoder\n x_in = Input(shape=(28, 28, 1))\n x = Conv2D(16, (3, 3), activation='relu', padding='same')(x_in)\n x = MaxPooling2D((2, 2), padding='same')(x)\n x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)\n x = MaxPooling2D((2, 2), padding='same')(x)\n x = Conv2D(4, (3, 3), activation=None, padding='same')(x)\n encoded = MaxPooling2D((2, 2), padding='same')(x)\n encoder = Model(x_in, encoded)\n\n # decoder\n dec_in = Input(shape=(4, 4, 4))\n x = Conv2D(4, (3, 3), activation='relu', padding='same')(dec_in)\n x = UpSampling2D((2, 2))(x)\n x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)\n x = UpSampling2D((2, 2))(x)\n x = Conv2D(16, (3, 3), activation='relu')(x)\n x = UpSampling2D((2, 2))(x)\n decoded = Conv2D(1, (3, 3), activation=None, padding='same')(x)\n decoder = Model(dec_in, decoded)\n \n # autoencoder = encoder + decoder\n x_out = decoder(encoder(x_in))\n autoencoder = Model(x_in, x_out)\n autoencoder.compile(optimizer='adam', loss='mse')\n \n return autoencoder, encoder, decoder", "_____no_output_____" ], [ "ae, enc, dec = ae_model()\nae.summary()\nae.fit(x_train, x_train, batch_size=128, epochs=8, validation_data=(x_test, x_test), verbose=0)", "_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_2 (InputLayer) (None, 28, 28, 1) 0 \n_________________________________________________________________\nmodel_2 (Model) (None, 4, 4, 4) 1612 \n_________________________________________________________________\nmodel_3 (Model) (None, 28, 28, 1) 1757 \n=================================================================\nTotal params: 3,369\nTrainable params: 3,369\nNon-trainable params: 0\n_________________________________________________________________\n" ] ], [ [ "## Calculate Trust Scores", "_____no_output_____" ], [ "Initialize trust scores:", "_____no_output_____" ] ], [ [ "ts = TrustScore()", "_____no_output_____" ] ], [ [ "The key is to **fit and calculate the trust scores on the encoded instances**. The encoded data still needs to be reshaped from (60000, 4, 4, 4) to (60000, 64) to comply with the k-d tree format. 
This is handled internally:", "_____no_output_____" ] ], [ [ "x_train_enc = enc.predict(x_train)\nts.fit(x_train_enc, y_train, classes=10) # 10 classes present in MNIST", "Reshaping data from (60000, 4, 4, 4) to (60000, 64) so k-d trees can be built.\n" ] ], [ [ "We can now calculate the trust scores and closest not predicted classes of the predictions on the test set, using the distance to the 5th nearest neighbor in each class:", "_____no_output_____" ] ], [ [ "x_test_enc = enc.predict(x_test)\ny_pred = sc.predict(x_test)\nscore, closest_class = ts.score(x_test_enc, y_pred, k=5)", "Reshaping data from (10000, 4, 4, 4) to (10000, 64) so k-d trees can be queried.\n" ] ], [ [ "Let's inspect which predictions have low and high trust scores:", "_____no_output_____" ] ], [ [ "n = 5\nidx_min, idx_max = np.argsort(score)[:n], np.argsort(score)[-n:]\nscore_min, score_max = score[idx_min], score[idx_max]\nclosest_min, closest_max = closest_class[idx_min], closest_class[idx_max]\npred_min, pred_max = np.argmax(y_pred[idx_min], axis=1), np.argmax(y_pred[idx_max], axis=1)\nimgs_min, imgs_max = x_test[idx_min], x_test[idx_max]\nlabel_min, label_max = np.argmax(y_test[idx_min], axis=1), np.argmax(y_test[idx_max], axis=1)", "_____no_output_____" ] ], [ [ "### Low Trust Scores", "_____no_output_____" ], [ "The image below makes clear that the low trust scores correspond to misclassified images. Because the trust scores are significantly below 1, they correctly identified that the images belong to another class than the predicted class, and identified that class.", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(20, 4))\nfor i in range(n):\n ax = plt.subplot(1, n, i+1)\n plt.imshow(imgs_min[i].reshape(28, 28))\n plt.title('Model prediction: {} \\n Label: {} \\n Trust score: {:.3f}' \\\n '\\n Closest other class: {}'.format(pred_min[i], label_min[i], score_min[i], closest_min[i]))\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\nplt.show()", "_____no_output_____" ] ], [ [ "### High Trust Scores", "_____no_output_____" ], [ "The high trust scores on the other hand all are very clear 1's:", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(20, 4))\nfor i in range(n):\n ax = plt.subplot(1, n, i+1)\n plt.imshow(imgs_max[i].reshape(28, 28))\n plt.title('Model prediction: {} \\n Label: {} \\n Trust score: {:.3f}'.format(pred_max[i], label_max[i], score_max[i]))\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\nplt.show()", "_____no_output_____" ] ], [ [ "## Comparison of Trust Scores with model prediction probabilities", "_____no_output_____" ], [ "Let’s compare the prediction probabilities from the classifier with the trust scores for each prediction by checking whether trust scores are better than the model’s prediction probabilities at identifying correctly classified examples.\n\nFirst we need to set up a couple of helper functions.\n\n* Define a function that handles model training and predictions:\n", "_____no_output_____" ] ], [ [ "def run_sc(X_train, y_train, X_test):\n clf = sc_model()\n clf.fit(X_train, y_train, batch_size=128, epochs=5, verbose=0)\n y_pred_proba = clf.predict(X_test)\n y_pred = np.argmax(y_pred_proba, axis=1)\n probas = y_pred_proba[range(len(y_pred)), y_pred] # probabilities of predicted class\n return y_pred, probas", "_____no_output_____" ] ], [ [ "* Define the function that generates the precision plots:", "_____no_output_____" ] ], [ [ "def plot_precision_curve(plot_title, \n percentiles, \n labels, \n final_tp, \n 
final_stderr, \n final_misclassification,\n colors = ['blue', 'darkorange', 'brown', 'red', 'purple']):\n \n plt.title(plot_title, fontsize=18)\n colors = colors + list(cm.rainbow(np.linspace(0, 1, len(final_tp))))\n plt.xlabel(\"Percentile\", fontsize=14)\n plt.ylabel(\"Precision\", fontsize=14)\n \n for i, label in enumerate(labels):\n ls = \"--\" if (\"Model\" in label) else \"-\"\n plt.plot(percentiles, final_tp[i], ls, c=colors[i], label=label)\n plt.fill_between(percentiles, \n final_tp[i] - final_stderr[i],\n final_tp[i] + final_stderr[i],\n color=colors[i],\n alpha=.1)\n \n if 0. in percentiles:\n plt.legend(loc=\"lower right\", fontsize=14)\n else:\n plt.legend(loc=\"upper left\", fontsize=14)\n model_acc = 100 * (1 - final_misclassification)\n plt.axvline(x=model_acc, linestyle=\"dotted\", color=\"black\")\n plt.show()", "_____no_output_____" ] ], [ [ "* The function below trains the model on a number of folds, makes predictions, calculates the trust scores, and generates the precision curves to compare the trust scores with the model prediction probabilities:", "_____no_output_____" ] ], [ [ "def run_precision_plt(X, y, nfolds, percentiles, run_model, test_size=.2, \n plt_title=\"\", plt_names=[], predict_correct=True, classes=10):\n \n def stderr(L):\n return np.std(L) / np.sqrt(len(L))\n \n all_tp = [[[] for p in percentiles] for _ in plt_names]\n misclassifications = []\n mult = 1 if predict_correct else -1\n \n folds = StratifiedShuffleSplit(n_splits=nfolds, test_size=test_size, random_state=0)\n for train_idx, test_idx in folds.split(X, y):\n # create train and test folds, train model and make predictions\n X_train, y_train = X[train_idx, :], y[train_idx, :]\n X_test, y_test = X[test_idx, :], y[test_idx, :]\n y_pred, probas = run_sc(X_train, y_train, X_test)\n # target points are the correctly classified points\n y_test_class = np.argmax(y_test, axis=1)\n target_points = (np.where(y_pred == y_test_class)[0] if predict_correct else \n np.where(y_pred != y_test_class)[0])\n final_curves = [probas]\n # calculate trust scores\n ts = TrustScore()\n ts.fit(enc.predict(X_train), y_train, classes=classes)\n scores, _ = ts.score(enc.predict(X_test), y_pred, k=5)\n final_curves.append(scores) # contains prediction probabilities and trust scores\n # check where prediction probabilities and trust scores are above a certain percentage level\n for p, perc in enumerate(percentiles):\n high_proba = [np.where(mult * curve >= np.percentile(mult * curve, perc))[0] for curve in final_curves]\n if 0 in map(len, high_proba):\n continue\n # calculate fraction of values above percentage level that are correctly (or incorrectly) classified\n tp = [len(np.intersect1d(hp, target_points)) / (1. * len(hp)) for hp in high_proba]\n for i in range(len(plt_names)):\n all_tp[i][p].append(tp[i]) # for each percentile, store fraction of values above cutoff value\n misclassifications.append(len(target_points) / (1. 
* len(X_test)))\n \n # average over folds for each percentile\n final_tp = [[] for _ in plt_names]\n final_stderr = [[] for _ in plt_names]\n for p, perc in enumerate(percentiles):\n for i in range(len(plt_names)):\n final_tp[i].append(np.mean(all_tp[i][p]))\n final_stderr[i].append(stderr(all_tp[i][p]))\n\n for i in range(len(all_tp)):\n final_tp[i] = np.array(final_tp[i])\n final_stderr[i] = np.array(final_stderr[i])\n\n final_misclassification = np.mean(misclassifications)\n \n # create plot\n plot_precision_curve(plt_title, percentiles, plt_names, final_tp, final_stderr, final_misclassification)", "_____no_output_____" ] ], [ [ "## Detect correctly classified examples", "_____no_output_____" ], [ "The x-axis on the plot below shows the percentiles for the model prediction probabilities of the predicted class for each instance and for the trust scores. The y-axis represents the precision for each percentile. For each percentile level, we take the test examples whose trust score is above that percentile level and plot the percentage of those points that were correctly classified by the classifier. We do the same with the classifier’s own model confidence (i.e. softmax probabilities). For example, at percentile level 80, we take the top 20% scoring test examples based on the trust score and plot the percentage of those points that were correctly classified. We also plot the top 20% scoring test examples based on model probabilities and plot the percentage of those that were correctly classified. The vertical dotted line is the error of the classifier. The plots are an average over 2 folds of the dataset with 20% of the data kept for the test set.\n\nThe *Trust Score* and *Model Confidence* curves then show that the model precision is typically higher when using the trust scores to rank the predictions compared to the model prediction probabilities.", "_____no_output_____" ] ], [ [ "X = x_train\ny = y_train\npercentiles = [0 + 0.5 * i for i in range(200)]\nnfolds = 2\nplt_names = ['Model Confidence', 'Trust Score']\nplt_title = 'MNIST -- Softmax Classifier -- Predict Correct'", "_____no_output_____" ], [ "run_precision_plt(X, y, nfolds, percentiles, run_sc, plt_title=plt_title, \n plt_names=plt_names, predict_correct=True)", "Reshaping data from (48000, 4, 4, 4) to (48000, 64) so k-d trees can be built.\nReshaping data from (12000, 4, 4, 4) to (12000, 64) so k-d trees can be queried.\nReshaping data from (48000, 4, 4, 4) to (48000, 64) so k-d trees can be built.\nReshaping data from (12000, 4, 4, 4) to (12000, 64) so k-d trees can be queried.\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ] ]
cb13e0c48c047ea635085dc8aa4380bb5460ae2e
443,946
ipynb
Jupyter Notebook
Deep-Learning-Notebooks/notebooks/CSAILVision_SemanticSegmentation.ipynb
deepraj1729/Resources-and-Guides
0ca030d2f6a5d3533101b1ad7a0f329cc6538c52
[ "MIT" ]
2
2020-09-06T15:51:13.000Z
2020-10-04T23:29:19.000Z
Deep-Learning-Notebooks/notebooks/CSAILVision_SemanticSegmentation.ipynb
deepraj1729/Resources-and-Guides
0ca030d2f6a5d3533101b1ad7a0f329cc6538c52
[ "MIT" ]
null
null
null
Deep-Learning-Notebooks/notebooks/CSAILVision_SemanticSegmentation.ipynb
deepraj1729/Resources-and-Guides
0ca030d2f6a5d3533101b1ad7a0f329cc6538c52
[ "MIT" ]
null
null
null
1,474.903654
421,290
0.944631
[ [ [ "# CSAILVision semantic segmention models\n\nThis is a semantic segmentation notebook using an [ADE20K](http://groups.csail.mit.edu/vision/datasets/ADE20K/) pretrained model from the open source project [CSAILVision/semantic-segmentation-pytorch](https://github.com/CSAILVision/semantic-segmentation-pytorch).\n\nFor other deep-learning Colab notebooks, visit [tugstugi/dl-colab-notebooks](https://github.com/tugstugi/dl-colab-notebooks).\n\n## Clone repo and install dependencies", "_____no_output_____" ] ], [ [ "import os\nfrom os.path import exists, join, basename, splitext\n\ngit_repo_url = 'https://github.com/CSAILVision/semantic-segmentation-pytorch.git'\nproject_name = splitext(basename(git_repo_url))[0]\nif not exists(project_name):\n # clone and install dependencies\n !git clone -q $git_repo_url\n #!cd $project_name && pip install -q -r requirement.txt\n \nimport sys\nsys.path.append(project_name)\nimport time\nimport matplotlib\nimport matplotlib.pylab as plt\nplt.rcParams[\"axes.grid\"] = False", "_____no_output_____" ] ], [ [ "## Download a pretrained model\n\nAccording to [https://github.com/CSAILVision/semantic-segmentation-pytorch#performance](https://github.com/CSAILVision/semantic-segmentation-pytorch#performance), **UperNet101** was the best performing model. We will use it as the pretrained model:", "_____no_output_____" ] ], [ [ "ENCODER_NAME = 'resnet101'\nDECODER_NAME = 'upernet'\nPRETRAINED_ENCODER_MODEL_URL = 'http://sceneparsing.csail.mit.edu/model/pytorch/baseline-%s-%s/encoder_epoch_50.pth' % (ENCODER_NAME, DECODER_NAME)\nPRETRAINED_DECODER_MODEL_URL = 'http://sceneparsing.csail.mit.edu/model/pytorch/baseline-%s-%s/decoder_epoch_50.pth' % (ENCODER_NAME, DECODER_NAME)\n\npretrained_encoder_file = basename(PRETRAINED_ENCODER_MODEL_URL)\nif not exists(pretrained_encoder_file):\n !wget -q $PRETRAINED_ENCODER_MODEL_URL\npretrained_decoder_file = basename(PRETRAINED_DECODER_MODEL_URL)\nif not exists(pretrained_decoder_file):\n !wget -q $PRETRAINED_DECODER_MODEL_URL", "_____no_output_____" ] ], [ [ "## Prepare model\n\nLoad the pretrained model:", "_____no_output_____" ] ], [ [ "from types import SimpleNamespace\nimport torch\nfrom models import ModelBuilder, SegmentationModule\nfrom dataset import TestDataset\nfrom utils import colorEncode\nfrom scipy.io import loadmat\n\n# options\noptions = SimpleNamespace(fc_dim=2048,\n num_class=150,\n imgSize = [300, 400, 500, 600],\n imgMaxSize=1000,\n padding_constant=8,\n segm_downsampling_rate=8)\n\n# create model\nbuilder = ModelBuilder()\nnet_encoder = builder.build_encoder(arch=ENCODER_NAME, weights=pretrained_encoder_file,\n fc_dim=options.fc_dim)\nnet_decoder = builder.build_decoder(arch=DECODER_NAME, weights=pretrained_decoder_file,\n fc_dim=options.fc_dim, num_class=options.num_class, use_softmax=True)\nsegmentation_module = SegmentationModule(net_encoder, net_decoder, torch.nn.NLLLoss(ignore_index=-1))\nsegmentation_module = segmentation_module.eval()\ntorch.set_grad_enabled(False)\n\nif torch.cuda.is_available():\n segmentation_module = segmentation_module.cuda()\n\n# test on a given image\ndef test(test_image_name):\n dataset_test = TestDataset([{'fpath_img': test_image_name}], options, max_sample=-1)\n \n batch_data = dataset_test[0]\n segSize = (batch_data['img_ori'].shape[0], batch_data['img_ori'].shape[1])\n img_resized_list = batch_data['img_data']\n \n scores = torch.zeros(1, options.num_class, segSize[0], segSize[1])\n if torch.cuda.is_available():\n scores = scores.cuda()\n\n for img in img_resized_list:\n 
feed_dict = batch_data.copy()\n feed_dict['img_data'] = img\n del feed_dict['img_ori']\n del feed_dict['info']\n if torch.cuda.is_available():\n feed_dict = {k: o.cuda() for k, o in feed_dict.items()}\n\n # forward pass\n pred_tmp = segmentation_module(feed_dict, segSize=segSize)\n scores = scores + pred_tmp / len(options.imgSize)\n\n _, pred = torch.max(scores, dim=1)\n return pred.squeeze(0).cpu().numpy()", "_____no_output_____" ] ], [ [ "## Evaluate on a test image\n\nFirst, download a test image from the internet:", "_____no_output_____" ] ], [ [ "IMAGE_URL = 'https://raw.githubusercontent.com/tugstugi/dl-colab-notebooks/master/resources/lidl.jpg'\n\nimage_file = basename(IMAGE_URL)\n!wget -q -O $image_file $IMAGE_URL\n\nplt.figure(figsize=(10, 5))\nplt.imshow(matplotlib.image.imread(image_file))", "_____no_output_____" ] ], [ [ "Now, test on the downloaded image:", "_____no_output_____" ] ], [ [ "t = time.time()\npred = test(image_file)\nprint(\"executed in %.3fs\" % (time.time()-t))\n\npred_color = colorEncode(pred, loadmat(os.path.join(project_name, 'data/color150.mat'))['colors'])\nplt.imshow(pred_color)", "# samples: 1\nexecuted in 0.631s\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb13e410cabc8e5a576704862ebb25c8ed0630ac
22,677
ipynb
Jupyter Notebook
content/lessons/06/Class-Coding-Lab/CCL-Functions.ipynb
auramnar/Summer2018Learn2Code
068b89215ad2b90f57a73511b0b1b6b126589c5f
[ "MIT" ]
null
null
null
content/lessons/06/Class-Coding-Lab/CCL-Functions.ipynb
auramnar/Summer2018Learn2Code
068b89215ad2b90f57a73511b0b1b6b126589c5f
[ "MIT" ]
null
null
null
content/lessons/06/Class-Coding-Lab/CCL-Functions.ipynb
auramnar/Summer2018Learn2Code
068b89215ad2b90f57a73511b0b1b6b126589c5f
[ "MIT" ]
null
null
null
31.627615
444
0.557261
[ [ [ "# In-Class Coding Lab: Functions\n\nThe goals of this lab are to help you to understand:\n\n- How to use Python's built-in functions in the standard library.\n- How to write user-defined functions\n- The benefits of user-defined functions to code reuse and simplicity.\n- How to create a program to use functions to solve a complex idea\n\nWe will demonstrate these through the following example:\n\n\n## The Credit Card Problem\n\nIf you're going to do commerce on the web, you're going to support credit cards. But how do you know if a given number is valid? And how do you know which network issued the card?\n\n**Example:** Is `5300023581452982` a valid credit card number?Is it? Visa? MasterCard, Discover? or American Express?\n\nWhile eventually the card number is validated when you attempt to post a transaction, there's a lot of reasons why you might want to know its valid before the transaction takes place. The most common being just trying to catch an honest key-entry mistake made by your site visitor.\n\nSo there are two things we'd like to figure out, for any \"potential\" card number:\n\n- Who is the issuing network? Visa, MasterCard, Discover or American Express.\n- In the number potentially valid (as opposed to a made up series of digits)?\n\n### What does the have to do with functions?\n\nIf we get this code to work, it seems like it might be useful to re-use it in several other programs we may write in the future. We can do this by writing the code as a **function**. Think of a function as an independent program its own inputs and output. The program is defined under a name so that we can use it simply by calling its name. \n\n**Example:** `n = int(\"50\")` the function `int()` takes the string `\"50\"` as input and converts it to an `int` value `50` which is then stored in the value `n`.\n\nWhen you create these credit card functions, we might want to re-use them by placing them in a **Module** which is a file with a collection of functions in it. Furthermore we can take a group of related modules and place them together in a Python **Package**. You install packages on your computer with the `pip` command. \n", "_____no_output_____" ], [ "## Built-In Functions\n\nLet's start by checking out the built-in functions in Python's math library. We use the `dir()` function to list the names of the math library:\n", "_____no_output_____" ] ], [ [ "import math\n\ndir(math)", "_____no_output_____" ] ], [ [ "If you look through the output, you'll see a `factorial` name. Let's see if it's a function we can use:", "_____no_output_____" ] ], [ [ "help(math.factorial)", "Help on built-in function factorial in module math:\n\nfactorial(...)\n factorial(x) -> Integral\n \n Find x!. Raise a ValueError if x is negative or non-integral.\n\n" ] ], [ [ "It says it's a built-in function, and requies an integer value (which it referrs to as x, but that value is arbitrary) as an argument. Let's call the function and see if it works:", "_____no_output_____" ] ], [ [ "math.factorial(5) #this is an example of \"calling\" the function with input 5. The output should be 120", "_____no_output_____" ], [ "math.factorial(0) # here we call the same function with input 0. The output should be 1.", "_____no_output_____" ], [ "## Call the factorial function with an input argument of 4. 
What is the output?\n#TODO write code here.\nmath.factorial(4)", "_____no_output_____" ] ], [ [ "## Using functions to print awesome things in Jupyter\n\nUp until this point we've used the boring `print()` function for our output. Let's do better. In the `IPython.display` module there are two functions `display()` and `HTML()`. The `display()` function outputs a Python object to the Jupyter notebook. The `HTML()` function creates a Python object from [HTML Markup](https://www.w3schools.com/html/html_intro.asp) as a string.\n\nFor example this prints Hello in Heading 1.\n", "_____no_output_____" ] ], [ [ "from IPython.display import display, HTML\n\nprint(\"Exciting:\")\ndisplay(HTML(\"<h1>Hello</h1>\"))\nprint(\"Boring:\")\nprint(\"Hello\")\n", "Exciting:\n" ] ], [ [ "Let's keep the example going by writing two of our own functions, to print a title and to print normal text, respectively. \n\nExecute this code:", "_____no_output_____" ] ], [ [ "def print_title(text):\n '''\n This prints text to IPython.display as H1\n '''\n return display(HTML(\"<H1>\" + text + \"</H1>\"))\n\ndef print_normal(text):\n '''\n this prints text to IPython.display as normal text\n '''\n return display(HTML(text))\n", "_____no_output_____" ] ], [ [ "Now let's use these two functions in a familiar program! ", "_____no_output_____" ] ], [ [ "print_title(\"Area of a Rectangle\")\nlength = float(input(\"Enter length: \"))\nwidth = float(input(\"Enter width: \"))\narea = length * width\nprint_normal(\"The area is %.2f\" % area)", "_____no_output_____" ] ], [ [ "## Let's get back to credit cards....\n\nNow that we know a bit about **Packages**, **Modules**, and **Functions**, let's attempt to write our first function. Let's tackle the easier of our two credit card related problems:\n\n- Who is the issuing network? Visa, MasterCard, Discover or American Express.\n\nThis problem can be solved by looking at the first digit of the card number:\n\n - \"4\" ==> \"Visa\"\n - \"5\" ==> \"MasterCard\"\n - \"6\" ==> \"Discover\"\n - \"3\" ==> \"American Express\"\n \nSo for card number `5300023581452982` the issuer is \"MasterCard\".\n\nIt should be easy to write a program to solve this problem. Here's the algorithm:\n\n```\ninput credit card number into variable card\nget the first digit of the card number (eg. digit = card[0])\nif digit equals \"4\"\n    the card issuer is \"Visa\"\nelif digit equals \"5\"\n    the card issuer is \"MasterCard\"\nelif digit equals \"6\"\n    the card issuer is \"Discover\"\nelif digit equals \"3\"\n    the card issuer is \"American Express\"\nelse\n    the issuer is \"Invalid\" \nprint issuer\n```\n\n### Now You Try It\n\nTurn the algorithm into Python code", "_____no_output_____" ] ], [ [ "## TODO: Write your code here\nccard = input(\"Please enter your credit card number: \")\ndigit = ccard[0]\n\nif digit == '4':\n card_issuer = \"Visa\"\nelif digit == '5':\n card_issuer = \"MasterCard\"\nelif digit == '6':\n card_issuer = \"Discover\"\nelif digit == '3':\n card_issuer = \"American Express\"\nelse:\n card_issuer = \"Invalid\"\n \nprint(card_issuer)", "Please enter your credit card number: 4215678\nVisa\n" ] ], [ [ "**IMPORTANT** Make sure to test your code by running it 5 times. You should test each issuer and also the \"Invalid Card\" case.\n\n## Introducing the Write - Refactor - Test - Rewrite approach\n\nIt would be nice to re-write this code to use a function. This can seem daunting / confusing for beginner programmers, which is why we teach the **Write - Refactor - Test - Rewrite** approach. 
In this approach you write the ENTIRE PROGRAM and then REWRITE IT to use functions. Yes, it's inefficient, but until you get comfortable thinking \"functions first\" it's the best way to modularize your code with functions. Here's the approach:\n\n1. Write the code\n2. Refactor (change the code around) to use a function\n3. Test the function by calling it\n4. Rewrite the original code to use the new function.\n\n\nWe already did step 1: Write, so let's move on to:\n\n### Step 2: Refactor\n\nLet's strip the logic out of the above code to accomplish the task of the function:\n\n- Send into the function as input a credit card number as a `str`\n- Return back from the function as output the issuer of the card as a `str`\n\nTo help you out we've written the function stub for you; all you need to do is write the function body code.", "_____no_output_____" ] ], [ [ "def CardIssuer(card):\n '''This function takes a card number (card) as input, and returns the issuer name as output'''\n ## TODO write code here; it should be the same as lines 3-13 from the code above\n digit = card[0]\n\n if digit == '4':\n card_issuer = \"Visa\"\n elif digit == '5':\n card_issuer = \"MasterCard\"\n elif digit == '6':\n card_issuer = \"Discover\"\n elif digit == '3':\n card_issuer = \"American Express\"\n else:\n card_issuer = \"Invalid\"\n\n\n # the last line in the function should return the output\n return card_issuer", "_____no_output_____" ] ], [ [ "### Step 3: Test\n\nYou wrote the function, but how do you know it works? The short answer is: unless you test it, you're guessing.\n\nTesting our function is as simple as calling the function with input values where WE KNOW WHAT TO EXPECT from the output. We then compare that to the ACTUAL value from the called function. If they are the same, then we know the function is working as expected!\n\nHere are some examples:\n\n```\nWHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa\nWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard\nWHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover\nWHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express\nWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid Card\n```\n\n### Now you Try it!\n\nWrite the tests based on the examples:", "_____no_output_____" ] ], [ [ "# Testing the CardIssuer() function\nprint(\"WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL\", CardIssuer(\"40123456789\"))\nprint(\"WHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL\", CardIssuer(\"50123456789\"))\n\n## TODO: You write the remaining 3 tests; you can copy the lines above and edit the values accordingly\n\nprint(\"WHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL:\", CardIssuer(\"60123456789\"))\nprint(\"WHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL:\", CardIssuer(\"30123456789\"))\nprint(\"WHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid ACTUAL:\", CardIssuer(\"90123456789\"))\n", "WHEN card='40123456789' We EXPECT CardIssuer(card) to return Visa ACTUAL Visa\nWHEN card='50123456789' We EXPECT CardIssuer(card) to return MasterCard ACTUAL MasterCard\nWHEN card='60123456789' We EXPECT CardIssuer(card) to return Discover ACTUAL: Discover\nWHEN card='30123456789' We EXPECT CardIssuer(card) to return American Express ACTUAL: American Express\nWHEN card='90123456789' We EXPECT CardIssuer(card) to return Invalid ACTUAL: Invalid\n" ] ], [ [ "### Step 
4: Rewrite\n\nThe final step is to re-write the original program, but use the function instead. The algorithm becomes\n\n```\ninput credit card number into variable card\ncall the CardIssuer function with card as input, issuer as output\nprint issuer\n```\n\n### Now You Try It!\n", "_____no_output_____" ] ], [ [ "# TODO Re-write the program here, calling our function.\nccard = input(\"Please enter your credit card number: \")\ncard_issuer = CardIssuer(ccard)\nprint(card_issuer)", "Please enter your credit card number: 1234567\nInvalid\n" ] ], [ [ "## Functions are abstractions. Abstractions are good.\n\n\nStep on the accelerator and the car goes. How does it work? Who cares, it's an abstraction! Functions are the same way. Don't believe me? Consider the Luhn Check Algorithm: https://en.wikipedia.org/wiki/Luhn_algorithm \n\nThis nifty little algorithm is used to verify that a sequence of digits is possibly a credit card number (as opposed to just a sequence of numbers). It uses a verification approach called a **checksum**, applying a formula to the digits to figure out their validity. \n\nHere's the function which, given a card number, will let you know if it passes the Luhn check:\n", "_____no_output_____" ] ], [ [ "# Todo: execute this code\n\ndef checkLuhn(card):\n ''' This Luhn algorithm was adopted from the pseudocode here: https://en.wikipedia.org/wiki/Luhn_algorithm'''\n total = 0\n length = len(card)\n parity = length % 2\n for i in range(length):\n digit = int(card[i])\n if i%2 == parity:\n digit = digit * 2\n if digit > 9:\n digit = digit - 9\n total = total + digit\n return total % 10 == 0\n", "_____no_output_____" ] ], [ [ "### Is that a credit card number or the ramblings of a madman?\n\nIn order to test the `checkLuhn()` function you need some credit card numbers. (Don't look at me... you ain't gettin' mine!!!!) Not to worry, the internet has you covered. The website http://www.getcreditcardnumbers.com/ is not some mysterious site on the dark web. It's a site for generating \"test\" credit card numbers. You can't buy anything with these numbers, but they will pass the Luhn test.\n\nGrab a couple of numbers and test the Luhn function as we did with the `CardIssuer()` function. Write at least two tests like these:\n\n```\nWHEN card='5443713204330437' We EXPECT checkLuhn(card) to return True\nWHEN card='5111111111111111' We EXPECT checkLuhn(card) to return False \n```\n", "_____no_output_____" ] ], [ [ "#TODO Write your two tests here\nprint(\"When card='4716511919678261' We EXPECT checkLuhn(card) to return True. ACTUAL: %s\" % checkLuhn('4716511919678261'))\nprint(\"When card='4222222222222222' We EXPECT checkLuhn(card) to return False. ACTUAL: %s\" % checkLuhn('4222222222222222'))", "_____no_output_____" ] ], [ [ "## Putting it all together\n\nFinally, use our two functions to write the following program. It will ask for a series of credit card numbers until you enter 'quit'. For each number, it will output whether the card is invalid or, if valid, name the issuer.\n\n\nHere's the Algorithm:\n```\nloop\n    input a credit card number\n    if card = 'quit' stop loop\n    if card passes luhn check\n        get issuer\n        print issuer\n    else\n        print invalid card\n```\n\n### Now You Try It", "_____no_output_____" ] ], [ [ "## TODO Write code here\nwhile True:\n ccard = input(\"Enter your credit card number or'quit'to end the program. \")\n if ccard == 'quit':\n break\n try:\n if checkLuhn(ccard) == True:\n issuer = CardIssuer(ccard)\n print(issuer)\n else:\n print(\"Invalid card. 
Try again.\")\n \n except:\n print(\"Please enter a number. Try again.\")", "Enter your credit card number or'quit'to end the program. qwqwewq\nPlease enter a number. Try again.\nEnter your credit card number or'quit'to end the program. 601139199679300\nInvalid card. Try again.\nEnter your credit card number or'quit'to end the program. 4916741594665261\nVisa\nEnter your credit card number or'quit'to end the program. quit\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb13e5eb44962c30a78fe8525d1c9b93f244c19c
173,506
ipynb
Jupyter Notebook
AAAI/Learnability/CIN/older/ds1/synthetic_type0_MLP2_m_250_lr0001.ipynb
lnpandey/DL_explore_synth_data
0a5d8b417091897f4c7f358377d5198a155f3f24
[ "MIT" ]
2
2019-08-24T07:20:35.000Z
2020-03-27T08:16:59.000Z
AAAI/Learnability/CIN/older/ds1/synthetic_type0_MLP2_m_250_lr0001.ipynb
lnpandey/DL_explore_synth_data
0a5d8b417091897f4c7f358377d5198a155f3f24
[ "MIT" ]
null
null
null
AAAI/Learnability/CIN/older/ds1/synthetic_type0_MLP2_m_250_lr0001.ipynb
lnpandey/DL_explore_synth_data
0a5d8b417091897f4c7f358377d5198a155f3f24
[ "MIT" ]
3
2019-06-21T09:34:32.000Z
2019-09-19T10:43:07.000Z
64.07164
15,822
0.624566
[ [ [ "import numpy as np\nimport pandas as pd\nfrom matplotlib import pyplot as plt\nfrom tqdm import tqdm\n%matplotlib inline\nfrom torch.utils.data import Dataset, DataLoader\nimport torch\nimport torchvision\n\nimport torch.nn as nn\nimport torch.optim as optim\nfrom torch.nn import functional as F\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\nprint(device)", "cuda\n" ], [ "torch.backends.cudnn.deterministic = True\ntorch.backends.cudnn.benchmark= False", "_____no_output_____" ], [ "m = 250", "_____no_output_____" ] ], [ [ "# Generate dataset", "_____no_output_____" ] ], [ [ "np.random.seed(12)\ny = np.random.randint(0,3,500)\nidx= []\nfor i in range(3):\n print(i,sum(y==i))\n idx.append(y==i)", "0 174\n1 163\n2 163\n" ], [ "x = np.zeros((500,))", "_____no_output_____" ], [ "np.random.seed(12)\nx[idx[0]] = np.random.uniform(low =-1,high =0,size= sum(idx[0]))\nx[idx[1]] = np.random.uniform(low =0,high =1,size= sum(idx[1]))\nx[idx[2]] = np.random.uniform(low =2,high =3,size= sum(idx[2]))", "_____no_output_____" ], [ "x[idx[0]][0], x[idx[2]][5] ", "_____no_output_____" ], [ "print(x.shape,y.shape)", "(500,) (500,)\n" ], [ "idx= []\nfor i in range(3):\n idx.append(y==i)", "_____no_output_____" ], [ "for i in range(3):\n y= np.zeros(x[idx[i]].shape[0])\n plt.scatter(x[idx[i]],y,label=\"class_\"+str(i))\nplt.legend()", "_____no_output_____" ], [ "bg_idx = [ np.where(idx[2] == True)[0]]\n\nbg_idx = np.concatenate(bg_idx, axis = 0)\nbg_idx.shape", "_____no_output_____" ], [ "np.unique(bg_idx).shape", "_____no_output_____" ], [ "x = x - np.mean(x[bg_idx], axis = 0, keepdims = True)\n", "_____no_output_____" ], [ "np.mean(x[bg_idx], axis = 0, keepdims = True), np.mean(x, axis = 0, keepdims = True)", "_____no_output_____" ], [ "x = x/np.std(x[bg_idx], axis = 0, keepdims = True)", "_____no_output_____" ], [ "np.std(x[bg_idx], axis = 0, keepdims = True), np.std(x, axis = 0, keepdims = True)", "_____no_output_____" ], [ "for i in range(3):\n y= np.zeros(x[idx[i]].shape[0])\n plt.scatter(x[idx[i]],y,label=\"class_\"+str(i))\nplt.legend()", "_____no_output_____" ], [ "foreground_classes = {'class_0','class_1' }\n\nbackground_classes = {'class_2'}", "_____no_output_____" ], [ "fg_class = np.random.randint(0,2)\nfg_idx = np.random.randint(0,m)\n\na = []\nfor i in range(m):\n if i == fg_idx:\n b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)\n a.append(x[b])\n print(\"foreground \"+str(fg_class)+\" present at \" + str(fg_idx))\n else:\n bg_class = np.random.randint(2,3)\n b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)\n a.append(x[b])\n print(\"background \"+str(bg_class)+\" present at \" + str(i))\na = np.concatenate(a,axis=0)\nprint(a.shape)\n\nprint(fg_class , fg_idx)", "background 2 present at 0\nbackground 2 present at 1\nbackground 2 present at 2\nbackground 2 present at 3\nbackground 2 present at 4\nbackground 2 present at 5\nbackground 2 present at 6\nbackground 2 present at 7\nbackground 2 present at 8\nbackground 2 present at 9\nbackground 2 present at 10\nbackground 2 present at 11\nbackground 2 present at 12\nbackground 2 present at 13\nbackground 2 present at 14\nbackground 2 present at 15\nbackground 2 present at 16\nbackground 2 present at 17\nbackground 2 present at 18\nbackground 2 present at 19\nbackground 2 present at 20\nbackground 2 present at 21\nbackground 2 present at 22\nbackground 2 present at 23\nbackground 2 present at 24\nbackground 2 present at 25\nbackground 2 present at 26\nbackground 2 present at 27\nbackground 2 
present at 28\nbackground 2 present at 29\nbackground 2 present at 30\nbackground 2 present at 31\nbackground 2 present at 32\nbackground 2 present at 33\nbackground 2 present at 34\nbackground 2 present at 35\nbackground 2 present at 36\nbackground 2 present at 37\nbackground 2 present at 38\nbackground 2 present at 39\nbackground 2 present at 40\nbackground 2 present at 41\nbackground 2 present at 42\nbackground 2 present at 43\nbackground 2 present at 44\nbackground 2 present at 45\nbackground 2 present at 46\nbackground 2 present at 47\nbackground 2 present at 48\nbackground 2 present at 49\nbackground 2 present at 50\nbackground 2 present at 51\nbackground 2 present at 52\nbackground 2 present at 53\nbackground 2 present at 54\nbackground 2 present at 55\nbackground 2 present at 56\nbackground 2 present at 57\nbackground 2 present at 58\nbackground 2 present at 59\nbackground 2 present at 60\nbackground 2 present at 61\nbackground 2 present at 62\nbackground 2 present at 63\nbackground 2 present at 64\nbackground 2 present at 65\nbackground 2 present at 66\nbackground 2 present at 67\nbackground 2 present at 68\nbackground 2 present at 69\nbackground 2 present at 70\nbackground 2 present at 71\nbackground 2 present at 72\nbackground 2 present at 73\nbackground 2 present at 74\nbackground 2 present at 75\nbackground 2 present at 76\nbackground 2 present at 77\nbackground 2 present at 78\nbackground 2 present at 79\nbackground 2 present at 80\nbackground 2 present at 81\nbackground 2 present at 82\nbackground 2 present at 83\nbackground 2 present at 84\nbackground 2 present at 85\nbackground 2 present at 86\nbackground 2 present at 87\nbackground 2 present at 88\nbackground 2 present at 89\nbackground 2 present at 90\nbackground 2 present at 91\nbackground 2 present at 92\nbackground 2 present at 93\nbackground 2 present at 94\nbackground 2 present at 95\nbackground 2 present at 96\nbackground 2 present at 97\nbackground 2 present at 98\nbackground 2 present at 99\nbackground 2 present at 100\nbackground 2 present at 101\nbackground 2 present at 102\nbackground 2 present at 103\nbackground 2 present at 104\nbackground 2 present at 105\nbackground 2 present at 106\nbackground 2 present at 107\nbackground 2 present at 108\nbackground 2 present at 109\nbackground 2 present at 110\nbackground 2 present at 111\nbackground 2 present at 112\nbackground 2 present at 113\nbackground 2 present at 114\nbackground 2 present at 115\nbackground 2 present at 116\nbackground 2 present at 117\nbackground 2 present at 118\nbackground 2 present at 119\nbackground 2 present at 120\nbackground 2 present at 121\nbackground 2 present at 122\nbackground 2 present at 123\nbackground 2 present at 124\nbackground 2 present at 125\nbackground 2 present at 126\nbackground 2 present at 127\nbackground 2 present at 128\nbackground 2 present at 129\nbackground 2 present at 130\nbackground 2 present at 131\nbackground 2 present at 132\nbackground 2 present at 133\nbackground 2 present at 134\nbackground 2 present at 135\nbackground 2 present at 136\nbackground 2 present at 137\nbackground 2 present at 138\nbackground 2 present at 139\nbackground 2 present at 140\nbackground 2 present at 141\nbackground 2 present at 142\nbackground 2 present at 143\nbackground 2 present at 144\nbackground 2 present at 145\nbackground 2 present at 146\nbackground 2 present at 147\nbackground 2 present at 148\nbackground 2 present at 149\nbackground 2 present at 150\nbackground 2 present at 151\nbackground 2 present at 152\nbackground 2 
present at 153\nbackground 2 present at 154\nbackground 2 present at 155\nbackground 2 present at 156\nbackground 2 present at 157\nbackground 2 present at 158\nbackground 2 present at 159\nbackground 2 present at 160\nbackground 2 present at 161\nbackground 2 present at 162\nbackground 2 present at 163\nbackground 2 present at 164\nbackground 2 present at 165\nbackground 2 present at 166\nbackground 2 present at 167\nbackground 2 present at 168\nbackground 2 present at 169\nbackground 2 present at 170\nbackground 2 present at 171\nbackground 2 present at 172\nbackground 2 present at 173\nbackground 2 present at 174\nbackground 2 present at 175\nbackground 2 present at 176\nbackground 2 present at 177\nbackground 2 present at 178\nbackground 2 present at 179\nbackground 2 present at 180\nbackground 2 present at 181\nbackground 2 present at 182\nbackground 2 present at 183\nbackground 2 present at 184\nbackground 2 present at 185\nbackground 2 present at 186\nbackground 2 present at 187\nbackground 2 present at 188\nbackground 2 present at 189\nbackground 2 present at 190\nbackground 2 present at 191\nbackground 2 present at 192\nbackground 2 present at 193\nbackground 2 present at 194\nbackground 2 present at 195\nbackground 2 present at 196\nbackground 2 present at 197\nbackground 2 present at 198\nbackground 2 present at 199\nforeground 1 present at 200\nbackground 2 present at 201\nbackground 2 present at 202\nbackground 2 present at 203\nbackground 2 present at 204\nbackground 2 present at 205\nbackground 2 present at 206\nbackground 2 present at 207\nbackground 2 present at 208\nbackground 2 present at 209\nbackground 2 present at 210\nbackground 2 present at 211\nbackground 2 present at 212\nbackground 2 present at 213\nbackground 2 present at 214\nbackground 2 present at 215\nbackground 2 present at 216\nbackground 2 present at 217\nbackground 2 present at 218\nbackground 2 present at 219\nbackground 2 present at 220\nbackground 2 present at 221\nbackground 2 present at 222\nbackground 2 present at 223\nbackground 2 present at 224\nbackground 2 present at 225\nbackground 2 present at 226\nbackground 2 present at 227\nbackground 2 present at 228\nbackground 2 present at 229\nbackground 2 present at 230\nbackground 2 present at 231\nbackground 2 present at 232\nbackground 2 present at 233\nbackground 2 present at 234\nbackground 2 present at 235\nbackground 2 present at 236\nbackground 2 present at 237\nbackground 2 present at 238\nbackground 2 present at 239\nbackground 2 present at 240\nbackground 2 present at 241\nbackground 2 present at 242\nbackground 2 present at 243\nbackground 2 present at 244\nbackground 2 present at 245\nbackground 2 present at 246\nbackground 2 present at 247\nbackground 2 present at 248\nbackground 2 present at 249\n(250,)\n1 200\n" ], [ "a.shape", "_____no_output_____" ], [ "np.reshape(a,(m,1))", "_____no_output_____" ], [ "desired_num = 2000\nmosaic_list_of_images =[]\nmosaic_label = []\nfore_idx=[]\nfor j in range(desired_num):\n np.random.seed(j)\n fg_class = np.random.randint(0,2)\n fg_idx = np.random.randint(0,m)\n a = []\n for i in range(m):\n if i == fg_idx:\n b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)\n a.append(x[b])\n# print(\"foreground \"+str(fg_class)+\" present at \" + str(fg_idx))\n else:\n bg_class = np.random.randint(2,3)\n b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)\n a.append(x[b])\n# print(\"background \"+str(bg_class)+\" present at \" + str(i))\n a = np.concatenate(a,axis=0)\n 
mosaic_list_of_images.append(np.reshape(a,(m,1)))\n mosaic_label.append(fg_class)\n fore_idx.append(fg_idx)", "_____no_output_____" ], [ "mosaic_list_of_images = np.concatenate(mosaic_list_of_images,axis=1).T\nmosaic_list_of_images.shape", "_____no_output_____" ], [ "mosaic_list_of_images.shape, mosaic_list_of_images[0]", "_____no_output_____" ], [ "for j in range(m):\n print(mosaic_list_of_images[0][j])\n ", "1.4358009314866034\n0.13695372724012772\n-0.4252275951078302\n-0.9185033095974854\n-0.12909750858981067\n-1.7315442430857817\n0.7558231343879587\n0.6640638025673503\n-0.15416630449113\n0.832545401567119\n1.3192211298342615\n-0.411611268813296\n0.7558231343879587\n-0.15416630449113\n-0.9048783650832936\n0.2655196567190298\n1.4173461573148838\n-1.3506738707130648\n-0.9185033095974854\n-0.8487774501542434\n0.41497163990658553\n0.22843592432187756\n0.4120483059558463\n1.039571830882509\n-0.42473802055748705\n-1.7449385946476759\n-1.7449385946476759\n-1.4306962812399096\n-0.2991878808509575\n-0.9185033095974854\n-1.4652385079352608\n-0.2991878808509575\n0.7183782579402468\n-1.3759761997330582\n0.6533943080642937\n0.345865811484694\n1.6201484942286413\n0.34820546484848275\n0.34820546484848275\n-0.24077840436878184\n0.7817004669153338\n-0.529177237043516\n1.397998025695915\n0.22843592432187756\n-1.2078792839203505\n1.0496038541198456\n-1.1569264275370823\n-10.780503085215361\n-1.5499378105140722\n-0.992671849313759\n1.0027472329330642\n-1.573584632766583\n1.5012761831023353\n-0.5157721805614179\n-1.2810438533218755\n-0.3647486544408019\n0.4120483059558463\n-0.038090346259590985\n0.34820546484848275\n-1.4306962812399096\n1.039571830882509\n-0.24077840436878184\n0.832545401567119\n0.9760668439559996\n-0.8978926538361668\n1.1446292841441261\n0.38745842607626113\n-0.70196914745493\n-1.4652385079352608\n-0.9959213015328963\n-0.7298799970913634\n1.4470291398063106\n-1.660679504426819\n-0.8487774501542434\n-0.03466237152640819\n-1.5317549497195035\n1.4358009314866034\n1.3923668338722077\n0.9719187406356335\n-0.1118367162703567\n-1.0593112442151893\n-0.8024104318571436\n1.0011593284257532\n-0.363319004172012\n1.3053355316930382\n-1.7315442430857817\n0.06747312638794169\n0.9719187406356335\n-1.6277634830710428\n-0.9959213015328963\n0.907524130895722\n-1.1569264275370823\n-0.36464896647758427\n-1.272662683896807\n-1.4652385079352608\n-0.363319004172012\n-0.5854355038893796\n0.6533943080642937\n-1.0496938597364855\n-1.4652385079352608\n-0.03466237152640819\n0.9324306003807514\n0.9760668439559996\n0.907524130895722\n-0.3854535468599667\n-0.9452496411517143\n0.4120483059558463\n-0.8487774501542434\n1.4747135972927596\n-1.0808519586324388\n0.4499833637821108\n0.3230925845697044\n1.3192211298342615\n-1.7315442430857817\n1.0496947054642105\n1.2774408980197915\n-1.0496938597364855\n1.4074927036417388\n-0.3647486544408019\n-0.8395236381868343\n0.6600647973906256\n0.9930435774566333\n-0.2991878808509575\n0.34820546484848275\n-0.529177237043516\n-0.3972937951880947\n-1.1569264275370823\n0.41497163990658553\n1.4173461573148838\n1.404131276723479\n-0.48596822928314454\n0.4678634415173758\n-0.8395236381868343\n-1.391406079093982\n1.0011593284257532\n0.588584223030855\n1.5389025682085709\n-0.36464896647758427\n-0.758940247008594\n-1.3506738707130648\n-0.3972937951880947\n-0.8024104318571436\n-1.3506738707130648\n-1.620717323627289\n0.13695372724012772\n1.348357839804765\n0.4499833637821108\n1.371916253051332\n0.9858997788405828\n-1.0818298278085032\n1.2774408980197915\n0.9760668439559996\n-0.1843434778477586\n-1.57
21297979884876\n1.379398681396218\n0.8312237532196014\n-0.521621676475422\n-1.272662683896807\n1.2861576669133736\n0.2655196567190298\n0.9760668439559996\n-0.5290035660358465\n1.4358009314866034\n-1.1935931998008302\n-1.47371278630514\n1.4074927036417388\n-1.4426403386796098\n0.9760668439559996\n0.6640638025673503\n0.7183782579402468\n1.5389025682085709\n1.002285072401912\n0.8312237532196014\n0.22843592432187756\n0.3230925845697044\n1.2130093073898924\n-0.758940247008594\n-0.3647486544408019\n1.5240835625729838\n-1.6277634830710428\n0.286994126800434\n-0.5290035660358465\n-1.022987706680747\n0.4678634415173758\n-0.48596822928314454\n0.13695372724012772\n-1.391406079093982\n1.5012761831023353\n-1.0496938597364855\n-0.2991878808509575\n-0.3647486544408019\n1.4074927036417388\n-0.5854355038893796\n0.4678634415173758\n-0.5951166186019476\n1.5240835625729838\n-0.8395236381868343\n1.5517123555259094\n-1.3759761997330582\n-0.24077840436878184\n0.6600647973906256\n-0.8487774501542434\n-0.42473802055748705\n0.13695372724012772\n1.5012761831023353\n-0.411611268813296\n0.9426970310457449\n1.113079017699489\n0.3230925845697044\n-1.3759761997330582\n-0.9959213015328963\n-1.4053577438062967\n0.9416404843151556\n-1.391406079093982\n0.7183782579402468\n-0.9185033095974854\n-0.5854355038893796\n1.5517123555259094\n1.348357839804765\n1.3923668338722077\n1.3980609289745154\n-1.022987706680747\n1.6201484942286413\n0.06747312638794169\n-0.15416630449113\n-0.20140902027738322\n0.7817004669153338\n-1.7315442430857817\n-0.363319004172012\n1.6201484942286413\n0.7750950204862487\n0.9930435774566333\n-0.2751075925449926\n0.286994126800434\n-0.36464896647758427\n1.5012761831023353\n-1.5499378105140722\n-0.038090346259590985\n0.022518769497317636\n-1.7315442430857817\n0.2655196567190298\n0.13695372724012772\n-0.4252275951078302\n1.5012761831023353\n0.6533943080642937\n1.404131276723479\n-0.42473802055748705\n-0.20140902027738322\n-1.0567665016598924\n1.397998025695915\n" ], [ "mosaic_list_of_images[0:2], mosaic_list_of_images[1000:1002]", "_____no_output_____" ], [ "np.zeros(5)", "_____no_output_____" ], [ "def create_avg_image_from_mosaic_dataset(mosaic_dataset,labels,foreground_index,dataset_number, m):\n \"\"\"\n mosaic_dataset : mosaic_dataset contains 9 images 32 x 32 each as 1 data point\n labels : mosaic_dataset labels\n foreground_index : contains list of indexes where foreground image is present so that using this we can take weighted average\n dataset_number : will help us to tell what ratio of foreground image to be taken. 
for eg: if it is \"j\" then fg_image_ratio = j/9 , bg_image_ratio = (9-j)/8*9\n \"\"\"\n avg_image_dataset = []\n cnt = 0\n counter = np.zeros(m)\n for i in range(len(mosaic_dataset)):\n img = torch.zeros([1], dtype=torch.float64)\n np.random.seed(int(dataset_number*10000 + i))\n give_pref = foreground_index[i] #np.random.randint(0,9)\n # print(\"outside\", give_pref,foreground_index[i])\n for j in range(m):\n if j == give_pref:\n img = img + mosaic_dataset[i][j]*dataset_number/m #2 is data dim\n else :\n img = img + mosaic_dataset[i][j]*(m-dataset_number)/((m-1)*m)\n\n if give_pref == foreground_index[i] :\n # print(\"equal are\", give_pref,foreground_index[i])\n cnt += 1\n counter[give_pref] += 1\n else :\n counter[give_pref] += 1\n\n avg_image_dataset.append(img)\n\n print(\"number of correct averaging happened for dataset \"+str(dataset_number)+\" is \"+str(cnt)) \n print(\"the averaging are done as \", counter) \n return avg_image_dataset , labels , foreground_index\n \n ", "_____no_output_____" ], [ "avg_image_dataset_1 , labels_1, fg_index_1 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 1, m)\n\ntest_dataset , labels , fg_index = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[1000:2000], mosaic_label[1000:2000], fore_idx[1000:2000] , m, m)", "number of correct averaging happened for dataset 1 is 1000\nthe averaging are done as [ 4. 4. 4. 4. 3. 4. 7. 4. 2. 3. 3. 3. 7. 0. 2. 3. 9. 5.\n 7. 5. 1. 2. 6. 6. 4. 4. 5. 3. 6. 3. 6. 5. 4. 7. 4. 4.\n 1. 2. 2. 6. 2. 4. 3. 4. 1. 2. 4. 5. 2. 5. 3. 4. 2. 5.\n 3. 1. 5. 6. 8. 1. 5. 4. 7. 1. 4. 4. 6. 1. 3. 3. 7. 5.\n 6. 5. 3. 6. 3. 7. 2. 2. 4. 2. 6. 6. 4. 1. 5. 2. 3. 5.\n 3. 3. 4. 3. 4. 5. 3. 6. 3. 2. 1. 8. 3. 4. 3. 4. 8. 5.\n 4. 5. 5. 3. 4. 4. 2. 7. 4. 5. 5. 2. 3. 3. 2. 2. 6. 5.\n 5. 3. 5. 4. 4. 5. 3. 4. 4. 2. 4. 4. 2. 5. 4. 2. 4. 4.\n 4. 3. 4. 2. 5. 2. 1. 5. 4. 4. 8. 3. 6. 4. 4. 9. 3. 2.\n 2. 6. 1. 3. 5. 1. 4. 4. 2. 3. 4. 4. 7. 1. 9. 3. 5. 3.\n 7. 2. 6. 5. 4. 4. 5. 5. 6. 10. 4. 2. 2. 5. 5. 3. 7. 5.\n 6. 5. 1. 3. 3. 6. 4. 1. 5. 4. 7. 3. 4. 3. 3. 5. 6. 6.\n 4. 2. 9. 3. 7. 3. 5. 5. 5. 2. 2. 2. 2. 1. 6. 6. 5. 4.\n 4. 4. 6. 5. 3. 3. 5. 5. 2. 1. 2. 4. 3. 5. 1. 5.]\nnumber of correct averaging happened for dataset 250 is 1000\nthe averaging are done as [ 5. 6. 3. 4. 4. 3. 3. 8. 3. 4. 2. 1. 4. 5. 5. 4. 5. 5.\n 2. 9. 1. 8. 6. 4. 6. 1. 3. 6. 5. 2. 7. 0. 4. 2. 6. 5.\n 3. 4. 3. 3. 7. 4. 0. 5. 7. 2. 3. 5. 4. 1. 2. 7. 3. 5.\n 4. 7. 6. 3. 3. 2. 1. 2. 5. 8. 4. 5. 10. 0. 1. 5. 5. 2.\n 3. 4. 3. 3. 3. 3. 6. 5. 8. 2. 1. 3. 5. 5. 3. 6. 2. 1.\n 4. 2. 4. 5. 6. 2. 3. 5. 4. 11. 3. 5. 8. 9. 5. 6. 6. 6.\n 2. 0. 4. 5. 4. 6. 6. 2. 1. 4. 1. 4. 4. 7. 2. 3. 6. 4.\n 3. 2. 7. 2. 4. 3. 2. 0. 5. 1. 3. 4. 2. 7. 3. 0. 3. 9.\n 4. 8. 4. 5. 4. 3. 4. 4. 5. 4. 5. 2. 8. 3. 7. 1. 5. 5.\n 7. 5. 2. 3. 2. 8. 3. 4. 5. 3. 2. 2. 3. 2. 1. 6. 5. 4.\n 3. 3. 4. 5. 2. 8. 6. 4. 6. 5. 2. 4. 7. 8. 3. 2. 5. 1.\n 5. 5. 7. 3. 3. 2. 3. 3. 4. 3. 4. 5. 1. 3. 4. 1. 4. 4.\n 2. 4. 3. 4. 1. 2. 2. 6. 2. 4. 6. 6. 3. 1. 2. 4. 4. 4.\n 3. 6. 3. 7. 5. 9. 4. 2. 4. 4. 4. 6. 7. 3. 2. 
4.]\n" ], [ "avg_image_dataset_1 = torch.stack(avg_image_dataset_1, axis = 0)\n\n# mean = torch.mean(avg_image_dataset_1, keepdims= True, axis = 0)\n# std = torch.std(avg_image_dataset_1, keepdims= True, axis = 0)\n\n# avg_image_dataset_1 = (avg_image_dataset_1 - mean) / std \n\n# print(torch.mean(avg_image_dataset_1, keepdims= True, axis = 0))\n# print(torch.std(avg_image_dataset_1, keepdims= True, axis = 0))\n# print(\"==\"*40)\n\n\ntest_dataset = torch.stack(test_dataset, axis = 0)\n# mean = torch.mean(test_dataset, keepdims= True, axis = 0)\n# std = torch.std(test_dataset, keepdims= True, axis = 0)\n# test_dataset = (test_dataset - mean) / std\n\n# print(torch.mean(test_dataset, keepdims= True, axis = 0))\n# print(torch.std(test_dataset, keepdims= True, axis = 0))\n# print(\"==\"*40)\n", "_____no_output_____" ], [ "x1 = (avg_image_dataset_1).numpy()\ny1 = np.array(labels_1)\n\n# idx1 = []\n# for i in range(3):\n# idx1.append(y1 == i)\n\n# for i in range(3):\n# z = np.zeros(x1[idx1[i]].shape[0])\n# plt.scatter(x1[idx1[i]],z,label=\"class_\"+str(i))\n# plt.legend()\n\nplt.scatter(x1[y1==0], y1[y1==0]*0, label='class 0')\nplt.scatter(x1[y1==1], y1[y1==1]*0, label='class 1')\n# plt.scatter(x1[y1==2], y1[y1==2]*0, label='class 2')\nplt.legend()\nplt.title(\"dataset1 CIN with alpha = 1/\"+str(m))", "_____no_output_____" ], [ "x1 = (avg_image_dataset_1).numpy()\ny1 = np.array(labels_1)\n\nidx_1 = y1==0\nidx_2 = np.where(idx_1==True)[0]\nidx_3 = np.where(idx_1==False)[0]\ncolor = ['#1F77B4','orange', 'brown']\n\ntrue_point = len(idx_2)\nplt.scatter(x1[idx_2[:25]], y1[idx_2[:25]]*0, label='class 0', c= color[0], marker='o')\nplt.scatter(x1[idx_3[:25]], y1[idx_3[:25]]*0, label='class 1', c= color[1], marker='o')\n\nplt.scatter(x1[idx_3[50:75]], y1[idx_3[50:75]]*0, c= color[1], marker='o')\nplt.scatter(x1[idx_2[50:75]], y1[idx_2[50:75]]*0, c= color[0], marker='o')\n\n\nplt.legend()\nplt.xticks( fontsize=14, fontweight = 'bold')\nplt.yticks( fontsize=14, fontweight = 'bold')\nplt.xlabel(\"X\", fontsize=14, fontweight = 'bold')\n# plt.savefig(fp_cin+\"ds1_alpha_04.png\", bbox_inches=\"tight\")\n# plt.savefig(fp_cin+\"ds1_alpha_04.pdf\", bbox_inches=\"tight\")", "_____no_output_____" ], [ "avg_image_dataset_1[0:10]", "_____no_output_____" ], [ "x1 = (test_dataset).numpy()/m\ny1 = np.array(labels)\n\n# idx1 = []\n# for i in range(3):\n# idx1.append(y1 == i)\n\n# for i in range(3):\n# z = np.zeros(x1[idx1[i]].shape[0])\n# plt.scatter(x1[idx1[i]],z,label=\"class_\"+str(i))\n# plt.legend()\n\nplt.scatter(x1[y1==0], y1[y1==0]*0, label='class 0')\nplt.scatter(x1[y1==1], y1[y1==1]*0, label='class 1')\n# plt.scatter(x1[y1==2], y1[y1==2]*0, label='class 2')\nplt.legend()\nplt.title(\"test dataset1 \")", "_____no_output_____" ], [ "test_dataset.numpy()[0:10]/m", "_____no_output_____" ], [ "test_dataset = test_dataset/m", "_____no_output_____" ], [ "test_dataset.numpy()[0:10]", "_____no_output_____" ], [ "class MosaicDataset(Dataset):\n \"\"\"MosaicDataset dataset.\"\"\"\n\n def __init__(self, mosaic_list_of_images, mosaic_label):\n \"\"\"\n Args:\n csv_file (string): Path to the csv file with annotations.\n root_dir (string): Directory with all the images.\n transform (callable, optional): Optional transform to be applied\n on a sample.\n \"\"\"\n self.mosaic = mosaic_list_of_images\n self.label = mosaic_label\n #self.fore_idx = fore_idx\n \n def __len__(self):\n return len(self.label)\n\n def __getitem__(self, idx):\n return self.mosaic[idx] , self.label[idx] #, self.fore_idx[idx]\n\n", "_____no_output_____" 
], [ "avg_image_dataset_1[0].shape, avg_image_dataset_1[0]", "_____no_output_____" ], [ "batch = 200\n\ntraindata_1 = MosaicDataset(avg_image_dataset_1, labels_1 )\ntrainloader_1 = DataLoader( traindata_1 , batch_size= batch ,shuffle=True)\n", "_____no_output_____" ], [ "testdata_1 = MosaicDataset(test_dataset, labels )\ntestloader_1 = DataLoader( testdata_1 , batch_size= batch ,shuffle=False)", "_____no_output_____" ], [ "class Whatnet(nn.Module):\n def __init__(self):\n super(Whatnet,self).__init__()\n self.linear1 = nn.Linear(1,50)\n self.linear2 = nn.Linear(50,10)\n self.linear3 = nn.Linear(10,2)\n\n torch.nn.init.xavier_normal_(self.linear1.weight)\n torch.nn.init.zeros_(self.linear1.bias)\n torch.nn.init.xavier_normal_(self.linear2.weight)\n torch.nn.init.zeros_(self.linear2.bias)\n torch.nn.init.xavier_normal_(self.linear3.weight)\n torch.nn.init.zeros_(self.linear3.bias)\n\n def forward(self,x):\n x = F.relu(self.linear1(x))\n x = F.relu(self.linear2(x))\n x = (self.linear3(x))\n\n return x", "_____no_output_____" ], [ "def calculate_loss(dataloader,model,criter):\n model.eval()\n r_loss = 0\n with torch.no_grad():\n for i, data in enumerate(dataloader, 0):\n inputs, labels = data\n inputs, labels = inputs.to(\"cuda\"),labels.to(\"cuda\")\n outputs = model(inputs)\n loss = criter(outputs, labels)\n r_loss += loss.item()\n return r_loss/i", "_____no_output_____" ], [ "def test_all(number, testloader,net):\n correct = 0\n total = 0\n out = []\n pred = []\n with torch.no_grad():\n for data in testloader:\n images, labels = data\n images, labels = images.to(\"cuda\"),labels.to(\"cuda\")\n out.append(labels.cpu().numpy())\n outputs= net(images)\n _, predicted = torch.max(outputs.data, 1)\n pred.append(predicted.cpu().numpy())\n total += labels.size(0)\n correct += (predicted == labels).sum().item()\n \n pred = np.concatenate(pred, axis = 0)\n out = np.concatenate(out, axis = 0)\n print(\"unique out: \", np.unique(out), \"unique pred: \", np.unique(pred) )\n print(\"correct: \", correct, \"total \", total)\n print('Accuracy of the network on the 1000 test dataset %d: %.2f %%' % (number , 100 * correct / total))", "_____no_output_____" ], [ "def train_all(trainloader, ds_number, testloader_list):\n \n print(\"--\"*40)\n print(\"training on data set \", ds_number)\n \n torch.manual_seed(12)\n net = Whatnet().double()\n net = net.to(\"cuda\")\n \n criterion_net = nn.CrossEntropyLoss()\n optimizer_net = optim.Adam(net.parameters(), lr=0.0001 ) #, momentum=0.9)\n \n acti = []\n loss_curi = []\n epochs = 1500\n running_loss = calculate_loss(trainloader,net,criterion_net)\n loss_curi.append(running_loss)\n print('epoch: [%d ] loss: %.3f' %(0,running_loss)) \n for epoch in range(epochs): # loop over the dataset multiple times\n ep_lossi = []\n\n running_loss = 0.0\n net.train()\n for i, data in enumerate(trainloader, 0):\n # get the inputs\n inputs, labels = data\n inputs, labels = inputs.to(\"cuda\"),labels.to(\"cuda\")\n\n # zero the parameter gradients\n optimizer_net.zero_grad()\n\n # forward + backward + optimize\n outputs = net(inputs)\n loss = criterion_net(outputs, labels)\n # print statistics\n running_loss += loss.item()\n loss.backward()\n optimizer_net.step()\n\n running_loss = calculate_loss(trainloader,net,criterion_net)\n if(epoch%200 == 0):\n print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss)) \n loss_curi.append(running_loss) #loss per epoch\n if running_loss<=0.05:\n print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))\n break\n\n print('Finished Training')\n \n 
correct = 0\n total = 0\n with torch.no_grad():\n for data in trainloader:\n images, labels = data\n images, labels = images.to(\"cuda\"), labels.to(\"cuda\")\n outputs = net(images)\n _, predicted = torch.max(outputs.data, 1)\n total += labels.size(0)\n correct += (predicted == labels).sum().item()\n\n print('Accuracy of the network on the 1000 train images: %.2f %%' % ( 100 * correct / total))\n \n for i, j in enumerate(testloader_list):\n test_all(i+1, j,net)\n \n print(\"--\"*40)\n \n return loss_curi, net\n ", "_____no_output_____" ], [ "train_loss_all=[]\n\ntestloader_list= [ testloader_1 ]", "_____no_output_____" ], [ "loss, net = train_all(trainloader_1, 1, testloader_list)\ntrain_loss_all.append(loss)", "--------------------------------------------------------------------------------\ntraining on data set 1\nepoch: [0 ] loss: 0.867\nepoch: [1] loss: 0.867\nepoch: [201] loss: 0.860\nepoch: [401] loss: 0.859\nepoch: [601] loss: 0.858\nepoch: [801] loss: 0.858\nepoch: [1001] loss: 0.858\nepoch: [1201] loss: 0.858\nepoch: [1401] loss: 0.858\nFinished Training\nAccuracy of the network on the 1000 train images: 53.20 %\nunique out: [0 1] unique pred: [0 1]\ncorrect: 866 total 1000\nAccuracy of the network on the 1000 test dataset 1: 86.60 %\n--------------------------------------------------------------------------------\n" ], [ "net.linear1.weight, net.linear1.bias", "_____no_output_____" ], [ "%matplotlib inline", "_____no_output_____" ], [ "for i,j in enumerate(train_loss_all):\n plt.plot(j,label =\"dataset \"+str(i+1))\n \n\nplt.xlabel(\"Epochs\")\nplt.ylabel(\"Training_loss\")\n\nplt.legend(loc='center left', bbox_to_anchor=(1, 0.5))", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb13f0427bccfa44394d0d1946b4dcc47297de97
15,335
ipynb
Jupyter Notebook
_drafts/2020-03-28-Bayesian-Camera-Calibration.ipynb
CGCooke/Blog
ab1235939011d55674c0888dba4501ff7e4008c6
[ "Apache-2.0" ]
1
2020-10-29T06:32:23.000Z
2020-10-29T06:32:23.000Z
_drafts/2020-03-28-Bayesian-Camera-Calibration.ipynb
CGCooke/Blog
ab1235939011d55674c0888dba4501ff7e4008c6
[ "Apache-2.0" ]
20
2020-04-04T09:39:50.000Z
2022-03-25T12:30:56.000Z
_drafts/2020-03-28-Bayesian-Camera-Calibration.ipynb
CGCooke/Blog
ab1235939011d55674c0888dba4501ff7e4008c6
[ "Apache-2.0" ]
null
null
null
66.385281
1,599
0.634235
[ [ [ "# Bayesian Camera Calibration\n> Let's apply Bayesian analysis to calibrate a camera\n\n- toc: true \n- badges: true\n- comments: true\n- categories: [Bayesian, Computer Vision]\n- image: images/2020-03-28-Bayesian-Camera-Calibration/header.jpg", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nimport pymc3 as pm\n\nplt.rcParams['figure.figsize'] = [10,10]", "_____no_output_____" ], [ "def x_rot(theta,x,y,z):\n theta *= np.pi/180\n x_rot = x\n y_rot = np.cos(theta)*y - np.sin(theta)*z\n z_rot = np.sin(theta)*y + np.cos(theta)*z\n return(x_rot,y_rot,z_rot)\n\ndef y_rot(theta,x,y,z):\n theta *= np.pi/180\n x_rot = np.cos(theta)*x + np.sin(theta)*z\n y_rot = y\n z_rot = -np.sin(theta)*x + np.cos(theta)*z\n return(x_rot,y_rot,z_rot)\n\ndef z_rot(theta,x,y,z):\n theta *= np.pi/180\n x_rot = np.cos(theta)*x - np.sin(theta)*y\n y_rot = np.sin(theta)*x + np.cos(theta)*y\n z_rot = z\n return(x_rot,y_rot,z_rot)\n\n\npoints = np.loadtxt(\"data/2020-02-23-An-Adventure-In-Camera-Calibration/points.csv\")\n\npoints_2d = points[:,0:2]\npoints_3d = points[:,2:5]\n\nnumber_points = points.shape[0]\n\npx = points_2d[:,0]\npy = points_2d[:,1]\n\nX_input = points_3d[:,0]\nY_input = points_3d[:,1]\nZ_input = points_3d[:,2]", "_____no_output_____" ], [ "def rotate(theta_Z_est,theta_Y_est,theta_X_est, X_est, Y_est, Z_est):\n X_est, Y_est, Z_est = z_rot(theta_Z_est, X_est, Y_est, Z_est)\n X_est, Y_est, Z_est = y_rot(theta_Y_est, X_est, Y_est, Z_est)\n X_est, Y_est, Z_est = x_rot(theta_X_est, X_est, Y_est, Z_est)\n return(X_est, Y_est, Z_est)\n\n\n # Define priors\n X_translate_est = pm.Normal('X_translate', mu = -7, sigma = 1)\n Y_translate_est = pm.Normal('Y_translate', mu = -13, sigma = 1)\n Z_translate_est = pm.Normal('Z_translate', mu = 3, sigma = 1)\n focal_length_est = pm.Normal('focal_length',mu = 1000, sigma = 100)\n \n theta_Z_est = pm.Normal('theta_Z',mu = -45, sigma = 30)\n theta_Y_est = pm.Normal('theta_Y',mu = 0, sigma = 15)\n theta_X_est = pm.Normal('theta_X',mu = 90, sigma = 30)\n \n c_x_est = pm.Normal('c_x',mu = 1038.42, sigma = 100)\n c_y_est = pm.Normal('c_y',mu = 2666.56, sigma = 100)\n \n k1 = -0.351113\n k2 = 0.185768\n k3 = -0.032289\n \n error_scale = 2\n \n X_est = X_input + X_translate_est\n Y_est = Y_input + Y_translate_est\n Z_est = Z_input + Z_translate_est\n \n X_est, Y_est, Z_est = rotate(theta_Z_est, theta_Y_est, theta_X_est, X_est, Y_est, Z_est)\n \n px_est = X_est / Z_est\n py_est = Y_est / Z_est\n \n r = np.sqrt(px_est**2 + py_est**2)\n px_est *= (1 + k1 * r + k2 * r**2 + k3 * r**3)\n py_est *= (1 + k1 * r + k2 * r**2 + k3 * r**3)\n \n px_est *= focal_length_est\n py_est *= focal_length_est\n\n px_est += c_x_est\n py_est += c_y_est\n\n delta = np.sqrt((px - px_est)**2 + (py - py_est)**2)\n \n # Define likelihood\n likelihood = pm.Normal('error', mu = delta, sigma = error_scale, observed=np.zeros(number_points))\n\n # Inference!\n trace = pm.sample(2_000, cores=4, tune=5000)", "Auto-assigning NUTS sampler...\nInitializing NUTS using jitter+adapt_diag...\nMultiprocess sampling (4 chains in 4 jobs)\nNUTS: [c_y, c_x, theta_X, theta_Y, theta_Z, focal_length, Z_translate, Y_translate, X_translate]\nSampling 4 chains, 0 divergences: 1%| | 269/28000 [00:01<02:16, 202.49draws/s] \n" ], [ "plt.figure(figsize=(7, 7))\npm.traceplot(trace[1000:])\nplt.tight_layout();", "_____no_output_____" ], [ "pm.plot_posterior(trace);", "_____no_output_____" ], [ "pm.summary(trace)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
cb13f8bb904cbeb2fa74966186b8ec8577146b7f
435,713
ipynb
Jupyter Notebook
Projet_ML.ipynb
maigaabdoul/Projet_ML
42eeb5357000217c840e4d324b0fe179dd12ace2
[ "MIT" ]
null
null
null
Projet_ML.ipynb
maigaabdoul/Projet_ML
42eeb5357000217c840e4d324b0fe179dd12ace2
[ "MIT" ]
null
null
null
Projet_ML.ipynb
maigaabdoul/Projet_ML
42eeb5357000217c840e4d324b0fe179dd12ace2
[ "MIT" ]
null
null
null
195.650202
26,652
0.870754
[ [ [ "# Projet de Machine Learning : Test de classification bout en bout \n\n", "_____no_output_____" ], [ "## 1.Chargement des données ", "_____no_output_____" ] ], [ [ "#importation des modules necessaires\nimport pyspark\nfrom pyspark.sql import SparkSession\nimport joblib", "_____no_output_____" ], [ "#creation d'une session spark \n\nmon_spark=SparkSession.builder.master(\"local\").appName(\"MLproject\").getOrCreate()\n\nccdefault = mon_spark.read.format(\"csv\").options(header=True,inferSchema=True).load(\"Doc_Evaluation/session pratique/data/ccdefault.csv\")", "21/09/12 10:42:58 WARN Utils: Your hostname, HP-ProBook-450-G3 resolves to a loopback address: 127.0.1.1; using 192.168.1.126 instead (on interface wlp2s0)\n21/09/12 10:42:58 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address\nWARNING: An illegal reflective access operation has occurred\nWARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/home/tabsoba/ML/lib/python3.8/site-packages/pyspark/jars/spark-unsafe_2.12-3.1.2.jar) to constructor java.nio.DirectByteBuffer(long,int)\nWARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform\nWARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations\nWARNING: All illegal access operations will be denied in a future release\n21/09/12 10:43:01 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable\nUsing Spark's default log4j profile: org/apache/spark/log4j-defaults.properties\nSetting default log level to \"WARN\".\nTo adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).\n \r" ], [ "# Affichage des 3 premières lignes\nccdefault.show(3)", "+---+---------+---+---------+--------+---+-----+-----+-----+-----+-----+-----+---------+---------+---------+---------+---------+---------+--------+--------+--------+--------+--------+--------+-------+\n| ID|LIMIT_BAL|SEX|EDUCATION|MARRIAGE|AGE|PAY_0|PAY_2|PAY_3|PAY_4|PAY_5|PAY_6|BILL_AMT1|BILL_AMT2|BILL_AMT3|BILL_AMT4|BILL_AMT5|BILL_AMT6|PAY_AMT1|PAY_AMT2|PAY_AMT3|PAY_AMT4|PAY_AMT5|PAY_AMT6|DEFAULT|\n+---+---------+---+---------+--------+---+-----+-----+-----+-----+-----+-----+---------+---------+---------+---------+---------+---------+--------+--------+--------+--------+--------+--------+-------+\n| 1| 20000| 2| 2| 1| 24| 2| 2| -1| -1| -2| -2| 3913| 3102| 689| 0| 0| 0| 0| 689| 0| 0| 0| 0| 1|\n| 2| 120000| 2| 2| 2| 26| -1| 2| 0| 0| 0| 2| 2682| 1725| 2682| 3272| 3455| 3261| 0| 1000| 1000| 1000| 0| 2000| 1|\n| 3| 90000| 2| 2| 2| 34| 0| 0| 0| 0| 0| 0| 29239| 14027| 13559| 14331| 14948| 15549| 1518| 1500| 1000| 1000| 1000| 5000| 0|\n+---+---------+---+---------+--------+---+-----+-----+-----+-----+-----+-----+---------+---------+---------+---------+---------+---------+--------+--------+--------+--------+--------+--------+-------+\nonly showing top 3 rows\n\n" ] ], [ [ "## 2. 
Analyse exploratoire", "_____no_output_____" ] ], [ [ "# Affichage des attributs\nccdefault.printSchema()", "root\n |-- ID: integer (nullable = true)\n |-- LIMIT_BAL: integer (nullable = true)\n |-- SEX: integer (nullable = true)\n |-- EDUCATION: integer (nullable = true)\n |-- MARRIAGE: integer (nullable = true)\n |-- AGE: integer (nullable = true)\n |-- PAY_0: integer (nullable = true)\n |-- PAY_2: integer (nullable = true)\n |-- PAY_3: integer (nullable = true)\n |-- PAY_4: integer (nullable = true)\n |-- PAY_5: integer (nullable = true)\n |-- PAY_6: integer (nullable = true)\n |-- BILL_AMT1: integer (nullable = true)\n |-- BILL_AMT2: integer (nullable = true)\n |-- BILL_AMT3: integer (nullable = true)\n |-- BILL_AMT4: integer (nullable = true)\n |-- BILL_AMT5: integer (nullable = true)\n |-- BILL_AMT6: integer (nullable = true)\n |-- PAY_AMT1: integer (nullable = true)\n |-- PAY_AMT2: integer (nullable = true)\n |-- PAY_AMT3: integer (nullable = true)\n |-- PAY_AMT4: integer (nullable = true)\n |-- PAY_AMT5: integer (nullable = true)\n |-- PAY_AMT6: integer (nullable = true)\n |-- DEFAULT: integer (nullable = true)\n\n" ], [ "import pandas as pd\nimport seaborn as sns\nfrom matplotlib import pyplot as plt\n\n# Conversion en dataframe pandas pour une meilleure manipulation\nccdefault.pd=ccdefault.toPandas()\n\n\n# Changement de nom de colonnnes, la colonne \"PAY_0\" devrait plutot etre \"PAY_1\"\n\nccdefault.pd.rename(columns={'PAY_0':'PAY_1'},inplace = True)\nccdefault.pd.columns", "\r[Stage 3:> (0 + 1) / 1]\r\r \r" ], [ "## Quelques statistiques des données \nccdefault.pd.describe()", "_____no_output_____" ] ], [ [ "### 2.1 Distribution des variables continues", "_____no_output_____" ] ], [ [ "## variables quantitatives \nvar_quant=ccdefault.pd[['LIMIT_BAL','AGE','BILL_AMT4','BILL_AMT5','BILL_AMT6','PAY_AMT1','PAY_AMT2','PAY_AMT3','PAY_AMT4','PAY_AMT5','PAY_AMT6']]\n\nvar_quant", "_____no_output_____" ], [ "for col in var_quant:\n plt.figure()\n sns.distplot(var_quant[col])", "/home/tabsoba/ML/lib/python3.8/site-packages/seaborn/distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n/home/tabsoba/ML/lib/python3.8/site-packages/seaborn/distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n/home/tabsoba/ML/lib/python3.8/site-packages/seaborn/distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n/home/tabsoba/ML/lib/python3.8/site-packages/seaborn/distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. 
Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n/home/tabsoba/ML/lib/python3.8/site-packages/seaborn/distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n/home/tabsoba/ML/lib/python3.8/site-packages/seaborn/distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n/home/tabsoba/ML/lib/python3.8/site-packages/seaborn/distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n/home/tabsoba/ML/lib/python3.8/site-packages/seaborn/distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n/home/tabsoba/ML/lib/python3.8/site-packages/seaborn/distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n/home/tabsoba/ML/lib/python3.8/site-packages/seaborn/distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n/home/tabsoba/ML/lib/python3.8/site-packages/seaborn/distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. 
Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n" ] ], [ [ "### 2.2 Distribution of categorical variables", "_____no_output_____" ] ], [ [ "## categorical variables\nvar_qual=ccdefault.pd[['SEX','MARRIAGE','EDUCATION','PAY_1','PAY_2','PAY_3','PAY_4','PAY_5','PAY_6']]\n\nvar_qual", "_____no_output_____" ], [ "## Show the distinct categories of each categorical variable\nfor col in var_qual:\n print(f'{col:-<50} {var_qual[col].unique()}')", "SEX----------------------------------------------- [2 1]\nMARRIAGE------------------------------------------ [1 2 3 0]\nEDUCATION----------------------------------------- [2 1 3 5 4 6 0]\nPAY_1--------------------------------------------- [ 2 -1 0 -2 1 3 4 8 7 5 6]\nPAY_2--------------------------------------------- [ 2 0 -1 -2 3 5 7 4 1 6 8]\nPAY_3--------------------------------------------- [-1 0 2 -2 3 4 6 7 1 5 8]\nPAY_4--------------------------------------------- [-1 0 -2 2 3 4 5 7 6 1 8]\nPAY_5--------------------------------------------- [-2 0 -1 2 3 5 4 7 8 6]\nPAY_6--------------------------------------------- [-2 2 0 -1 3 6 4 7 8 5]\n" ], [ "## Show the different categories within each variable\nfor col in var_qual:\n plt.figure()\n var_qual[col].value_counts().plot.pie()\n ", "_____no_output_____" ] ], [ [ "### 2.3 Relationship between the explanatory variables and the target variable", "_____no_output_____" ] ], [ [ "# Recode 'SEX' (1,2) as 'F' and 'M' \n\nccdefault.pd[\"SEX\"]=ccdefault.pd[\"SEX\"].map({1:'M',2:'F'}).astype('category')\nccdefault.pd[\"SEX\"].dtypes\n\n# Create a new column named \"RET_PAY\" flagging clients with at least one late payment from PAY_1 to PAY_6\n# 0 : NO_DELAY ; 1: DELAY\n\ncondition = (ccdefault.pd.PAY_1 >1) | (ccdefault.pd.PAY_2 >1) | (ccdefault.pd.PAY_3 >1) | (ccdefault.pd.PAY_4 >1) | (ccdefault.pd.PAY_5 >1) | (ccdefault.pd.PAY_6 >1)\nccdefault.pd.loc[condition, \"RET_PAY\"] = 1\nccdefault.pd.loc[ccdefault.pd.RET_PAY.isna(),\"RET_PAY\"] = 0\n\nccdefault.pd", "_____no_output_____" ], [ "# function to plot the relationships between the attributes and the target variable \n\ndef relation_var(nom_colonne):\n\n # Get the percentage of default by each group\n status_rembourse_group = pd.crosstab(index=ccdefault.pd['RET_PAY'],columns = ccdefault.pd[nom_colonne], normalize = 'columns')\n \n # Round up to 2 decimal\n status_rembourse_group = status_rembourse_group.apply(lambda x: round(x,2))\n \n labels = status_rembourse_group.columns\n list1 = status_rembourse_group.iloc[0].to_list()\n list2 = status_rembourse_group.iloc[1].to_list()\n \n list1_name = \"Repaid\"\n list2_name = \"Not repaid\"\n title = f\"Default by {nom_colonne}\"\n xlabel = nom_colonne\n ylabel = \"Percentage of non-repayment\"\n \n fig, ax = plt.subplots(figsize=(10, 5))\n bar_width = 0.5\n \n ax1 = ax.bar(labels,list1, bar_width, label = list1_name)\n ax2 = ax.bar(labels,list2, bar_width, bottom = list1, label = list2_name)\n\n ax.set_title(title, fontweight = \"bold\")\n ax.set_xlabel(xlabel, fontweight = \"bold\")\n ax.set_ylabel(ylabel, fontweight = \"bold\")\n ax.legend(loc=\"best\")\n \n plt.xticks(list(range(len(labels))), labels,rotation=90)\n plt.yticks(fontsize=9)\n\n for r1, r2 in zip(ax1, ax2):\n h1 = r1.get_height()\n h2 = r2.get_height()\n plt.text(r1.get_x() + r1.get_width() / 2., h1 / 2., f\"{h1:.0%}\", ha=\"center\", va=\"center\", color=\"white\", fontsize=9, fontweight=\"bold\")\n plt.text(r2.get_x() + r2.get_width() / 2., h1 + h2 / 2., f\"{h2:.0%}\", ha=\"center\", va=\"center\", color=\"white\", fontsize=9, fontweight=\"bold\")\n\n plt.show()\n", "_____no_output_____" ] ] ], [ [ "#### Relationship between the target variable and clients' sex, marital status and education level", "_____no_output_____" ] ], [ [ "var_qual1=[\"SEX\",\"MARRIAGE\",\"EDUCATION\"]\n\nfor col in var_qual1:\n relation_var(col)", "_____no_output_____" ] ], [ [ "#### Relationship between the target variable and clients' age", "_____no_output_____" ] ], [ [ "\n\nbornes= [21,30,40,50,60,70,80]\ntranches_ages = ['20-30','30-40','40-50','50-60','60-70','70-80']\nccdefault.pd['AGE'] = pd.cut(ccdefault.pd['AGE'],bins=bornes, labels=tranches_ages ,right=False)\n\nrelation_var('AGE')", "_____no_output_____" ] ], [ [ "### 2.4 Payment trends between April 2005 and September 2005", "_____no_output_____" ] ], [ [ "# Extract clients with at least one late payment between April and September\n\nretard= ccdefault.pd[ccdefault.pd['RET_PAY']== 1]\ntendance= retard[['PAY_6','PAY_5','PAY_4','PAY_3','PAY_2','PAY_1']].sum(axis=0)\n\nfig,ax = plt.subplots()\nax.plot(tendance)\nplt.xticks(['PAY_6','PAY_5','PAY_4','PAY_3','PAY_2','PAY_1'],['Apr','May','Jun','Jul','Aug','Sep'])\n\nplt.xlabel('Month',fontweight='bold')\nplt.ylabel('Total months paid late',fontweight='bold')\nplt.title('Trend of delayed payments',fontweight='bold')\n\nplt.show()", "_____no_output_____" ], [ "# Correlation between consumption and the length of payment delay\n\nfrom matplotlib.pyplot import figure\n\nretard= ccdefault.pd[ccdefault.pd['RET_PAY']== 1]\npaiements = [ f\"PAY_{i}\" for i in range(1, 7) ]\nconsommation= [ f\"BILL_AMT{i}\" for i in range(1, 7) ]\n\nfig, ax = plt.subplots(3,2, figsize=(10, 10))\n\nfor paie, conso, m in zip(paiements, consommation, ax.flatten()):\n \n \n data = []\n for i in sorted(retard[paie].unique()):\n temp = retard.loc[retard[paie] == i, conso]\n data.append(temp)\n m.boxplot(data, showfliers=False,) \n m.set_xticklabels(sorted(retard[paie].unique()))\n \nplt.show()", "_____no_output_____" ], [ "# Correlation between allocated credit and the target variable\n# 1: not repaid ; 0: repaid\n\ndef0 = ccdefault.pd.loc[ccdefault.pd['DEFAULT'] == 0,'LIMIT_BAL']\ndef1 = ccdefault.pd.loc[ccdefault.pd['DEFAULT'] == 1,'LIMIT_BAL']\n\nfig, ax = plt.subplots()\nax.boxplot([def0, def1], showfliers=False)\n\nax.set_xticklabels(['Repaid',\"Not repaid\"],fontweight ='bold')\nax.set_ylabel('Allocated credit',fontweight ='bold')\nax.set_title('Allocated credit & repayment status',fontweight ='bold')\n\nplt.show()", "_____no_output_____" ] ], [ [ "## 3. Comparative study of machine learning models for predicting the target variable", "_____no_output_____" ] ], [ [ "\n\n## Missing values \n\nfor c in ccdefaultNew.columns:\n count=ccdefaultNew.filter(c+\" is NULL\" or c+\"is ''\" or c+\"is NaN\" or c+\"is null\").count()\n print(str(count) +\" missing values in column \"+ c)", "0 missing values in column ID\n0 missing values in column LIMIT_BAL\n0 missing values in column SEX\n0 missing values in column EDUCATION\n0 missing values in column MARRIAGE\n0 missing values in column AGE\n0 missing values in column PAY_0\n0 missing values in column PAY_2\n0 missing values in column PAY_3\n0 missing values in column PAY_4\n0 missing values in column PAY_5\n0 missing values in column PAY_6\n0 missing values in column BILL_AMT1\n0 missing values in column BILL_AMT2\n0 missing values in column BILL_AMT3\n0 missing values in column BILL_AMT4\n0 missing values in column BILL_AMT5\n0 missing values in column BILL_AMT6\n0 missing values in column PAY_AMT1\n0 missing values in column PAY_AMT2\n0 missing values in column PAY_AMT3\n0 missing values in column PAY_AMT4\n0 missing values in column PAY_AMT5\n0 missing values in column PAY_AMT6\n0 missing values in column label\n" ], [ "from pyspark.ml.feature import VectorAssembler, StringIndexer, VectorIndexer, MinMaxScaler\nfrom pyspark.ml import Pipeline\nfrom pyspark.sql.functions import *\n\nfrom pyspark.ml.classification import LogisticRegression\nfrom pyspark.ml.classification import DecisionTreeClassifier\nfrom pyspark.ml.classification import RandomForestClassifier\n\nfrom pyspark.ml.evaluation import BinaryClassificationEvaluator, MulticlassClassificationEvaluator\n\n# Drop ID column\nccdefault = ccdefault .select(ccdefault .schema.names[1:])\n\n# Split data into training and test sample\nsplits = ccdefault.randomSplit([0.75, 0.25])\nccdefault_train = splits[0]\nccdefault_test = splits[1]\n\n# Get and convert categorical features (SEX, EDUCATION, MARRIAGE)\ncategorical_features = ccdefault.schema.names[1:4]\ncatVect = VectorAssembler(inputCols = categorical_features, outputCol = \"catFeatures\")\ncatIdx = VectorIndexer(inputCol = catVect.getOutputCol(), outputCol = \"idxCatFeatures\")\n\n\n# Get and normalize numerical features\nnumerical_features = ccdefault.schema.names[0:1] + ccdefault.schema.names[4:]\nnumVect = VectorAssembler(inputCols = numerical_features, outputCol = \"numFeatures\")\nminMax = MinMaxScaler(inputCol = numVect.getOutputCol(), outputCol = \"normFeatures\")\n\n\n# Define pipeline \nfeatVect = VectorAssembler(inputCols=[\"idxCatFeatures\", \"normFeatures\"], outputCol = \"features\")\npipeline = Pipeline(stages = [catVect, catIdx, numVect, minMax, featVect])\npipeline_object = pipeline.fit(ccdefault_train)\n\n# Run training and test data through the pipeline\nccdefault_train = pipeline_object.transform(ccdefault_train).select(\"features\", col(\"DEFAULT\").alias(\"label\"))\nccdefault_test = pipeline_object.transform(ccdefault_test).select(\"features\", col(\"DEFAULT\").alias(\"label\"))\n\n\n\n\n", "_____no_output_____" ], [ "\n\naccuracy = MulticlassClassificationEvaluator(\n labelCol = \"label\", predictionCol = \"prediction\", metricName = \"accuracy\")\nprecision = MulticlassClassificationEvaluator(\n labelCol = \"label\", predictionCol = \"prediction\", metricName = \"weightedPrecision\")\nrecall = MulticlassClassificationEvaluator(\n labelCol = \"label\", predictionCol = \"prediction\", metricName = \"weightedRecall\")\n\n", "_____no_output_____" ] ], [ [ "### 3.1 Logistic regression", "_____no_output_____" ] ], [ [ "logit = LogisticRegression(labelCol = \"label\", featuresCol = \"features\", maxIter = 20, regParam = 0.2)\nmodel = logit.fit(ccdefault_train)\npredictions = model.transform(ccdefault_test)\n\npredictions.select(\"prediction\", \"label\", \"features\").show(5)\n\n# select (prediction, true label) and compute test error\nevaluator = BinaryClassificationEvaluator() # MulticlassClassificationEvaluator().setLabelCol(\"label\").setPredictionCol(\"prediction\").setMetricName(\"rmse\")\nauc = evaluator.evaluate(predictions)\nprint(\"AUC = \"+ str(auc))\n", "+----------+-----+--------------------+\n|prediction|label| features|\n+----------+-----+--------------------+\n| 0.0| 0|[1.0,-3768.0,-393...|\n| 0.0| 0|[1.0,-2640.0,0.0,...|\n| 1.0| 1|(15,[0,1,2,4,5,6,...|\n| 0.0| 0|[1.0,-2000.0,780....|\n| 0.0| 0|(15,[0,1,2,4,5,6,...|\n+----------+-----+--------------------+\nonly showing top 5 rows\n\nAUC = 0.9999992147530933\n" ] ], [ [ "### 3.2 Decision tree", "_____no_output_____" ] ], [ [ "dt = DecisionTreeClassifier().setLabelCol(\"label\").setFeaturesCol(\"features\")\n\n# train the model\ndtModel = dt.fit(ccdefault_train)\n\n# make predictions on the test data\npredictions = dtModel.transform(ccdefault_test)\npredictions.select(\"prediction\", \"label\", \"features\").show(5)\n\n# select (prediction, true label) and compute test error\nevaluator = BinaryClassificationEvaluator() # MulticlassClassificationEvaluator().setLabelCol(\"label\").setPredictionCol(\"prediction\").setMetricName(\"rmse\")\nauc = evaluator.evaluate(predictions)\nprint(\"AUC = \"+ str(auc))", "+----------+-----+--------------------+\n|prediction|label| features|\n+----------+-----+--------------------+\n| 1.0| 1|[1.0,1.0,-6029.0,...|\n| 0.0| 0|[1.0,1.0,-6028.0,...|\n| 0.0| 0|[1.0,1.0,-3768.0,...|\n| 0.0| 0|[1.0,1.0,-2640.0,...|\n| 0.0| 0|(16,[0,1,2,4,5,6,...|\n+----------+-----+--------------------+\nonly showing top 5 rows\n\nAUC = 1.0\n" ] ], [ [ "### 3.3 Random forest ", "_____no_output_____" ] ], [ [ "rf = RandomForestClassifier().setLabelCol(\"label\").setFeaturesCol(\"features\")\n\n# train the model\nrfModel = rf.fit(ccdefault_train)\n\n# make predictions on the test data\npredictions = rfModel.transform(ccdefault_test)\npredictions.select(\"prediction\", \"label\", \"features\").show(5)\n\n# select (prediction, true label) and compute test error\nevaluator = BinaryClassificationEvaluator() # MulticlassClassificationEvaluator().setLabelCol(\"label\").setPredictionCol(\"prediction\").setMetricName(\"rmse\")\nauc = evaluator.evaluate(predictions)\nprint(\"AUC = \"+ str(auc))", "+----------+-----+--------------------+\n|prediction|label| features|\n+----------+-----+--------------------+\n| 1.0| 1|[1.0,1.0,-6029.0,...|\n| 0.0| 0|[1.0,1.0,-6028.0,...|\n| 0.0| 0|[1.0,1.0,-3768.0,...|\n| 0.0| 0|[1.0,1.0,-2640.0,...|\n| 0.0| 0|(16,[0,1,2,4,5,6,...|\n+----------+-----+--------------------+\nonly showing top 5 rows\n\nRoot Mean Squared Error (RMSE) on test data = 1.0\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb14154e31a8637c249b96596f3ced01f1a7326d
16,288
ipynb
Jupyter Notebook
docs/source/examples/Exploring Graphs.ipynb
vidartf/ipywidgets
42ece696f1b53abb28eff08edadf7ac57695d722
[ "BSD-3-Clause" ]
1
2017-05-10T05:19:00.000Z
2017-05-10T05:19:00.000Z
docs/source/examples/Exploring Graphs.ipynb
vidartf/ipywidgets
42ece696f1b53abb28eff08edadf7ac57695d722
[ "BSD-3-Clause" ]
null
null
null
docs/source/examples/Exploring Graphs.ipynb
vidartf/ipywidgets
42ece696f1b53abb28eff08edadf7ac57695d722
[ "BSD-3-Clause" ]
null
null
null
29.24237
116
0.569069
[ [ [ "## Explore Random Graphs Using NetworkX", "_____no_output_____" ], [ "In this example, we build a simple UI for exploring random graphs with [NetworkX](http://networkx.github.io/).", "_____no_output_____" ] ], [ [ "from ipywidgets import interact", "_____no_output_____" ], [ "%matplotlib inline\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "import networkx as nx", "_____no_output_____" ], [ "# wrap a few graph generation functions so they have the same signature\n\ndef random_lobster(n, m, k, p):\n return nx.random_lobster(n, p, p / m)\n\ndef powerlaw_cluster(n, m, k, p):\n return nx.powerlaw_cluster_graph(n, m, p)\n\ndef erdos_renyi(n, m, k, p):\n return nx.erdos_renyi_graph(n, p)\n\ndef newman_watts_strogatz(n, m, k, p):\n return nx.newman_watts_strogatz_graph(n, k, p)\n\ndef plot_random_graph(n, m, k, p, generator):\n g = generator(n, m, k, p)\n nx.draw(g)\n plt.show()", "_____no_output_____" ], [ "interact(plot_random_graph, n=(2,30), m=(1,10), k=(1,10), p=(0.0, 1.0, 0.001),\n generator={\n 'lobster': random_lobster,\n 'power law': powerlaw_cluster,\n 'Newman-Watts-Strogatz': newman_watts_strogatz,\n u'Erdős-Rényi': erdos_renyi,\n });", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ] ]
cb14181abe347d75b2f198c605789c57922a1dad
36,287
ipynb
Jupyter Notebook
tutorials/file_types/reading_files.ipynb
mshamohammadi/cookies-n-code
5fb0c9e0b6f8614712c7a626ce99de8a0d2a0a42
[ "MIT" ]
19
2018-12-07T00:46:16.000Z
2022-03-08T01:13:45.000Z
tutorials/file_types/reading_files.ipynb
mshamohammadi/cookies-n-code
5fb0c9e0b6f8614712c7a626ce99de8a0d2a0a42
[ "MIT" ]
34
2018-08-18T00:29:56.000Z
2021-10-11T22:56:20.000Z
tutorials/file_types/reading_files.ipynb
mshamohammadi/cookies-n-code
5fb0c9e0b6f8614712c7a626ce99de8a0d2a0a42
[ "MIT" ]
25
2018-08-29T03:57:31.000Z
2021-09-13T05:55:44.000Z
87.861985
11,928
0.733155
[ [ [ "# How to read data from varius file formats\n\nsome of the most basic things noone ever treaches you is how to actually access your data in various formats. This notebook shows a couple of examples on how to read data from a number of sources. Feel free to edit this notebook with more methods that you have worked with.", "_____no_output_____" ] ], [ [ "#import relevant packages\n#from urllib.request import urlretrieve\nfrom urllib2 import urlopen\nimport matplotlib.pyplot as plt\nimport pandas as pd\nfrom sqlalchemy import create_engine\nimport numpy as np\nfrom astropy.io import fits\nimport urllib\nimport h5py\nimport pickle\n\n%matplotlib inline", "_____no_output_____" ] ], [ [ "# importing files from the internet", "_____no_output_____" ] ], [ [ "\n# Assign url of file: url\n\nurl ='https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv'\n\n# Save file locally\n\ntestfile = urllib.URLopener()\ntestfile.retrieve(url, \"winequality-red.csv\")\n\n# or use wget\n#file_name = wget.download(url)\n\n\n# Read file into a DataFrame and print its head\ndf = pd.read_csv('winequality-red.csv', sep=';')\nprint(df.head())\n\n\npd.DataFrame.hist(df.ix[:, 0:1])\nplt.xlabel('fixed acidity (g(tartaric acid)/dm$^3$)')\nplt.ylabel('count')\nplt.show()\n", " fixed acidity volatile acidity citric acid residual sugar chlorides \\\n0 7.4 0.70 0.00 1.9 0.076 \n1 7.8 0.88 0.00 2.6 0.098 \n2 7.8 0.76 0.04 2.3 0.092 \n3 11.2 0.28 0.56 1.9 0.075 \n4 7.4 0.70 0.00 1.9 0.076 \n\n free sulfur dioxide total sulfur dioxide density pH sulphates \\\n0 11.0 34.0 0.9978 3.51 0.56 \n1 25.0 67.0 0.9968 3.20 0.68 \n2 15.0 54.0 0.9970 3.26 0.65 \n3 17.0 60.0 0.9980 3.16 0.58 \n4 11.0 34.0 0.9978 3.51 0.56 \n\n alcohol quality \n0 9.4 5 \n1 9.8 5 \n2 9.8 5 \n3 9.8 6 \n4 9.4 5 \n" ] ], [ [ "# same thing with csv or txt file \n", "_____no_output_____" ] ], [ [ "example_sheet='cereal.csv'\nexample_file='cereal.txt'\n\n\nxl = pd.read_csv(example_sheet)\nx2 = pd.read_csv(example_sheet)\n# akternatively you can use read_csv\nprint (xl.keys())\n\n# pandas lets you specify seperators as well as number of colums and filling nans \n #pd.read_csv(file, sep='\\t', comment='#', na_values='Nothing')\n \n \n\n# textfiles\ndata = np.loadtxt(example_file, delimiter='\\t', skiprows=1, usecols=[4,5])\n", "Index([u'name', u'mfr', u'type', u'calories', u'protein', u'fat', u'sodium',\n u'fiber', u'carbo', u'sugars', u'potass', u'vitamins', u'shelf',\n u'weight', u'cups', u'rating'],\n dtype='object')\n" ] ], [ [ "# Chunks", "_____no_output_____" ] ], [ [ "chunksize = 10 ** 6\nfor chunk in pd.read_csv(example_file,sep='\\t', chunksize=chunksize):\n print len(chunk) # print len can be replaced with any process that you would want to use\n \n#similarly using read_table\n\nfor chunk in pd.read_table(example_file,sep='\\t', chunksize=chunksize):\n len(chunk)", "77\n" ] ], [ [ "# reading fits files", "_____no_output_____" ] ], [ [ "filename= 'example.fits'\nhdulist = fits.open(filename)\nfinal_data = hdulist[1].data", "_____no_output_____" ], [ "final_data.columns()\nfinal_data[1]", "_____no_output_____" ] ], [ [ "# writing and reading HDF5 files\n", "_____no_output_____" ] ], [ [ "\ndata_matrix = np.random.uniform(-1, 1, size=(10, 3))\n\n# Write data to HDF5\ndata_file = h5py.File('file.hdf5', 'w')\ndata_file.create_dataset('group_name', data=data_matrix)\ndata_file.close()", "_____no_output_____" ], [ "\nfilename_hdf = 'file.hdf5'\nf = h5py.File(filename_hdf, 'r')\n\n# List all groups\nprint(\"Keys: %s\" % 
f.keys())\na_group_key = list(f.keys())[0]\n\n# Get the data\ndata = list(f[a_group_key])", "Keys: [u'group_name']\n" ] ], [ [ "# SQL databases\nassuming you want to read them into python\nalso have a look at the databases talk sarah gave (27/04/18)", "_____no_output_____" ] ], [ [ "# make sql database with pandas\nengine = create_engine('PATH')\npd.to_sql('new_database', engine)\n", "_____no_output_____" ], [ "pd.read_sql(\"SELCT * FROM new_database\", engine)", "_____no_output_____" ] ], [ [ "# Reading pickled files\nI didn't have a pickled file ready so we will make a mock file to start with ", "_____no_output_____" ] ], [ [ "your_data = {'foo': 'bar'} #makes dictionary\n\n#alternatively use pandas to make and read pickled files\n\n# Store data (serialize)\nwith open('filename.pickle', 'wb') as handle:\n pickle.dump(your_data, handle, protocol=pickle.HIGHEST_PROTOCOL)\n\n# Load data (deserialize)\nwith open('filename.pickle', 'rb') as handle:\n unserialized_data = pickle.load(handle)\n\nprint(unserialized_data)", "{'foo': 'bar'}\n" ] ], [ [ "# Reading JSON files", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cb1424a8f642a04fb3fe4ed267c4e2da1a301738
2,707
ipynb
Jupyter Notebook
class3_null_and_conditions/generating_fake_data.ipynb
techsparksguru/data_analysis_pandas_spark_koalas
384801961f84d94f4c68901332bf1874d0ac5929
[ "Apache-2.0" ]
3
2020-07-17T16:46:43.000Z
2022-02-09T10:49:50.000Z
class3_null_and_conditions/generating_fake_data.ipynb
techsparksguru/pandas_basic_training
3caacf9e8b166d5bc19a508795a06a9024ca4bf9
[ "Apache-2.0" ]
null
null
null
class3_null_and_conditions/generating_fake_data.ipynb
techsparksguru/pandas_basic_training
3caacf9e8b166d5bc19a508795a06a9024ca4bf9
[ "Apache-2.0" ]
2
2020-04-08T10:53:04.000Z
2021-03-10T03:26:06.000Z
21.656
86
0.529738
[ [ [ "# Using Faker to generate fake data\n- https://faker.readthedocs.io/en/stable/", "_____no_output_____" ] ], [ [ "!pip3 install Faker==4.1.1", "_____no_output_____" ], [ "import pandas as pd\ndf = pd.read_parquet(\"./techsparks_candidate_table_dataset_2.parquet\")\ndf", "_____no_output_____" ], [ "df_sample = df.sample(1000)", "_____no_output_____" ], [ "df_sample[df_sample['candidate_id'].str.contains(\"@\")]", "_____no_output_____" ], [ "from faker import Faker\nfake = Faker()\nimport random\n\ndef replace_fake_emails(val):\n if \"@\" in val:\n return fake.email()\n else:\n return val\n\ndef phone_number(val):\n if str(val).isnumeric():\n return fake.phone_number()\n else:\n return val\n\ndef random_integer_candidate_type(val):\n return random.randint(1,100)\n\ndef random_integer_job_id(val):\n return random.randint(10000,99999)\n \n \ndf['candidate_id'] = df['candidate_id'].apply(replace_fake_emails)\ndf['candidate_id'] = df['candidate_id'].apply(phone_number)\ndf['job_id'] = df['job_id'].apply(random_integer_job_id)\ndf['candidate_type'] = df['candidate_type'].apply(random_integer_candidate_type)", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "df.to_parquet(\"./techsparks_candidate_table_dataset_2.parquet\")", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
cb143331a02468105873a81b67fde5847b586b10
61,128
ipynb
Jupyter Notebook
book/pandas/08-Filtering Rows.ipynb
hossainlab/dsnotes
fee64e157f45724bba1f49ad1b186dcaaf1e6c02
[ "CC0-1.0" ]
null
null
null
book/pandas/08-Filtering Rows.ipynb
hossainlab/dsnotes
fee64e157f45724bba1f49ad1b186dcaaf1e6c02
[ "CC0-1.0" ]
null
null
null
book/pandas/08-Filtering Rows.ipynb
hossainlab/dsnotes
fee64e157f45724bba1f49ad1b186dcaaf1e6c02
[ "CC0-1.0" ]
null
null
null
34.380202
415
0.362027
[ [ [ "# Filtering Rows ", "_____no_output_____" ] ], [ [ "# import pandas \nimport pandas as pd ", "_____no_output_____" ], [ "# read movie data \nmovies = pd.read_csv(\"http://bit.ly/imdbratings\")", "_____no_output_____" ], [ "# examine first few rows \nmovies.head() ", "_____no_output_____" ] ], [ [ "## Filtering Movies with `for` Loop", "_____no_output_____" ] ], [ [ "booleans = [] \nfor length in movies.duration: \n if length >= 200: \n booleans.append(True)\n else: \n booleans.append(False)", "_____no_output_____" ], [ "# Check length of booleans \nlen(booleans)", "_____no_output_____" ], [ "# Inspect booleans elements \nbooleans[0:5]", "_____no_output_____" ], [ "# create a pandas series \nis_long = pd.Series(booleans)", "_____no_output_____" ], [ "# Inspect few values \nis_long.head() ", "_____no_output_____" ], [ "# show dataframe all columns in duration 200 minutes \nmovies[is_long]", "_____no_output_____" ] ], [ [ "## Filtering by Condition", "_____no_output_____" ] ], [ [ "# filtering by conditions \nis_long = movies.duration >= 200\nis_long.head() ", "_____no_output_____" ], [ "# show the rows duration >200\nmovies[is_long]", "_____no_output_____" ] ], [ [ "## Filtering in DataFrame ", "_____no_output_____" ] ], [ [ "# filtering by columns \nmovies[movies.duration >= 200]", "_____no_output_____" ], [ "# select only genre\nmovies[movies.duration >= 200].genre", "_____no_output_____" ], [ "# same as above \nmovies[movies.duration >= 200]['genre']", "_____no_output_____" ], [ "# select columns by label \nmovies.loc[movies.duration >= 200, 'genre']", "_____no_output_____" ] ], [ [ "## Multiple Filtering Criteria", "_____no_output_____" ] ], [ [ "# True and True == True \n# True and False == False \n# True or True == True \n# True or False == True \n# False or False == False \nTrue and True\nTrue and False \nTrue or True \nTrue or False", "_____no_output_____" ], [ "# multiple criteria \nmovies[(movies.duration >= 200) & (movies.genre == 'Drama')]", "_____no_output_____" ], [ "# multiple criteria \nmovies[(movies.duration >= 200) | (movies.genre == 'Drama')]", "_____no_output_____" ], [ "# multiple or conditions \nmovies[(movies.genre == \"Crime\") | (movies.genre == 'Drama') | (movies.genre == \"Action\")]", "_____no_output_____" ], [ "# multiple or using isin() method \nmovies.genre.isin([\"Drama\", \"Action\", \"Crime\"])", "_____no_output_____" ], [ "# pass the series in DataFrame \nmovies[movies.genre.isin([\"Drama\", \"Action\", \"Crime\"])]", "_____no_output_____" ] ], [ [ "<h3>About the Author</h3>\nThis repo was created by <a href=\"https://www.linkedin.com/in/jubayer28/\" target=\"_blank\">Jubayer Hossain</a> <br>\n<a href=\"https://www.linkedin.com/in/jubayer28/\" target=\"_blank\">Jubayer Hossain</a> is a student of Microbiology at Jagannath University and the founder of <a href=\"https://github.com/hdro\" target=\"_blank\">Health Data Research Organization</a>. He is also a team member of a bioinformatics research group known as Bio-Bio-1. \n\n<a rel=\"license\" href=\"http://creativecommons.org/licenses/by-nc-sa/4.0/\"><img alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png\" /></a><br />This work is licensed under a <a rel=\"license\" href=\"http://creativecommons.org/licenses/by-nc-sa/4.0/\">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.m", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
cb14567c7df6160003ab66e9e0105ea7314f35eb
16,247
ipynb
Jupyter Notebook
model-selection-101/model-selection-part-2.ipynb
prakash123mayank/Data-Science-45min-Intros
f9208e43d69f791f8611998b39238444e9a7b7ba
[ "Unlicense" ]
1,406
2015-01-05T19:20:55.000Z
2022-03-17T08:35:09.000Z
model-selection-101/model-selection-part-2.ipynb
prakash123mayank/Data-Science-45min-Intros
f9208e43d69f791f8611998b39238444e9a7b7ba
[ "Unlicense" ]
1
2019-07-27T11:53:24.000Z
2019-10-02T19:34:32.000Z
model-selection-101/model-selection-part-2.ipynb
prakash123mayank/Data-Science-45min-Intros
f9208e43d69f791f8611998b39238444e9a7b7ba
[ "Unlicense" ]
495
2015-01-06T11:39:21.000Z
2022-03-15T10:21:43.000Z
34.204211
486
0.545393
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
cb145eab8720df1bd79a4ca39dd9be218c7db6a6
35,360
ipynb
Jupyter Notebook
modelproject/modelproject/ModelProject7.ipynb
NumEconCopenhagen/projects-2019-athena
0f94ad321ce9672f5821057e37d22c81f24e4085
[ "MIT" ]
null
null
null
modelproject/modelproject/ModelProject7.ipynb
NumEconCopenhagen/projects-2019-athena
0f94ad321ce9672f5821057e37d22c81f24e4085
[ "MIT" ]
11
2019-04-14T13:41:46.000Z
2019-05-14T11:21:39.000Z
modelproject/modelproject/ModelProject7.ipynb
NumEconCopenhagen/projects-2019-athena
0f94ad321ce9672f5821057e37d22c81f24e4085
[ "MIT" ]
null
null
null
50.014144
3,296
0.685351
[ [ [ "import numpy as np\nfrom scipy import linalg\nfrom scipy import optimize\nimport sympy as sm\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport ipywidgets as widgets", "_____no_output_____" ] ], [ [ "The following Model Project is based on the classic Solow model as we know it from Macroeconomics. First, we will model the Solow model in its simplest form. Next we will build on this simple Solow model by adding Total Factor Productivity (TFP).\n\n1) Our model is defined by the framework below. Note that TFP is not part of the model yet and that we are assuming that we are in a small closed economy:", "_____no_output_____" ], [ "A small closed economy can be described by following equations:\n\\\\[ Y_t = BK_t^\\alpha L_t^{1-\\alpha},\\alpha\\in(0,1) \\\\]\n\n\\\\[ S_{t+1} = sY_t, s\\in(0,1) \\\\]\n\n\\\\[ L_{t+1} = (1+n)L_t, n>-1 \\\\]\n\n\nwhere $Y_t = F(K_t,L_t)$ is GDP; $K_t$ is capital; $L_t$ is labor (growing with a constant rate of $n$); $S_t$ is total savings; s is the savings rate and $k_t = K_t/L_t$. Note also that $B$, alpha, $s$ and $n$ are exogenous parameters.\n\nThe transition equation, which shows how capital is accumulated, then becomes\n\n\\\\[ k_{t+1} = \\frac{1}{1+n}(sBk_t^{\\alpha}+(1-\\delta)k_t), 0<\\delta =<1\\\\]\n\nwhere, in addition to above defined parameters, $\\delta$ is depreciation of capital.\n\n**Steady state** for $k_t$ and $y_t$ is derived below", "_____no_output_____" ] ], [ [ "#Below is run to get LaTex format\nsm.init_printing(use_unicode=True)\n\n\n# Define varibles\nk = sm.symbols('k')\ny = sm.symbols('y')\nK = sm.symbols('K')\nB = sm.symbols('B')\nL = sm.symbols('L')\nY = sm.symbols('Y')\nalpha = sm.symbols('alpha')\ndelta = sm.symbols('delta')\ns = sm.symbols('s')\nn = sm.symbols('n')\n\n\n# Define transition equation and solve steady state for capital per capital\nf=B*k**alpha\ntransition1=sm.Eq(k,((1)/(1+n)*(s*B*(k**alpha)+(1-delta)*k)))\nsteadystate_k1=sm.solve(transition1,k)\nprint('Steady state of k is')\nsteadystate_k1\n", "Steady state of k is\n" ], [ "y=B*k**alpha\nprint('Steady state for y is')\nsteadystate_y1=y.subs({'k':steadystate_k1[0]})\nsteadystate_y1", "Steady state for y is\n" ] ], [ [ "Above we have found the expression for steady state in capital per capita and income per capita. We test our steady state expressions by plugging in arbitrary values for each parameter in each expression: $B=1.5$, $s=0.20$, $n=2\\%$, $\\alpha=\\frac{1}{3}$ and $\\delta=1$. We use a simple function: solution(x), to retrieve a specific steady state value for each. See below.", "_____no_output_____" ], [ "We notice that this value seems plausible and proceed by solving for optimal steady states using more sophisticated functions. 
Again we start by choosing arbitrary parameter values.", "_____no_output_____" ] ], [ [ "s=0.2\nn=0.02\ndelta=1\nalpha=1/3\nB=10\n\nopt_steadystate_k1= lambda steadystate_k1: steadystate_k1 - ( ((1)/(1+n)*(s*B*(steadystate_k1)**alpha+(1-delta)*steadystate_k1)) )\nresult1 = optimize.root_scalar(opt_steadystate_k1,bracket=[0.1,10],method='brentq')\n\nprint('the steady state for k is', result1.root)\n", "the steady state for k is 2.74564722358436\n" ] ], [ [ "Given the specified parameter values, steady state of capital per capita is 2.75.", "_____no_output_____" ], [ "We now wish to investigate how the steady state value of capital per capita changes when the savings rate changes by an arbitrary amount.", "_____no_output_____" ] ], [ [ "n=0.02\ndelta=1\nalpha=1/3\nB=10\n\nsavings = [0.05,0.1,0.2,0.25,0.4,0.5,0.75,0.9]\nfor s1 in savings:\n obj_kss = lambda steadystate_k1: steadystate_k1 - ((1)/(1+n)*(s1*B*(steadystate_k1)**alpha+(1-delta)*steadystate_k1))\n result1 = optimize.root_scalar(obj_kss,bracket=[0.1,1000],method='brentq')\n print(f'for savings = {s1:.3f} the steady state for k is',result1.root) \n ", "for savings = 0.050 the steady state for k is 0.343205902947649\nfor savings = 0.100 the steady state for k is 0.9707328852712966\nfor savings = 0.200 the steady state for k is 2.7456472235841\nfor savings = 0.250 the steady state for k is 3.8371586463550824\nfor savings = 0.400 the steady state for k is 7.765863082169994\nfor savings = 0.500 the steady state for k is 10.853123597305084\nfor savings = 0.750 the steady state for k is 19.938461196567655\nfor savings = 0.900 the steady state for k is 26.209787902323733\n" ] ], [ [ "From above result it is instantly clear that the savings rate, s, has a substantial impact on the capital accumulation in the society. Below is a visualisation of how the parameteres impact the accumulation of capital as well as income.\n\nIn order to create this visualisation, our next step is to create an interactive graph plotting the transition curve and the 45 degree line including sliders for each of the parameters in the model. The purpose of this is to see the effect on the steady state point from changing one of more parameters. In order to do so, we first define the range of each slider. 
Then we set up the interactive figure and finally we set up the appropriate sliders.", "_____no_output_____" ] ], [ [ "# We define below the possible numeric intervals of the parameters\nalpha = np.arange(0,1)\ns = np.arange(0,1)\nk = np.arange(100)\nB = np.arange(0,100)\nn = np.arange(0,0.2)\ndelta = np.arange(0,1.1)\n\ndef interactive_transition1(B,s,alpha,n,delta,k):\n\n k0 = ((1)/(1+n)*(s*B*(k**alpha)+(1-delta)*k)) \n m0 = k\n #plt.plot(m)\n \n fig = plt.figure(dpi=90)\n ax = fig.add_subplot(1,1,1)\n ax.plot(k0, label = 'Capital')\n ax.plot(m0, label = '45 degrees line')\n ax.set_xlim([0,15]) # fixed x range\n ax.set_ylim([0,15]) # fixed y range\n plt.xlabel('k in period t')\n plt.ylabel('k in period t+1')\n plt.title('Transition diagram')\n ax.grid(True)\n ax.legend(loc='upper left')\n \n y0 = B*((1)/(1+n)*(s*B*(k**alpha)+(1-delta)*k))**alpha\n \n \n fig = plt.figure(dpi=90)\n ax = fig.add_subplot(1,1,1)\n ax.plot(y0, label = 'Income per capita')\n ax.set_xlim([0,15]) # fixed x range\n ax.set_ylim([0,100]) # fixed y range\n plt.xlabel('k')\n plt.ylabel('y')\n plt.title('Income per capita')\n ax.grid(True)\n ax.legend(loc='upper left')\n\nwidgets.interact(interactive_transition1,\n #k1=widgets.fixed(k1),\n alpha=widgets.FloatSlider(description=\"$alpha$\", min=0, max=1, step=0.005, value=0.33),\n s=widgets.FloatSlider(description=\"$s$\", min=0, max=0.7, step=0.005, value=0.2),\n B=widgets.FloatSlider(description=\"$B$\", min=0, max=50, step=1, value=10),\n n=widgets.FloatSlider(description=\"$n$\", min=0, max=0.2, step=0.01, value=0.02),\n delta=widgets.FloatSlider(description=\"$delta$\", min=0, max=1, step=0.01, value=0.4),\n k=widgets.fixed(k)\n);", "_____no_output_____" ] ], [ [ "In the figure we notice that as B, s and alpha increases, so does steady state for capital accumulation. Conversely, when n and delta increase, steady state for capital accumulation decreases.", "_____no_output_____" ], [ "**2) We now consider the Solow-model with a productive externality per worker where:**\n\nA small closed economy can be described by following equations:\n\\\\[ Y_t = A_tK_t^\\alpha L_t^{1-\\alpha},\\alpha\\in(0,1) \\\\]\n\n\\\\[ A_t = Bk_t^{\\phi(1-\\alpha)}, B>0,\\phi\\in(0,1) \\\\]\n\n\\\\[ K_{t+1} = sY_t, s\\in(0,1) \\\\]\n\n\\\\[ L_{t+1} = (1+n)L_t, n>-1 \\\\]\n\n\nwhere $K_t$ is capital; $L_t$ is labor (growing with a constant rate of $n$); $A_t$ is total factor productivity; $Y_t = F(K_t,A_t,L_t)$ is GDP; $k_t = K_t/L_t$ and s is the savings rate. \n\nNote that TFP is dependent on capital accumulation and therefore an increase in capital effects income accumulation via 2 channels: (i) directly through an increase in k and (ii) indirectly through an increase in productivity. Hence all increases in capital will have a larger effect on income accumulation in this model compared to the simple Solow model in part 1. 
\n\nThe transition equation then becomes\n\n\\\\[ k_{t+1} = (\\frac{sB}{1+n})k_t^{\\alpha+\\phi*(1-\\alpha)}\\\\]\n\n**Steady state** for $k_t$ and $y_t$ is derived below", "_____no_output_____" ] ], [ [ "#Below is run to get LaTex format\nsm.init_printing(use_unicode=True)", "_____no_output_____" ], [ "# Define additional variables\n# Define varibles\nk = sm.symbols('k')\ny = sm.symbols('y')\nK = sm.symbols('K')\nB = sm.symbols('B')\nL = sm.symbols('L')\nY = sm.symbols('Y')\nalpha = sm.symbols('alpha')\ndelta = sm.symbols('delta')\ns = sm.symbols('s')\nn = sm.symbols('n')\nA = sm.symbols('A')\nphi = sm.symbols('phi')\n\n# Define income equation and transition equation and solve steady state for capital per capita\ny=B*k**(alpha + phi*(1-alpha))\ntransition=sm.Eq(k,((s*B)/(1+n)*k**(alpha+phi*(1-alpha))))\nsteadystate_k=sm.solve(transition,k)\nprint('Steady state of k is')\nsteadystate_k\n", "Steady state of k is\n" ], [ "print('Steady state for y is')\nsteadystate_y=y.subs({'k':steadystate_k[0]})\nsteadystate_y", "Steady state for y is\n" ] ], [ [ "Above we have found the expression for steady state in capital per capita and income per capits. We test our steady state expressions by plugging in arbitrary values for each parameter in each expression: $B=10$, $s=0.20$, $n=2\\%$, $\\alpha=\\frac{1}{3}$, $\\phi=0.56$. We use a simple function: solution(x), to retrieve a specific steady state value for each. See below.", "_____no_output_____" ], [ "We notice that this value seems plausible and proceed by solving for optimal steady states using more sophisticated functions. Again we start by choosing arbitrary parameter values.", "_____no_output_____" ] ], [ [ "print('Inserting the values gives the steadystate value of:')\nSolution=sm.lambdify((B,s,n,alpha,phi),steadystate_k)\nSolution(10, 0.2, 0.02, 1/3, 0.4)", "Inserting the values gives the steadystate value of:\n" ], [ "s=0.2\nn=0.02\nphi=0.4\nalpha=1/3\nB=10\n\nf = lambda k: A*k**alpha\nopt_steadystate_k= lambda steadystate_k: steadystate_k - ( ((s*B)/(1+n))*steadystate_k**(alpha+phi*(1-alpha)) )\nresult = optimize.root_scalar(opt_steadystate_k,bracket=[0.1,10],method='brentq')\n\nprint('the steady state for k is', result.root)\n", "the steady state for k is 5.383622007028104\n" ] ], [ [ "Given the specified parameter values, steady state of capital per capita is 5.38.", "_____no_output_____" ], [ "We now wish to investigate how the steady state value of capital per capita changes when the savings rate changes by an arbitrary amount.", "_____no_output_____" ] ], [ [ "n=0.02\nphi=0.4\nalpha=1/3\nB=10\nsavings_rate = [0.05,0.1,0.2,0.25,0.4,0.5,0.75,0.9]\nfor s1 in savings_rate:\n f = lambda k: A*k**alpha\n opt_steadystate_k= lambda steadystate_k: steadystate_k - ( ((s1*B)/(1+n))*steadystate_k**(alpha+phi*(1-alpha)) )\n result = optimize.root_scalar(opt_steadystate_k,bracket=[0.1,1000],method='brentq')\n print(f'for s = {s1:.3f} the steady state for k is',result.root)\n", "for s = 0.050 the steady state for k is 0.16823818771955754\nfor s = 0.100 the steady state for k is 0.951698907128675\nfor s = 0.200 the steady state for k is 5.383622007028105\nfor s = 0.250 the steady state for k is 9.40480060381148\nfor s = 0.400 the steady state for k is 30.4543650281175\nfor s = 0.500 the steady state for k is 53.20158626129952\nfor s = 0.750 the steady state for k is 146.60633232770348\nfor s = 0.900 the steady state for k is 231.2628344322685\n" ] ], [ [ "Similar to part 1, it is instantly clear that the savings rate, s, has a substantial impact on 
the capital accumulation in the society. Below is a visualisation of how the parameters impact the accumulation of capital as well as income.\n\nOur next step is to create an interactive graph plotting the transition curve and the 45 degree line including sliders for each of the parameters in the model. The purpose of this is to see the effect on the steady state point from changing one or more parameters. In order to do so, we first define the range of each slider. Then we set up the interactive figure and finally we set up the appropriate sliders.", "_____no_output_____" ] ], [ [ "alpha = np.arange(0,1)\ns = np.arange(0,1)\nk = np.arange(100)\nB = np.arange(0,100)\nn = np.arange(0,0.2)\nphi = np.arange(0,1)\ny = np.arange(100)\n\ndef interactive_transition(B,s,alpha,n,phi,k):\n\n\n    k1 = ( ((s*B)/(1+n))*k**(alpha+phi*(1-alpha)) )\n    m = k\n    #plt.plot(m)\n    \n    \n    \n    fig = plt.figure(dpi=90)\n    ax = fig.add_subplot(1,1,1)\n    ax.plot(k1, label = 'Capital')\n    ax.plot(m, label = '45 degrees line')\n    ax.set_xlim([0,15]) # fixed x range\n    ax.set_ylim([0,15]) # fixed y range\n    plt.xlabel('k in period t')\n    plt.ylabel('k in period t+1')\n    plt.title('Transition diagram')\n    ax.grid(True)\n    ax.legend(loc='upper left')\n    \n    y1=B*( ((s*B)/(1+n))*k**(alpha+phi*(1-alpha)) )**(alpha + phi*(1-alpha))\n    \n    fig = plt.figure(dpi=90)\n    ax = fig.add_subplot(1,1,1)\n    ax.plot(y1, label='Income per capita')\n    ax.set_xlim([0,15]) # fixed x range\n    ax.set_ylim([0,100]) # fixed y range\n    plt.xlabel('k ')\n    plt.ylabel('y ')\n    plt.title('Income per capita')\n    ax.grid(True)\n    ax.legend(loc='upper left')\n\nwidgets.interact(interactive_transition,\n                 #k1=widgets.fixed(k1),\n                 alpha=widgets.FloatSlider(description=\"$alpha$\", min=0, max=1, step=0.005, value=0.33),\n                 s=widgets.FloatSlider(description=\"$s$\", min=0, max=0.7, step=0.005, value=0.2),\n                 B=widgets.FloatSlider(description=\"$B$\", min=0, max=50, step=1, value=10),\n                 n=widgets.FloatSlider(description=\"$n$\", min=0, max=0.2, step=0.01, value=0.02),\n                 phi=widgets.FloatSlider(description=\"$phi$\", min=0, max=1, step=0.01, value=0.56),\n                 k=widgets.fixed(k)\n);", "_____no_output_____" ] ], [ [ "Similar to part 1, we notice that as B, s, alpha and phi increase, so does the steady state for capital accumulation. Conversely, when n increases, the steady state for capital accumulation decreases. Importantly, we notice that when total factor productivity is dependent on k, changes in either k or the productivity factor, B, will have a significantly higher effect on capital accumulation and thereby income. \n\nWe can therefore conclude that an economy with a productive externality per capita will increase capital accumulation.", "_____no_output_____" ] ] ]
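As a cross-check on the interactive diagrams and the numerical root-finding above (our own addition, not from the original notebook): setting k_{t+1} = k_t = k* in the transition equations gives closed-form steady states, k* = (sB/(n+delta))^(1/(1-alpha)) for the part 1 model and k* = (sB/(1+n))^(1/((1-alpha)(1-phi))) for the part 2 model. The snippet below evaluates the part 2 expression with the parameter values used earlier and reproduces the brentq root:

# Closed-form steady state for k = (s*B/(1+n)) * k**(alpha + phi*(1-alpha))
s, B, n, alpha, phi = 0.2, 10, 0.02, 1/3, 0.4
k_star = (s*B/(1+n))**(1/((1-alpha)*(1-phi)))
print(k_star)   # ~5.3836, matching the numerical steady state found above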
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cb1473f12fde5563b75e60861ff1c4f49d87928f
53,022
ipynb
Jupyter Notebook
codes/tomography/shadow_tomography2.ipynb
vutuanhai237/QuantumTomography
52916096482d7e7cd29782c049478bbba901d9bd
[ "MIT" ]
2
2021-12-11T07:49:46.000Z
2022-03-04T07:11:30.000Z
codes/tomography/shadow_tomography2.ipynb
vutuanhai237/QuantumTomography
52916096482d7e7cd29782c049478bbba901d9bd
[ "MIT" ]
null
null
null
codes/tomography/shadow_tomography2.ipynb
vutuanhai237/QuantumTomography
52916096482d7e7cd29782c049478bbba901d9bd
[ "MIT" ]
null
null
null
121.889655
27,094
0.846139
[ [ [ "from itertools import combinations\nimport qiskit\nimport numpy as np\nimport tqix\nimport sys\n\n\ndef generate_u_pauli(num_qubits):\n lis = [0, 1, 2]\n coms = []\n if num_qubits == 2:\n for i in lis:\n for j in lis:\n coms.append([i, j])\n if num_qubits == 3:\n for i in lis:\n for j in lis:\n for k in lis:\n coms.append([i, j, k])\n if num_qubits == 4:\n for i in lis:\n for j in lis:\n for k in lis:\n for l in lis:\n coms.append([i, j, k, l])\n if num_qubits == 5:\n for i in lis:\n for j in lis:\n for k in lis:\n for l in lis:\n for m in lis:\n coms.append([i, j, k, l, m])\n sigma = [tqix.sigmax(), tqix.sigmay(), tqix.sigmaz()]\n Us = []\n for com in coms:\n U = sigma[com[0]]\n for i in range(1, num_qubits):\n U = np.kron(U, sigma[com[i]])\n Us.append(U)\n \n return Us[: 3**num_qubits]\n\n\ndef create_basic_vector(num_qubits: int):\n \"\"\"Generate list of basic vectors\n\n Args:\n num_qubits (int): number of qubits\n\n Returns:\n np.ndarray: |00...0>, |00...1>, ..., |11...1>\n \"\"\"\n bs = []\n for i in range(0, 2**num_qubits):\n b = np.zeros((2**num_qubits, 1))\n b[i] = 1\n bs.append(b)\n return bs\n\n\ndef calculate_sigma(U: np.ndarray, b: np.ndarray):\n \"\"\"Calculate measurement values\n\n Args:\n U (np.ndarray): operator\n b (np.ndarray): basic vector\n\n Returns:\n np.ndarray: sigma operator\n \"\"\"\n return (np.conjugate(np.transpose(U)) @ b @ np.conjugate(np.transpose(b)) @ U)\n\n# def calculate_mu(density_matrix):\n# M = np.zeros((2**num_qubits, 2**num_qubits), dtype=np.complex128)\n# for i in range(0, num_observers):\n# for j in range(0, 2**num_qubits):\n# k = sigmass[i][j]\n# M += np.trace(k @ density_matrix) * k\n# M /= num_observers\n# return M\n\n\ndef calculate_mu_inverse(density_matrix, num_qubits):\n k = 3*density_matrix - \\\n np.trace(density_matrix) * np.identity(2 **\n num_qubits, dtype=np.complex128)\n # M = k.copy()\n # for i in range(1, num_qubits):\n # M = np.kron(M, k)\n return k\ndef self_tensor(matrix, n):\n product = matrix\n for i in range(1, n):\n product = np.kron(product, matrix)\n return product\n", "_____no_output_____" ], [ "num_qubits = 4\npsi = 2*np.random.rand(2**num_qubits)\npsi = psi / np.linalg.norm(psi)\nrho = qiskit.quantum_info.DensityMatrix(psi).data\n\ndef shadow(num_experiments):\n\n num_observers = 3**num_qubits\n Us, bs = [], []\n bs = create_basic_vector(num_qubits)\n Us = generate_u_pauli(num_qubits)\n count_i = [0] * (num_observers)\n sum_b_s = [np.zeros((2**num_qubits, 2**num_qubits),\n dtype=np.complex128)] * (num_observers)\n for i in range(0, num_experiments):\n r = np.random.randint(0, num_observers)\n count_i[r] += 1\n U = Us[r]\n sum_b = np.zeros((2**num_qubits, 2**num_qubits), dtype=np.complex128)\n for j in range(0, 2**num_qubits):\n k = calculate_sigma(U, bs[j])\n sum_b_s[r] += np.trace(k @ rho)*calculate_mu_inverse(k, num_qubits)\n temp = sum_b_s[r].copy()\n sum_b_s[r] = (np.conjugate(np.transpose(\n temp)) @ temp) / (np.trace(np.conjugate(np.transpose(temp)) @ temp))\n\n ps = np.zeros(num_observers)\n rho_hat = np.zeros((2**num_qubits, 2**num_qubits), dtype=np.complex128)\n rho_hat_variant = 0\n for i in range(0, num_observers):\n ps[i] = count_i[i] / num_experiments\n traceA = np.trace(self_tensor(tqix.sigmaz(), num_qubits) @ sum_b_s[i])\n traceB = np.trace(self_tensor(tqix.sigmaz(), num_qubits) @ rho)\n rho_hat_variant += ps[i] * (traceA - traceB)**2\n rho_hat += ps[i] * sum_b_s[i]\n return rho_hat_variant, rho_hat\n# new_rho_hat = (np.conjugate(np.transpose(\n# rho_hat)) @ rho_hat) / 
(np.trace(np.conjugate(np.transpose(rho_hat)) @ rho_hat))\n# fidelity = qtm.base.trace_fidelity(rho, new_rho_hat)\n# trace = qtm.base.trace_distance(rho, new_rho_hat)\n# return trace, fidelity, rho, new_rho_hat\n\n# traces = []\n# fidelities = []\n# rho_hats = []\n# for i in range(0, 1):\n# trace, fidelity, rho, new_rho_hat = shadow_tomo()\n# traces.append(trace)\n# fidelities.append(fidelity)\n# rho_hats.append(new_rho_hat.copy())\n\n# print(np.mean(traces))\n# print(np.mean(fidelities))\n# print(np.std(traces))\n# print(np.std(fidelities))\n# min_rho_hat = (rho_hats[np.argmin(traces)])\n", "_____no_output_____" ], [ "rho_hat_variantss = []\nnoe_large = [10**2, 10**3, 10**4, 10**5]\nfor noe in noe_large:\n rho_hat_variants = []\n for i in range(0, 10):\n rho_hat_variant, rho_hat = shadow(noe)\n rho_hat_variants.append(rho_hat_variant)\n rho_hat_variantss.append(rho_hat_variants)\n", "_____no_output_____" ], [ "np.savetxt(\"./rho_hat_variantss\" + str(num_qubits) + \".csv\",\n rho_hat_variantss,\n delimiter=\",\")\n\naverages_var = [0]*4\naverages_std = [0]*4\nfor i in range(len(noe_large)):\n averages_var[i] = np.mean(rho_hat_variantss[i])\n averages_std[i] = np.std(rho_hat_variantss[i])\nprint(averages_var)\nprint(averages_std)\n", "[(0.008687697988279635+0j), (0.009658731787962236+0j), (0.008910386952710084+0j), (0.00971607342606371+0j)]\n[0.0063233555101296236, 0.002370011341539773, 0.0005752209842798887, 0.0019561615997392896]\n" ], [ "import matplotlib.pyplot as plt\n\nplt.plot(noe_large, averages_var)\nplt.subplot(2, 1, 1)\nplt.plot(noe_large, averages_var)\nplt.xscale('log')\nplt.yscale('log')\nplt.xlabel('NoE')\nplt.ylabel('Var')\nplt.show()\n", "C:\\Users\\HAI\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python39\\site-packages\\matplotlib\\cbook\\__init__.py:1298: ComplexWarning: Casting complex values to real discards the imaginary part\n return np.asarray(x, float)\n" ] ], [ [ "L = 2, W_chain, Adam\n\nCalculate $var(Z\\otimes Z) = (\\langle\\tilde{\\psi}|ZZ|\\tilde{\\psi}\\rangle^2 - \\langle\\psi|ZZ|\\psi\\rangle^2)$\n", "_____no_output_____" ] ], [ [ "\nimport sys\nsys.path.insert(1, '../')\nimport qtm.fubini_study\nimport qtm.nqubit\nimport qtm.base\nnum_layers = 2\nthetas = np.ones(num_layers*num_qubits*4)\n\nqc = qiskit.QuantumCircuit(num_qubits, num_qubits)\nqc.initialize(psi, range(0, num_qubits))\n\nloss_values = []\nthetass = []\n\nfor i in range(0, 400):\n if i % 20 == 0:\n print('W_chain: (' + str(num_layers) +\n ',' + str(num_qubits) + '): ' + str(i))\n\n grad_loss = qtm.base.grad_loss(\n qc,\n qtm.nqubit.create_Wchain_layerd_state,\n thetas, r=1/2, s=np.pi/2, num_layers=num_layers)\n if i == 0:\n m, v = list(np.zeros(thetas.shape[0])), list(\n np.zeros(thetas.shape[0]))\n thetas = qtm.base.adam(thetas, m, v, i, grad_loss)\n thetass.append(thetas.copy())\n qc_copy = qtm.nqubit.create_Wchain_layerd_state(\n qc.copy(), thetas, num_layers)\n loss = qtm.base.loss_basis(qtm.base.measure(\n qc_copy, list(range(qc_copy.num_qubits))))\n loss_values.append(loss)\n", "W_chain: (2,4): 0\nW_chain: (2,4): 20\nW_chain: (2,4): 40\nW_chain: (2,4): 60\nW_chain: (2,4): 80\nW_chain: (2,4): 100\nW_chain: (2,4): 120\nW_chain: (2,4): 140\nW_chain: (2,4): 160\nW_chain: (2,4): 180\nW_chain: (2,4): 200\nW_chain: (2,4): 220\nW_chain: (2,4): 240\nW_chain: (2,4): 260\nW_chain: (2,4): 280\nW_chain: (2,4): 300\nW_chain: (2,4): 320\nW_chain: (2,4): 340\nW_chain: (2,4): 360\nW_chain: (2,4): 380\n" ], [ "variances = []\nfor thetas in 
thetass:\n qc = qiskit.QuantumCircuit(num_qubits, num_qubits)\n qc = qtm.nqubit.create_Wchain_layerd_state(\n qc, thetas, num_layers=num_layers).inverse()\n psi_hat = qiskit.quantum_info.Statevector.from_instruction(qc).data\n variances.append((np.conjugate(np.transpose(psi_hat)) @ self_tensor(tqix.sigmaz(), num_qubits) @ psi_hat)\n ** 2 - (np.conjugate(np.transpose(psi)) @ self_tensor(tqix.sigmaz(), num_qubits) @ psi)**2)\nplt.plot(variances)\n\nnp.savetxt(\"./thetass\"+ str(num_qubits) + \".csv\",\n thetass,\n delimiter=\",\")\nnp.savetxt(\"./variances\" + str(num_qubits) + \".csv\",\n variances,\n delimiter=\",\")", "_____no_output_____" ], [ "min((abs(x), x) for x in variances)[1]\n", "_____no_output_____" ], [ "variances[-1]\n", "_____no_output_____" ] ] ]
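A note on the `calculate_mu_inverse` helper defined above: it implements the inversion map X -> 3X - tr(X)*I that classical shadows use for a single qubit measured in a random Pauli basis. As a self-contained sanity check (our own sketch, independent of the notebook's qiskit/tqix code), the snippet below samples random Pauli-basis measurements of one qubit and verifies that averaging the inverted snapshots reconstructs the density matrix:

import numpy as np

rng = np.random.default_rng(0)

# Random single-qubit pure state rho = |psi><psi|
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

# Eigenbases of the Pauli operators (columns are eigenvectors)
bases = [
    np.array([[1, 1], [1, -1]]) / np.sqrt(2),    # X
    np.array([[1, 1], [1j, -1j]]) / np.sqrt(2),  # Y
    np.eye(2),                                   # Z
]

est = np.zeros((2, 2), dtype=complex)
shots = 20000
for _ in range(shots):
    U = bases[rng.integers(3)]
    # Born probabilities of the two outcomes in this basis
    probs = np.real([U[:, b].conj() @ rho @ U[:, b] for b in range(2)])
    probs = np.clip(probs, 0, None)
    b = rng.choice(2, p=probs / probs.sum())
    proj = np.outer(U[:, b], U[:, b].conj())
    est += 3 * proj - np.trace(proj) * np.eye(2)  # same inversion as calculate_mu_inverse
est /= shots

print(np.abs(est - rho).max())   # shrinks toward 0 as shots grows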
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
cb147beaeae101f58b0fe7d9caf78f18286785ee
22,032
ipynb
Jupyter Notebook
lab/01_CreateAlgorithmContainer/01_Creating a Classifier Container.ipynb
doctorai-in/amazon-sagemaker-workshop
83d9d3d1c3ac32741d9d8723da321a1b6ebe1e9e
[ "MIT-0" ]
null
null
null
lab/01_CreateAlgorithmContainer/01_Creating a Classifier Container.ipynb
doctorai-in/amazon-sagemaker-workshop
83d9d3d1c3ac32741d9d8723da321a1b6ebe1e9e
[ "MIT-0" ]
null
null
null
lab/01_CreateAlgorithmContainer/01_Creating a Classifier Container.ipynb
doctorai-in/amazon-sagemaker-workshop
83d9d3d1c3ac32741d9d8723da321a1b6ebe1e9e
[ "MIT-0" ]
1
2021-06-08T10:34:04.000Z
2021-06-08T10:34:04.000Z
37.405772
526
0.592048
[ [ [ "# Building a docker container for training/deploying our classifier\n\nIn this exercise we'll create a Docker image that will have the required code for training and deploying a ML model. In this particular example, we'll use scikit-learn (https://scikit-learn.org/) and the **Random Forest Tree** implementation of that library to train a flower classifier. The dataset used in this experiment is a toy dataset called Iris (http://archive.ics.uci.edu/ml/datasets/iris). The clallenge itself is very basic, so you can focus on the mechanics and the features of this automated environment.\n\nA first pipeline will be executed at the end of this exercise, automatically. It will get the assets you'll push to a Git repo, build this image and push it to ECR, a docker image repository, used by SageMaker.\n\n> **Question**: Why would I create a Scikit-learn container from scratch if SageMaker already offerst one (https://docs.aws.amazon.com/sagemaker/latest/dg/sklearn.html). \n> **Answer**: This is an exercise and the idea here is also to show you how you can create your own container. In a real-life scenario, the best approach is to use the native container offered by SageMaker.\n\n\n## Why do I have to do this? If you're asking yourself this question you probably don't need to create a custom conainer. If that is the case, you can skip this section by clicking on the link bellow and use the built-in container with XGBoost to run the automated pipeline\n> [Skip this section](../02_TrainYourModel/01_Training%20our%20model.ipynb) and start training your ML model\n", "_____no_output_____" ], [ "## PART 1 - Creating the assets required to build/test a docker image", "_____no_output_____" ], [ "### 1.1 Let's start by creating the training script!\n\nAs you can see, this is a very basic example of Scikit-Learn. Nothing fancy.", "_____no_output_____" ] ], [ [ "%%writefile train.py\nimport os\nimport pandas as pd\nimport re\nimport joblib\nimport json\nimport traceback\nimport sys\nfrom sklearn.ensemble import RandomForestClassifier\n\ndef load_dataset(path):\n # Take the set of files and read them all into a single pandas dataframe\n files = [ os.path.join(path, file) for file in os.listdir(path) ]\n print(files)\n \n if len(files) == 0:\n raise ValueError(\"Invalid # of files in dir: {}\".format(path))\n\n raw_data = [ pd.read_csv(file, sep=\",\", header=None ) for file in files ]\n data = pd.concat(raw_data)\n print(data.head(10))\n\n # labels are in the first column\n y = data.iloc[:,0]\n X = data.iloc[:,1:]\n return X,y\n \ndef start(args):\n print(\"Training mode\")\n\n try:\n X_train, y_train = load_dataset(args.train)\n X_test, y_test = load_dataset(args.validation)\n \n hyperparameters = {\n \"max_depth\": args.max_depth,\n \"verbose\": 1, # show all logs\n \"n_jobs\": args.n_jobs,\n \"n_estimators\": args.n_estimators\n }\n print(\"Training the classifier\")\n model = RandomForestClassifier()\n model.set_params(**hyperparameters)\n model.fit(X_train, y_train)\n print(\"Score: {}\".format( model.score(X_test, y_test)) )\n joblib.dump(model, open(os.path.join(args.model_dir, \"iris_model.pkl\"), \"wb\"))\n \n except Exception as e:\n # Write out an error file. 
This will be returned as the failureReason in the\n # DescribeTrainingJob result.\n trc = traceback.format_exc()\n output_path=\"/tmp/\"\n with open(os.path.join(output_path, \"failure\"), \"w\") as s:\n s.write(\"Exception during training: \" + str(e) + \"\\\\n\" + trc)\n \n # Printing this causes the exception to be in the training job logs, as well.\n print(\"Exception during training: \" + str(e) + \"\\\\n\" + trc, file=sys.stderr)\n \n # A non-zero exit code causes the training job to be marked as Failed.\n sys.exit(255)", "_____no_output_____" ] ], [ [ "### 1.2 Ok. Lets then create the handler. The **Inference Handler** is how we use the SageMaker Inference Toolkit to encapsulate our code and expose it as a SageMaker container.\nSageMaker Inference Toolkit: https://github.com/aws/sagemaker-inference-toolkit", "_____no_output_____" ] ], [ [ "%%writefile handler.py\nimport os\nimport sys\nimport joblib\nfrom sagemaker_inference.default_inference_handler import DefaultInferenceHandler\nfrom sagemaker_inference.default_handler_service import DefaultHandlerService\nfrom sagemaker_inference import content_types, errors, transformer, encoder, decoder\n\nclass HandlerService(DefaultHandlerService, DefaultInferenceHandler):\n def __init__(self):\n op = transformer.Transformer(default_inference_handler=self)\n super(HandlerService, self).__init__(transformer=op)\n \n ## Loads the model from the disk\n def default_model_fn(self, model_dir):\n model_filename = os.path.join(model_dir, \"iris_model.pkl\")\n return joblib.load(open(model_filename, \"rb\"))\n \n ## Parse and check the format of the input data\n def default_input_fn(self, input_data, content_type):\n if content_type != \"text/csv\":\n raise Exception(\"Invalid content-type: %s\" % content_type)\n return decoder.decode(input_data, content_type).reshape(1,-1)\n \n ## Run our model and do the prediction\n def default_predict_fn(self, payload, model):\n return model.predict( payload ).tolist()\n \n ## Gets the prediction output and format it to be returned to the user\n def default_output_fn(self, prediction, accept):\n if accept != \"text/csv\":\n raise Exception(\"Invalid accept: %s\" % accept)\n return encoder.encode(prediction, accept)", "_____no_output_____" ] ], [ [ "### 1.3 Now we need to create the entrypoint of our container. The main function\n\nWe'll use **SageMaker Training Toolkit** (https://github.com/aws/sagemaker-training-toolkit) to work with the arguments and environment variables defined by SageMaker. 
This library will make our code simpler.", "_____no_output_____" ] ], [ [ "%%writefile main.py\nimport train\nimport argparse\nimport sys\nimport os\nimport traceback\nfrom sagemaker_inference import model_server\nfrom sagemaker_training import environment\n\nif __name__ == \"__main__\":\n if len(sys.argv) < 2 or ( not sys.argv[1] in [ \"serve\", \"train\" ] ):\n raise Exception(\"Invalid argument: you must inform 'train' for training mode or 'serve' predicting mode\") \n \n if sys.argv[1] == \"train\":\n \n env = environment.Environment()\n \n parser = argparse.ArgumentParser()\n # https://github.com/aws/sagemaker-training-toolkit/blob/master/ENVIRONMENT_VARIABLES.md\n parser.add_argument(\"--max-depth\", type=int, default=10)\n parser.add_argument(\"--n-jobs\", type=int, default=env.num_cpus)\n parser.add_argument(\"--n-estimators\", type=int, default=120)\n \n # reads input channels training and testing from the environment variables\n parser.add_argument(\"--train\", type=str, default=env.channel_input_dirs[\"train\"])\n parser.add_argument(\"--validation\", type=str, default=env.channel_input_dirs[\"validation\"])\n\n parser.add_argument(\"--model-dir\", type=str, default=env.model_dir)\n \n args,unknown = parser.parse_known_args()\n train.start(args)\n else:\n model_server.start_model_server(handler_service=\"serving.handler\")", "_____no_output_____" ] ], [ [ "### 1.4 Then, we can create the Dockerfile\nJust pay attention to the packages we'll install in our container. Here, we'll use **SageMaker Inference Toolkit** (https://github.com/aws/sagemaker-inference-toolkit) and **SageMaker Training Toolkit** (https://github.com/aws/sagemaker-training-toolkit) to prepare the container for training/serving our model. **By serving** you can understand: exposing our model as a webservice that can be called through an api call.", "_____no_output_____" ] ], [ [ "%%writefile Dockerfile\nFROM python:3.7-buster\n\n# Set a docker label to advertise multi-model support on the container\nLABEL com.amazonaws.sagemaker.capabilities.multi-models=false\n# Set a docker label to enable container to use SAGEMAKER_BIND_TO_PORT environment variable if present\nLABEL com.amazonaws.sagemaker.capabilities.accept-bind-to-port=true\n\nRUN apt-get update -y && apt-get -y install --no-install-recommends default-jdk\nRUN rm -rf /var/lib/apt/lists/*\n\nRUN pip --no-cache-dir install multi-model-server sagemaker-inference sagemaker-training\nRUN pip --no-cache-dir install pandas numpy scipy scikit-learn\n\nENV PYTHONUNBUFFERED=TRUE\nENV PYTHONDONTWRITEBYTECODE=TRUE\nENV PYTHONPATH=\"/opt/ml/code:${PATH}\"\n\nCOPY main.py /opt/ml/code/main.py\nCOPY train.py /opt/ml/code/train.py\nCOPY handler.py /opt/ml/code/serving/handler.py\n\nENTRYPOINT [\"python3\", \"/opt/ml/code/main.py\"]", "_____no_output_____" ] ], [ [ "### 1.5 Finally, let's create the buildspec\nThis file will be used by CodeBuild for creating our Container image. \nWith this file, CodeBuild will run the \"docker build\" command, using the assets we created above, and deploy the image to the Registry. 
\nAs you can see, each command is a bash command that will be executed from inside a Linux Container.", "_____no_output_____" ] ], [ [ "%%writefile buildspec.yml\nversion: 0.2\n\nphases:\n install:\n runtime-versions:\n docker: 18\n\n pre_build:\n commands:\n - echo Logging in to Amazon ECR...\n - $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION)\n build:\n commands:\n - echo Build started on `date`\n - echo Building the Docker image...\n - docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .\n - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG\n\n post_build:\n commands:\n - echo Build completed on `date`\n - echo Pushing the Docker image...\n - echo docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG\n - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG\n - echo $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG > image.url\n - echo Done\nartifacts:\n files:\n - image.url\n name: image_url\n discard-paths: yes", "_____no_output_____" ] ], [ [ "## PART 2 - Local Test: Let's build the image locally and do some tests\n### 2.1 Building the image locally, first\nEach SageMaker Jupyter Notebook already has a **docker** envorinment pre-installed. So we can play with Docker containers just using the same environment.", "_____no_output_____" ] ], [ [ "!docker build -f Dockerfile -t iris_model:1.0 .", "_____no_output_____" ] ], [ [ "### 2.2 Now that we have the algorithm image we can run it to train/deploy a model", "_____no_output_____" ], [ "### Then, we need to prepare the dataset\nYou'll see that we're splitting the dataset into training and validation and also saving these two subsets of the dataset into csv files. These files will be then uploaded to an S3 Bucket and shared with SageMaker.", "_____no_output_____" ] ], [ [ "!rm -rf input\n!mkdir -p input/data/train\n!mkdir -p input/data/validation\n\nimport pandas as pd\nimport numpy as np\n\nfrom sklearn import datasets\nfrom sklearn.model_selection import train_test_split\n\niris = datasets.load_iris()\n\ndataset = np.insert(iris.data, 0, iris.target,axis=1)\n\ndf = pd.DataFrame(data=dataset, columns=[\"iris_id\"] + iris.feature_names)\nX = df.iloc[:,1:]\ny = df.iloc[:,0]\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)\n\ntrain_df = X_train.copy()\ntrain_df.insert(0, \"iris_id\", y_train)\ntrain_df.to_csv(\"input/data/train/training.csv\", sep=\",\", header=None, index=None)\n\ntest_df = X_test.copy()\ntest_df.insert(0, \"iris_id\", y_test)\ntest_df.to_csv(\"input/data/validation/testing.csv\", sep=\",\", header=None, index=None)\n\ndf.head()", "_____no_output_____" ] ], [ [ "### 2.3 Just a basic local test, using the local Docker daemon\nHere we will simulate SageMaker calling our docker container for training and serving. 
We'll do that using the built-in Docker Daemon of the Jupyter Notebook Instance.", "_____no_output_____" ] ], [ [ "!rm -rf input/config && mkdir -p input/config", "_____no_output_____" ], [ "%%writefile input/config/hyperparameters.json\n{\"max_depth\": 20, \"n_jobs\": 4, \"n_estimators\": 120}", "_____no_output_____" ], [ "%%writefile input/config/resourceconfig.json\n{\"current_host\": \"localhost\", \"hosts\": [\"algo-1-kipw9\"]}", "_____no_output_____" ], [ "%%writefile input/config/inputdataconfig.json\n{\"train\": {\"TrainingInputMode\": \"File\"}, \"validation\": {\"TrainingInputMode\": \"File\"}}", "_____no_output_____" ], [ "%%time\n!rm -rf model/\n!mkdir -p model\n\nprint( \"Training...\")\n!docker run --rm --name \"my_model\" \\\n -v \"$PWD/model:/opt/ml/model\" \\\n -v \"$PWD/input:/opt/ml/input\" iris_model:1.0 train", "_____no_output_____" ] ], [ [ "### 2.4 This is the serving test. It simulates an Endpoint exposed by Sagemaker\n\nAfter you execute the next cell, this Jupyter notebook will freeze. A webservice will be exposed at the port 8080. ", "_____no_output_____" ] ], [ [ "!docker run --rm --name \"my_model\" \\\n -p 8080:8080 \\\n -v \"$PWD/model:/opt/ml/model\" \\\n -v \"$PWD/input:/opt/ml/input\" iris_model:1.0 serve", "_____no_output_____" ] ], [ [ "> While the above cell is running, click here [TEST NOTEBOOK](02_Testing%20our%20local%20model%20server.ipynb) to run some tests.\n\n> After you finish the tests, press **STOP**", "_____no_output_____" ], [ "## PART 3 - Integrated Test: Everything seems ok, now it's time to put all together\n\nWe'll start by running a local **CodeBuild** test, to check the buildspec and also deploy this image into the container registry. Remember that SageMaker will only see images published to ECR.\n", "_____no_output_____" ] ], [ [ "import boto3\n\nsts_client = boto3.client(\"sts\")\nsession = boto3.session.Session()\n\naccount_id = sts_client.get_caller_identity()[\"Account\"]\nregion = session.region_name\ncredentials = session.get_credentials()\ncredentials = credentials.get_frozen_credentials()\n\nrepo_name=\"iris-model\"\nimage_tag=\"test\"", "_____no_output_____" ], [ "!sudo rm -rf tests && mkdir -p tests\n!cp handler.py main.py train.py Dockerfile buildspec.yml tests/\nwith open(\"tests/vars.env\", \"w\") as f:\n f.write(\"AWS_ACCOUNT_ID=%s\\n\" % account_id)\n f.write(\"IMAGE_TAG=%s\\n\" % image_tag)\n f.write(\"IMAGE_REPO_NAME=%s\\n\" % repo_name)\n f.write(\"AWS_DEFAULT_REGION=%s\\n\" % region)\n f.write(\"AWS_ACCESS_KEY_ID=%s\\n\" % credentials.access_key)\n f.write(\"AWS_SECRET_ACCESS_KEY=%s\\n\" % credentials.secret_key)\n f.write(\"AWS_SESSION_TOKEN=%s\\n\" % credentials.token )\n f.close()\n\n!cat tests/vars.env", "_____no_output_____" ], [ "%%time\n\n!/tmp/aws-codebuild/local_builds/codebuild_build.sh \\\n -a \"$PWD/tests/output\" \\\n -s \"$PWD/tests\" \\\n -i \"samirsouza/aws-codebuild-standard:3.0\" \\\n -e \"$PWD/tests/vars.env\" \\\n -c", "_____no_output_____" ] ], [ [ "> Now that we have an image deployed in the ECR repo we can also run some local tests using the SageMaker Estimator.\n\n> Click on this [TEST NOTEBOOK](03_Testing%20the%20container%20using%20SageMaker%20Estimator.ipynb) to run some tests.\n\n> After you finishing the tests, come back to **this notebook** to push the assets to the Git Repo\n", "_____no_output_____" ], [ "## PART 4 - Let's push all the assets to the Git Repo connected to the Build pipeline\nThere is a CodePipeine configured to keep listeining to this Git Repo and start a new Building 
process with CodeBuild.", "_____no_output_____" ] ], [ [ "%%bash\ncd ../../../mlops\ngit branch iris_model\ngit checkout iris_model\ncp $OLDPWD/buildspec.yml $OLDPWD/handler.py $OLDPWD/train.py $OLDPWD/main.py $OLDPWD/Dockerfile .\n\ngit add --all\ngit commit -a -m \" - files for building an iris model image\"\ngit push --set-upstream origin iris_model", "_____no_output_____" ] ], [ [ "> Alright, now open the AWS console and go to the **CodePipeline** dashboard. Look for a pipeline called **mlops-iris-model**. This pipeline will deploy the final image to an ECR repo. When this process finishes, open the **Elastic Container Registry** dashboard, in the AWS console, and check if you have an image called **iris-model:latest**. If yes, you can go to the next exercise. If not, wait a little more.", "_____no_output_____" ] ] ]
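A usage note for the serving test in section 2.4 (our own sketch, not part of the original notebook): while the container from that cell is running, the model can be probed from a second terminal or notebook. The snippet assumes the standard SageMaker serving contract that multi-model-server exposes — GET /ping for health and POST /invocations for predictions — and the four comma-separated feature values are just an illustrative Iris observation:

import requests

# Health check: returns HTTP 200 once the model server is up
print(requests.get("http://localhost:8080/ping").status_code)

# One Iris observation as CSV, matching the text/csv contract in handler.py
response = requests.post(
    "http://localhost:8080/invocations",
    data="5.1,3.5,1.4,0.2",
    headers={"Content-Type": "text/csv", "Accept": "text/csv"},
)
print(response.text)   # predicted iris class id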
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
cb1487167b05c0b1a46103251c7af5e928d71466
19,927
ipynb
Jupyter Notebook
Week_2/Ea.ipynb
symmy596/Bath_University_Advanced_Practical_Chemistry_Year_2
65f846f87d97639f324c6f75f2ac0d1bedfb91d3
[ "MIT" ]
null
null
null
Week_2/Ea.ipynb
symmy596/Bath_University_Advanced_Practical_Chemistry_Year_2
65f846f87d97639f324c6f75f2ac0d1bedfb91d3
[ "MIT" ]
null
null
null
Week_2/Ea.ipynb
symmy596/Bath_University_Advanced_Practical_Chemistry_Year_2
65f846f87d97639f324c6f75f2ac0d1bedfb91d3
[ "MIT" ]
null
null
null
158.150794
17,280
0.906609
[ [ [ "import numpy as np\nfrom scipy import stats\nimport seaborn as sns\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "Temperature = np.array([100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500])\nDiffusion = np.array([3.47578797e-04, 3.47578797e-04, 1.71670606e-03, 4.13489708e-03, 2.18762447e-02, 1.36015747e-01, \n 2.90576568e-01, 6.93472254e-02, 1.08966402e-01, 2.51636797e-01, 5.03405716e-01, 7.22862928e-01,\n 9.18047160e-01, 1.02884720e+00, 9.54932975e-01])", "_____no_output_____" ], [ "slope, intercept, r_value, p_value, std_err = stats.linregress((1 / Temperature)[2:], (np.log(Diffusion))[2:])", "_____no_output_____" ], [ "Ea = ((-slope * 8.314) / 1000) / 96.485", "_____no_output_____" ], [ "print(Ea)", "0.21574122936453297\n" ], [ "sns.regplot((1 / Temperature)[2:], (np.log(Diffusion))[2:])\nplt.ylabel(\"LnD\")\nplt.xlabel(\"1 / T (K)\")", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]
cb1487bd2d73174e616bdf767128a603f98fe256
13,826
ipynb
Jupyter Notebook
lstm_run_posenet.ipynb
haithienld/har
a1d5cbc607ab293d8d9a76d8c6f6c77720358a9a
[ "MIT" ]
null
null
null
lstm_run_posenet.ipynb
haithienld/har
a1d5cbc607ab293d8d9a76d8c6f6c77720358a9a
[ "MIT" ]
null
null
null
lstm_run_posenet.ipynb
haithienld/har
a1d5cbc607ab293d8d9a76d8c6f6c77720358a9a
[ "MIT" ]
null
null
null
27.324111
306
0.508607
[ [ [ "import cv2\n\nrecord = False\n\ncap = cv2.VideoCapture(0)\n\nif (cap.isOpened() == False): \n print(\"Unable to read camera feed\")\n\nframe_width = int(cap.get(3))\nframe_height = int(cap.get(4))\n\nout = cv2.VideoWriter('output.avi',cv2.VideoWriter_fourcc('M','J','P','G'), 10, (frame_width,frame_height))\n\nwhile(True):\n ret, frame = cap.read()\n k = cv2.waitKey(1)\n\n if ret == True: \n cv2.imshow('frame',frame)\n\n # press space key to start recording\n if k%256 == 32:\n record = True\n\n if record:\n out.write(frame) \n\n # press q key to close the program\n if k & 0xFF == ord('q'):\n break\n\n else:\n break \n\ncap.release()\nout.release()\n\ncv2.destroyAllWindows()", "_____no_output_____" ], [ "%matplotlib inline", "_____no_output_____" ], [ "import numpy as np", "_____no_output_____" ], [ "# Load the networks inputs\n\n# Useful Constants\n# Output classes to learn how to classify\nLABELS = [ \n \"JUMPING\",\n \"JUMPING_JACKS\",\n \"BOXING\",\n \"WAVING_2HANDS\",\n \"WAVING_1HAND\",\n \"CLAPPING_HANDS\"\n\n] \nn_steps = 32 # 32 timesteps per series\n\ndef load_X(X_path):\n file = open(X_path, 'r')\n X_ = np.array(\n [elem for elem in [\n row.split(',') for row in file\n ]], \n dtype=np.float32\n )\n file.close()\n blocks = int(len(X_) / n_steps)\n \n X_ = np.array(np.split(X_,blocks))\n\n return X_ \n\n# Load the networks outputs\ndef load_y(y_path):\n file = open(y_path, 'r')\n y_ = np.array(\n [elem for elem in [\n row.replace(' ', ' ').strip().split(' ') for row in file\n ]], \n dtype=np.int32\n )\n file.close()\n \n # for 0-based indexing \n return y_ - 1", "_____no_output_____" ], [ "import torch", "_____no_output_____" ], [ "import torch.nn as nn", "_____no_output_____" ], [ "device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')", "_____no_output_____" ], [ "import torch\nimport torch.nn as nn\nimport numpy as np\ndevice = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')\nclass LSTM(nn.Module):\n \n def __init__(self,input_dim,hidden_dim,output_dim,layer_num):\n super(LSTM,self).__init__()\n self.hidden_dim = hidden_dim\n self.output_dim = output_dim\n self.lstm = torch.nn.LSTM(input_dim,hidden_dim,layer_num,batch_first=True)\n self.fc = torch.nn.Linear(hidden_dim,output_dim)\n self.bn = nn.BatchNorm1d(32)\n \n def forward(self,inputs):\n x = self.bn(inputs)\n lstm_out,(hn,cn) = self.lstm(x)\n out = self.fc(lstm_out[:,-1,:])\n return out\n", "_____no_output_____" ], [ "arr = np.array([[307.589,162.976,319.364,205.944,293.267,204.68,285.434,250.281,277.616,286.729,349.276,208.539,357.095,255.523,349.362,297.17,297.183,290.662,307.599,351.938,308.905,401.454,329.758,291.958,333.701,357.132,337.546,411.953,304.994,160.291,315.459,160.269,0,0,329.752,161.651]]) \nnp.r_[arr,[[307.567,162.979,319.362,205.947,293.257,204.695,285.428,250.28,277.616,285.504,349.282,208.541,357.101,255.503,349.361,297.149,297.182,290.671,307.603,351.914,307.689,401.429,329.757,291.969,333.698,357.127,337.545,411.937,304.969,160.295,315.434,160.268,0,0,328.527,161.655\n]]]\nprint (arr)\ninputs = load_X(\"demofile3.txt\")\ninputs=torch.from_numpy(inputs)\ninputs\n\n", "[[307.589 162.976 319.364 205.944 293.267 204.68 285.434 250.281 277.616\n 286.729 349.276 208.539 357.095 255.523 349.362 297.17 297.183 290.662\n 307.599 351.938 308.905 401.454 329.758 291.958 333.701 357.132 337.546\n 411.953 304.994 160.291 315.459 160.269 0. 0. 
329.752 161.651]]\n" ], [ "n_hidden = 128\nn_joints = 18*2\nn_categories = 6\nn_layer = 3\n#model = torch.load('lstm_6_bn.pkl')", "_____no_output_____" ], [ "model = LSTM(n_joints,n_hidden,n_categories,n_layer)\nmodel.load_state_dict(torch.load('lstm_6_bn.pkl'))\nmodel.eval()", "_____no_output_____" ], [ "#category_tensor, inputs = randomTrainingExampleBatch(1,'test',0)\n#category = LABELS[int(category_tensor[0])]\ninputs = inputs.to(device)\nmodel.cuda()\noutput = model(inputs)\ntop_n, top_i = output.topk(1)\ncategory_i = top_i[0].item()\ncategory = LABELS[category_i]\ncategory_ii = LABELS.index(category)\ncategory ,category_ii,inputs", "_____no_output_____" ] ], [ [ "# PoseNet", "_____no_output_____" ] ], [ [ "import torch\nimport cv2\nimport time\n#import argparse\n\nimport posenet\n\n#parser = argparse.ArgumentParser()\n#parser.add_argument('--model', type=int, default=101)\n#parser.add_argument('--cam_id', type=int, default=0)\n#parser.add_argument('--cam_width', type=int, default=1280)\n#parser.add_argument('--cam_height', type=int, default=720)\n#parser.add_argument('--scale_factor', type=float, default=0.7125)\n#args = parser.parse_args()\n\n\nmodel = posenet.load_model(int(101))#args.model\nmodel = model.cuda()\noutput_stride = model.output_stride\n\ncap = cv2.VideoCapture(0)#args.cam_id = 0\ncap.set(3, 1280)#args.cam_width=1280\ncap.set(4, 720)#args.cam_height=720\n\nstart = time.time()\nframe_count = 0\nwhile True:\n input_image, display_image, output_scale = posenet.read_cap(\n cap, scale_factor= 0.7125, output_stride=output_stride)#args.scale_factor=0.7125\n\n with torch.no_grad():\n input_image = torch.Tensor(input_image).cuda()\n\n heatmaps_result, offsets_result, displacement_fwd_result, displacement_bwd_result = model(input_image)\n\n pose_scores, keypoint_scores, keypoint_coords = posenet.decode_multiple_poses(\n heatmaps_result.squeeze(0),\n offsets_result.squeeze(0),\n displacement_fwd_result.squeeze(0),\n displacement_bwd_result.squeeze(0),\n output_stride=output_stride,\n max_pose_detections=10,\n min_pose_score=0.15)\n\n keypoint_coords *= output_scale\n\n # TODO this isn't particularly fast, use GL for drawing and display someday...\n overlay_image = posenet.draw_skel_and_kp(\n display_image, pose_scores, keypoint_scores, keypoint_coords,\n min_pose_score=0.15, min_part_score=0.1)\n\n cv2.imshow('posenet', overlay_image)\n frame_count += 1\n if cv2.waitKey(1) & 0xFF == ord('q'):\n break\n print('Average FPS: ', frame_count / (time.time() - start))\n\n\n", "_____no_output_____" ] ], [ [ "# Convert to TensorRT", "_____no_output_____" ] ], [ [ "import torch\nfrom torch2trt import torch2trt\n", "_____no_output_____" ], [ "model = LSTM(n_joints,n_hidden,n_categories,n_layer)\nmodel.load_state_dict(torch.load('lstm_6_bn.pkl'))", "_____no_output_____" ], [ "rnn.cuda().eval()", "_____no_output_____" ], [ "x = torch.ones((1,32,36)).half().cuda()", "_____no_output_____" ], [ "rnn(x)", "_____no_output_____" ], [ "import pdb\npdb.pm()", "_____no_output_____" ], [ "class LSTM(nn.Module):\n \n def __init__(self):\n super(LSTM,self).__init__()\n self.fc = torch.nn.Linear(12,1)\n \n def forward(self,inputs):\n out = self.fc(inputs)\n return out", "_____no_output_____" ], [ "a = LSTM()", "_____no_output_____" ], [ "a.cuda().eval()\nx = torch.ones((1,12)).cuda()\nmodel_trt = torch2trt(a, [x])", "_____no_output_____" ], [ "wget https://github.com/xieyulai/LSTM-for-Human-Activity-Recognition-using-2D-Pose_Pytorch/blob/master/lstm.ipynb", "_____no_output_____" ] ] ]
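A note on the TensorRT cells above (our own addition): torch2trt builds an engine by tracing a module with an example input, and — at least in the torch2trt releases of that period — it has no converter for recurrent layers such as nn.LSTM, which is why the conversion only succeeds on the small Linear-only module defined at the end (and why pdb.pm() is called after the failed attempt on the full network). A typical follow-up, sketched after the torch2trt README, is to compare the engine's output against the original module:

import torch
from torch2trt import torch2trt

model = LSTM().cuda().eval()        # the Linear-only stand-in defined above
x = torch.ones((1, 12)).cuda()
model_trt = torch2trt(model, [x])   # build the TensorRT engine from one trace

y = model(x)
y_trt = model_trt(x)
print(torch.max(torch.abs(y - y_trt)))   # should be close to zero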
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb1497710141a3cdd9d18ef91a40c0bef60b3ff3
245,250
ipynb
Jupyter Notebook
docs/examples/simulation.ipynb
vansjyo/NeuroKit
238cd3d89467f7922c68a3a4c1f44806a8466922
[ "MIT" ]
1
2020-05-26T09:46:57.000Z
2020-05-26T09:46:57.000Z
docs/examples/simulation.ipynb
vansjyo/NeuroKit
238cd3d89467f7922c68a3a4c1f44806a8466922
[ "MIT" ]
null
null
null
docs/examples/simulation.ipynb
vansjyo/NeuroKit
238cd3d89467f7922c68a3a4c1f44806a8466922
[ "MIT" ]
1
2020-10-27T06:47:51.000Z
2020-10-27T06:47:51.000Z
729.910714
75,772
0.952705
[ [ [ "# Simulate Artificial Physiological Signals", "_____no_output_____" ], [ "Neurokit's core signal processing functions surround electrocardiogram (ECG), respiratory (RSP), electrodermal activity (EDA), and electromyography (EMG) data. Hence, this example shows how to use Neurokit to simulate these physiological signals with customized parametric control.", "_____no_output_____" ] ], [ [ "# Load NeuroKit and other useful packages\nimport neurokit2 as nk\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "%matplotlib inline\nplt.rcParams['figure.figsize'] = [8, 5] # Bigger images", "_____no_output_____" ] ], [ [ "## Cardiac Activity (ECG)", "_____no_output_____" ], [ "With `ecg_simulate()`, you can generate an artificial ECG signal of a desired length (in this case here, `duration=10`), noise, and heart rate. As you can see in the plot below, *ecg50* has about half the number of heart beats than *ecg100*, and *ecg50* also has more noise in the signal than the latter.", "_____no_output_____" ] ], [ [ "# Alternate heart rate and noise levels\necg50 = nk.ecg_simulate(duration=10, noise=0.05, heart_rate=50)\necg100 = nk.ecg_simulate(duration=10, noise=0.01, heart_rate=100)\n\n# Visualize\npd.DataFrame({\"ECG_100\": ecg100,\n \"ECG_50\": ecg50}).plot()\n", "_____no_output_____" ] ], [ [ "You can also choose to generate the default, simple simulation based on Daubechies wavelets, which roughly approximates one cardiac cycle, or a more complex one by specifiying `method=\"ecgsyn\"`.", "_____no_output_____" ] ], [ [ "# Alternate methods\necg_sim = nk.ecg_simulate(duration=10, method=\"simple\")\necg_com = nk.ecg_simulate(duration=10, method=\"ecgsyn\")\n\n# Visualize\npd.DataFrame({\"ECG_Simple\": ecg_sim,\n \"ECG_Complex\": ecg_com}).plot(subplots=True)", "_____no_output_____" ] ], [ [ "## Respiration (RSP)", "_____no_output_____" ], [ "To simulate a synthetic respiratory signal, you can use `rsp_simulate()` and choose a specific duration and breathing rate. In this example below, you can see that *rsp7* has a lower breathing rate than *rsp15*. You can also decide which model you want to generate the signal. The *simple rsp15* signal incorporates `method = \"sinusoidal\"` which approximates a respiratory cycle based on the trigonometric sine wave. On the other hand, the *complex rsp15* signal specifies `method = \"breathmetrics\"` which uses a more advanced model by interpolating inhalation and exhalation pauses between each respiratory cycle.", "_____no_output_____" ] ], [ [ "# Simulate\nrsp15_sim = nk.rsp_simulate(duration=20, respiratory_rate=15, method=\"sinusoidal\")\nrsp15_com = nk.rsp_simulate(duration=20, respiratory_rate=15, method=\"breathmetrics\")\nrsp7 = nk.rsp_simulate(duration=20, respiratory_rate=7, method=\"breathmetrics\")\n\n# Visualize respiration rate\npd.DataFrame({\"RSP7\": rsp7,\n \"RSP15_simple\": rsp15_sim,\n \"RSP15_complex\": rsp15_com}).plot(subplots=True)", "_____no_output_____" ] ], [ [ "## Electromyography (EMG)", "_____no_output_____" ], [ "Now, we come to generating an artificial EMG signal using `emg_simulate()`. Here, you can specify the number of bursts of muscular activity (`n_bursts`) in the signal as well as the duration of the bursts (`duration_bursts`). As you can see the active muscle periods in *EMG2_Longer* are greater in duration than that of *EMG2*, and *EMG5* contains more bursts than the former two. 
", "_____no_output_____" ] ], [ [ "# Simulate\nemg2 = nk.emg_simulate(duration=10, burst_number=2, burst_duration=1.0)\nemg2_long = nk.emg_simulate(duration=10, burst_number=2, burst_duration=1.5)\nemg5 = nk.emg_simulate(duration=10, burst_number=5, burst_duration=1.0)\n\n# Visualize\npd.DataFrame({\"EMG2\": emg2,\n \"EMG2_Longer\": emg2_long,\n \"EMG5\": emg5}).plot(subplots=True)", "_____no_output_____" ] ], [ [ "## Electrodermal Activity (EDA)", "_____no_output_____" ], [ "Finally, `eda_simulate()` can be used to generate a synthetic EDA signal of a given duration, specifying the number of skin conductance responses or activity 'peaks' (`n_scr`) and the `drift` of the signal. You can also modify the noise level of the signal.", "_____no_output_____" ] ], [ [ "# Simulate\neda1 = nk.eda_simulate(duration=10, scr_number=1, drift=-0.01, noise=0.05)\neda3 = nk.eda_simulate(duration=10, scr_number=3, drift=-0.01, noise=0.01)\neda3_long = nk.eda_simulate(duration=10, scr_number=3, drift=-0.1, noise=0.01)\n\n# Visualize\npd.DataFrame({\"EDA1\": eda1,\n \"EDA3\": eda3,\n \"EDA3_Longer\": eda3_long}).plot(subplots=True)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ] ]
cb14999c011c6e44a04271000d25cfb0e9d1f5b4
950
ipynb
Jupyter Notebook
search/binary_search_implement.ipynb
eric999j/Udemy_Python_Hand_On
7a985b3e2c9adfd3648d240af56ac00bb916c3ad
[ "Apache-2.0" ]
1
2020-12-31T18:03:34.000Z
2020-12-31T18:03:34.000Z
search/binary_search_implement.ipynb
cntfk2017/Udemy_Python_Hand_On
52f2a5585bfdea95d893f961c8c21844072e93c7
[ "Apache-2.0" ]
null
null
null
search/binary_search_implement.ipynb
cntfk2017/Udemy_Python_Hand_On
52f2a5585bfdea95d893f961c8c21844072e93c7
[ "Apache-2.0" ]
2
2019-09-23T14:26:48.000Z
2020-05-25T07:09:26.000Z
23.75
137
0.603158
[ [ [ "# Write a Python program for binary search. \n# Binary Search : In computer science, a binary search or half-interval search algorithm finds the position of a target value\n# within a sorted array. The binary search algorithm can be classified as a dichotomies divide-and-conquer search algorithm and \n# executes in logarithmic time.", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
cb14ac29773e1d2d8ab0f567f9bfcd3c835f81b4
29,994
ipynb
Jupyter Notebook
krishnan/landing-zone-final.ipynb
vamsiramakrishnan/landing-zone
99a69572c6a5fb1a9dfb387ae0d8eb93bf1c2a84
[ "Apache-2.0" ]
null
null
null
krishnan/landing-zone-final.ipynb
vamsiramakrishnan/landing-zone
99a69572c6a5fb1a9dfb387ae0d8eb93bf1c2a84
[ "Apache-2.0" ]
null
null
null
krishnan/landing-zone-final.ipynb
vamsiramakrishnan/landing-zone
99a69572c6a5fb1a9dfb387ae0d8eb93bf1c2a84
[ "Apache-2.0" ]
null
null
null
39.674603
206
0.548576
[ [ [ "# All necessary imports here\nimport oci\nimport os.path\nimport sys\nimport json\nimport logging\nimport pprint\nimport re\nfrom collections import Counter \nimport ipaddr #pip3 install ipaddr", "_____no_output_____" ], [ "# Read config and create clients (identity,network,etc.)\n\nconfig = oci.config.from_file()\nidentity_client = oci.identity.IdentityClient(config)\nvirtual_network_client = oci.core.VirtualNetworkClient(config)\nvirtual_network_composite_operations = oci.core.VirtualNetworkClientCompositeOperations(virtual_network_client)", "_____no_output_____" ], [ "# local logger\ndef local_logger(value): \n print(value)", "_____no_output_____" ], [ "def print_decorator(message):\n print (\"====================\")\n print (message)\n print (\"====================\")", "_____no_output_____" ], [ "# Helper methods for extracting and json_lookup\ndef extract_value_by_field(obj, key):\n \"\"\"Pull all values of specified key from nested JSON.\"\"\"\n arr = []\n\n def extract(obj, arr, key):\n \"\"\"Recursively search for values of key in JSON tree.\"\"\"\n if isinstance(obj, dict):\n for k, v in obj.items():\n if isinstance(v, (dict, list)):\n extract(v, arr, key)\n elif k == key:\n arr.append(v)\n elif isinstance(obj, list):\n for item in obj:\n extract(item, arr, key)\n return arr\n\n results = extract(obj, arr, key)\n return results\n", "_____no_output_____" ], [ "# helper method to convert response to dictionary\ndef convert_response_to_dict(oci_response):\n return oci.util.to_dict(oci_response.data)", "_____no_output_____" ], [ "# Special Characters Regex validation\ndef special_char_regex_validation(value):\n special_chars_regex = re.compile('[@!#$%^&*()<>?/\\|}{~:`]')\n special_char_check = special_chars_regex.search(value)\n if special_char_check is None: \n return True\n else:\n return False", "_____no_output_____" ], [ "# IPV4 CIDR Notation Regex Validation \ndef ipv4_regex_validation(value):\n ipv4_cidr_regex = re.compile('^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])/(1[0-9]|2[0-9]|3[0-1])$') \n ipv4_cidr_check = ipv4_cidr_regex.search(value)\n if ipv4_cidr_check is None:\n return False\n else:\n return True", "_____no_output_____" ], [ "# Json file exist validation\ndef validate_file_exist():\n if os.path('./hub_spokes.json').is_file():\n local_logger (\"====================\")\n local_logger (\"Input file hub_spokes.json exists !\")\n local_logger (\"====================\")\n return True\n else:\n local_logger (\"====================\")\n local_logger (\"Input file hub_spokes.json does not exists !\")\n local_logger (\"====================\")\n return False", "_____no_output_____" ], [ "# Json file complete payload validation\ndef validate_json(payload):\n # Name sanitization check i.e. 
all vcn name and dns-name and drg-name \n hub_check = False\n spoke_check = False\n hub_vcn = payload[\"hub\"][\"vcn\"]\n hub_drg = payload[\"hub\"][\"drg\"]\n \n spokes = payload[\"spokes\"]\n spoke_vcns = []\n \n for spoke in spokes:\n spoke_vcns.append(spoke[\"vcn\"])\n \n hub_vcn_name = special_char_regex_validation(hub_vcn[\"name\"])\n hub_vcn_dns_name = special_char_regex_validation(hub_vcn[\"dns_name\"])\n \n hub_drg_name = special_char_regex_validation(hub_drg[\"name\"])\n \n if hub_vcn_name and hub_vcn_dns_name and hub_drg_name:\n hub_check = True\n else:\n hub_check = False\n \n for spoke in spoke_vcns:\n spoke_vcn_name = special_char_regex_validation(spoke[\"name\"])\n spoke_vcn_dns_name = special_char_regex_validation(spoke[\"dns_name\"])\n if spoke_vcn_name and spoke_vcn_dns_name:\n spoke_check = True\n else:\n spoke_check = False\n break\n \n if hub_check and spoke_check:\n local_logger(\"============SUCCESS=======\")\n local_logger(\"HUB AND SPOKE CHECK FOR SPECIAL CHARACTER PASSED !\")\n local_logger(\"==========================\")\n \n else:\n local_logger(\"===========FAILED=========\")\n local_logger(\"HUB AND SPOKE CHECK FOR SPECIAL CHARACTER FAILED !\")\n local_logger(\"==========================\")\n \n # CIDR overlap check VCN level\n hub_cidr_blocks = hub_vcn[\"cidr\"]\n cidr_blocks = []\n \n cidr_blocks.append(ipaddr.IPNetwork(hub_cidr_blocks))\n\n for spoke_vcn in spoke_vcns:\n cidr_blocks.append(ipaddr.IPNetwork(spoke_vcn[\"cidr\"]))\n\n \n for index in range(len(cidr_blocks)): \n for ind in range(len(cidr_blocks)):\n if index!=ind:\n is_overlap = cidr_blocks[index].overlaps(cidr_blocks[ind])\n if is_overlap is True:\n hub_check = False\n spoke_check = False \n break\n else:\n hub_check = True\n spoke_check = True\n \n \n if hub_check and spoke_check:\n local_logger(\"==========SUCCESS=========\")\n local_logger(\"CIDR BLOCKS CHECK IN THE INPUT HUB/SPOKE SUCCESSFULLY VALIDATED !\" )\n local_logger(\"==========================\")\n else:\n local_logger(\"=========FAILED===========\")\n local_logger(\"OVERLAPPING CIDR BLOCKS FOUND IN THE INPUT HUB/SPOKE !\" )\n local_logger(\"==========================\")\n \n return True if hub_check and spoke_check else False\n ", "_____no_output_____" ], [ "# Compartment validation\n\ndef fetch_all_compartments_in_tenancy(client):\n \"\"\"Fetch all Compartments in Tenancy , and look across all subtrees.\"\"\"\n compartmentResponse = oci.pagination.list_call_get_all_results(\n client.list_compartments,\n compartment_id=tenancy_ocid,\n limit=200,\n access_level=\"ACCESSIBLE\",\n compartment_id_in_subtree=True,\n retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,\n )\n return convert_response_to_dict(compartmentResponse)\n\ndef checkByOCID_if_compartment_active(compartment_id, client):\n compartment_response = client.get_compartment(\n compartment_id=compartment_id, retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY,\n )\n compartment = convert_response_to_dict(compartment_response)\n return True if compartment[\"lifecycle_state\"] == \"ACTIVE\" else False\n\ndef filter_compartments_by_state(compartmentList=[], compartmentState=\"ACTIVE\"):\n \"\"\"Filter Compartments by their lifecycle state, ACTIVE| DELETNG | DELETED | CREATING\"\"\"\n filteredCompartments = [\n compartment\n for compartment in compartmentList\n if compartment[\"lifecycle_state\"] == compartmentState\n ]\n return filteredCompartments", "_____no_output_____" ], [ "# VCN Checks\n\n# Check if VCN created matches by Name\ndef checkVcnNameMatch(client, compartment_id, 
vcn_name):\n    listVCNReponse = client.list_vcns(compartment_id = compartment_id)\n    vcns = convert_response_to_dict(listVCNReponse)\n    vcn_names = extract_value_by_field(vcns, \"display_name\")\n    if vcn_name in vcn_names:\n        print_decorator(\"VCN NAME ALREADY EXIST. SKIPPING VCN CREATION\")\n        return True\n    else:\n        return False\n    \n\n# Get VCN OCID for matched VCN Name\ndef checkVCNNameGetOCID(client, compartment_id, vcn_name):\n    listVCNReponse = client.list_vcns(compartment_id = compartment_id)\n    vcns = convert_response_to_dict(listVCNReponse) \n    for vcn in vcns:\n        if vcn[\"display_name\"]==vcn_name:\n            return vcn[\"id\"]\n    # no name matched across all VCNs (instead of returning on the first mismatch)\n    return None\n    \n# Get VCN OCID for matched VCN CIDR\ndef checkVCNCidrGetOCID(client, compartment_id, cidr):\n    listVCNReponse = client.list_vcns(compartment_id = compartment_id)\n    vcns = convert_response_to_dict(listVCNReponse) \n    for vcn in vcns:\n        if vcn[\"cidr_block\"]==cidr:\n            return vcn[\"id\"]\n    # no CIDR matched across all VCNs\n    return None\n\n# Check if VCN is in available state\ndef checkVCNStateAvailable(client, compartment_id, vcn_id):\n    getVCNResponse = client.get_vcn(vcn_id = vcn_id)\n    vcn = convert_response_to_dict(getVCNResponse)\n    if vcn[\"lifecycle_state\"] == \"AVAILABLE\":\n        return True\n    else:\n        return False\n    \n# Check if VCN created matches by CIDR\ndef checkVcnCidrMatch(client, compartment_id, vcn_cidr):\n    getResponse = client.list_vcns(compartment_id = compartment_id)\n    vcns = convert_response_to_dict(getResponse)\n    for vcn in vcns:\n        if vcn[\"cidr_block\"] == vcn_cidr: \n            return True\n    # loop completed without a match\n    return False\n    \n# Get VCN by VCN OCID\ndef getVCNbyOCID(client, compartment_id, vcn_ocid):\n    try:\n        getVCN = client.get_vcn(vcn_id = vcn_ocid)\n        vcn = getVCN.data\n        return vcn.id\n    except Exception as e:\n        return None\n    \ngetVCNbyOCID(virtual_network_client, config[\"compartment\"], \"ocid1.vcn.oc1.phx.amaaaaaa43cggciamaezwaggscjbvqgbwzbo4zlakrisgc4xrnkh2k6kbzga\")", "_____no_output_____" ], [ "# Create VCN\ndef create_new_vcn(client,composite_client, vcn_object):\n    vcn_details = oci.core.models.CreateVcnDetails(cidr_block=vcn_object[\"cidr\"],\n                                                   display_name=vcn_object[\"name\"],\n                                                   compartment_id=vcn_object[\"compartment_id\"],\n                                                   dns_label= vcn_object[\"dns_name\"])\n    VCNResponse = composite_client.create_vcn_and_wait_for_state(\n        vcn_details, \n        wait_for_states=[oci.core.models.Vcn.LIFECYCLE_STATE_AVAILABLE]\n    )\n    vcn = convert_response_to_dict(VCNResponse)\n    return vcn[\"id\"]\n\ndef check_before_vcn_create(client,composite_client, compartment_id, vcn_object):\n    vcn_ocid = None\n    checkCompartmentActive = checkByOCID_if_compartment_active(compartment_id, identity_client)\n    print_decorator(\"COMPARTMENT EXISTS AND ACTIVE !\") if checkCompartmentActive else print_decorator(\"COMPARTMENT DOES NOT EXIST OR IS NOT ACTIVE !\")\n    \n    checkVCNNameMatch = checkVcnNameMatch(client, compartment_id, vcn_object[\"name\"])\n    \n    if checkVCNNameMatch:\n        vcn_ocid = checkVCNNameGetOCID(client, compartment_id, vcn_object[\"name\"])\n    \n    checkVCNCIDRMatch = checkVcnCidrMatch(client, compartment_id, vcn_object[\"cidr\"])\n    \n    if checkVCNCIDRMatch:\n        vcn_ocid = checkVCNCidrGetOCID(client, compartment_id, vcn_object[\"cidr\"])\n    \n    if vcn_ocid is None:\n        return create_new_vcn(client,composite_client, vcn_object)\n    else:\n        print_decorator(\"VCN ALREADY EXIST ! 
SKIPPING VCN CREATION !\")\n return vcn_ocid\n \n ", "_____no_output_____" ], [ "# DRG Checks\n\n# Check if DRG exist by name\ndef is_drg_exist_by_name(client, compartment_ocid, drg_name):\n listDRGs = client.list_drgs(compartment_id = compartment_ocid)\n drgs = convert_response_to_dict(listDRGs)\n drg_name_extract = extract_value_by_field(drgs, \"display_name\")\n return True if drg_name in drg_name_extract else False\n\n# Get DRG OCID from DRG Name\ndef get_drg_ocid_by_name(client, compartment_ocid, drg_name):\n listDRGs = client.list_drgs(compartment_id = compartment_ocid)\n drgs = convert_response_to_dict(listDRGs)\n for drg in drgs:\n if drg[\"display_name\"] == drg_name:\n return drg[\"id\"]\n else:\n return None\n \n# Check if DRG exist already by OCID and State in Compartment \ndef is_drg_already_exist_in_compartment(client, compartment_ocid, drg_ocid):\n drg = client.get_drg(drg_id = drg_ocid, retry_strategy= oci.retry.DEFAULT_RETRY_STRATEGY)\n drg_dict = convert_response_to_dict(drg)\n if drg_dict is not None and drg_dict[\"lifecycle_state\"] == \"AVAILABLE\":\n return True\n else:\n return False \n \n# Check DRG attachment status \ndef get_drg_attachment_status(client, drg_attachment_ocid):\n drg_attachment = client.get_drg_attachment(drg_attachment_id = drg_attachment_ocid)\n drg_attachment_dict = convert_response_to_dict(drg_attachment)\n if drg_attachment_dict is not None and drg_attachment_dict[\"lifecycle_state\"]==\"ATTACHED\":\n return True\n else:\n return False\n \ndef get_drg_attachment_ocid(client, drg_attachment_ocid):\n drg_attachment = client.get_drg_attachment(drg_attachment_id = drg_attachment_ocid)\n drg_attach = convert_response_to_dict(drg_attachment)\n return drg_attach[\"id\"]\n\ndef filter_drg_attachment_id(client,compartment_ocid, drg_id):\n drg_attachments = client.list_drg_attachments(compartment_id = compartment_ocid)\n drg_attachments_dict = convert_response_to_dict(drg_attachments)\n for drg_attachment in drg_attachments_dict:\n if drg_attachment[\"drg_id\"] == drg_id:\n return drg_attachment[\"id\"]\n break\n else:\n return None\n", "_____no_output_____" ], [ "# Create DRG\ndef create_new_drg(client, compartment_id, hub_drg):\n drg_result = client.create_drg(\n oci.core.models.CreateDrgDetails(\n compartment_id=compartment_id,\n display_name= hub_drg[\"name\"]\n )\n )\n drg = oci.wait_until(\n client,\n client.get_drg(drg_result.data.id),\n 'lifecycle_state',\n 'AVAILABLE'\n )\n local_logger('Created DRG')\n local_logger('===============')\n local_logger('\\n')\n drg_new = convert_response_to_dict(drg)\n return drg_new[\"id\"]\n\n# Create DRG if drg does not exist already\ndef check_before_drg_create(client, compartment_id, hub_drg):\n drg_ocid = None\n checkDRGStatusCompartment = False\n checkCompartmentActive = checkByOCID_if_compartment_active(compartment_id, identity_client)\n print_decorator(\"COMPARTMENT EXISTS AND ACTIVE !\") if checkCompartmentActive else print_decorator(\"COMPARTMENT DOES NOT EXIST OR IS NOT ACTIVE !\")\n checkDRGNameMatch = is_drg_exist_by_name(client, compartment_id, hub_drg[\"name\"])\n \n if checkDRGNameMatch:\n drg_ocid = get_drg_ocid_by_name(client, compartment_id, hub_drg[\"name\"])\n checkDRGStatusCompartment = is_drg_already_exist_in_compartment(client, compartment_id, drg_ocid)\n \n if drg_ocid is None and checkDRGStatusCompartment is False:\n return create_new_drg(client,compartment_id,hub_drg)\n else:\n print_decorator(\"DRG ALREADY EXIST ! 
SKIPPING DRG CREATION !\")\n        return drg_ocid\n    \n    \n# Attach newly created DRG to VCN using VCN OCID\ndef drg_attach(client, vcn_ocid, drg_ocid, drg_name):\n    drg_attach_result = client.create_drg_attachment(\n        oci.core.models.CreateDrgAttachmentDetails(\n            display_name= drg_name,\n            vcn_id=vcn_ocid,\n            drg_id=drg_ocid\n        )\n    )\n    drg_attachment = oci.wait_until(\n        client,\n        client.get_drg_attachment(drg_attach_result.data.id),\n        'lifecycle_state',\n        'ATTACHED'\n    )\n    local_logger('Created DRG Attachment')\n    local_logger('=========================')\n    local_logger('\\n')\n    drg_attach = convert_response_to_dict(drg_attachment)\n    return drg_attach[\"id\"]\n\ndef check_before_drg_attach(client, compartment_id, vcn_ocid, drg_ocid, drg_name):\n    drg_attachment_ocid = None\n    checkCompartmentActive = checkByOCID_if_compartment_active(compartment_id, identity_client) \n    print_decorator(\"COMPARTMENT EXISTS AND ACTIVE !\") if checkCompartmentActive else print_decorator(\"COMPARTMENT DOES NOT EXIST OR IS NOT ACTIVE !\")\n    checkIfVCNIsAvailable = checkVCNStateAvailable(client, compartment_id, vcn_ocid)\n    checkIfDRGIsAvailable = is_drg_already_exist_in_compartment(client, compartment_id, drg_ocid)\n    drg_attachment_ocid = filter_drg_attachment_id(client, compartment_id, drg_ocid)\n    \n    if drg_attachment_ocid is not None:\n        print_decorator(\"DRG ATTACHMENT ALREADY EXIST ! SKIPPING DRG ATTACHMENT !\")\n        return drg_attachment_ocid\n    \n    if checkCompartmentActive and checkIfVCNIsAvailable and checkIfDRGIsAvailable:\n        return drg_attach(client, vcn_ocid, drg_ocid, drg_name)\n    \n\n", "_____no_output_____" ], [ "# LPG METHODS\n\n# Check for name match in LPG\ndef match_lpg_by_name(client, compartment_ocid,vcn_ocid, lpg_name):\n    listLPGs = client.list_local_peering_gateways(compartment_id = compartment_ocid, vcn_id= vcn_ocid)\n    lpgs = convert_response_to_dict(listLPGs)\n    lpg_names = extract_value_by_field(lpgs, \"display_name\")\n    if lpg_name in lpg_names:\n        return True\n    else:\n        return False\n\n# Get LPG OCID from matching name\ndef get_lpg_ocid(client, compartment_ocid,vcn_ocid, lpg_name):\n    listlpgs = client.list_local_peering_gateways(compartment_id = compartment_ocid, vcn_id= vcn_ocid)\n    lpgs = convert_response_to_dict(listlpgs)\n    lpg_id = None\n    for lpg in lpgs:\n        if lpg[\"display_name\"]==lpg_name:\n            lpg_id = lpg[\"id\"]\n            break  # stop at the first LPG whose display name matches\n    return lpg_id\n    \n    \n# Check for LPG state\ndef is_lpg_available_status(client, lpg_ocid, compartment_ocid):\n    lpg = client.get_local_peering_gateway(local_peering_gateway_id = lpg_ocid)\n    lpg_dict = convert_response_to_dict(lpg)\n    if lpg_dict[\"lifecycle_state\"] == \"AVAILABLE\":\n        return True\n    else:\n        return False\n    \n# Create Local Peering Gateway between Hub and Spoke VCNs\ndef create_local_peering_gateway(composite_client, lpg_name, compartment_id, vcn_ocid):\n    create_lpg_details = oci.core.models.CreateLocalPeeringGatewayDetails(compartment_id = compartment_id, display_name = lpg_name, vcn_id = vcn_ocid)\n    create_lpg_response = composite_client.create_local_peering_gateway_and_wait_for_state(\n        create_lpg_details,\n        wait_for_states=[oci.core.models.LocalPeeringGateway.LIFECYCLE_STATE_AVAILABLE]\n    )\n    lpg = create_lpg_response\n    lpg_dict = convert_response_to_dict(lpg)\n    local_logger('Created LPG')\n    local_logger('=========================')\n    local_logger('\\n')\n    \n    return lpg_dict[\"id\"]\n\n\ndef check_before_lpg_create(client,composite_client, compartment_id, lpg_name, vcn_ocid):\n    checkCompartmentActive = checkByOCID_if_compartment_active(compartment_id, 
identity_client)\n print_decorator(\"COMPARTMENT EXISTS AND ACTIVE !\") if checkCompartmentActive else print_decorator(\"COMPARTMENT DOES NOT EXIST OR IS NOT ACTIVE !\") \n vcn_exist = getVCNbyOCID(client, compartment_id, vcn_ocid)\n if vcn_exist is not None:\n vcn_state = checkVCNStateAvailable(client, compartment_id, vcn_ocid)\n checkLpgNameMatch = match_lpg_by_name(client, compartment_id, vcn_ocid, lpg_name)\n getLpgocid = get_lpg_ocid(client, compartment_id,vcn_ocid, lpg_name)\n if getLpgocid is not None:\n getLpgStatus = is_lpg_available_status(client, getLpgocid, compartment_id)\n if checkLpgNameMatch and getLpgStatus:\n print_decorator(\"LPG EXIST ALREADY FOR THIS VCN\")\n return getLpgocid\n else:\n return create_local_peering_gateway(composite_client, lpg_name, compartment_id, vcn_ocid)\n", "_____no_output_____" ], [ "# Quickstart code\ndef quick_start():\n with open('./hub_spokes.json') as hub_spokes:\n data = json.load(hub_spokes)\n # TODO: JSON Validation to be captured\n validate_json(data)\n vcn_ocid = check_before_vcn_create(virtual_network_client, virtual_network_composite_operations, config[\"compartment\"], data[\"hub\"][\"vcn\"])\n drg_ocid = check_before_drg_create(virtual_network_client, config[\"compartment\"], data[\"hub\"][\"drg\"])\n drg_attachment_ocid = check_before_drg_attach(virtual_network_client, config[\"compartment\"], vcn_ocid, drg_ocid, data[\"hub\"][\"drg\"][\"name\"])\n lpg_ocid = check_before_lpg_create(virtual_network_client, virtual_network_composite_operations, config[\"compartment\"], \"HubLPG\", vcn_ocid)\n \n for spoke in data[\"spokes\"]:\n vcn_ocid = check_before_vcn_create(virtual_network_client,virtual_network_composite_operations, config[\"compartment\"], spoke[\"vcn\"])\n lpg_ocid = check_before_lpg_create(virtual_network_client, virtual_network_composite_operations, config[\"compartment\"], \"SpokeLPG\", vcn_ocid)\n \n ", "_____no_output_____" ], [ "quick_start()", "============SUCCESS=======\nHUB AND SPOKE CHECK FOR SPECIAL CHARACTER PASSED !\n==========================\n==========SUCCESS=========\nCIDR BLOCKS CHECK IN THE INPUT HUB/SPOKE SUCCESSFULLY VALIDATED !\n==========================\n====================\nCOMPARTMENT EXISTS AND ACTIVE !\n====================\n====================\nCOMPARTMENT EXISTS AND ACTIVE !\n====================\n====================\nDRG ALREADY EXIST ! SKIPPING DRG CREATION !\n====================\n====================\nCOMPARTMENT EXISTS AND ACTIVE !\n====================\n====================\nDRG ATTACHMENT ALREADY EXIST ! SKIPPING DRG ATTACHMENT !\n====================\n====================\nCOMPARTMENT EXISTS AND ACTIVE !\n====================\nCreated LPG\n=========================\n\n\n{'vcn': {'cidr': '192.168.0.0/24', 'name': 'spoke1', 'compartment_id': 'ocid1.compartment.oc1..aaaaaaaad7ngb2s3armaa6qdkglst4nxnvvdrjjggq5afhfqxqgiyyoyluiq', 'dns_name': 'spoke1', 'subnets': []}}\n====================\nCOMPARTMENT EXISTS AND ACTIVE !\n====================\n====================\nVCN NAME ALREADY EXIST. SKIPPING VCN CREATION\n====================\n====================\nVCN ALREADY EXIST ! 
SKIPPING VCN CREATION !\n====================\n====================\nCOMPARTMENT EXISTS AND ACTIVE !\n====================\n====================\nLPG EXIST ALREADY FOR THIS VCN\n====================\n{'vcn': {'cidr': '192.168.1.0/24', 'name': 'spoke2', 'compartment_id': 'ocid1.compartment.oc1..aaaaaaaad7ngb2s3armaa6qdkglst4nxnvvdrjjggq5afhfqxqgiyyoyluiq', 'dns_name': 'spoke2', 'subnets': []}}\n====================\nCOMPARTMENT EXISTS AND ACTIVE !\n====================\n====================\nVCN NAME ALREADY EXIST. SKIPPING VCN CREATION\n====================\n====================\nVCN ALREADY EXIST ! SKIPPING VCN CREATION !\n====================\n====================\nCOMPARTMENT EXISTS AND ACTIVE !\n====================\n====================\nLPG EXIST ALREADY FOR THIS VCN\n====================\n" ] ] ]
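The list-match-create pattern repeated in the cells above could be factored into one helper. Below is a minimal sketch, assuming the same clients and the `convert_response_to_dict` helper defined earlier; `list_fn`, `create_fn`, and `get_or_create` are illustrative names introduced here, not part of the `oci` SDK:

```python
# A minimal sketch consolidating the repeated "list -> match -> create" pattern above.
# Assumes the oci SDK clients and convert_response_to_dict defined earlier in this notebook.
def get_or_create(list_fn, create_fn, match_field, match_value, **list_kwargs):
    existing = convert_response_to_dict(list_fn(**list_kwargs))
    for resource in existing:
        if resource[match_field] == match_value:
            return resource["id"]  # reuse the existing resource's OCID
    return create_fn()  # fall through to creation only when nothing matched

# Example usage mirroring check_before_vcn_create above:
# vcn_ocid = get_or_create(
#     virtual_network_client.list_vcns,
#     lambda: create_new_vcn(virtual_network_client, virtual_network_composite_operations, vcn_object),
#     "display_name", vcn_object["name"],
#     compartment_id=config["compartment"],
# )
```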
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb14b017d7fa8f92a227e70ba6d2d77deecc5d4c
34,236
ipynb
Jupyter Notebook
UserName_extraction.ipynb
ali-darvishi/crossPlatform-account-linking
0b9a5166a77c23311594b6c1697d61a523712edb
[ "MIT" ]
null
null
null
UserName_extraction.ipynb
ali-darvishi/crossPlatform-account-linking
0b9a5166a77c23311594b6c1697d61a523712edb
[ "MIT" ]
null
null
null
UserName_extraction.ipynb
ali-darvishi/crossPlatform-account-linking
0b9a5166a77c23311594b6c1697d61a523712edb
[ "MIT" ]
null
null
null
53.410296
938
0.492318
[ [ [ "import pandas as pd\nimport numpy as np\nimport json\nimport re", "_____no_output_____" ], [ "data = pd.read_csv ('tweets_2020.csv')", "C:\\Users\\s4282784\\AppData\\Local\\Continuum\\anaconda3\\envs\\R-env\\lib\\site-packages\\IPython\\core\\interactiveshell.py:3343: DtypeWarning: Columns (19,20,21,31,33,34,35,58,59,60) have mixed types.Specify dtype option on import or set low_memory=False.\n exec(code_obj, self.user_global_ns, self.user_ns)\n" ], [ "data.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 1266467 entries, 0 to 1266466\nData columns (total 74 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 id 1266467 non-null int64 \n 1 conversation_id 1266467 non-null int64 \n 2 referenced_tweets.replied_to.id 118933 non-null float64\n 3 referenced_tweets.retweeted.id 0 non-null float64\n 4 referenced_tweets.quoted.id 4114 non-null float64\n 5 author_id 1266467 non-null int64 \n 6 in_reply_to_user_id 129971 non-null float64\n 7 retweeted_user_id 0 non-null float64\n 8 quoted_user_id 4114 non-null float64\n 9 created_at 1266467 non-null object \n 10 text 1266467 non-null object \n 11 lang 1266467 non-null object \n 12 source 1266467 non-null object \n 13 public_metrics.like_count 1266467 non-null int64 \n 14 public_metrics.quote_count 1266467 non-null int64 \n 15 public_metrics.reply_count 1266467 non-null int64 \n 16 public_metrics.retweet_count 1266467 non-null int64 \n 17 reply_settings 1266467 non-null object \n 18 possibly_sensitive 1266467 non-null bool \n 19 withheld.scope 8 non-null object \n 20 withheld.copyright 13 non-null object \n 21 withheld.country_codes 13 non-null object \n 22 entities.annotations 394376 non-null object \n 23 entities.cashtags 4685 non-null object \n 24 entities.hashtags 274410 non-null object \n 25 entities.mentions 129143 non-null object \n 26 entities.urls 1266356 non-null object \n 27 context_annotations 491149 non-null object \n 28 attachments.media 406236 non-null object \n 29 attachments.media_keys 406236 non-null object \n 30 attachments.poll.duration_minutes 30 non-null float64\n 31 attachments.poll.end_datetime 30 non-null object \n 32 attachments.poll.id 30 non-null float64\n 33 attachments.poll.options 30 non-null object \n 34 attachments.poll.voting_status 30 non-null object \n 35 attachments.poll_ids 30 non-null object \n 36 author.id 1266467 non-null int64 \n 37 author.created_at 1266467 non-null object \n 38 author.username 1266467 non-null object \n 39 author.name 1265399 non-null object \n 40 author.description 1135247 non-null object \n 41 author.entities.description.cashtags 2486 non-null object \n 42 author.entities.description.hashtags 164695 non-null object \n 43 author.entities.description.mentions 234559 non-null object \n 44 author.entities.description.urls 165517 non-null object \n 45 author.entities.url.urls 664458 non-null object \n 46 author.location 717720 non-null object \n 47 author.pinned_tweet_id 422635 non-null float64\n 48 author.profile_image_url 1265413 non-null object \n 49 author.protected 1266467 non-null bool \n 50 author.public_metrics.followers_count 1266467 non-null int64 \n 51 author.public_metrics.following_count 1266467 non-null int64 \n 52 author.public_metrics.listed_count 1266467 non-null int64 \n 53 author.public_metrics.tweet_count 1266467 non-null int64 \n 54 author.url 664458 non-null object \n 55 author.verified 1266467 non-null bool \n 56 author.withheld.scope 0 non-null float64\n 57 author.withheld.copyright 0 non-null float64\n 58 
author.withheld.country_codes 3 non-null object \n 59 geo.coordinates.coordinates 988 non-null object \n 60 geo.coordinates.type 988 non-null object \n 61 geo.country 3611 non-null object \n 62 geo.country_code 3611 non-null object \n 63 geo.full_name 3611 non-null object \n 64 geo.geo.bbox 3611 non-null object \n 65 geo.geo.type 3611 non-null object \n 66 geo.id 3611 non-null object \n 67 geo.name 3611 non-null object \n 68 geo.place_id 3612 non-null object \n 69 geo.place_type 3611 non-null object \n 70 __twarc.retrieved_at 1266467 non-null object \n 71 __twarc.url 1266467 non-null object \n 72 __twarc.version 1266467 non-null object \n 73 Unnamed: 73 0 non-null float64\ndtypes: bool(3), float64(12), int64(12), object(47)\nmemory usage: 689.7+ MB\n" ] ], [ [ "# from Users' Bio Url", "_____no_output_____" ] ], [ [ "reddit_users = data[data['author.entities.url.urls'].str.contains('reddit.com/u', na=False)]", "_____no_output_____" ], [ "# reddit_users.to_csv('reddit_users_2020.csv')", "_____no_output_____" ], [ "reddit_users=reddit_users.drop_duplicates(subset=['author.username']).reset_index()\n# reddit_users.to_csv('reddit_unique_users_2020.csv')", "_____no_output_____" ], [ "reddit_users.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 207 entries, 0 to 206\nData columns (total 75 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 index 207 non-null int64 \n 1 id 207 non-null int64 \n 2 conversation_id 207 non-null int64 \n 3 referenced_tweets.replied_to.id 46 non-null float64\n 4 referenced_tweets.retweeted.id 0 non-null float64\n 5 referenced_tweets.quoted.id 2 non-null float64\n 6 author_id 207 non-null int64 \n 7 in_reply_to_user_id 55 non-null float64\n 8 retweeted_user_id 0 non-null float64\n 9 quoted_user_id 2 non-null float64\n 10 created_at 207 non-null object \n 11 text 207 non-null object \n 12 lang 207 non-null object \n 13 source 207 non-null object \n 14 public_metrics.like_count 207 non-null int64 \n 15 public_metrics.quote_count 207 non-null int64 \n 16 public_metrics.reply_count 207 non-null int64 \n 17 public_metrics.retweet_count 207 non-null int64 \n 18 reply_settings 207 non-null object \n 19 possibly_sensitive 207 non-null bool \n 20 withheld.scope 0 non-null object \n 21 withheld.copyright 0 non-null object \n 22 withheld.country_codes 0 non-null object \n 23 entities.annotations 64 non-null object \n 24 entities.cashtags 0 non-null object \n 25 entities.hashtags 23 non-null object \n 26 entities.mentions 47 non-null object \n 27 entities.urls 207 non-null object \n 28 context_annotations 80 non-null object \n 29 attachments.media 25 non-null object \n 30 attachments.media_keys 25 non-null object \n 31 attachments.poll.duration_minutes 0 non-null float64\n 32 attachments.poll.end_datetime 0 non-null object \n 33 attachments.poll.id 0 non-null float64\n 34 attachments.poll.options 0 non-null object \n 35 attachments.poll.voting_status 0 non-null object \n 36 attachments.poll_ids 0 non-null object \n 37 author.id 207 non-null int64 \n 38 author.created_at 207 non-null object \n 39 author.username 207 non-null object \n 40 author.name 207 non-null object \n 41 author.description 203 non-null object \n 42 author.entities.description.cashtags 1 non-null object \n 43 author.entities.description.hashtags 32 non-null object \n 44 author.entities.description.mentions 30 non-null object \n 45 author.entities.description.urls 23 non-null object \n 46 author.entities.url.urls 207 non-null object \n 47 author.location 137 non-null 
object \n 48 author.pinned_tweet_id 124 non-null float64\n 49 author.profile_image_url 207 non-null object \n 50 author.protected 207 non-null bool \n 51 author.public_metrics.followers_count 207 non-null int64 \n 52 author.public_metrics.following_count 207 non-null int64 \n 53 author.public_metrics.listed_count 207 non-null int64 \n 54 author.public_metrics.tweet_count 207 non-null int64 \n 55 author.url 207 non-null object \n 56 author.verified 207 non-null bool \n 57 author.withheld.scope 0 non-null float64\n 58 author.withheld.copyright 0 non-null float64\n 59 author.withheld.country_codes 0 non-null object \n 60 geo.coordinates.coordinates 0 non-null object \n 61 geo.coordinates.type 0 non-null object \n 62 geo.country 0 non-null object \n 63 geo.country_code 0 non-null object \n 64 geo.full_name 0 non-null object \n 65 geo.geo.bbox 0 non-null object \n 66 geo.geo.type 0 non-null object \n 67 geo.id 0 non-null object \n 68 geo.name 0 non-null object \n 69 geo.place_id 0 non-null object \n 70 geo.place_type 0 non-null object \n 71 __twarc.retrieved_at 207 non-null object \n 72 __twarc.url 207 non-null object \n 73 __twarc.version 207 non-null object \n 74 Unnamed: 73 0 non-null float64\ndtypes: bool(3), float64(12), int64(13), object(47)\nmemory usage: 117.2+ KB\n" ], [ "index=40\nprint(reddit_users.loc[index,]['author.entities.url.urls'])\nprint('Reddit_username:')\nprint(reddit_users.loc[index,]['author.entities.url.urls'].split(\"\\\"expanded_url\\\": \\\"\")[1].split(\"/\")[4].split(\"\\\\\")[0].split(\"\\\"\")[0].split(\"?\")[0])\nprint('Twitter_username:')\nprint(reddit_users.loc[index,]['author.username'])", "[{\"start\": 0, \"end\": 23, \"url\": \"https://t.co/MvO42m1cJ4\", \"expanded_url\": \"https://www.reddit.com/u/extrafluffybunn/?utm_source=share&utm_medium=ios_app&utm_name=iossmf\", \"display_url\": \"reddit.com/u/extrafluffyb\\u2026\"}]\nReddit_username:\nextrafluffybunn\nTwitter_username:\nJokestarJ\n" ], [ "Reddit_username=[]\nTwitter_username=[]\nfor index in range(len(reddit_users)):\n Reddit_username.append(reddit_users.loc[index,]['author.entities.url.urls'].split(\"\\\"expanded_url\\\": \\\"\")[1].split(\"/\")[4].split(\"\\\\\")[0].split(\"\\\"\")[0].split(\"?\")[0])\n Twitter_username.append(reddit_users.loc[index,]['author.username'])", "_____no_output_____" ], [ "# pd.DataFrame(Reddit_username,Twitter_username).to_csv('usernames.csv')\nusers=reddit_users[['author.id','author.name','author.username']]\nusers['Reddit_username'] = Reddit_username\nusers.to_csv('usernames_Bio.csv')", "<ipython-input-195-0f4a9594bba6>:3: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n users['Reddit_username'] = Reddit_username\n" ] ], [ [ "# From Source", "_____no_output_____" ] ], [ [ "reddit_source = data[data['source']=='Reddit Official']", "_____no_output_____" ], [ "# reddit_source=reddit_source.drop_duplicates(subset=['author.username']).reset_index()\n# reddit_source.to_csv('reddit_unique_source_2020.csv')", "_____no_output_____" ], [ "reddit_source.info()", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 66715 entries, 3 to 1266462\nData columns (total 74 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 id 66715 non-null int64 \n 1 conversation_id 66715 non-null int64 \n 2 
referenced_tweets.replied_to.id 0 non-null float64\n 3 referenced_tweets.retweeted.id 0 non-null float64\n 4 referenced_tweets.quoted.id 3 non-null float64\n 5 author_id 66715 non-null int64 \n 6 in_reply_to_user_id 0 non-null float64\n 7 retweeted_user_id 0 non-null float64\n 8 quoted_user_id 3 non-null float64\n 9 created_at 66715 non-null object \n 10 text 66715 non-null object \n 11 lang 66715 non-null object \n 12 source 66715 non-null object \n 13 public_metrics.like_count 66715 non-null int64 \n 14 public_metrics.quote_count 66715 non-null int64 \n 15 public_metrics.reply_count 66715 non-null int64 \n 16 public_metrics.retweet_count 66715 non-null int64 \n 17 reply_settings 66715 non-null object \n 18 possibly_sensitive 66715 non-null bool \n 19 withheld.scope 0 non-null object \n 20 withheld.copyright 0 non-null object \n 21 withheld.country_codes 0 non-null object \n 22 entities.annotations 51087 non-null object \n 23 entities.cashtags 52 non-null object \n 24 entities.hashtags 2594 non-null object \n 25 entities.mentions 485 non-null object \n 26 entities.urls 66715 non-null object \n 27 context_annotations 66112 non-null object \n 28 attachments.media 0 non-null object \n 29 attachments.media_keys 0 non-null object \n 30 attachments.poll.duration_minutes 0 non-null float64\n 31 attachments.poll.end_datetime 0 non-null object \n 32 attachments.poll.id 0 non-null float64\n 33 attachments.poll.options 0 non-null object \n 34 attachments.poll.voting_status 0 non-null object \n 35 attachments.poll_ids 0 non-null object \n 36 author.id 66715 non-null int64 \n 37 author.created_at 66715 non-null object \n 38 author.username 66715 non-null object \n 39 author.name 66709 non-null object \n 40 author.description 59100 non-null object \n 41 author.entities.description.cashtags 209 non-null object \n 42 author.entities.description.hashtags 11303 non-null object \n 43 author.entities.description.mentions 3277 non-null object \n 44 author.entities.description.urls 12686 non-null object \n 45 author.entities.url.urls 49149 non-null object \n 46 author.location 45501 non-null object \n 47 author.pinned_tweet_id 20923 non-null float64\n 48 author.profile_image_url 66710 non-null object \n 49 author.protected 66715 non-null bool \n 50 author.public_metrics.followers_count 66715 non-null int64 \n 51 author.public_metrics.following_count 66715 non-null int64 \n 52 author.public_metrics.listed_count 66715 non-null int64 \n 53 author.public_metrics.tweet_count 66715 non-null int64 \n 54 author.url 49149 non-null object \n 55 author.verified 66715 non-null bool \n 56 author.withheld.scope 0 non-null float64\n 57 author.withheld.copyright 0 non-null float64\n 58 author.withheld.country_codes 0 non-null object \n 59 geo.coordinates.coordinates 0 non-null object \n 60 geo.coordinates.type 0 non-null object \n 61 geo.country 0 non-null object \n 62 geo.country_code 0 non-null object \n 63 geo.full_name 0 non-null object \n 64 geo.geo.bbox 0 non-null object \n 65 geo.geo.type 0 non-null object \n 66 geo.id 0 non-null object \n 67 geo.name 0 non-null object \n 68 geo.place_id 0 non-null object \n 69 geo.place_type 0 non-null object \n 70 __twarc.retrieved_at 66715 non-null object \n 71 __twarc.url 66715 non-null object \n 72 __twarc.version 66715 non-null object \n 73 Unnamed: 73 0 non-null float64\ndtypes: bool(3), float64(12), int64(12), object(47)\nmemory usage: 36.8+ MB\n" ], [ "reddit_source_user = reddit_source[reddit_source['entities.urls'].str.contains('reddit.com/r/u_', na=False)].reset_index()", 
"_____no_output_____" ], [ "reddit_source_user=reddit_source_user.drop_duplicates(subset=['author.username']).reset_index()\nreddit_source_user.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 3743 entries, 0 to 3742\nData columns (total 76 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 level_0 3743 non-null int64 \n 1 index 3743 non-null int64 \n 2 id 3743 non-null int64 \n 3 conversation_id 3743 non-null int64 \n 4 referenced_tweets.replied_to.id 0 non-null float64\n 5 referenced_tweets.retweeted.id 0 non-null float64\n 6 referenced_tweets.quoted.id 1 non-null float64\n 7 author_id 3743 non-null int64 \n 8 in_reply_to_user_id 0 non-null float64\n 9 retweeted_user_id 0 non-null float64\n 10 quoted_user_id 1 non-null float64\n 11 created_at 3743 non-null object \n 12 text 3743 non-null object \n 13 lang 3743 non-null object \n 14 source 3743 non-null object \n 15 public_metrics.like_count 3743 non-null int64 \n 16 public_metrics.quote_count 3743 non-null int64 \n 17 public_metrics.reply_count 3743 non-null int64 \n 18 public_metrics.retweet_count 3743 non-null int64 \n 19 reply_settings 3743 non-null object \n 20 possibly_sensitive 3743 non-null bool \n 21 withheld.scope 0 non-null object \n 22 withheld.copyright 0 non-null object \n 23 withheld.country_codes 0 non-null object \n 24 entities.annotations 2943 non-null object \n 25 entities.cashtags 1 non-null object \n 26 entities.hashtags 103 non-null object \n 27 entities.mentions 10 non-null object \n 28 entities.urls 3743 non-null object \n 29 context_annotations 3713 non-null object \n 30 attachments.media 0 non-null object \n 31 attachments.media_keys 0 non-null object \n 32 attachments.poll.duration_minutes 0 non-null float64\n 33 attachments.poll.end_datetime 0 non-null object \n 34 attachments.poll.id 0 non-null float64\n 35 attachments.poll.options 0 non-null object \n 36 attachments.poll.voting_status 0 non-null object \n 37 attachments.poll_ids 0 non-null object \n 38 author.id 3743 non-null int64 \n 39 author.created_at 3743 non-null object \n 40 author.username 3743 non-null object \n 41 author.name 3743 non-null object \n 42 author.description 3219 non-null object \n 43 author.entities.description.cashtags 4 non-null object \n 44 author.entities.description.hashtags 366 non-null object \n 45 author.entities.description.mentions 129 non-null object \n 46 author.entities.description.urls 633 non-null object \n 47 author.entities.url.urls 2540 non-null object \n 48 author.location 2482 non-null object \n 49 author.pinned_tweet_id 1047 non-null float64\n 50 author.profile_image_url 3743 non-null object \n 51 author.protected 3743 non-null bool \n 52 author.public_metrics.followers_count 3743 non-null int64 \n 53 author.public_metrics.following_count 3743 non-null int64 \n 54 author.public_metrics.listed_count 3743 non-null int64 \n 55 author.public_metrics.tweet_count 3743 non-null int64 \n 56 author.url 2540 non-null object \n 57 author.verified 3743 non-null bool \n 58 author.withheld.scope 0 non-null float64\n 59 author.withheld.copyright 0 non-null float64\n 60 author.withheld.country_codes 0 non-null object \n 61 geo.coordinates.coordinates 0 non-null object \n 62 geo.coordinates.type 0 non-null object \n 63 geo.country 0 non-null object \n 64 geo.country_code 0 non-null object \n 65 geo.full_name 0 non-null object \n 66 geo.geo.bbox 0 non-null object \n 67 geo.geo.type 0 non-null object \n 68 geo.id 0 non-null object \n 69 geo.name 0 non-null object \n 70 geo.place_id 0 non-null 
object \n 71 geo.place_type 0 non-null object \n 72 __twarc.retrieved_at 3743 non-null object \n 73 __twarc.url 3743 non-null object \n 74 __twarc.version 3743 non-null object \n 75 Unnamed: 73 0 non-null float64\ndtypes: bool(3), float64(12), int64(14), object(47)\nmemory usage: 2.1+ MB\n" ], [ "index=133\nprint(reddit_source_user.loc[index,]['entities.urls'])\nprint(reddit_source_user.loc[index,]['entities.urls'].split(\"\\\"expanded_url\\\": \\\"https://www.reddit.com/r/u_\")[1].split(\"/\")[0])\nprint(reddit_source_user.loc[index,]['author.username'])", "[{\"start\": 15, \"end\": 38, \"url\": \"https://t.co/syQdCG9g8B\", \"expanded_url\": \"https://www.reddit.com/user/123tonks123/draft/158ab4c2-4a8e-11eb-8312-a2816d891002\", \"display_url\": \"reddit.com/user/123tonks1\\u2026\", \"status\": 200, \"unwound_url\": \"https://www.reddit.com/user/123tonks123/draft/158ab4c2-4a8e-11eb-8312-a2816d891002\"}, {\"start\": 51, \"end\": 74, \"url\": \"https://t.co/v3l4iOUIT5\", \"expanded_url\": \"https://www.reddit.com/r/u_123tonks123/comments/kn09o0/httpswwwredditcomuser123tonks123draft158ab4c24a8e1/?utm_content=post&utm_medium=twitter&utm_source=share&utm_name=submit&utm_term=t3_kn09o0\", \"display_url\": \"reddit.com/r/u_123tonks12\\u2026\", \"status\": 200, \"unwound_url\": \"https://www.reddit.com/user/123tonks123/comments/kn09o0/httpswwwredditcomuser123tonks123draft158ab4c24a8e1/?utm_content=post&utm_medium=twitter&utm_name=submit&utm_source=share&utm_term=t3_kn09o0\"}]\n123tonks123\nTonks29988785\n" ], [ "Reddit_username=[]\nfor index in range(len(reddit_source_user)):\n Reddit_username.append(reddit_source_user.loc[index,]['entities.urls'].split(\"\\\"expanded_url\\\": \\\"https://www.reddit.com/r/u_\")[1].split(\"/\")[0])", "_____no_output_____" ], [ "users=reddit_source_user[['text','author.id','author.name','author.username']]\nusers['Reddit_username'] = Reddit_username\nusers.to_csv('usernames_Source.csv')", "<ipython-input-223-07455e72156e>:2: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n users['Reddit_username'] = Reddit_username\n" ] ] ]
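The chained `.split()` calls above are brittle against URL variations. A hedged alternative uses the `re` module already imported in this notebook; the pattern is inferred from the URL shapes shown above (`reddit.com/u/<name>`, `reddit.com/user/<name>`, `reddit.com/r/u_<name>`), not from an official Reddit specification:

```python
import re

# One regex covering the three profile-URL shapes observed in this notebook's data.
REDDIT_USER_RE = re.compile(r'reddit\.com/(?:u(?:ser)?/|r/u_)([A-Za-z0-9_-]+)')

def extract_reddit_username(expanded_url):
    # Returns the captured username, or None when the URL is not a profile link.
    m = REDDIT_USER_RE.search(expanded_url)
    return m.group(1) if m else None

# Examples using the URL formats shown earlier:
extract_reddit_username('https://www.reddit.com/u/extrafluffybunn/?utm_source=share')  # 'extrafluffybunn'
extract_reddit_username('https://www.reddit.com/r/u_123tonks123/comments/kn09o0/')     # '123tonks123'
```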
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb14c369bf5ee95bda416aa1ecf3f2fab0fbab5d
964,246
ipynb
Jupyter Notebook
notebooks/frequentist_colab.ipynb
cstorm125/abtest
1d47c7144d693e67d897e6939119b23f9764f67d
[ "Apache-2.0" ]
12
2019-04-23T03:12:39.000Z
2020-09-16T06:00:44.000Z
notebooks/frequentist_colab.ipynb
cstorm125/abtest
1d47c7144d693e67d897e6939119b23f9764f67d
[ "Apache-2.0" ]
null
null
null
notebooks/frequentist_colab.ipynb
cstorm125/abtest
1d47c7144d693e67d897e6939119b23f9764f67d
[ "Apache-2.0" ]
27
2020-10-08T19:22:58.000Z
2021-11-29T11:09:45.000Z
385.082268
69,400
0.922508
[ [ [ "<a href=\"https://colab.research.google.com/github/cstorm125/abtestoo/blob/master/notebooks/frequentist_colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# A/B Testing from Scratch: Frequentist Approach", "_____no_output_____" ], [ "Frequentist A/B testing is one of the most used and abused statistical methods in the world. This article starts with a simple problem of comparing two online ads campaigns (or teatments, user interfaces or slot machines). It outlines several useful statistical concepts and how we exploit them to solve our problem. At the end, it acknowledges some common pitfalls we face when doing a frequentist A/B test and proposes some possible solutions to a more robust A/B testing. Readers are encouraged to tinker with the widgets provided in order to explore the impacts of each parameter.\n\nThanks to [korakot](https://github.com/korakot) for notebook conversion to Colab.", "_____no_output_____" ] ], [ [ "# #depedencies for colab\n# %%capture\n# !pip install plotnine", "_____no_output_____" ], [ "import numpy as np\nimport pandas as pd\nfrom typing import Collection, Tuple\n\n#widgets เอาออก เปลี่ยนไปใช้ colab form แทน\n#from ipywidgets import interact, interactive, fixed, interact_manual\n#import ipywidgets as widgets\n# from IPython.display import display\n\n#plots\nimport matplotlib.pyplot as plt\nfrom plotnine import *\n\n#stats\nimport scipy as sp\n\n#suppress annoying warning prints\nimport warnings\nwarnings.filterwarnings('ignore')", "_____no_output_____" ] ], [ [ "## Start with A Problem", "_____no_output_____" ], [ "A typical situation marketers (research physicians, UX researchers, or gamblers) find themselves in is that they have two variations of ads (treatments, user interfaces, or slot machines) and want to find out which one has the better performance in the long run.\n\nPractitioners know this as A/B testing and statisticians as **hypothesis testing**. Consider the following problem. We are running an online ads campaign `A` for a period of time, but now we think a new ads variation might work better so we run an experiemnt by dividing our audience in half: one sees the existing campaign `A` whereas the other sees a new campaign `B`. Our performance metric is conversion (sales) per click (ignore [ads attribution problem](https://support.google.com/analytics/answer/1662518) for now). After the experiment ran for two months, we obtain daily clicks and conversions of each campaign and determine which campaign has the better performance.\n\nWe simulate the aforementioned problem with both campaigns getting randomly about a thousand clicks per day. The secrete we will pretend to not know is that hypothetical campaign `B` has slightly better conversion rate than `A` in the long run. 
With this synthetic data, we will explore some useful statistical concepts and exploit them for our frequentist A/B testing.", "_____no_output_____" ] ], [ [ "def gen_bernoulli_campaign(p1: float, p2: float,\n lmh: Collection = [500, 1000, 1500],\n timesteps: int = 60,\n scaler: float = 300, seed: int = 1412) -> pd.DataFrame:\n '''\n :meth: generate fake impression-conversion campaign based on specified parameters\n :param float p1: true conversion rate of group 1\n :param float p2: true conversion rate of group 2\n :param Collection lmh: low-, mid-, and high-points for the triangular distribution of clicks\n :param int nb_days: number of timesteps the campaigns run for\n :param float scaler: scaler for Gaussian noise\n :param int seed: seed for Gaussian noise\n :return: dataframe containing campaign results\n '''\n\n np.random.seed(seed)\n ns = np.random.triangular(*lmh, size=timesteps * 2).astype(int)\n np.random.seed(seed)\n es = np.random.randn(timesteps * 2) / scaler\n\n n1 = ns[:timesteps]\n c1 = ((p1 + es[:timesteps]) * n1).astype(int)\n n2 = ns[timesteps:]\n c2 = ((p2 + es[timesteps:]) * n2).astype(int)\n result = pd.DataFrame({'timesteps': range(timesteps), 'impression_a': n1, 'conv_a': c1, 'impression_b': n2, 'conv_b': c2})\n\n result = result[['timesteps', 'impression_a', 'impression_b', 'conv_a', 'conv_b']]\n result['cumu_impression_a'] = result.impression_a.cumsum()\n result['cumu_impression_b'] = result.impression_b.cumsum()\n result['cumu_conv_a'] = result.conv_a.cumsum()\n result['cumu_conv_b'] = result.conv_b.cumsum()\n result['cumu_rate_a'] = result.cumu_conv_a / result.cumu_impression_a\n result['cumu_rate_b'] = result.cumu_conv_b / result.cumu_impression_b\n return result\n\nconv_days = gen_bernoulli_campaign(p1 = 0.10,\n p2 = 0.105,\n timesteps = 60,\n scaler=300,\n seed = 1412) #god-mode \nconv_days.columns = [i.replace('impression','click') for i in conv_days.columns] #function uses impressions but we use clicks\nconv_days.head()", "_____no_output_____" ], [ "rates_df = conv_days[['timesteps','cumu_rate_a','cumu_rate_b']].melt(id_vars='timesteps')\ng = (ggplot(rates_df, aes(x='timesteps', y='value', color='variable')) + geom_line() + theme_minimal() +\n xlab('Days of Experiment Run') + ylab('Cumulative Conversions / Cumulative Clicks'))\ng", "_____no_output_____" ], [ "#sum after 2 months\nconv_df = pd.DataFrame({'campaign_id':['A','B'], 'clicks':[conv_days.click_a.sum(),conv_days.click_b.sum()],\n 'conv_cnt':[conv_days.conv_a.sum(),conv_days.conv_b.sum()]})\nconv_df['conv_per'] = conv_df['conv_cnt'] / conv_df['clicks']\nconv_df", "_____no_output_____" ] ], [ [ "## Random Variables and Probability Distributions", "_____no_output_____" ], [ "Take a step back and think about the numbers we consider in our daily routines, whether it is conversion rate of an ads campaign, the relative risk of a patient group, or sales and revenues of a shop during a given period of time. From our perspective, they have one thing in common: **we do not know exactly how they come to be**. In fact, we would not need an A/B test if we do. For instance, if we know for certain that conversion rate of an ads campaign will be `0.05 + 0.001 * number of letters in the ads`, we can tell exactly which ads to run: the one with the highest number of letters in it.\n\nWith our lack of knowledge, we do the next best thing and assume that our numbers are generated by some mathematical formula, calling them **random variables**. 
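To make the contrast concrete, here is a tiny added sketch of that hypothetical deterministic world; the ad copies and the formula are the made-up ones from the paragraph above, so no statistics would be needed:

```python
# Hypothetical deterministic world: if conversion rate really were
# 0.05 + 0.001 * number of letters, the winner is known by construction.
ads = ['Buy now', 'Buy this amazing product now']  # made-up ad copies
conv_rate = {ad: 0.05 + 0.001 * len(ad) for ad in ads}  # len() counts characters, close enough for the toy
best_ad = max(conv_rate, key=conv_rate.get)  # the longest ad always wins
print(conv_rate, best_ad)
```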
For instance, we might think of the probability of a click converting the same way as a coin-flip event, with the probability of converting as $p$ (say 0.1) and not converting as $1-p$ (thus 0.9). With this, we can simulate the event, a.k.a. click conversion, as many times as we want:", "_____no_output_____" ] ], [ [ "def bernoulli(n,p):\n    flips = np.random.choice([0,1], size=n, p=[1-p,p])\n    flips_df = pd.DataFrame(flips)\n    flips_df.columns = ['conv_flag']\n    g = (ggplot(flips_df,aes(x='factor(conv_flag)')) + geom_bar(aes(y = '(..count..)/sum(..count..)')) + \n         theme_minimal() + xlab('Conversion Flag') + ylab('Percentage of Occurrence') +\n         geom_hline(yintercept=p, colour='red') + ggtitle(f'Distribution after {n} Trials'))\n    g.draw()\n    print(f'Expectation: {p}\\nVariance: {p*(1-p)}')\n    print(f'Sample Mean: {np.mean(flips)}\\nSample Variance: {np.var(flips)}')\n\n# use colab form instead of interact\n#interact(bernoulli, n=widgets.IntSlider(min=1,max=500,step=1,value=20),\n#         p=widgets.FloatSlider(min=0.1,max=0.9))", "_____no_output_____" ], [ "#@title {run: \"auto\"}\nn = 20 #@param {type:\"slider\", min:1, max:500, step:1}\np = 0.1 #@param {type:\"slider\", min:0.1, max:0.9, step:0.1}\nbernoulli(n, p)", "Expectation: 0.1\nVariance: 0.09000000000000001\nSample Mean: 0.05\nSample Variance: 0.04749999999999999\n" ] ], [ [ "A **probability distribution** is represented with the values of a random variable we are interested in on the X-axis, and the chance of them appearing after a number of trials on the Y-axis. The distribution above is called the [Bernoulli Distribution](http://mathworld.wolfram.com/BernoulliDistribution.html), usually used to model hypothetical coin flips and online advertisements. [Other distributions](https://en.wikipedia.org/wiki/List_of_probability_distributions) are used in the same manner for other types of random variables. [Cloudera](https://www.cloudera.com/) provided a [quick review](https://blog.cloudera.com/blog/2015/12/common-probability-distributions-the-data-scientists-crib-sheet/) on a few of them you might find useful.\n\n<img src='https://github.com/cstorm125/abtestoo/blob/master/images/distribution.png?raw=1' alt='Common Probability Distributions; Cloudera'/>", "_____no_output_____" ], [ "## Law of Large Numbers", "_____no_output_____" ], [ "There are two sets of indicators of a distribution that are especially relevant to our problem: one derived theoretically and another derived from the data we observed. The **Law of Large Numbers (LLN)** describes the relationship between them.\n\nTheoretically, we can derive these values about any distribution:\n\n* **Expectation** of a random variable $X_i$ is its long-run average derived from repetitively sampling $X_i$ from the same distribution. Each distribution requires its own way to obtain the expectation. 
For our example, it is the weighted average of outcomes $X_i$ ($X_i=1$ converted; $X_i=0$ not converted) and their respective probabilities ($p$ converted; $1-p$ not converted):\n\n\\begin{align}\nE[X_i] &= \\mu = \\sum_{i=1}^{k} p_i * X_i \\\\\n&= (1-p)*0 + p*1 \\\\\n&= p\n\\end{align}\nwhere $k$ is the number of distinct outcomes\n\n* **Variance** of a random variable $X_i$ represents the expectation of how much $X_i$ deviates from its expectation, for our example formulated as:\n\n\\begin{align}\nVar(X_i) &= \\sigma^2 = E[(X_i-E(X_i))^2] \\\\\n&= E[X_i^2] - E[X_i]^2 \\\\\n&= \\{(1-p)*0^2 + p*1^2\\} - p^2 \\\\\n&= p(1-p)\n\\end{align}\n\nEmpirically, we can also calculate their counterparts with any amount of data we have on hand:\n\n* **Sample Mean** is simply an average of all $X_i$ we currently have in our sample of size $n$:\n\n\\begin{align}\n\\bar{X} &= \\frac{1}{n} \\sum_{i=1}^{n} X_i\n\\end{align}\n\n* **Sample Variance** is the variance based on deviation from the sample mean; the $n-1$ is due to [Bessel's correction](https://en.wikipedia.org/wiki/Bessel%27s_correction#Source_of_bias) (See Appendix):\n\n\\begin{align}\ns^2 &= \\frac{1}{n-1} \\sum_{i=1}^{n} (X_i - \\bar{X})^2\n\\end{align}\n\nLLN posits that when we have a large enough sample size $n$, the sample mean will converge to the expectation. This can be shown with a simple simulation:", "_____no_output_____" ] ], [ [ "def lln(n_max,p):\n    mean_flips = []\n    var_flips = []\n    ns = []\n    for n in range(1,n_max):\n        flips = np.random.choice([0,1], size=n, p=[1-p,p])\n        ns.append(n)\n        mean_flips.append(flips.mean())\n        var_flips.append(flips.var())\n    flips_df = pd.DataFrame({'n':ns,'mean_flips':mean_flips,'var_flips':var_flips}).melt(id_vars='n')\n    g = (ggplot(flips_df,aes(x='n',y='value',colour='variable')) + geom_line() +\n        facet_wrap('~variable', ncol=1, scales='free') + theme_minimal() +\n        ggtitle(f'Expectation={p:2f}; Variance={p*(1-p):2f}') + xlab('Number of Samples') +\n        ylab('Value'))\n    g.draw()\n\n# interact(lln, n_max=widgets.IntSlider(min=2,max=10000,step=1,value=1000),\n#          p=widgets.FloatSlider(min=0.1,max=0.9))", "_____no_output_____" ], [ "#@title {run: \"auto\"}\nn = 1000 #@param {type:\"slider\", min:2, max:10000, step:1}\np = 0.1 #@param {type:\"slider\", min:0.1, max:0.9, step:0.1}\nlln(n, p)", "_____no_output_____" ] ], [ [ "Notice that even though LLN does not say that the sample variance will also converge to the variance as $n$ grows large enough, that is also the case. Mathematically, it can be derived as follows, using the $\\frac{1}{n}$ version of sample variance since Bessel's correction vanishes as $n\\rightarrow\\infty$:\n\n\\begin{align}\ns^2 &\\approx \\frac{1}{n}\\sum_{i=1}^{n}(X_i - \\bar{X})^2 \\\\\n&= \\frac{1}{n}\\sum_{i=1}^{n}(X_i - \\mu)^2 \\text{; as }n\\rightarrow\\infty\\text{ }\\bar{X}\\rightarrow\\mu\\\\\n&=\\frac{1}{n}(\\sum_{i=1}^{n}{X_i}^2 - 2\\mu\\sum_{i=1}^{n}X_i + n\\mu^2) \\\\\n&=\\frac{\\sum_{i=1}^{n}{X_i}^2}{n} - \\frac{2\\mu\\sum_{i=1}^{n}X_i}{n} + \\mu^2 \\\\\n&= \\frac{\\sum_{i=1}^{n}{X_i}^2}{n} - 2\\mu\\bar{X} + \\mu^2\\text{; as }\\frac{\\sum_{i=1}^{n}X_i}{n} = \\bar{X}\\\\\n&= \\frac{\\sum_{i=1}^{n}{X_i}^2}{n} - 2\\mu^2 + \\mu^2 = \\frac{\\sum_{i=1}^{n}{X_i}^2}{n} - \\mu^2 \\text{; as }n\\rightarrow\\infty\\text{ }\\bar{X}\\rightarrow\\mu\\\\\n&= E[{X_i}^2] - E[X_i]^2 = Var(X_i) = \\sigma^2 \\text{; by LLN, }\\frac{\\sum_{i=1}^{n}{X_i}^2}{n}\\rightarrow E[{X_i}^2]\n\\end{align}", "_____no_output_____" ], [ "## Central Limit Theorem", "_____no_output_____" ], [ "Assuming some probability distribution for our random variable also lets us exploit another extremely powerful statistical concept: the **Central Limit Theorem (CLT)**. 
To see CLT in action, let us simplify our problem a bit and say we are only trying to find out if a hypothetical ads campaign `C` has a conversion rate of more than 10% or not, assuming data collected so far say that `C` has 1,000 clicks and 107 conversions.", "_____no_output_____" ] ], [ [ "c_df = pd.DataFrame({'campaign_id':'C','clicks':1000,'conv_cnt':107,'conv_per':0.107},index=[0])\nc_df", "_____no_output_____" ] ], [ [ "CLT goes as follows: \n> If $X_i$ is an independent and identically distributed (i.i.d.) random variable with expectation $\\mu$ and variance $\\sigma^2$ and $\\bar{X_j}$ is the sample mean of $n$ samples of $X_i$ we drew as part of sample group $j$, then when $n$ is large enough, $\\bar{X_j}$ will follow a [normal distribution](http://mathworld.wolfram.com/NormalDistribution.html) with with expectation $\\mu$ and variance $\\frac{\\sigma^2}{n}$\n\nIt is a mouthful to say and full of weird symbols, so let us break it down line by line.", "_____no_output_____" ], [ "**If $X_i$ is an independent and identically distributed (i.i.d.) random variable with expectation $\\mu$ and variance $\\sigma^2$** <br/>In our case, $X_i$ is if click $i$ is coverted ($X_i=1$) or not converted ($X_i=0$) with $\\mu$ as some probability that represents how likely a click will convert on average. *Independent* means that the probability of each click converting depends only on itself and not other clicks. *Identically distributed* means that the true probability of each click converting is more or less the same. We need to rely on domain knowledge to verify these assumptions; for example, in online advertisement, we would expect, at least for when working with a reputable ads network such as Criteo, that each click comes from indepdent users, as opposed to, say, a click farm where we would see a lot of clicks behaving the same way by design. Identical distribution is a little difficult to assume since we would think different demographics the ads are shown to will react differently so they might not have the same expectation.", "_____no_output_____" ] ], [ [ "ind_df = pd.DataFrame({'iid':[False]*100+[True]*100,\n 'order': list(range(100)) + list(range(100)),\n 'conv_flag':[1]*50+ [0]*50+ list(np.random.choice([0,1], size=100))})\ng = (ggplot(ind_df,aes(x='order',y='conv_flag',color='iid')) + geom_point() +\n facet_wrap('~iid') + theme_minimal() + xlab('i-th Click') + ylab('Conversion') +\n ggtitle('Both plots has conversion rate of 50% but only one is i.i.d.'))\ng", "_____no_output_____" ] ], [ [ "**and $\\bar{X_j}$ is the sample mean of $n$ samples of $X_i$ we drew as part of sample group $j$, then**<br/>\nFor campaign `C`, we can think of all the clicks we observed as one sample group, which exists in parallel with an infinite number of sample groups that we have not seen yet but can be drawn from the distribution by additional data collection. 
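As a small added illustration of that "parallel sample groups" idea (using `c_df` defined above; the assumed long-run rate of 10% and the seed convention are taken from earlier in this notebook), we can draw a few hypothetical sample groups we might have observed instead:

```python
# Draw ten unseen sample groups of the same size as campaign C under an
# assumed long-run conversion rate of 10%, and look at how their sample means scatter.
np.random.seed(1412)  # same seed convention used elsewhere in this notebook
unseen_means = np.random.binomial(n=int(c_df.clicks[0]), p=0.1, size=10) / c_df.clicks[0]
print(unseen_means)        # sample means of ten sample groups we could have observed
print(unseen_means.std())  # scatter close to sqrt(0.1*0.9/1000) ~ 0.0095
```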
This way, we calculate the sample mean as total conversions divided by total number of clicks observed during the campaign.\n\n<img src='https://github.com/cstorm125/abtestoo/blob/master/images/sample_group.png?raw=1' alt='Sample Group in Universe'>", "_____no_output_____" ], [ "**when $n$ is large enough, $\\bar{X_j}$ will follow a [normal distribution](http://mathworld.wolfram.com/NormalDistribution.html) with with expectation $\\mu$ and variance $\\frac{\\sigma^2}{n}$**</br>\nHere's the kicker: regardless of what distribution each $X_i$ of sample group $j$ is drawn from, as long as you have enough number of sample $n$, the sample mean of that sample group $\\bar{X_j}$ will converge to a normal distribution. Try increase $n$ in the plot below and see what happens.", "_____no_output_____" ] ], [ [ "def clt(n, dist):\n n_total = n * 10000\n if dist == 'discrete uniform':\n r = np.random.uniform(size=n_total)\n elif dist =='bernoulli':\n r = np.random.choice([0,1],size=n_total,p=[0.9,0.1])\n elif dist =='poisson':\n r = np.random.poisson(size=n_total)\n else:\n raise ValueError('Choose distributions that are available')\n #generate base distribution plot\n r_df = pd.DataFrame({'r':r})\n g1 = (ggplot(r_df, aes(x='r')) + geom_histogram(bins=30) + theme_minimal() +\n xlab('Values') + ylab('Number of Samples') + \n ggtitle(f'{dist} distribution where sample groups are drawn from'))\n g1.draw()\n \n #generate sample mean distribution plot\n normal_distribution = np.random.normal(loc=np.mean(r), scale=np.std(r) / np.sqrt(n), size=10000)\n sm_df = pd.DataFrame({'sample_means':r.reshape(-1,n).mean(1),\n 'normal_distribution': normal_distribution}).melt()\n g2 = (ggplot(sm_df, aes(x='value',fill='variable')) + \n geom_histogram(bins=30,position='nudge',alpha=0.5) + \n theme_minimal() + xlab('Sample Means') + ylab('Number of Sample Means') + \n ggtitle(f'Distribution of 10,000 sample means with size {n}')) \n g2.draw()\n \n\ndists = ['bernoulli','discrete uniform','poisson']\n# interact(clt, n=widgets.IntSlider(min=1,max=100,value=1),\n# dist = widgets.Dropdown(\n# options=dists,\n# value='bernoulli')\n# )", "_____no_output_____" ], [ "#@title {run: \"auto\"}\nn = 30 #@param {type:\"slider\", min:1, max:100, step:1}\ndist = 'bernoulli' #@param [\"discrete uniform\", \"bernoulli\", \"poisson\"] {type:\"string\"}\nclt(n, dist)", "_____no_output_____" ] ], [ [ "The expectation and variance of the sample mean distribution can be derived as follows:\n\n\\begin{align}\nE[\\bar{X_j}] &= E[\\frac{\\sum_{i=1}^{n} X_i}{n}] \\\\\n&= \\frac{1}{n} \\sum_{i=1}^{n} E[X_i] = \\frac{1}{n} \\sum_{i=1}^{n} \\mu\\\\\n&= \\frac{n\\mu}{n} = \\mu \\\\\nVar(\\bar{X_j}) &= Var(\\frac{\\sum_{i=1}^{n} X_i}{n}) \\\\\n&= \\frac{1}{n^2} \\sum_{i=1}^{n} Var(X_i) = \\frac{1}{n^2} \\sum_{i=1}^{n} \\sigma^2\\\\\n&= \\frac{n\\sigma^2}{n^2} = \\frac{\\sigma^2}{n} \\\\\n\\end{align}", "_____no_output_____" ], [ "The fact that we know this specific normal distribution of sample means has expectation $\\mu$ and variance $\\frac{\\sigma^2}{n}$ is especially useful. Remember we want to find out whether campaign `C` **in general, not just in any sample group,** has better conversion rate than 10%. 
Below is that exact normal distribution based on information from our sample group (1,000 clicks) and the assumption that the conversion rate is 10%:\n\n\\begin{align}\nE[\\bar{X_j}] &= \\mu = p\\\\\n&= 0.1 \\text{; by our assumption}\\\\\nVar(\\bar{X_j}) &= \\frac{\\sigma^2}{n} = \\frac{p*(1-p)}{n}\\\\\n&= \\frac{0.1 * (1-0.1)}{1000}\\\\\n&= 0.00009\\\\\n\\end{align}", "_____no_output_____" ] ], [ [ "n = c_df.clicks[0]\nx_bar = c_df.conv_per[0]\np = 0.1\nmu = p; variance = p*(1-p)/n; sigma = (variance)**(0.5)\n# mu = 0; variance = 1; sigma = (variance)**(0.5)\nx = np.arange(0.05, 0.15, 1e-3)\ny = np.array([sp.stats.norm.pdf(i, loc=mu, scale=sigma) for i in x])\n\nsm_df = pd.DataFrame({'x': x, 'y': y, 'crit':[False if i>x_bar else True for i in x]})\ng = (ggplot(sm_df, aes(x='x', y='y')) + geom_area() +\n     theme_minimal() + xlab('Sample Means') + ylab('Probability Density Function') + \n     ggtitle('Sample mean distribution under our assumption')) \ng", "_____no_output_____" ] ], [ [ "As long as we know the expectation (which we usually do as part of the assumption) and variance (which is more tricky) of the base distribution, we can use this normal distribution to model random variables from *any* distribution. That is, we can model *any* data as long as we can assume their expectation and variance. ", "_____no_output_____" ], [ "## Think Like A ~~Detective~~ Frequentist", "_____no_output_____" ], [ "From a frequentist perspective, we treat a problem like a criminal prosecution. First, we assume the innocence of the defendant, often called the **null hypothesis** (in our case, that the conversion rate is *less than or equal to* 10%). Then, we collect the evidence (all clicks and conversions from campaign `C`). After that, we review how *unlikely* it is that we have this evidence assuming the defendant is innocent (by looking at where our sample mean lands on the sample mean distribution). Most frequentist tests are simply saying:\n\n>If we assume that the [conversion rate]() of [ads campaign C]() has a long-run value of less than or equal to [10%](), our results with sample mean [0.107]() or more extreme ones are so unlikely that they happen only [23%]() of the time, calculated by the area of the distribution with higher value than our sample mean.\n\nNote that you can substitute the highlighted parts with any other numbers and statistics you are comparing; for instance, medical trials instead of ads campaigns and relative risks instead of conversion rates.", "_____no_output_____" ] ], [ [ "g = (ggplot(sm_df, aes(x='x', y='y', group='crit')) + geom_area(aes(fill='crit')) +\n     theme_minimal() + xlab('Sample Means') + ylab('Probability Density Function') + \n     ggtitle('Sample mean distribution under our assumption') + \n     guides(fill=guide_legend(title=\"Conversion Rate < 0.1\"))) \ng", "_____no_output_____" ] ], [ [ "Whether 23% is unlikely *beyond reasonable doubt* depends on how much we are willing to tolerate the false positive rate (the percentage of innocent people you are willing to execute). By convention, a lot of practitioners set this to 1-5% depending on their problems; for instance, an experiment in physics may use 1% or less because physical phenomena are highly reproducible, whereas social science may use 5% because human behavior is more variable. This is not to be confused with the **false discovery rate**, which is the probability of our positive predictions turning out to be wrong. 
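As a quick numeric check (added here; it uses only `scipy` and the quantities already defined above), the 23% figure in the frequentist statement can be reproduced directly:

```python
# One-tailed p-value for campaign C under the null hypothesis p <= 0.1.
z_c = (0.107 - 0.1) / np.sqrt(0.1 * 0.9 / 1000)  # standardized sample mean
p_one_tailed = sp.stats.norm.sf(z_c)             # area to the right of the sample mean
print(z_c, p_one_tailed)                         # ~0.738 and ~0.23
```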
The excellent book [Statistics Done Wrong](https://www.statisticsdonewrong.com/p-value.html) has given this topic an extensive coverage that you definitely should check out (Reinhart, 2015).\n\nThis degree of acceptable unlikeliness is called **alpha** and the probability we observe is called **p-value**. We must set alpha as part of the assumption before looking at the data (the law must first state how bad an action is for a person to be executed).", "_____no_output_____" ], [ "## Transforming A Distribution", "_____no_output_____" ], [ "In the previous example of `C`, we are only interested when the conversion rate is *more than* 10% so we look only beyond the right-hand side of our sample mean (thus called **one-tailed tests**). If we were testing whether the conversion rate is *equal to* 10% or not we would be interested in both sides (thus called **two-tailed tests**). However, it is not straightforward since we have to know the equivalent position of our sample mean on the left-hand side of the distribution.\n\nOne way to remedy this is to convert the sample mean distribution to a distribution that is symmetrical around zero and has a fixed variance so the value on one side is equivalent to minus that value of the other side. **Standard normal distribution** is the normal distribution with expectation $\\mu=0$ and variance $\\sigma^2=1$. We convert any normal distribution to a standard normal distribution by:\n\n1. Shift its expectation to zero. This can be done by substracting all values of a distribution by its expectation:\n\\begin{align}\nE[\\bar{X_j}-\\mu] &= E[\\bar{X_j}]-\\mu \\\\\n&= \\mu-\\mu \\\\\n&= 0 \\\\\n\\end{align}\n2. Scale its variance to 1. This can be done by dividing all values by square root of its variance called **standard deviation**:\n\\begin{align}\nVar(\\frac{\\bar{X_j}}{\\sqrt{\\sigma^2/n}}) &= \\frac{1}{\\sigma^2/n}Var(\\bar{X_j})\\\\\n&= \\frac{\\sigma^2/n}{\\sigma^2/n}\\\\\n&=1\n\\end{align}\n\nTry shifting and scaling the distribution below with different $m$ and $v$.", "_____no_output_____" ] ], [ [ "def shift_normal(m,v):\n n = c_df.clicks[0]\n x_bar = c_df.conv_per[0]\n p = 0.1\n mu = p; variance = p*(1-p)/n; sigma = (variance)**(0.5)\n x = np.arange(0.05, 0.15, 1e-3)\n y = np.array([sp.stats.norm.pdf(i, loc=mu, scale=sigma) for i in x])\n sm_df = pd.DataFrame({'x': x, 'y': y})\n \n #normalize process\n sm_df['x'] = (sm_df.x - m) / np.sqrt(v)\n sm_df['y'] = np.array([sp.stats.norm.pdf(i, loc=mu-m, scale=sigma/np.sqrt(v)) for i in sm_df.x])\n print(f'Expectation of sample mean: {mu-m}; Variance of sample mean: {variance/v}')\n g = (ggplot(sm_df, aes(x='x', y='y')) + geom_area() +\n theme_minimal() + xlab('Sample Means') + ylab('Probability Density Function') + \n ggtitle('Shifted Normal Distribution of Sample Mean')) \n g.draw()\n \n# interact(shift_normal, \n# m=widgets.FloatSlider(min=-1e-1,max=1e-1,value=1e-1,step=1e-2),\n# v=widgets.FloatSlider(min=9e-5,max=9e-3,value=9e-5,step=1e-4, readout_format='.5f'))", "_____no_output_____" ], [ "#@title {run: \"auto\"}\nm = 0.1 #@param {type:\"slider\", min:-1e-1, max:1e-1, step:1e-2}\nv = 9e-5 #@param {type:\"slider\", min:9e-5, max:9e-3, step:1e-4}\nshift_normal(m,v)", "Expectation of sample mean: 0.0; Variance of sample mean: 1.0\n" ] ], [ [ "By shifting and scaling, we can find out where `C`'s sample mean of 0.107 lands on the X-axis of a standard normal distribution:\n\\begin{align}\n\\bar{Z_j} &= \\frac{\\bar{X_j} - \\mu}{\\sigma / \\sqrt{n}} \\\\\n&= \\frac{0.107 - 0.1}{0.3 / \\sqrt{1000}} 
\\approx 0.7378648\\\\\n\\end{align}", "_____no_output_____" ], [ "With $\\bar{Z_j}$ and $-\\bar{Z_j}$, we can calculate the probability of falsely rejecting the null hypotheysis, or p-value, as the area in red, summing up to approximately 46%. This is most likely too high a false positive rate anyone is comfortable with (no one believes a pregnancy test that turns out positive for 46% of the people who are not pregnant), so we fail to reject the null hypothesis that conversion rate of `C` is equal to 10%. \n\nIf someone asks a frequentist for an opinion, they would probably say that they cannot disprove `C` has conversion rate of 10% in the long run. If they were asked to choose an action, they would probably go with the course of action that assumes `C` has a conversion rate of 10%.", "_____no_output_____" ] ], [ [ "n = c_df.clicks[0]\nx_bar = c_df.conv_per[0]\np = 0.1; mu = p; variance = p*(1-p)/n; sigma = (variance)**(0.5)\nx_bar_norm = (x_bar - mu) / sigma\n\ndef standard_normal(x_bar_norm, legend_title):\n x_bar_norm = abs(x_bar_norm)\n x = np.arange(-3, 3, 1e-2)\n y = np.array([sp.stats.norm.pdf(i, loc=0, scale=1) for i in x])\n sm_df = pd.DataFrame({'x': x, 'y': y})\n\n #normalize process\n sm_df['crit'] = sm_df.x.map(lambda x: False if ((x<-x_bar_norm)|(x>x_bar_norm)) else True)\n g = (ggplot(sm_df, aes(x='x', y='y',group='crit')) + geom_area(aes(fill='crit')) +\n theme_minimal() + xlab('Sample Means') + ylab('Probability Density Function') + \n ggtitle('Standard Normal Distribution of Sample Mean') +\n guides(fill=guide_legend(title=legend_title))) \n g.draw()\n\nstandard_normal(x_bar_norm, \"Conversion Rate = 0.1\")", "_____no_output_____" ] ], [ [ "## Z-test and More", "_____no_output_____" ], [ "With CLT and standard normal distribution (sometimes called **Z-distribution**), we now have all the tools for one of the most popular and useful statistical hypothesis test, the **Z-test**. In fact we have already done it with the hypothetical campaign `C`. But let us go back to our original problem of comparing the long-run conversion rates of `A` and `B`. Let our null hypothesis be that they are equal to each other and alpha be 0.05 (we are comfortable with false positive rate of 5%).", "_____no_output_____" ] ], [ [ "conv_df", "_____no_output_____" ] ], [ [ "We already know how to compare a random variable to a fixed value, but now we have two random variables from two ads campaign. We get around this by comparing **the difference of their sample mean** $\\bar{X_\\Delta} = \\bar{X_{A}} - \\bar{X_{B}}$ to 0. This way, our null hypothesis states that there is no difference between the long-run conversion rates of these campaigns. 
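Before deriving the test statistic, a one-liner (added for illustration) makes the observed difference in sample means concrete using the `conv_df` summary above:

```python
# Observed difference in sample means that the Z-test below will standardize.
x_bar_delta = conv_df.conv_per[0] - conv_df.conv_per[1]
x_bar_delta
```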
Through another useful statistical concept, we also know that the variance of $\bar{X_\Delta}$ is the sum of the sample mean variances of $\bar{X_\text{A}}$ and $\bar{X_\text{B}}$ (Normal Sum Theorem; [Lemons, 2002](https://www.goodreads.com/book/show/3415974-an-introduction-to-stochastic-processes-in-physics)).\n\nThus, we can calculate the **test statistic** or, specifically for the Z-test, the **Z-value** as follows:\n\n\begin{align}\n\bar{Z_\Delta} &= \frac{\bar{X_\Delta}-\mu}{\sqrt{\frac{\sigma^2_\text{A}}{n_\text{A}} + \frac{\sigma^2_\text{B}}{n_\text{B}}}} \\\n&= \frac{\bar{X_\Delta}-\mu}{\sqrt{\sigma^2_\text{pooled} * (\frac{1}{n_\text{A}} + \frac{1}{n_\text{B}})}} \n\end{align}\n\nSince we are assuming that `A` and `B` have the same conversion rate, their variance is also assumed to be the same:\n\n$$\sigma^2_{A} = \sigma^2_{B} = \sigma^2_\text{pooled} = p * (1-p)$$\n\nwhere $p$ is the total conversions of both campaigns divided by their total clicks (**pooled probability**).\n\nIn light of the Z-value calculated from our data, we found that the p-value for rejecting the null hypothesis that the conversion rates of `A` and `B` are equal to each other is less than 1%, lower than our acceptable false positive rate of 5%, so we reject the null hypothesis that they perform equally well. The result of the test is **statistically significant**; that is, it is unlikely enough for us given the null hypothesis.", "_____no_output_____" ] ], [ [ "def proportion_test(c1: int, c2: int,\n n1: int, n2: int,\n mode: str = 'one_sided') -> Tuple[float, float]:\n '''\n :meth: Z-test for difference in proportion\n :param int c1: conversions for group 1\n :param int c2: conversions for group 2\n :param int n1: number of samples (clicks/impressions) for group 1\n :param int n2: number of samples (clicks/impressions) for group 2\n :param str mode: mode of test; `one_sided` or `two_sided`\n :return: Z-score, p-value\n '''\n p = (c1 + c2) / (n1 + n2) #pooled probability\n p1 = c1 / n1\n p2 = c2 / n2\n z = (p1 - p2) / np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))\n #use a separate name for the p-value so it does not shadow the pooled probability\n if mode == 'two_sided':\n p_val = 2 * (1 - sp.stats.norm.cdf(abs(z)))\n elif mode == 'one_sided':\n p_val = 1 - sp.stats.norm.cdf(abs(z))\n else:\n raise ValueError('Available modes are `one_sided` and `two_sided`')\n return z, p_val\n\nz_value, p_value = proportion_test(c1=conv_df.conv_cnt[0], c2=conv_df.conv_cnt[1],\n n1=conv_df.clicks[0], n2=conv_df.clicks[1], mode='two_sided')\nprint(f'Z-value: {z_value}; p-value: {p_value}')\n\nstandard_normal(z_value, \"No Difference in Conversion Rates\")", "Z-value: -2.6744183909575856; p-value: 0.00748589934574917\n" ] ], [ [ "This rationale extends beyond comparing proportions such as conversion rates. For instance, we can also compare revenues of two different stores, assuming they are i.i.d. However, in this case we do not know the variance of the base distribution $\sigma^2$, as it cannot be derived from our assumption (the variance of a Bernoulli distribution is $p*(1-p)$ but store revenues are not modelled after a coin flip). The test statistic then is created with the sample variance $s^2$ based on our sample group and follows a slightly modified version of the standard normal distribution (see [Student's t-test](https://en.wikipedia.org/wiki/Student%27s_t-test)). Your test statistics and sample mean distributions may change, but the bottom line of a frequentist A/B test is exploiting CLT and frequentist reasoning.", "_____no_output_____" ], [ "## Confidence Intervals", "_____no_output_____" ], [ "Notice that we can calculate the p-value from the Z-value and vice versa. 
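A minimal sketch of that two-way conversion (assuming a two-tailed test on the standard normal distribution; the numbers are the ones computed above):

```python
import scipy.stats as stats

z = 0.7378648                               # campaign C's standardized sample mean
p = 2 * (1 - stats.norm.cdf(abs(z)))        # Z-value -> two-tailed p-value, ~0.46
z_back = stats.norm.ppf(1 - p / 2)          # p-value -> |Z-value|, recovers ~0.7379
print(p, z_back)

z_ab = -2.6744183909575856                  # the A vs. B Z-value printed above
print(2 * (1 - stats.norm.cdf(abs(z_ab))))  # ~0.0075, matching the printed p-value
```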
This gives us another handy way to look at the problem; that is, we can calculate the interval into which the sample mean of `A` or `B` will fall with an arbitrary probability, say 95%. We call it a **confidence interval**. You can see that even though we rejected the null hypothesis that their difference is zero, the confidence intervals of both campaigns can still overlap. \n\nTry changing the number of conversions and clicks of each group as well as the alpha to see what changes in terms of the p-value of the Z-test and the confidence intervals. You will see that the sample mean distribution gets \"wider\" as we have fewer samples in a group. Intuitively, this makes sense because the fewer clicks you have collected, the less information you have about the true performance of an ads campaign and the less confident you are about where it should be. So when designing an A/B test, you should plan to have a similar number of samples in both sample groups in order to have similarly distributed sample means.", "_____no_output_____" ] ], [ [ "def proportion_plot(c1: int, c2: int,\n n1: int, n2: int, alpha: float = 0.05,\n mode: str = 'one_sided') -> None:\n '''\n :meth: plot Z-test for difference in proportion and confidence intervals for each campaign\n :param int c1: conversions for group 1\n :param int c2: conversions for group 2\n :param int n1: impressions for group 1\n :param int n2: impressions for group 2\n :param float alpha: alpha\n :param str mode: mode of test; `one_sided` or `two_sided`\n :return: None\n '''\n p = (c1 + c2) / (n1 + n2)\n p1 = c1 / n1\n p2 = c2 / n2\n se1 = np.sqrt(p1 * (1 - p1) / n1)\n se2 = np.sqrt(p2 * (1 - p2) / n2)\n z = sp.stats.norm.ppf(1 - alpha / 2)\n x1 = np.arange(p1 - 3 * se1, p1 + 3 * se1, 1e-4)\n x2 = np.arange(p2 - 3 * se2, p2 + 3 * se2, 1e-4)\n y1 = np.array([sp.stats.norm.pdf(i, loc=p1, scale=np.sqrt(p1 * (1 - p1) / n1)) for i in x1])\n y2 = np.array([sp.stats.norm.pdf(i, loc=p2, scale=np.sqrt(p2 * (1 - p2) / n2)) for i in x2])\n sm_df = pd.DataFrame({'campaign_id': ['Campaign A'] * len(x1) + ['Campaign B'] * len(x2),\n 'x': np.concatenate([x1, x2]), 'y': np.concatenate([y1, y2])})\n\n z_value, p_value = proportion_test(c1, c2, n1, n2, mode)\n print(f'Z-value: {z_value}; p-value: {p_value}')\n\n g = (ggplot(sm_df, aes(x='x', y='y', fill='campaign_id')) +\n geom_area(alpha=0.5)\n + theme_minimal() + xlab('Sample Mean Distribution of Each Campaign')\n + ylab('Probability Density Function')\n + geom_vline(xintercept=[p1 + se1 * z, p1 - se1 * z], colour='red')\n + geom_vline(xintercept=[p2+se2*z, p2-se2*z], colour='blue')\n + ggtitle(f'Confidence Intervals at alpha={alpha}'))\n g.draw()\n \n# interact(ci_plot, \n# p1 = widgets.FloatSlider(min=0,max=1,value=conv_df.conv_cnt[0] / conv_df.clicks[0],\n# step=1e-3,readout_format='.5f'),\n# p2 = widgets.FloatSlider(min=0,max=1,value=conv_df.conv_cnt[1] / conv_df.clicks[1],\n# step=1e-3,readout_format='.5f'),\n# n1 = widgets.IntSlider(min=10,max=70000,value=conv_df.clicks[0]),\n# n2 = widgets.IntSlider(min=10,max=70000,value=conv_df.clicks[1]),\n# alpha = widgets.FloatSlider(min=0,max=1,value=0.05))", "_____no_output_____" ], [ "conv_df.clicks[0], conv_df.clicks[1]", "_____no_output_____" ], [ "#@title {run: \"auto\"}\nc1 = 5950 #@param {type:\"slider\", min:0, max:70000}\nc2 = 6189 #@param {type:\"slider\", min:0, max:70000}\nn1 = 59504 #@param {type:\"slider\", min:10, max:70000, step:10}\nn2 = 58944 #@param {type:\"slider\", min:10, max:70000, step:10}\nalpha = 0.05 #@param {type:\"slider\", min:0, max:1, 
step:1e-3}\n\nproportion_plot(c1,c2,n1,n2,alpha)", "Z-value: -2.839599860839738; p-value: 0.002258507725720227\n" ] ], [ [ "## Any Hypothesis Test Is Statistically Significant with Enough Samples", "_____no_output_____" ], [ "Because we generated the data, we know that the conversion rate of campaign `A` (10%) is about 95% that of campaign `B` (10.5%). If we go with our gut feeling, most of us would say that they are practically the same; yet, our Z-test told us that they are different. The reason for this becomes apparent graphically when we decrease the number of clicks for both campaigns in the plot above. The Z-test stops being significant when both campaigns have about 50,000 clicks each, even though they still have exactly the same conversion rates as before. The culprit is our Z-value, calculated as:\n\n\begin{align}\n\bar{Z_\Delta} &= \frac{\bar{X_\Delta}-\mu}{\sqrt{\sigma^2_\text{pooled} * (\frac{1}{n_\text{A}} + \frac{1}{n_\text{B}})}} \n\end{align}\n\nNotice the numbers of clicks $n_\text{A}$ and $n_\text{B}$ hiding in the denominator. Our test statistic $\bar{Z_\Delta}$ will keep growing as long as we collect more clicks. If both campaigns `A` and `B` have one million clicks each, a difference as small as 0.1% will be detected as statistically significant. Try adjusting the probabilities $p1$ and $p2$ in the plot below and see if the area of statistical significance expands or contracts as the difference between the two numbers changes.\n", "_____no_output_____" ] ], [ [ "def significance_plot(p1,p2):\n n1s = pd.DataFrame({'n1':[10**i for i in range(1,7)],'k':0})\n n2s = pd.DataFrame({'n2':[10**i for i in range(1,7)],'k':0})\n ns = pd.merge(n1s,n2s,how='outer').drop('k',1)\n ns['p_value'] = ns.apply(lambda row: proportion_test(p1*row['n1'], p2*row['n2'],row['n1'],row['n2'])[1], 1)\n g = (ggplot(ns,aes(x='factor(n1)',y='factor(n2)',fill='p_value')) + geom_tile(aes(width=.95, height=.95)) +\n geom_text(aes(label='round(p_value,3)'), size=10)+ theme_minimal() +\n xlab('Number of Samples in A') + ylab('Number of Samples in B') +\n guides(fill=guide_legend(title=\"p-value\")))\n g.draw()\n\n# interact(significance_plot, \n# p1 = widgets.FloatSlider(min=0,max=1,value=conv_df.conv_cnt[0] / conv_df.clicks[0],\n# step=1e-3,readout_format='.5f'),\n# p2 = widgets.FloatSlider(min=0,max=1,value=conv_df.conv_cnt[1] / conv_df.clicks[1],\n# step=1e-3,readout_format='.5f'))", "_____no_output_____" ], [ "#@title {run: \"auto\"}\np1 = 0.09898494218876042 #@param {type:\"slider\", min:0, max:1, step:1e-3}\np2 = 0.10367467426710097 #@param {type:\"slider\", min:0, max:1, step:1e-3}\n\nsignificance_plot(p1,p2)", "_____no_output_____" ] ], [ [ "More practically, look at the cumulative conversion rates and z-values of `A` and `B` on a daily basis. Every day that we check the results based on cumulative clicks and conversions, we will come up with a different test statistic and p-value. The difference in conversion rates seems to stabilize after 20 days; however, notice that if you stop the test at day 25 or so, you would say it is NOT statistically significant, whereas if you wait a little longer, you will get the opposite result. The only thing that changes as time goes on is that we have more samples.
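A sketch of the sample-size effect described above, reusing `proportion_test` from earlier with a made-up 0.1% gap in conversion rates (10.0% vs. 10.1%) that stays fixed while the sample size grows:

```python
# the gap never changes; only the number of samples does
for n in [10_000, 100_000, 1_000_000]:
    z, p = proportion_test(c1=int(0.100 * n), c2=int(0.101 * n),
                           n1=n, n2=n, mode='two_sided')
    print(n, round(z, 2), round(p, 4))  # only the one-million-click test is significant
```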
", "_____no_output_____" ], [ "g = (ggplot(rates_df, aes(x='timesteps', y='value', color='variable')) + geom_line() + theme_minimal() +\n xlab('Days of Experiment Run') + ylab('Cumulative Conversions / Cumulative Clicks'))\ng", "_____no_output_____" ], [ "#test\nconv_days['cumu_z_value'] = conv_days.apply(lambda row: proportion_test(row['cumu_conv_a'], \n row['cumu_conv_b'],row['cumu_click_a'], \n row['cumu_click_b'], mode='two_sided')[0],1)\nconv_days['cumu_p_value'] = conv_days.apply(lambda row: proportion_test(row['cumu_conv_a'], \n row['cumu_conv_b'],row['cumu_click_a'], \n row['cumu_click_b'], mode='two_sided')[1],1)\n\n#plot\ng = (ggplot(conv_days, aes(x='timesteps',y='cumu_z_value',color='cumu_p_value')) + geom_line() + theme_minimal() +\n xlab('Days of Campaign') + ylab('Z-value Calculated By Cumulative Data') +\n geom_hline(yintercept=[sp.stats.norm.ppf(0.95),sp.stats.norm.ppf(0.05)], color=['red','green']) +\n annotate(\"text\", label = \"Above this line A is better than B\", x = 20, y = 2, color = 'red') +\n annotate(\"text\", label = \"Below this line B is better than A\", x = 20, y = -2, color = 'green'))\ng", "_____no_output_____" ] ], [ [ "## Minimum Detectable Effect, Power and Required Sample Size", "_____no_output_____" ], [ "We argue that this too-big-to-fail phenomenon among sample groups is especially dangerous in the context of today's \"big data\" society. Gone are the days when statistical tests were done between two control groups of 100 people each using paper survey forms. Now companies are performing A/B testing between ad variations that could have tens of thousands or more samples (impressions or clicks), and potentially all of them will be \"statistically significant\". \n\nOne way to remedy this is to do what frequentists do best: make more assumptions, more specifically **two** more.\n\nFirst, if we want to find out whether `B` has *better* conversion than `A`, we do not only make assumptions about the mean of the null hypothesis but also **minimally by how much**, aka the mean of the alternative hypothesis. We can set the **minimum detectable effect** as the smallest possible difference that would be worth investing the time and money in one campaign over the other; let's say that from experience we think it is 1%. 
We then ask:\n\n> What is the minimum number of samples in a sample group (clicks in a campaign) we should have in order to reject the null hypothesis at a **significance level ($\alpha$)** and **power ($1-\beta$)** when the difference in sample means is 1%?\n\nThe **significance level ($\alpha$)** takes care of the false positive rate promise, for example to be lower than 5% (95% specificity), whereas **power ($1-\beta$)** indicates the desired recall, for example to be 80% (20% false negative rate).", "_____no_output_____" ] ], [ [ "def power_plot(mean_h0: float, \n mean_h1: float,\n critical: float) -> None:\n '''\n :meth: plot Z-test for difference in proportion with power and alpha highlighted\n :param float mean_h0: mean for null hypothesis\n :param float mean_h1: mean for alternative hypothesis\n :param float critical: critical value selected\n :return: None\n '''\n x = np.arange(-4,6,0.1)\n dat = pd.DataFrame({'x':x,\n 'y1':sp.stats.norm.pdf(x,mean_h0,1),\n 'y2':sp.stats.norm.pdf(x,mean_h1,1)})\n dat['x1'] = dat.x.map(lambda x: np.where(x>critical,x,None))\n dat['x2'] = dat.x.map(lambda x: np.where(x>critical,x,None))\n \n\n g = (\n ggplot(dat, aes(x = 'x')) +\n geom_line(aes(y = 'y1'), color='red', size = 1.2) +\n geom_line(aes(y = 'y2'), color='blue',size = 1.2) +\n geom_vline(xintercept=mean_h0,linetype='dashed',color='red')+\n geom_vline(xintercept=mean_h1,linetype='dashed',color='blue')+\n geom_area(aes(y = 'y1', x = 'x1'), fill='red') +\n geom_area(aes(y = 'y2', x = 'x2'), fill = 'blue', alpha = 0.3) +\n ylab('Probability Density Function') + xlab('Z value')+\n #significance level and power are tail areas (1 - cdf beyond the critical value), not pdf values\n ggtitle(f'significance level = {1 - sp.stats.norm.cdf(critical,mean_h0,1):.2f}; power = {1 - sp.stats.norm.cdf(critical,mean_h1,1):.2f}')+\n theme_minimal()\n )\n g.draw()\n \n#@title {run: \"auto\"}\nmean_h0 = 0 #@param {type:\"slider\", min:0, max:6, step:1e-3}\nmean_h1 = 3.18 #@param {type:\"slider\", min:0, max:6, step:1e-3}\ncritical = 2 #@param {type:\"slider\", min:0, max:3, step:1e-1}\npower_plot(mean_h0, mean_h1, critical)", "_____no_output_____" ] ], [ [ "Given a minimum detectable effect $\text{MDE}$, significance level $\alpha$ and power $1-\beta$, we can calculate the critical Z value $Z_{critical}$ that satisfies these conditions, where the required numbers of samples in the two groups are $n$ and $mn$ (where $m$ is a multiplier):\n\begin{align}\nZ_{critical} &= \mu_{H0} + Z_{\alpha} * \sqrt{\sigma^2 * (\frac{1}{n} + \frac{1}{mn})}\\\nZ_{critical} &= 0 + Z_{\alpha} * \sqrt{\sigma^2 * (\frac{1}{n} + \frac{1}{mn})}\\\nZ_{critical} &= \mu_{H1}-\mu_{H0} - Z_{\beta} * \sqrt{\sigma^2 * (\frac{1}{n} + \frac{1}{mn})}\\\nZ_{critical} &= \text{MDE} - Z_{\beta} * \sqrt{\sigma^2 * (\frac{1}{n} + \frac{1}{mn})}\\\n0 + Z_{\alpha} * \sqrt{\sigma^2 * (\frac{1}{n} + \frac{1}{mn})} &= \text{MDE} - Z_{\beta} * \sqrt{\sigma^2 * (\frac{1}{n} + \frac{1}{mn})}\\\nZ_{\alpha} + Z_{\beta} &= \frac{\text{MDE}}{\sqrt{\sigma^2 * (\frac{1}{n} + \frac{1}{mn})}} \\\n\frac{(m+1)\sigma^2}{mn} &= (\frac{\text{MDE}}{Z_{\alpha} + Z_{\beta}})^2 \\\nn &= \frac{m+1}{m}(\frac{(Z_{\alpha} + Z_{\beta}) \sigma}{\text{MDE}})^2 \\\nn &= 2(\frac{(Z_{\alpha} + Z_{\beta}) \sigma}{\text{MDE}})^2; m=1\n\end{align}", "_____no_output_____" ], [ "Second, we make yet another crucial assumption about **the variance $\sigma^2$ we expect**. Remember we used to estimate the variance by using the pooled probability of our sample groups, but here we have not even started the experiments. In a conventional A/B testing scenario, we are testing whether an experimental variation is better than the existing one, so one choice is **using the sample variance of a campaign you are currently running**; for instance, if `A` is our current ads and we want to know if we should change to `B`, then we will use the conversion rate of `A` from a past time period to calculate the variance, say 10%.
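Plugging these assumptions into the closed form above gives a quick sanity check (a sketch with one-sided $\alpha=0.05$, power $1-\beta=0.8$, $\text{MDE}=1\%$, $m=1$, and the variance implied by the historical 10% rate):

```python
import scipy.stats as stats

alpha, power, mde = 0.05, 0.8, 0.01
sigma2 = 0.1 * (1 - 0.1)                   # variance assumed from the historical rate
z_a, z_b = stats.norm.ppf(1 - alpha), stats.norm.ppf(power)
n = 2 * sigma2 * ((z_a + z_b) / mde) ** 2  # the m = 1 case of the formula
print(int(n))                              # ~11,129 samples per group
```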
\n\nLet us go back in time before we even started our 2-month-long test between campaigns `A` and `B`. Now we assume not only an acceptable false positive rate (alpha) of 0.05 but also a minimum detectable effect of 1% and an expected variance of $\sigma^2 = 0.1 * (1-0.1) = 0.09$, then we calculate the minimum number of samples we should collect for each campaign. You can see that, had we done that, we would not have been able to reject the null hypothesis, and would have stuck with campaign `A` going forward.\n\nThe upside is that now we only have to run the test for about 5 days instead of 60 days, assuming every day is the same for the campaigns (no peak traffic on weekends, for instance). The downside is that our null hypothesis gets much more specific with not only one but three assumptions:\n\n* The long-run conversion rate of `B` is no better than `A`'s\n* The difference that will matter to us is at least 1%\n* The expected variance of conversion rates is $\sigma^2 = 0.1 * (1-0.1) = 0.09$\n\nThis fits many A/B testing scenarios, since we might not want to change to a new variation that is better, but not better enough that we are willing to invest our time and money to change our current setup. Try adjusting $\text{MDE}$ and $\sigma$ in the plot below and see how the number of required samples changes.", "_____no_output_____" ] ], [ [ "def proportion_samples(mde: float, p: float, m: float = 1,\n alpha: float = 0.05, \n beta: float = 0.8,\n mode: str = 'one_sided') -> float:\n '''\n :meth: get number of required samples based on minimum detectable difference (in absolute terms)\n :param float mde: minimum detectable difference\n :param float p: pooled probability of both groups\n :param float m: multiplier of number of samples; groups are n and nm\n :param float alpha: alpha\n :param float beta: beta\n :param str mode: mode of test; `one_sided` or `two_sided`\n :return: estimated number of samples to get significance\n '''\n variance = p * (1 - p)\n z_b = sp.stats.norm.ppf(beta)\n if mode == 'two_sided':\n z_a = sp.stats.norm.ppf(1 - alpha / 2)\n elif mode == 'one_sided':\n z_a = sp.stats.norm.ppf(1 - alpha)\n else:\n raise ValueError('Available modes are `one_sided` and `two_sided`')\n return ((m + 1) / m) * variance * ((z_a+z_b) / mde)**2\n\n\ndef plot_proportion_samples(mde, p, m=1, alpha=0.05,beta=0.8, mode='one_sided'):\n minimum_samples = proportion_samples(mde, p,m, alpha,beta, mode)\n g = (ggplot(conv_days, aes(x='cumu_click_a',y='cumu_z_value',color='cumu_p_value')) + geom_line() + \n theme_minimal() +\n xlab('Number of Samples per Campaign') + ylab('Z-value Calculated By Cumulative Data') +\n geom_hline(yintercept=[sp.stats.norm.ppf(0.95),sp.stats.norm.ppf(0.05)], color=['red','green']) +\n annotate(\"text\", label = \"Above this line A is better than B\", x = 30000, y = 2, color = 'red') +\n annotate(\"text\", label = \"Below this line B is better than A\", x = 30000, y = -2, color = 'green') +\n annotate(\"text\", label = f'Minimum required samples at MDE {mde}={int(minimum_samples)}', x = 30000, y = 0,) +\n geom_vline(xintercept=minimum_samples))\n g.draw()", "_____no_output_____" ], [ "#@title {run: 
\"auto\"}\nmde = 0.01 #@param {type:\"slider\", min:0.001, max:0.01, step:1e-3}\np = 0.1 #@param {type:\"slider\", min:0, max:1, step:1e-3}\nm = 1 #@param {type:\"slider\", min:0, max:1, step:1e-1}\np_value = 0.05 #@param {type:\"slider\", min:0.01, max:0.1, step:1e-3}\nmode = 'one_sided' #@param ['one_sided','two_sided'] {type:\"string\"}\nplot_proportion_samples(mde, p, m, alpha, mode)", "_____no_output_____" ] ], [ [ "## You Will Get A Statistically Significant Result If You Try Enough Times", "_____no_output_____" ], [ "The concept p-value represents is false positive rate of our test, that is, how unlikely it is to observe our sample groups given that they do not have different conversion rates in the long run. Let us re-simulate our campaigns `A` and `B` to have equal expectation of 10%. If we apply our current method, we can be comfortably sure we will not get statistical significance (unless we have an extremely large number of samples).", "_____no_output_____" ] ], [ [ "conv_days = gen_bernoulli_campaign(p1 = 0.10,\n p2 = 0.10,\n timesteps = 60,\n scaler=100,\n seed = 1412) #god-mode \nconv_days.columns = [i.replace('impression','click') for i in conv_days.columns] #function uses impressions but we use clicks\n\nconv_days['cumu_z_value'] = conv_days.apply(lambda row: proportion_test(row['cumu_conv_a'], \n row['cumu_conv_b'],row['cumu_click_a'], \n row['cumu_click_b'], mode='two_sided')[0],1)\nconv_days['cumu_p_value'] = conv_days.apply(lambda row: proportion_test(row['cumu_conv_a'], \n row['cumu_conv_b'],row['cumu_click_a'], \n row['cumu_click_b'], mode='two_sided')[1],1)\nconv_days['z_value'] = conv_days.apply(lambda row: proportion_test(row['conv_a'], \n row['conv_b'],row['click_a'], \n row['click_b'], mode='two_sided')[0],1)\nconv_days['p_value'] = conv_days.apply(lambda row: proportion_test(row['conv_a'], \n row['conv_b'],row['click_a'], \n row['click_b'], mode='two_sided')[1],1)\ng = (ggplot(conv_days, aes(x='timesteps',y='cumu_z_value',color='cumu_p_value')) + geom_line() + theme_minimal() +\n xlab('Days in Campaign') + ylab('Z-value Calculated By Cumulative Data') +\n geom_hline(yintercept=[sp.stats.norm.ppf(0.975),sp.stats.norm.ppf(0.025)], color=['red','red']))\ng", "_____no_output_____" ] ], [ [ "Another approach is instead of doing the test only once, we **do it every day using clicks and conversions of that day alone**. We will have 60 tests where 3 of them give statistically significant results that `A` and `B` have different conversion rates in the long run. The fact that we have exactly 5% of the tests turning positive despite knowing that none of them should is not a coincidence. 
", "_____no_output_____" ] ], [ [ "g = (ggplot(conv_days, aes(x='timesteps',y='z_value',color='p_value')) + geom_line() + theme_minimal() +\n xlab('Each Day in Campaign') + ylab('Z-value Calculated By Daily Data') +\n geom_hline(yintercept=[sp.stats.norm.ppf(0.975),sp.stats.norm.ppf(0.025)], color=['red','red']) +\n ggtitle(f'We Have {(conv_days.p_value<0.05).sum()} False Positives Out of {conv_days.shape[0]} Days ({100*(conv_days.p_value<0.05).sum()/conv_days.shape[0]}%)'))\ng", "_____no_output_____" ] ], [ [ "Not many people will test online ads campaigns based on daily data, but many researchers perform repeated experiments and by necessity repeated A/B tests as shown above. If you have a reason to believe that sample groups from different experiments have the same distribution, you might consider grouping them together and performing one large test as usual. Otherwise, you can tinker with the assumption of how many false positives you can tolerate. One such approach, among [others](https://en.wikipedia.org/wiki/Multiple_comparisons_problem), is the [Bonferroni correction](http://mathworld.wolfram.com/BonferroniCorrection.html). It scales your alpha down by the number of tests you perform to make sure that your false positive rate stays at most your original alpha. In our case, if we scale our alpha to $\alpha_{\text{new}}=\frac{0.05}{60} \approx 0.0008$, we will have the following statistically non-significant results.", "_____no_output_____" ] ], [ [ "#count false positives against the Bonferroni-corrected alpha of 0.05/60\ng = (ggplot(conv_days, aes(x='timesteps',y='z_value',color='p_value')) + geom_line() + theme_minimal() +\n xlab('Each Day in Campaign') + ylab('Z-value Calculated By Daily Data') +\n geom_hline(yintercept=[sp.stats.norm.ppf(1-0.0008/2),sp.stats.norm.ppf(0.0008/2)], color=['red','red']) +\n ggtitle(f'We Have {(conv_days.p_value<0.05/60).sum()} False Positives Out of {conv_days.shape[0]} Days ({100*(conv_days.p_value<0.05/60).sum()/conv_days.shape[0]}%)'))\ng", "_____no_output_____" ] ], [ [ "## Best Practices", "_____no_output_____" ], [ "To the best of our knowledge, the most reasonable and practical way to perform a frequentist A/B test is to know your assumptions, including but not limited to:\n\n* What distribution should your data be assumed to be drawn from? In many cases, we use the Bernoulli distribution for proportions, the Poisson distribution for counts and the normal distribution for real numbers.\n* Are you comparing your sample group to a fixed value or to another sample group?\n* Do you want to know if the expectation of the sample group is equal to, more than or less than its counterpart?\n* What is the minimum detectable effect and how many samples should you collect? What is a reasonable variance to assume in order to calculate the required sample size?\n* What is the highest false positive rate $\alpha$ that you can accept?\n\nWith these assumptions cleared, you can most likely create a test statistic; then, with frequentist reasoning, you can determine if the sample groups you collected are unlikely enough that you would reject your null hypothesis because of them.", "_____no_output_____" ], [ "## References", "_____no_output_____" ], [ "* Lemons, D. S. (2002). An introduction to stochastic processes in physics. 
Baltimore: Johns Hopkins University Press. (Normal Sum Theorem, p. 34)\n* Munroe, Randall (n.d.). HOW TO: Absurd Scientific Answers to Common Real-world Problems. Retrieved from https://xkcd.com/882/\n* Reinhart, A. (2015, March 1). The p value and the base rate fallacy. Retrieved from https://www.statisticsdonewrong.com/p-value.html\n* [whuber](https://stats.stackexchange.com/users/919/whuber) (2017). Can a probability distribution value exceeding 1 be OK? Retrieved from https://stats.stackexchange.com/q/4223", "_____no_output_____" ], [ "## Appendix", "_____no_output_____" ], [ "### Bessel's Correction for Sample Variance", "_____no_output_____" ], [ "Random variables can be thought of as estimates of real values; for example, the sample variance is an estimate of the variance of the \"true\" distribution. An estimator is said to be **biased** when its expectation is not equal to the true value (not to be confused with LLN, where the estimator itself approaches the true value as the number of samples grows).\n\nWe can repeat the experiment we did for LLN with sample mean and true mean, but this time we compare how the biased version ($\frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2$) and the unbiased version ($\frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2$) of the sample variance approach the true variance as the number of sample groups grows. Clearly, we can see that the biased sample variance normally underestimates the true variance.", "_____no_output_____" ] ], [ [ "def var(x, dof=0):\n n = x.shape[0]\n mu = np.sum(x)/n\n return np.sum((x - mu)**2) / (n-dof)\n\nn_total = 10000 #total number of data points\nn_sample = 100 #number of samples per sample group\nsg_range = range(1,100) #number of sample groups to take average of sample variances from\nr = np.random.normal(loc=0,scale=1,size=n_total) #generate random variables based on Z distribution\npop_var = var(r) #true variance of the population\n\nmean_s_bs = []\nmean_s_us = []\n\nfor n_sg in sg_range:\n s_bs = []\n s_us =[]\n for i in range(n_sg):\n sg = np.random.choice(r,size=n_sample,replace=False)\n s_bs.append(var(sg)) #biased sample variance\n s_us.append(var(sg,1)) #unbiased sample variance\n mean_s_bs.append(np.mean(s_bs))\n mean_s_us.append(np.mean(s_us))\ns_df = pd.DataFrame({'nb_var':sg_range,'biased_var':mean_s_bs,\n 'unbiased_var':mean_s_us}).melt(id_vars='nb_var')\ng = (ggplot(s_df,aes(x='nb_var',y='value',color='variable',group='variable')) + geom_line() + \n geom_hline(yintercept=pop_var) + theme_minimal() +\n xlab('Number of Sample Groups') + ylab('Sample Mean of Sample Variance in Each Group'))\ng", "_____no_output_____" ] ], [ [ "
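As a side note, `numpy` implements exactly this correction through its `ddof` argument; a quick sketch comparing it with the hand-rolled `var` above:

```python
x = np.random.normal(loc=0, scale=1, size=100)
print(var(x), np.var(x))             # both divide by n (biased)
print(var(x, 1), np.var(x, ddof=1))  # both divide by n - 1 (Bessel-corrected)
```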
We derive exactly how much the bias is as follows:\n\n$$B[s_{biased}^2] = E[s_{biased}^2] - \sigma^2 = E[s_{biased}^2 - \sigma^2]$$\n\nwhere $B[s^2]$ is the bias of the estimator (biased sample variance) $s_{biased}^2$ of the variance $\sigma^2$. Then we can calculate the bias as:\n\n\begin{align}\nE[s_{biased}^2 - \sigma^2] &= E[\frac{1}{n} \sum_{i=1}^n(X_i - \bar{X})^2 - \frac{1}{n} \sum_{i=1}^n(X_i - \mu)^2] \\\n&= \frac{1}{n}E[(\sum_{i=1}^n X_i^2 -2\bar{X}\sum_{i=1}^n X_i + n\bar{X}^2) - (\sum_{i=1}^n X_i^2 -2\mu\sum_{i=1}^n X_i + n\mu^2)] \\\n&= E[\bar{X}^2 - \mu^2 - 2\bar{X}^2 + 2\mu\bar{X}] \\\n&= -E[\bar{X}^2 -2\mu\bar{X} +\mu^2] \\\n&= -E[(\bar{X} - \mu)^2] \\\n&= -\frac{\sigma^2}{n} \text{; variance of sample mean}\\\nE[s_{biased}^2] &= \sigma^2 - \frac{\sigma^2}{n} \\ \n&= (1-\frac{1}{n})\sigma^2\n\end{align}\n\nTherefore, if we divide the biased estimator $s_{biased}^2$ by $1-\frac{1}{n}$, we will get an unbiased estimator of the variance, $s_{unbiased}^2$:\n\n\begin{align}\ns_{unbiased}^2 &= \frac{s_{biased}^2}{1-\frac{1}{n}} \\\n&= \frac{\frac{1}{n} \sum_{i=1}^n(X_i - \bar{X})^2}{1-\frac{1}{n}}\\\n&= \frac{1}{n-1} \sum_{i=1}^n(X_i - \bar{X})^2\n\end{align}\n\nThis is why the sample variance we usually use, $s^2$, has $n-1$ instead of $n$. Also, this is not to be confused with the variance of sample means, which is $\frac{\sigma^2}{n}$ when the variance of the base distribution is known or assumed and $\frac{s^2}{n}$ when it is not.", "_____no_output_____" ], [ "### Mass vs Density", "_____no_output_____" ], [ "You might wonder why the sample mean distribution has a Y-axis that exceeds 1 even though it seemingly should represent the probability of each value of the sample mean. The short answer is that it does not represent probability but rather a **probability density function**. The long answer is that there are two ways of representing probability distributions depending on whether they describe **discrete** or **continuous** data. See also this excellent [answer on Stack Exchange](https://stats.stackexchange.com/questions/4220/can-a-probability-distribution-value-exceeding-1-be-ok) (whuber, 2017). \n\n**Discrete probability distributions** contain values that are finite (for instance, $1, 2, 3, ...$) or countably infinite (for instance, $\frac{1}{2^i}$ where $i=1, 2, 3, ...$). They include, but are not limited to, the distributions we have used to demonstrate CLT, namely the uniform, Bernoulli and Poisson distributions. In all these distributions, the Y-axis, now called the **probability mass function**, represents the exact probability each value on the X-axis will take, such as the Bernoulli distribution we have shown before:", "_____no_output_____" ] ], [ [ "flips = np.random.choice([0,1], size=n, p=[1-p,p])\nflips_df = pd.DataFrame(flips)\nflips_df.columns = ['conv_flag']\ng = (ggplot(flips_df,aes(x='factor(conv_flag)')) + geom_bar(aes(y = '(..count..)/sum(..count..)')) + \n theme_minimal() + xlab('Value') + ylab('Probability Mass Function') +\n ggtitle(f'Bernoulli Distribution'))\ng", "_____no_output_____" ] ], [ [ "**Continuous probability distributions** contain uncountably many values (for instance, all real numbers between 0 and 1). Since there are infinitely many values, the probability of each individual value is essentially zero (what are the chances of winning a lottery that has an infinite number of digits?). Therefore, instead of the exact probability of each value (probability mass function), the Y-axis only represents the **probability density function**. This can be thought of as the total probability within an immeasurably small interval around the value. 
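A quick numerical sketch of that idea (using the same normal distribution as the worked example that follows, with expectation 0 and variance 0.01):

```python
import scipy.stats as stats

dist = stats.norm(loc=0, scale=0.1)                # variance = 0.01
width = 0.01                                       # a small interval around 0
density = dist.pdf(0)                              # ~3.989, happily above 1
prob = dist.cdf(width / 2) - dist.cdf(-width / 2)  # actual probability of the interval
print(density, prob)                               # prob ~ density * width ~ 0.04
```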
Take an example of a normal distribution with expectation $\mu=0$ and variance $\sigma^2=0.01$. The probability density function at the value 0 is described as:\n\n\begin{align}\nf(x) &= \frac{1}{\sqrt{2\pi\sigma^2}} e^{\frac{-(x-\mu)^2}{2\sigma^2}}\\\n&= \frac{1}{\sqrt{2\pi(0.01)}} e^{\frac{-(x-0)^2}{2(0.01)}} \text{; }\mu=0;\sigma^2=0.01 \\\n&\approx 3.989 \text{; when } x=0\n\end{align}\n\nThis of course does not mean that there is a 398.9% chance that we will draw the value 0; it is the density of the probability around that value. The actual probability of that interval around 0 is 3.989 times an immeasurably small number, which will be between 0 and 1.\n\nIntuitively, we can think of these intervals as starting from relatively large widths such as 0.1 and gradually decreasing to smaller ones such as 0.005. As you can see from the plot below, the plot becomes more fine-grained and looks more \"normal\" as the intervals get smaller.", "_____no_output_____" ] ], [ [ "def prob_density(step,mu=0,sigma=0.1):\n x = np.arange(-0.5, 0.5, step)\n y = np.array([sp.stats.norm.pdf(i, loc=mu, scale=sigma) for i in x])\n\n sm_df = pd.DataFrame({'x': x, 'y': y})\n g = (ggplot(sm_df, aes(x='x', y='y')) + geom_bar(stat='identity') +\n theme_minimal() + xlab('Value') + ylab('Probability Density Function') + \n ggtitle(f'Normal Distribution with Expectation={mu} and Variance={sigma**2:.2f}')) \n g.draw()\n \n# interact(prob_density, step=widgets.FloatSlider(min=5e-3,max=1e-1,value=1e-1,step=1e-3,readout_format='.3f'))", "_____no_output_____" ], [ "#@title {run: \"auto\"}\nstep = 0.1 #@param {type:\"slider\", min:5e-3, max:0.1, step:1e-3}\n\n\nprob_density(step)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
cb14d0fd67834061a144c9e87124cd13d38daf4a
175,362
ipynb
Jupyter Notebook
notebooks/05.01-Lumped_Parameter_Model_for_Lake_Dynamics.ipynb
jckantor/Controlling-Natural-Watersheds
4b9cdeb39a676a1bd9caef28af2fa6313f33520b
[ "MIT" ]
null
null
null
notebooks/05.01-Lumped_Parameter_Model_for_Lake_Dynamics.ipynb
jckantor/Controlling-Natural-Watersheds
4b9cdeb39a676a1bd9caef28af2fa6313f33520b
[ "MIT" ]
null
null
null
notebooks/05.01-Lumped_Parameter_Model_for_Lake_Dynamics.ipynb
jckantor/Controlling-Natural-Watersheds
4b9cdeb39a676a1bd9caef28af2fa6313f33520b
[ "MIT" ]
1
2021-12-11T20:38:11.000Z
2021-12-11T20:38:11.000Z
319.420765
70,490
0.909199
[ [ [ "<!--NOTEBOOK_HEADER-->\n*This notebook contains material from [Controlling Natural Watersheds](https://jckantor.github.io/Controlling-Natural-Watersheds);\ncontent is available [on Github](https://github.com/jckantor/Controlling-Natural-Watersheds.git).*", "_____no_output_____" ], [ "<!--NAVIGATION-->\n< [Control](http://nbviewer.jupyter.org/github/jckantor/Controlling-Natural-Watersheds/blob/master/notebooks/05.00-Control.ipynb) | [Contents](toc.ipynb) | [Implementation of Rainy Lake Rule Curves with Feedback Control](http://nbviewer.jupyter.org/github/jckantor/Controlling-Natural-Watersheds/blob/master/notebooks/05.02-Implementation_of_Rainy_Lake_Rule_Curves_with_Feedback_Control.ipynb) ><p><a href=\"https://colab.research.google.com/github/jckantor/Controlling-Natural-Watersheds/blob/master/notebooks/05.01-Lumped_Parameter_Model_for_Lake_Dynamics.ipynb\"><img align=\"left\" src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open in Colab\" title=\"Open in Google Colaboratory\"></a><p><a href=\"https://raw.githubusercontent.com/jckantor/Controlling-Natural-Watersheds/master/notebooks/05.01-Lumped_Parameter_Model_for_Lake_Dynamics.ipynb\"><img align=\"left\" src=\"https://img.shields.io/badge/Github-Download-blue.svg\" alt=\"Download\" title=\"Download Notebook\"></a>", "_____no_output_____" ], [ "# Lumped Parameter Model for Lake Dynamics", "_____no_output_____" ] ], [ [ "# Display graphics inline with the notebook\n%matplotlib inline\n\n# Standard Python modules\nimport numpy as np\nimport scipy as sp\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\nimport pandas as pd\nimport os\nimport datetime\n\n# Module to enhance matplotlib plotting\nimport seaborn\nseaborn.set()\n\n# Modules to display images and data tables\nfrom IPython.display import Image\nfrom IPython.core.display import display\n\n# Data Directory\ndir = './data/'\n\n# Styles\nfrom IPython.core.display import HTML\nHTML(open(\"styles/custom.css\", \"r\").read())", "_____no_output_____" ] ], [ [ "## Stage-Volume Relationships", "_____no_output_____" ], [ "$$ V(h) = a_0 + a_1 h + a_2 h^2 $$\n\n$$\\begin{align*}\nA(h) & = \\frac{d V}{d h} \\\\\n& = a_1 + 2 a_2 h \\\\\n& = \\underbrace{\\left(a_1 + 2 a_2 h_0\\right)}_{b_0} \\; + \\; \\underbrace{2 a_2}_{b_1} \\left(h-h_0\\right)\\end{align*}$$", "_____no_output_____" ], [ "### Rainy Lake", "_____no_output_____" ] ], [ [ "h = np.array([335.0, 336.0, 336.5, 337.0, 337.5, 338.0, 339.0, 340.0])\nv = np.array([112.67, 798.00, 1176.42, 1577.25, 2002.06, 2450.57, 3416.85, 4458.97])\n\nplt.subplot(2,1,1)\nplt.scatter(h,v)\nplt.xlim(h.min(),h.max())\nplt.ylim(0,plt.ylim()[1])\nplt.title('Rainy Lake Stage-Volume Relationship')\nplt.xlabel('Water Elevation (m)')\nplt.ylabel('Volume (million cu. m)')\n\npf = sp.polyfit(h,v,2)\n\nVRL= sp.poly1d(pf)\nplt.hold(True)\nplt.plot(h,VRL(h))\nplt.hold(False)\n\nARL = sp.poly1d(np.array([2.0*pf[0],pf[1]]))\nplt.subplot(2,1,2)\nplt.plot(h,ARL(h))\nplt.title('Rainy Lake Stage-Surface Area Relationship')\nplt.xlabel('Elevation (m)')\nplt.ylabel('Area (sq. 
km.)')\n\nplt.tight_layout()\n\ndf = pd.DataFrame(zip(h,v,VRL(h),ARL(h)),columns=['h','V','Vhat','Ahat'])\nprint df.to_string(formatters={'V':' {:.0f}'.format, \n 'Vhat':' {:.0f}'.format,\n 'Ahat':' {:.0f}'.format})", " h V Vhat Ahat\n0 335.0 113 111 644\n1 336.0 798 799 734\n2 336.5 1176 1178 780\n3 337.0 1577 1579 825\n4 337.5 2002 2003 870\n5 338.0 2451 2450 916\n6 339.0 3417 3411 1007\n7 340.0 4459 4463 1097\n" ], [ "h = np.array([337.0, 338.0, 338.5, 339.0, 339.5, 340.0,\n 340.5, 341.0, 341.5, 342.0, 343.0])\nv = np.array([65.33, 259.95, 364.20, 475.58, 592.46, 712.28,\n 836.56, 966.17, 1099.79, 1239.68, 1540.75])\n\nplt.subplot(2,1,1)\nplt.scatter(h,v)\nplt.xlim(h.min(),h.max())\nplt.ylim(0,plt.ylim()[1])\nplt.title('Namakan Lake Stage-Volume Relationship')\nplt.xlabel('Water Elevation (m)')\nplt.ylabel('Volume (million cu. m)')\n\npf = sp.polyfit(h,v,2)\n\nVNL= sp.poly1d(pf)\nplt.hold(True)\nplt.plot(h,VNL(h))\nplt.hold(False)\n\nANL = sp.poly1d(np.array([2.0*pf[0],pf[1]]))\nplt.subplot(2,1,2)\nplt.plot(h,ANL(h))\nplt.title('Namakan Lake Stage-Surface Area Relationship')\nplt.xlabel('Elevation (m)')\nplt.ylabel('Area (sq. km.)')\n\nplt.tight_layout()\n\ndf = pd.DataFrame(zip(h,v,VNL(h),ANL(h)),columns=['h','V','Vhat','Ahat'])\nprint df.to_string(formatters={'V':' {:.0f}'.format, \n 'Vhat':' {:.0f}'.format,\n 'Ahat':' {:.0f}'.format})", " h V Vhat Ahat\n0 337.0 65 66 185\n1 338.0 260 260 205\n2 338.5 364 365 215\n3 339.0 476 475 225\n4 339.5 592 591 235\n5 340.0 712 711 246\n6 340.5 837 836 256\n7 341.0 966 966 266\n8 341.5 1100 1102 276\n9 342.0 1240 1242 286\n10 343.0 1541 1539 306\n" ] ], [ [ "## Stage-Discharge Relationships", "_____no_output_____" ], [ "### Rainy Lake", "_____no_output_____" ] ], [ [ "h = np.array([335.4, 336.0, 336.5, 336.75, 337.0, 337.25, 337.5, \n 337.75, 338.0, 338.5, 339, 339.5, 340.0])\nd = np.array([0.0, 399., 425., 443., 589., 704., 792., 909.,\n 1014., 1156., 1324., 1550., 1778.])\n\n#plt.subplot(2,1,1)\n\nplt.hold(True)\nplt.scatter(d,h)\nplt.plot(d,h)\n\nplt.ylim(h.min(),h.max())\nplt.xlim(0,plt.xlim()[1])\nplt.title('Rainy Lake Stage-Discharge Relationship')\nplt.ylabel('Water Elevation (m)')\nplt.xlabel('Discharge (cu. 
m per sec.)')\n\n\n# Get historical flowrates and levels on RR and RL\nRR = pd.read_pickle(dir+'RR.pkl')['1970':]\nRL = pd.read_pickle(dir+'RL.pkl')['1970':]\n\nQ = pd.concat([RR,RL],axis=1)\nQ.columns = ['RR','RL']\n\nA = Q['1970':'2000']\n\nplt.scatter(A['RR'],A['RL'],marker='+',color='b')\nB = Q['2000':]\nplt.scatter(B['RR'],B['RL'],marker='+',color='r')\n\nax = plt.axis()\nplt.plot([ax[0],ax[1]],[336.70,336.70],'y--')\nplt.plot([ax[0],ax[1]],[337.20,337.20],'y--')\nplt.plot([ax[0],ax[1]],[337.75,337.75],'m--')\nplt.plot([ax[0],ax[1]],[337.90,337.90],'c--')\n\nplt.xlabel('Upper Rainy River Flow [cubic meters/sec]')\nplt.ylabel('Rainy Lake Level [meters]')\nax = plt.axis()\nplt.axis([0,ax[1],ax[2],ax[3]])\nplt.legend(['Winter Drought Line','Summer Drought Line','URC_max','All Gates Open','1970-2000','2000-2010'],loc=\"upper left\")\n\nplt.hold(False)\n#plt.savefig('./images/RainyRiverDischarge.png')\n", "_____no_output_____" ], [ "Q = lambda x: np.interp(x,h,d)\nx = np.linspace(336.,339.)\nplt.plot(x,Q(x))", "_____no_output_____" ] ], [ [ "<!--NAVIGATION-->\n< [Control](http://nbviewer.jupyter.org/github/jckantor/Controlling-Natural-Watersheds/blob/master/notebooks/05.00-Control.ipynb) | [Contents](toc.ipynb) | [Implementation of Rainy Lake Rule Curves with Feedback Control](http://nbviewer.jupyter.org/github/jckantor/Controlling-Natural-Watersheds/blob/master/notebooks/05.02-Implementation_of_Rainy_Lake_Rule_Curves_with_Feedback_Control.ipynb) ><p><a href=\"https://colab.research.google.com/github/jckantor/Controlling-Natural-Watersheds/blob/master/notebooks/05.01-Lumped_Parameter_Model_for_Lake_Dynamics.ipynb\"><img align=\"left\" src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open in Colab\" title=\"Open in Google Colaboratory\"></a><p><a href=\"https://raw.githubusercontent.com/jckantor/Controlling-Natural-Watersheds/master/notebooks/05.01-Lumped_Parameter_Model_for_Lake_Dynamics.ipynb\"><img align=\"left\" src=\"https://img.shields.io/badge/Github-Download-blue.svg\" alt=\"Download\" title=\"Download Notebook\"></a>", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ] ]
cb14d4c3838314417c9da48fecc194190ad2cd9c
33,487
ipynb
Jupyter Notebook
scratch_notebooks/Simple-env-DQN-Baseline-scratch.ipynb
zsunberg/ai4all-berkeley-driving
f38379154b6bf92a86a053ab213fdf7a0219e108
[ "MIT" ]
null
null
null
scratch_notebooks/Simple-env-DQN-Baseline-scratch.ipynb
zsunberg/ai4all-berkeley-driving
f38379154b6bf92a86a053ab213fdf7a0219e108
[ "MIT" ]
2
2019-08-13T01:14:47.000Z
2019-08-27T20:58:56.000Z
scratch_notebooks/Simple-env-DQN-Baseline-scratch.ipynb
zsunberg/ai4all-berkeley-driving
f38379154b6bf92a86a053ab213fdf7a0219e108
[ "MIT" ]
null
null
null
124.951493
14,824
0.870129
[ [ [ "from driving.simple_env import *\nimport math", "_____no_output_____" ], [ "def policy(s):\n d = s[0]\n theta = s[1]\n return -30*d -0.5*theta", "_____no_output_____" ], [ "env = SimpleDrivingEnv()", "_____no_output_____" ], [ "plot_sim(env, policy)", "reward: -0.09445045529176965\n" ], [ "from stable_baselines.common.vec_env import DummyVecEnv, SubprocVecEnv\nfrom stable_baselines import DQN\n\ndef env_constructor():\n return SimpleDrivingEnv()\n\nenv = DummyVecEnv([lambda: env_constructor()]) # The algorithms require a vectorized environment to run\n\nmodel = DQN('MlpPolicy', env, verbose=1)\nmodel.learn(total_timesteps=10000)", "WARNING: Logging before flag parsing goes to stderr.\nW0813 11:21:20.017900 140003204597568 deprecation_wrapper.py:119] From /home/robbizorg/anaconda3/lib/python3.6/site-packages/stable_baselines/common/tf_util.py:98: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.\n\nW0813 11:21:20.023282 140003204597568 deprecation_wrapper.py:119] From /home/robbizorg/anaconda3/lib/python3.6/site-packages/stable_baselines/common/tf_util.py:107: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.\n\nW0813 11:21:20.086621 140003204597568 deprecation_wrapper.py:119] From /home/robbizorg/anaconda3/lib/python3.6/site-packages/stable_baselines/deepq/dqn.py:123: The name tf.train.AdamOptimizer is deprecated. Please use tf.compat.v1.train.AdamOptimizer instead.\n\nW0813 11:21:20.088983 140003204597568 deprecation_wrapper.py:119] From /home/robbizorg/anaconda3/lib/python3.6/site-packages/stable_baselines/deepq/build_graph.py:358: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.\n\nW0813 11:21:20.092293 140003204597568 deprecation_wrapper.py:119] From /home/robbizorg/anaconda3/lib/python3.6/site-packages/stable_baselines/deepq/build_graph.py:359: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n\nW0813 11:21:20.106109 140003204597568 deprecation_wrapper.py:119] From /home/robbizorg/anaconda3/lib/python3.6/site-packages/stable_baselines/deepq/build_graph.py:139: The name tf.get_variable is deprecated. Please use tf.compat.v1.get_variable instead.\n\nW0813 11:21:20.125057 140003204597568 deprecation.py:323] From /home/robbizorg/anaconda3/lib/python3.6/site-packages/stable_baselines/deepq/policies.py:109: flatten (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse keras.layers.flatten instead.\nW0813 11:21:21.808060 140003204597568 deprecation.py:323] From /home/robbizorg/anaconda3/lib/python3.6/site-packages/stable_baselines/deepq/build_graph.py:149: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.where in 2.0, which has the same broadcast rule as np.where\nW0813 11:21:22.271237 140003204597568 deprecation_wrapper.py:119] From /home/robbizorg/anaconda3/lib/python3.6/site-packages/stable_baselines/deepq/build_graph.py:415: The name tf.summary.scalar is deprecated. Please use tf.compat.v1.summary.scalar instead.\n\nW0813 11:21:23.110738 140003204597568 deprecation_wrapper.py:119] From /home/robbizorg/anaconda3/lib/python3.6/site-packages/stable_baselines/deepq/build_graph.py:449: The name tf.summary.merge_all is deprecated. 
Please use tf.compat.v1.summary.merge_all instead.\n\n" ], [ "model.learn(total_timesteps=10000) # more learning", "--------------------------------------\n| % time spent exploring | 2 |\n| episodes | 100 |\n| mean 100 episode reward | -1.9 |\n| steps | 9896 |\n--------------------------------------\n" ], [ "model.step_model.step([[1,0],])", "_____no_output_____" ], [ "# Example Policy that students should write using the step model from stable-baselines.DQN\ndef cutePolicy(s):\n d = s[0]\n theta = s[1]\n \n best_a = 0\n best_q = -math.inf\n estimates = model.step_model.step([[d,theta],])[1][0]\n# print(estimates)\n for i in range(0, 5):\n if estimates[i] > best_q:\n best_q = estimates[i]\n best_a = i\n \n return env.actions[best_a]\n ", "_____no_output_____" ], [ "plot_sim(env, cutePolicy)", "reward: -1.095636553429323\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb14fa972399622b86d9572d91a706cea47069c2
18,980
ipynb
Jupyter Notebook
notebooks/check data.ipynb
rolando-contribute/TeamHG-Memex-Formasaurus
70aa88a50dbc3edf702e31b763ca944ba0995d13
[ "MIT" ]
132
2015-04-18T01:53:52.000Z
2022-03-31T08:33:26.000Z
notebooks/check data.ipynb
rolando-contribute/TeamHG-Memex-Formasaurus
70aa88a50dbc3edf702e31b763ca944ba0995d13
[ "MIT" ]
26
2015-07-08T20:09:26.000Z
2022-03-03T16:50:08.000Z
notebooks/check data.ipynb
rolando-contribute/TeamHG-Memex-Formasaurus
70aa88a50dbc3edf702e31b763ca944ba0995d13
[ "MIT" ]
63
2015-02-17T08:41:00.000Z
2022-03-31T08:58:18.000Z
52.430939
448
0.558061
[ [ [ "import sys\nsys.path.insert(0, '..')\n\nfrom formasaurus import features, evaluation, tool\nfrom formasaurus.storage import Storage, FORM_TYPES, FORM_TYPES_INV, load_html", "_____no_output_____" ], [ "storage = Storage(\"../formasaurus/data\")\nstorage.print_type_counts()", "Annotated HTML forms:\n\n177 search (s)\n139 login (l)\n111 other (o)\n89 registration (r)\n50 password/login recovery (p)\n38 join mailing list (m)\n36 contact (c)\n\nTotal form count: 640\n" ], [ "TRAIN_SIZE = 400\n\nX, y = storage.get_Xy(verbose=True)\nX_train, X_test, y_train, y_test = X[:TRAIN_SIZE], X[TRAIN_SIZE:], y[:TRAIN_SIZE], y[TRAIN_SIZE:]", "" ], [ "mailing_list_forms = [form for form, tp in zip(X_test, y_test) if tp == 'm']", "_____no_output_____" ], [ "for form in mailing_list_forms:\n tool.print_form_html(form)\n print(\"=\"*40)", "<form id=\"newsletterForm\" action=\"/profile/subscribe_home/\" method=\"post\"> Hírlevél feliratkozás\n<input tabindex=\"100\" name=\"RegistrationForm[username]\" id=\"RegistrationForm_username\" type=\"text\">\n<input tabindex=\"101\" name=\"RegistrationForm[email]\" id=\"RegistrationForm_email\" type=\"text\">\n<input tabindex=\"101\" id=\"submit_subscribe\" class=\"submit_subscribe\" type=\"submit\" name=\"yt0\" value=\"\">\n</form>\n========================================\n<form id=\"newsletterForm\" action=\"/profile/subscribe_home/\" method=\"post\"> Hírlevél feliratkozás\n<input tabindex=\"100\" name=\"RegistrationForm[username]\" id=\"RegistrationForm_username\" type=\"text\">\n<input tabindex=\"101\" name=\"RegistrationForm[email]\" id=\"RegistrationForm_email\" type=\"text\">\n<input tabindex=\"101\" id=\"submit_subscribe\" class=\"submit_subscribe\" type=\"submit\" name=\"yt1\" value=\"\">\n</form>\n========================================\n<form action=\"http://petswelcome.list-manage.com/subscribe/post?u=ade3865e57924e2f40e3e555a&amp;id=273168a6fe\" method=\"post\" id=\"mc-embedded-subscribe-form\" name=\"mc-embedded-subscribe-form\" class=\"validate\" target=\"_blank\">\n<input type=\"text\" value=\"Email Address\" name=\"EMAIL\" class=\"required email\" id=\"mce-EMAIL\">\n<input type=\"text\" value=\"First Name\" name=\"FNAME\" class=\"required\" id=\"mce-FNAME\">\n<input type=\"submit\" value=\"Subscribe\" name=\"subscribe\" id=\"mc-embedded-subscribe\" class=\"btn\">\n</form>\n========================================\n<form class=\"newsletter\" action=\"\">\n<input type=\"hidden\" name=\"user_id\" value=\"\">\n<input type=\"hidden\" name=\"user_email\" value=\"\">\n<input type=\"text\" name=\"subscriber\" value=\"\" id=\"subscriber_email\">\n<input type=\"submit\" name=\"submit\" value=\"Noriu gauti naujienlaiškius\">\n</form>\n========================================\n<form id=\"subscription-form\" method=\"POST\" class=\"form\" action=\"/subscription/add\">\n<input type=\"text\" name=\"email\" class=\"required email\">\n<select id=\"city_id\" name=\"city_id\"><option value=\"\">-- Select City --</option>\n<option value=\"9\">Albany</option>\n<option value=\"58\">Albuquerque</option>\n<option value=\"110\">Alexandria Louisiana</option>\n<option value=\"59\">Amarillo</option>\n<option value=\"122\">Athens</option>\n<option value=\"3\">Atlanta</option>\n<option value=\"125\">Atlantic City</option>\n<option value=\"7\">Austin</option>\n<option value=\"23\">Baltimore</option>\n<option value=\"46\">Baton Rouge</option>\n<option value=\"111\">Beaumont</option>\n<option value=\"53\">Big Island</option>\n<option value=\"64\">Boise</option>\n<option 
value=\"6\">Boston</option>\n<option value=\"55\">Buffalo</option>\n<option value=\"65\">Cedar Rapids</option>\n<option value=\"118\">Cedar Valley</option>\n<option value=\"66\">Charleston</option>\n<option value=\"8\">Charlotte</option>\n<option value=\"67\">Chattanooga</option>\n<option value=\"34\">Chicago</option>\n<option value=\"68\">Cincinnati</option>\n<option value=\"32\">Cleveland</option>\n<option value=\"69\">Colorado Springs</option>\n<option value=\"30\">Columbus</option>\n<option value=\"20\">Dallas</option>\n<option value=\"70\">Dayton</option>\n<option value=\"4\">Denver</option>\n<option value=\"27\">Detroit</option>\n<option value=\"33\">East Bay</option>\n<option value=\"71\">El Paso</option>\n<option value=\"72\">Eugene</option>\n<option value=\"42\">Fort Worth</option>\n<option value=\"252\">Fort-lauderdale</option>\n<option value=\"73\">Fresno</option>\n<option value=\"74\">Ft. Myers</option>\n<option value=\"75\">Ft. Wayne</option>\n<option value=\"108\">Golden Triangle</option>\n<option value=\"76\">Grand Rapids</option>\n<option value=\"77\">Green Bay</option>\n<option value=\"78\">Greenwood</option>\n<option value=\"109\">Gulfport</option>\n<option value=\"79\">Harrisburg</option>\n<option value=\"80\">Hartford</option>\n<option value=\"52\">Honolulu</option>\n<option value=\"17\">Houston</option>\n<option value=\"22\">Indianapolis</option>\n<option value=\"12\">Jacksonville</option>\n<option value=\"48\">Johnson County</option>\n<option value=\"14\">Kansas City</option>\n<option value=\"81\">Knoxville</option>\n<option value=\"106\">Lafayette</option>\n<option value=\"105\">Lake Charles</option>\n<option value=\"11\">Las Vegas</option>\n<option value=\"117\">Lawrence</option>\n<option value=\"82\">Little Rock</option>\n<option value=\"49\">Los Angeles/ventura County</option>\n<option value=\"28\">Louisville</option>\n<option value=\"83\">Lubbock</option>\n<option value=\"116\">Manhattan - Kansas</option>\n<option value=\"54\">Maui</option>\n<option value=\"47\">Memphis</option>\n<option value=\"13\">Miami</option>\n<option value=\"84\">Milwaukee</option>\n<option value=\"85\">Minneapolis / St Paul</option>\n<option value=\"86\">Mobile</option>\n<option value=\"87\">Montgomery</option>\n<option value=\"25\">Nashville</option>\n<option value=\"36\" selected>National Deals</option>\n<option value=\"43\">New Orleans</option>\n<option value=\"5\">New York City</option>\n<option value=\"88\">Norfolk</option>\n<option value=\"57\">Northwest Suburbs - Chicago</option>\n<option value=\"89\">Odessa</option>\n<option value=\"44\">Oklahoma City</option>\n<option value=\"90\">Omaha</option>\n<option value=\"29\">Orange County</option>\n<option value=\"2\">Orlando</option>\n<option value=\"91\">Palm Springs</option>\n<option value=\"121\">Pets</option>\n<option value=\"31\">Philadelphia</option>\n<option value=\"18\">Phoenix</option>\n<option value=\"92\">Pittsburgh</option>\n<option value=\"16\">Portland</option>\n<option value=\"93\">Portland Maine</option>\n<option value=\"38\">Poughkeepsie</option>\n<option value=\"94\">Providence</option>\n<option value=\"61\">Raleigh</option>\n<option value=\"95\">Richmond</option>\n<option value=\"56\">Rochester</option>\n<option value=\"113\">Rochester - Minnesota</option>\n<option value=\"96\">Sacramento</option>\n<option value=\"45\">Salt Lake City</option>\n<option value=\"26\">San Antonio</option>\n<option value=\"19\">San Diego</option>\n<option value=\"50\">San Fernando Valley</option>\n<option value=\"10\">San 
Francisco</option>\n<option value=\"21\">San Jose</option>\n<option value=\"51\">Santa Clarita</option>\n<option value=\"24\">Seattle</option>\n<option value=\"107\">Shreveport</option>\n<option value=\"115\">Sioux City</option>\n<option value=\"97\">Sioux Falls</option>\n<option value=\"98\">Springfield</option>\n<option value=\"60\">St. Louis</option>\n<option value=\"37\">St. Petersburg</option>\n<option value=\"112\">Sugar Land</option>\n<option value=\"99\">Syracuse</option>\n<option value=\"100\">Tacoma</option>\n<option value=\"101\">Tallahassee</option>\n<option value=\"1\">Tampa Bay</option>\n<option value=\"123\">Temecula</option>\n<option value=\"114\">Topeka</option>\n<option value=\"41\">Tucson</option>\n<option value=\"40\">Tulsa</option>\n<option value=\"39\">Utica</option>\n<option value=\"104\">Washington D. C.</option></select>\nSubmit\n<input type=\"hidden\" name=\"url\" value=\"/national/share\">\n</form>\n========================================\n<form action=\"https://www.rakuten.de/kundenkonto/newsletter/anmelden\" method=\"post\">\n<input type=\"text\" name=\"newsletter\" id=\"newsletter\" class=\"tr_footer_newsletter_input\" value=\"Ihre E-Mail-Adresse\" target=\"https://www.rakuten.de/kundenkonto/newsletter/anmelden\">\n<input type=\"submit\" id=\"button\" value=\"OK\" class=\"tr_footer_newsletter_icon\">\n</form>\n========================================\n<form action=\"http://www.rakuten.de/kundenkonto/newsletter/anmelden\" method=\"post\" class=\"newsletter-register\">\n<input type=\"text\" id=\"nl_email\" name=\"susa_newsletter\">\nJetzt anmelden &amp; sparen\n</form>\n========================================\n<form action=\"http://www.rakuten.de/kundenkonto/newsletter/anmelden\" method=\"post\" class=\"newsletter-register\">\n<input type=\"text\" id=\"nl_email\" name=\"susa_newsletter\" value=\"Ihre Email-Adresse eingeben\">\nAnmelden\n<input type=\"hidden\" name=\"inp_40840\" value=\"DE_2014\">\n</form>\n========================================\n<form id=\"form_news\" name=\"form_news\"> Recevoir la newsletter bimensuelle de routard.com <input id=\"email_news\" name=\"email_news\" type=\"text\" value=\"Votre adresse email\">   <a href=\"/pages_spe/newsletter\" target=\"_blank\">consulter un exemple</a>   </form>\n========================================\n<form name=\"ccoptin\" action=\"http://visitor.r20.constantcontact.com/d.jsp\" target=\"_blank\" method=\"post\"> Sign up for our Happy Homes Email Newsletter  <input type=\"text\" name=\"ea\" size=\"16\" value=\"\"> <input type=\"submit\" name=\"go\" value=\"GO\"> <input type=\"hidden\" name=\"llr\" value=\"uxruvzcab\"> <input type=\"hidden\" name=\"m\" value=\"1102520665844\"> <input type=\"hidden\" name=\"p\" value=\"oi\"> </form>\n========================================\n<form action=\"http://trendo.bg/user/signup/flybox\" method=\"POST\" id=\"signup_flybox\">\n<input type=\"text\" name=\"signup[email]\" value=\"\" class=\"form-control\">\n<input type=\"hidden\" name=\"signup[gender]\" value=\"\">\n<input type=\"submit\" value=\" Мъж\" class=\"btn btn-default\">\n<input type=\"submit\" value=\"Жена\" class=\"btn btn-default\">\n</form>\n========================================\n<form method=\"get\" action=\"/web/en/emailSubscription.page\">\n<input type=\"text\" id=\"signupText\" class=\"mr20 empty\" name=\"emailsignup\" value=\"Enter e-mail address\">\n<input type=\"submit\" id=\"signupButton\" name=\"btnFooterSignUp\" value=\"Sign up\">\n<input type=\"hidden\" value=\"e\" 
name=\"cmtag\">\n</form>\n========================================\n<form action=\"https://wpmudev.us1.list-manage.com/subscribe/post?u=4e8bfa5d84735d9c9d7016040&amp;id=1ba864f654\" method=\"post\" id=\"mc-embedded-subscribe-form\" name=\"mc-embedded-subscribe-form\" class=\"validate\" target=\"_blank\">\n<label for=\"newsletter-email\">Sign up for the WPMU DEV Newsletter!</label>\n<input value=\"\" name=\"EMAIL\" class=\"required email\" id=\"mce-EMAIL\" type=\"text\">\n<input type=\"submit\" class=\"submit\" value=\"Submit\" id=\"wpmu-mailchimp-send\">\n</form>\n========================================\n<form id=\"newsletterValidateDetail\" method=\"post\" action=\"/newsletter/subscriber/new\" name=\"footer.newsletter\">\n<input type=\"hidden\" value=\"1\" name=\"wt_form\"> <input type=\"text\" class=\"email-correction inputText required-entry validate-email prefilled\" value=\"E-Mail-Adresse hier eingeben\" name=\"email\" id=\"newsletter\">\nFür Frauen\nFür Männer\n<input type=\"hidden\" name=\"_xtk\" value=\"m50vxeTVWWp9r5LNVDcYpZwcTFpFrEX6pvw9IUepK6EY1uFT3l3opqqQfsfuvYpb\">\n</form>\n========================================\n<form id=\"newsletterValidateDetail\" method=\"post\" action=\"/newsletter/subscriber/new\" name=\"footer.newsletter\">\n<input type=\"hidden\" value=\"1\" name=\"wt_form\"> <input type=\"text\" class=\"email-correction inputText required-entry validate-email prefilled\" value=\"E-Mail-Adresse hier eingeben\" name=\"email\" id=\"newsletter\">\nFür Frauen\nFür Männer\n<input type=\"hidden\" name=\"_xtk\" value=\"gM9xzfZWpPVdsDP1yb2u5FRcSxxA3MOjR3suJmfK8FhluuHjS0QMJIRq5B0kzBPN\">\n</form>\n========================================\n<form id=\"newsletterValidateDetail\" method=\"post\" action=\"/newsletter/subscriber/new\" name=\"footer.newsletter\">\n<input type=\"hidden\" value=\"1\" name=\"wt_form\"> <input type=\"text\" class=\"email-correction inputText required-entry validate-email prefilled\" value=\"E-Mail-Adresse hier eingeben\" name=\"email\" id=\"newsletter\">\nFür Frauen\nFür Männer\n<input type=\"hidden\" name=\"_xtk\" value=\"KydAFS3DVnsPWyVjSb51dftpgpvbtfQOgj708t8fcGjXnsyhDtdcQYCjr7b9sN78\">\n</form>\n========================================\n<form id=\"page_footer_iaFooter_emailSignup-form\" class=\"AcAJ-form\" method=\"post\" action=\"http://www.zazzle.com/eml/signupdialog\" name=\"emailSignup\" target=\"emailSignupWindow\" accept-charset=\"UTF-8\">\n<input id=\"page_footer_iaFooter_emailSignup_elements_email-input\" class=\" \" name=\"em\" type=\"text\">\n<input type=\"hidden\" name=\"pn\" id=\"page_footer_iaFooter_emailSignup_elements_zWidget1-input\" value=\"Zazzle\">\nSign me up!\n</form>\n========================================\n<form id=\"page_footer_iaFooter_emailSignup2-form\" class=\"AcAJ-form\" method=\"post\" action=\"http://www.zazzle.com/eml/signupdialog\" name=\"emailSignup2\" target=\"emailSignupWindow\" accept-charset=\"UTF-8\">\n<input id=\"page_footer_iaFooter_emailSignup2_elements_email-input\" class=\" \" name=\"em\" type=\"text\">\n<input type=\"hidden\" name=\"pn\" id=\"page_footer_iaFooter_emailSignup2_elements_zWidget1-input\" value=\"Zazzle\">\n</form>\n========================================\n<form id=\"page_footer_iaFooter_emailSignup-form\" class=\"AcAJ-form\" method=\"post\" action=\"https://www.zazzle.com/eml/signupdialog\" name=\"emailSignup\" target=\"emailSignupWindow\" accept-charset=\"UTF-8\">\n<input id=\"page_footer_iaFooter_emailSignup_elements_email-input\" class=\" \" name=\"em\" type=\"text\">\n<input 
type=\"hidden\" name=\"pn\" id=\"page_footer_iaFooter_emailSignup_elements_zWidget1-input\" value=\"Zazzle\">\nSign me up!\n</form>\n========================================\n<form id=\"page_footer_iaFooter_emailSignup2-form\" class=\"AcAJ-form\" method=\"post\" action=\"https://www.zazzle.com/eml/signupdialog\" name=\"emailSignup2\" target=\"emailSignupWindow\" accept-charset=\"UTF-8\">\n<input id=\"page_footer_iaFooter_emailSignup2_elements_email-input\" class=\" \" name=\"em\" type=\"text\">\n<input type=\"hidden\" name=\"pn\" id=\"page_footer_iaFooter_emailSignup2_elements_zWidget1-input\" value=\"Zazzle\">\n</form>\n========================================\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
cb150378abfe1bf431f698596b2b4babd6a5ca4d
26,080
ipynb
Jupyter Notebook
src/.ipynb_checkpoints/PrecisionComparision-checkpoint.ipynb
bhadreshpsavani/Computer-Pointer-Controller
645d62377f7b70bb38ffb8cd5cb04e83ba48acd0
[ "MIT" ]
11
2020-06-22T09:21:23.000Z
2022-01-29T23:17:46.000Z
src/.ipynb_checkpoints/PrecisionComparision-checkpoint.ipynb
bhadreshpsavani/Computer-Pointer-Controller
645d62377f7b70bb38ffb8cd5cb04e83ba48acd0
[ "MIT" ]
3
2021-06-08T21:47:49.000Z
2022-02-10T02:18:50.000Z
src/.ipynb_checkpoints/PrecisionComparision-checkpoint.ipynb
bhadreshpsavani/Computer-Pointer-Controller
645d62377f7b70bb38ffb8cd5cb04e83ba48acd0
[ "MIT" ]
9
2020-06-22T14:20:16.000Z
2021-07-20T13:02:16.000Z
172.715232
8,004
0.906748
[ [ [ "# Assess Performance:\n\n## Get Perfomance data from files", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\n\nprecision_list = ['FP16', 'FP32', 'FP32-INT8']\ninference_time = []\nmodel_load_time = []\nfps = []\n\nfor precision in precision_list:\n with open('results/'+precision+'/stats.txt', 'r') as f:\n inference_time.append(float(f.readline().split('\\n')[0]))\n fps.append(float(f.readline().split('\\n')[0]))\n model_load_time.append(float(f.readline().split('\\n')[0]))\n\nprint(\"Inference Time :\",inference_time)\nprint(\"fps :\",fps)\nprint(\"Model Load Time :\",model_load_time)", "Inference Time : [24.0, 24.0, 26.9]\nfps : [2.4583333333333335, 2.4583333333333335, 2.193308550185874]\nModel Load Time : [0.8468494415283203, 0.7280545234680176, 5.170256853103638]\n" ], [ "plt.bar(precision_list, inference_time)\nplt.xlabel('Model Precision Value')\nplt.ylabel('Inference Time in seconds')\nplt.show()", "_____no_output_____" ], [ "plt.bar(precision_list, fps)\nplt.xlabel('Model Precision Value')\nplt.ylabel('Frames per Second')\nplt.show()", "_____no_output_____" ], [ "plt.bar(precision_list, model_load_time)\nplt.xlabel('Model Precision Value')\nplt.ylabel('Model Load Time')\nplt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ] ]
cb1507461b51253828e9cbed7f420bc10f5de555
6,121
ipynb
Jupyter Notebook
01-python-basics/12-tuples.ipynb
sergiofgonzalez/python-in-action
7adaf3b5029e88fd1dce67d614e34780f6697460
[ "MIT" ]
null
null
null
01-python-basics/12-tuples.ipynb
sergiofgonzalez/python-in-action
7adaf3b5029e88fd1dce67d614e34780f6697460
[ "MIT" ]
null
null
null
01-python-basics/12-tuples.ipynb
sergiofgonzalez/python-in-action
7adaf3b5029e88fd1dce67d614e34780f6697460
[ "MIT" ]
null
null
null
18.548485
152
0.482601
[ [ [ "# Python in Action\n## Part 1: Python Fundamentals\n### 12 &mdash; Tuples in Python\n> inmmutable groups of objects\n\nTuples are an essential part of the Python programming language.\n\nTuples allow you to create immutable groups of objects &mdash; once a tuple has been defined, you cannot add, remove or modify any of its items.\n", "_____no_output_____" ], [ "Tuples are typically created using parentheses:\n", "_____no_output_____" ] ], [ [ "my_tuple = ('foo', 'bar')", "['one', 2, 'a', False]\n" ], [ "But you can use the following syntax too:", "_____no_output_____" ], [ "my_tuple = 1, 2, 3, 4, 5\nprint(my_tuple)", "(1, 2, 3, 4, 5)\n" ] ], [ [ "You can also create a tuple from a generator comprehension using `tuple()`:", "_____no_output_____" ] ], [ [ "a = 1, 2, 3, 4, 5 # this is a generator\nb = tuple(x + 10 for x in a)\nprint(b)", "(11, 12, 13, 14, 15)\n" ] ], [ [ "You can access the elements of a tuple using `[]`:", "_____no_output_____" ] ], [ [ "my_tuple = ('foo', 'bar')\n\n# by index\nprint(my_tuple[0])\nprint(my_tuple[1])", "foo\nbar\n" ] ], [ [ "You can also use *unpacking* to get the elements of a tuple:", "_____no_output_____" ] ], [ [ "coordinates = (1.36, -2.42)\nlatitude, longitude = coordinates\n\nprint(latitude)\nprint(longitude)", "1.36\n-2.42\n" ] ], [ [ "You can use the `index()` method to obtain the index of a given tuple item:", "_____no_output_____" ] ], [ [ "my_tuple = ('foo', 'bar')\n\nprint(my_tuple.index('bar'))", "1\n" ] ], [ [ "You can use negative indices with tuples too:", "_____no_output_____" ] ], [ [ "print(my_tuple[-2])", "foo\n" ], [ "And you can use `len()` global function to get the number of items of a tuple:", "_____no_output_____" ], [ "len(my_tuple)", "_____no_output_____" ] ], [ [ "And you can use `in` to check if an item is contained in a tuple:", "_____no_output_____" ] ], [ [ "print('foo' in my_tuple)\nprint('foobar' in my_tuple)", "True\nFalse\n" ] ], [ [ "You can extract a part of a tuple using slices:", "_____no_output_____" ] ], [ [ "my_tuple = ('one', 'two', 'three', 14)\n\nprint(my_tuple[1:3])", "('two', 'three')\n" ] ], [ [ "You can use the `+` operator to create new tuples from existing ones (remember, tuples are immutable):", "_____no_output_____" ] ], [ [ "my_tuple = ('foo', 'bar')\n\nnew_tuple = my_tuple + (14,)\n\nprint(new_tuple)", "('foo', 'bar', 14)\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb150c11aebcc609349366deb5b8062de0ba9730
210,403
ipynb
Jupyter Notebook
notebooks/demo_noise_ceiling.ipynb
lukassnoek/FEED_behav_analyses
7530bc0d9fe0b1dd0d4b45529f762458c0529f8c
[ "BSD-3-Clause" ]
null
null
null
notebooks/demo_noise_ceiling.ipynb
lukassnoek/FEED_behav_analyses
7530bc0d9fe0b1dd0d4b45529f762458c0529f8c
[ "BSD-3-Clause" ]
null
null
null
notebooks/demo_noise_ceiling.ipynb
lukassnoek/FEED_behav_analyses
7530bc0d9fe0b1dd0d4b45529f762458c0529f8c
[ "BSD-3-Clause" ]
null
null
null
594.358757
112,240
0.942629
[ [ [ "# \"Proof\" of noise ceiling by simulation", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\nfrom sklearn.preprocessing import StandardScaler, OneHotEncoder\nfrom sklearn.model_selection import StratifiedKFold, cross_val_predict, GroupKFold\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.svm import SVC\nfrom sklearn.metrics import roc_auc_score\nfrom tqdm import tqdm_notebook", "_____no_output_____" ] ], [ [ "Define functions.", "_____no_output_____" ] ], [ [ "class Dataset:\n \n def __init__(self, P, N_per_class, K, R, verbose=False):\n self.P = P\n self.N_per_class = N_per_class\n self.N = N_per_class * K\n self.K = K\n self.R = R\n self.verbose = verbose\n self.rep_idx = np.repeat(np.arange(self.N), R)\n self.y = None\n self.X = None\n self.ohe = None # added later\n \n def generate(self, signal=0.1, inconsistency=5):\n \"\"\" Generates pseudo-random data (X, y). \n \n Parameters\n ----------\n signal : float\n \"Amount\" of signal added to X to induce corr(X, y)\n \"\"\"\n\n X_unrep = np.random.normal(0, 1, (self.N, self.P))\n self.X = np.concatenate([X_unrep for _ in range(self.R)]) # repeat R times!\n \n # Generate \"unrepeated\" labels\n y_unrep = np.repeat(np.arange(self.K), self.N_per_class)\n \n # Generate random labels, repeated R times, simulating\n # inconsistency in subject ratings (sometimes anger, sometimes happiness, etc.)\n shifts = np.random.normal(0, inconsistency, R).astype(int)\n self.y = np.concatenate([np.roll(y_unrep, shifts[ii]) for ii in range(R)])\n\n # Add signal\n for k in range(self.K):\n self.X[self.y == k, k] += signal\n \n self.ohe = OneHotEncoder(sparse=False, categories='auto')\n self.ohe.fit(y_unrep[:, np.newaxis])\n \n def compute_noise_ceiling(self, use_prob=True):\n \"\"\" Estimates the best prediction and score (noise ceiling) given the \n inconsistency in the labels.\n\n Parameters\n ----------\n use_prob : bool\n Whether to evaluate probabilistic performance or binarized\n \"\"\"\n\n # Get 2d version of y, shape (N / reps, reps)\n y2d = np.c_[[self.y[self.N*i:self.N*(i+1)] for i in range(self.R)]].T\n\n # Magic below! 
Count the number of classes across repetitions of the same sample ...\n counts = np.apply_along_axis(np.bincount, 1, y2d, minlength=self.K)\n\n # Pre-allocate best prediction array\n best_pred = np.zeros_like(counts, dtype=float) \n\n for ii in range(counts.shape[0]):\n # Determine most frequent label across reps\n opt_class = np.where(counts[ii, :] == counts[ii, :].max())[0]\n\n if use_prob:\n # Set prediction of the \"optimal class\" to 1 / num_opt_classes (ties)\n best_pred[ii, opt_class] = 1 / len(opt_class)\n else:\n rnd_class = np.random.choice(opt_class, size=1) \n best_pred[ii, rnd_class] = 1\n\n # Repeat best possible prediction R times\n best_pred = np.tile(best_pred.T, R).T\n\n # Convert y to one-hot-encoded array\n y_ohe = self.ohe.transform(self.y[:, np.newaxis])\n\n # Compute best possible score (\"ceiling\")\n self.ceiling = roc_auc_score(\n y_ohe,\n best_pred,\n average=None if use_prob else 'micro'\n )\n \n if self.verbose:\n print(f\"Ceiling: {np.round(self.ceiling, 2)}\")\n\n def compute_model_performance(self, estimator, use_prob=True, cv=None, stratify_reps=False):\n \n # Fit actual model\n if cv is None:\n estimator.fit(self.X, self.y)\n preds = estimator.predict(self.X)\n else:\n preds = cross_val_predict(\n estimator, self.X, self.y, cv=cv,\n groups=self.rep_idx if stratify_reps else None\n )\n \n # Compute actual score (should be > ceiling)\n y_ohe = self.ohe.transform(self.y[:, np.newaxis])\n self.score = roc_auc_score(\n self.ohe.transform(self.y[:, np.newaxis]),\n self.ohe.transform(preds[:, np.newaxis]),\n average=None if use_prob else 'micro'\n )\n \n if self.verbose:\n print(f\"Score: {np.round(self.score, 2)}\")\n \n self.diff = self.ceiling - self.score", "_____no_output_____" ] ], [ [ "## 1. Within-subject, no CV\nFirst, let's check it out for within-subject ratings. We'll define some simulation parameters.", "_____no_output_____" ] ], [ [ "P = 1000 # number of features\nN_per_class = 10 # number of samples per class\nK = 3 # number of classes [0 - K]\nR = 4 # how many repetitions of each sample\nestimator = make_pipeline(StandardScaler(), SVC(kernel='linear'))\niters = 100\n\nds = Dataset(P=P, N_per_class=N_per_class, K=K, R=R, verbose=False)\n\nscores = np.zeros((iters, K))\nceilings = np.zeros((iters, K))\ndiffs = np.zeros((iters, K))\n\nfor i in tqdm_notebook(range(iters)):\n ds.generate(signal=0, inconsistency=5)\n ds.compute_noise_ceiling()\n ds.compute_model_performance(estimator)\n scores[i, :] = ds.score\n ceilings[i, :] = ds.ceiling\n diffs[i, :] = ds.diff", "_____no_output_____" ], [ "fig, axes = plt.subplots(ncols=2, figsize=(15, 5), gridspec_kw={'width_ratios': [1, 3]})\nfor i in range(K):\n sns.distplot(diffs[:, i], kde=False, ax=axes[0])\n\naxes[0].set_xlabel('Score - ceiling')\naxes[0].set_ylabel('Freq')\naxes[0].legend([f'class {i+1}' for i in range(ds.K)], frameon=False)\n\naxes[1].plot(ceilings.mean(axis=1), ls='--')\naxes[1].plot(scores.mean(axis=1))\naxes[1].set_xlabel('Iteration')\naxes[1].set_ylabel('Model performance')\naxes[1].legend(['ceiling', 'score'], frameon=False)\nsns.despine()", "_____no_output_____" ] ], [ [ "## 2. 
With CV", "_____no_output_____" ] ], [ [ "ds = Dataset(P=P, N_per_class=N_per_class, K=K, R=R, verbose=False)\n\nscores = np.zeros((iters, K))\nceilings = np.zeros((iters, K))\ndiffs = np.zeros((iters, K))\n\nfor i in tqdm_notebook(range(iters)):\n ds.generate(signal=0, inconsistency=5)\n ds.compute_noise_ceiling()\n ds.compute_model_performance(estimator, cv=GroupKFold(n_splits=10), stratify_reps=True)\n scores[i, :] = ds.score\n ceilings[i, :] = ds.ceiling\n diffs[i, :] = ds.diff", "_____no_output_____" ], [ "fig, axes = plt.subplots(ncols=2, figsize=(15, 5), gridspec_kw={'width_ratios': [1, 3]})\nfor i in range(K):\n sns.distplot(diffs[:, i], kde=False, ax=axes[0])\n\naxes[0].set_xlabel('Score - ceiling')\naxes[0].set_ylabel('Freq')\naxes[0].legend([f'class {i+1}' for i in range(ds.K)], frameon=False)\n\naxes[1].plot(ceilings.mean(axis=1), ls='--')\naxes[1].plot(scores.mean(axis=1))\naxes[1].set_xlabel('Iteration')\naxes[1].set_ylabel('Model performance')\naxes[1].legend(['ceiling', 'score'], frameon=False)\nsns.despine()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
cb1512e41473d26b4c8547f63de9f34df083b3e3
354,766
ipynb
Jupyter Notebook
experiments/test2.ipynb
Easoncyx/srcnn
e3de55acbf53881f408ad12f2c0ce8e8b4bf81b1
[ "MIT" ]
null
null
null
experiments/test2.ipynb
Easoncyx/srcnn
e3de55acbf53881f408ad12f2c0ce8e8b4bf81b1
[ "MIT" ]
null
null
null
experiments/test2.ipynb
Easoncyx/srcnn
e3de55acbf53881f408ad12f2c0ce8e8b4bf81b1
[ "MIT" ]
null
null
null
726.979508
168,068
0.953107
[ [ [ "import numpy as np\nfrom keras.preprocessing.image import img_to_array\nfrom keras.preprocessing.image import load_img\nimport matplotlib.pyplot as plt\n\nfrom pathlib import Path\nfrom functools import partial\nfrom PIL import Image\n", "Using TensorFlow backend.\n" ], [ "img = load_img('../data/91-image/t2.bmp')\nx = img_to_array(img)\nplt.imshow(x/255.)\nplt.show()\nprint(x.shape)", "_____no_output_____" ], [ "def load_image_pair(path, scale=3):\n image = load_img(path)\n image = image.convert('YCbCr')\n hr_image = modcrop(image, scale)\n lr_image = bicubic_rescale(hr_image, 1 / scale)\n return lr_image, hr_image\n\n\ndef generate_sub_images(image, size, stride):\n for i in range(0, image.size[0] - size + 1, stride):\n for j in range(0, image.size[1] - size + 1, stride):\n yield image.crop([i, j, i + size, j + size])\n\n\ndef array_to_img(x, mode='YCbCr'):\n return Image.fromarray(x.astype('uint8'), mode=mode)\n\n\ndef bicubic_rescale(image, scale):\n if isinstance(scale, (float, int)):\n size = (np.array(image.size) * scale).astype(int)\n return image.resize(size, resample=Image.BICUBIC)\n\n\ndef modcrop(image, scale):\n size = np.array(image.size)\n size -= size % scale\n return image.crop([0, 0, *size])\n", "_____no_output_____" ], [ "repo_dir = Path('..')\ndata_dir = repo_dir / 'data'", "_____no_output_____" ], [ "i=0\ndataset_name = '91-image'\nfor path in (data_dir / dataset_name).glob('*'):\n i+=1\n print(path)\n if i%4==0:\n break", "../data/91-image/t63.bmp\n../data/91-image/t44.bmp\n../data/91-image/t20.bmp\n../data/91-image/t52.bmp\n" ], [ "lr_sub_size=11\nlr_sub_stride=5\nscale=3\ndataset_name = '91-image'\n\nhr_sub_size = lr_sub_size * scale # 33\nhr_sub_stride = lr_sub_stride * scale # 15\n\nlr_gen_sub = partial(generate_sub_images, size=lr_sub_size,\n stride=lr_sub_stride)\nhr_gen_sub = partial(generate_sub_images, size=hr_sub_size,\n stride=hr_sub_stride)\n\nlr_sub_arrays = []\nhr_sub_arrays = []\nfor path in (data_dir / dataset_name).glob('*'):\n image = load_img(path)\n image = image.convert('YCbCr')\n hr_image = modcrop(image, scale)\n lr_image = bicubic_rescale(hr_image, 1 / scale)\n \n lr_sub_arrays += [img_to_array(img) for img in lr_gen_sub(lr_image)]\n hr_sub_arrays += [img_to_array(img) for img in hr_gen_sub(hr_image)]\n\nx = np.stack(lr_sub_arrays)\ny = np.stack(hr_sub_arrays)", "_____no_output_____" ], [ "# len(y)\nlen(x)", "_____no_output_____" ], [ "lr_image, hr_image = load_image_pair(str(path), scale=scale)\nhr_imagecov = hr_image.convert('RGB')\nlr_imagecov = lr_image.convert('RGB')\n\nplt.subplot(2,2,1)\nplt.imshow(np.asarray(hr_image))\nplt.subplot(2,2,2)\nplt.imshow(np.asarray(hr_imagecov))\nprint(type(hr_imagecov))\nprint(lr_image.size)\nprint(hr_image.size)", "<class 'PIL.Image.Image'>\n(145, 117)\n(435, 351)\n" ], [ "plt.imshow(img_to_array(hr_imagecov)/255.0)\nplt.show()", "_____no_output_____" ], [ "plt.imshow(lr_sub_arrays[0]/255.0)", "_____no_output_____" ], [ "lr_sub_arrays[0].shape", "_____no_output_____" ], [ "len(lr_sub_arrays)", "_____no_output_____" ], [ "lr_sub_arrays[0].shape", "_____no_output_____" ], [ "len(hr_sub_arrays)", "_____no_output_____" ], [ "lr_sub_arrays=[]\nhr_sub_arrays=[]\nfor path in (data_dir / dataset_name).glob('*'):\n image = load_img(path)\n# image = image.convert('YCbCr')\n hr_image = modcrop(image, scale)\n lr_image = bicubic_rescale(hr_image, 1 / scale)\n \n lr_sub_arrays += [img for img in lr_gen_sub(lr_image)]\n hr_sub_arrays += [img for img in hr_gen_sub(hr_image)]", "_____no_output_____" ], [ "len([img 
for img in lr_gen_sub(lr_image)])", "_____no_output_____" ], [ "x.shape[1:]", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb152956bdd86706ee621ae820089d69c2b5be3b
31,064
ipynb
Jupyter Notebook
6_CarRacing_trabajo_final.ipynb
pleslabay/finalRL
bd03be194163ba81468127ee8ecd8f8c0ca1f8b6
[ "MIT" ]
null
null
null
6_CarRacing_trabajo_final.ipynb
pleslabay/finalRL
bd03be194163ba81468127ee8ecd8f8c0ca1f8b6
[ "MIT" ]
null
null
null
6_CarRacing_trabajo_final.ipynb
pleslabay/finalRL
bd03be194163ba81468127ee8ecd8f8c0ca1f8b6
[ "MIT" ]
null
null
null
62.128
11,320
0.781998
[ [ [ "import gym\n#import moviepy.editor as mpy\nimport os\nfrom pyvirtualdisplay import Display", "_____no_output_____" ], [ "# Filter tensorflow version warnings\nimport os\n# https://stackoverflow.com/questions/40426502/is-there-a-way-to-suppress-the-messages-tensorflow-prints/40426709\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # or any {'0', '1', '2'}\nimport warnings\n# https://stackoverflow.com/questions/15777951/how-to-suppress-pandas-future-warning\nwarnings.simplefilter(action='ignore', category=FutureWarning)\nwarnings.simplefilter(action='ignore', category=Warning)\nimport tensorflow as tf\ntf.get_logger().setLevel('INFO')\ntf.autograph.set_verbosity(0)\nimport logging\ntf.get_logger().setLevel(logging.ERROR)", "_____no_output_____" ], [ "from matplotlib import pyplot as plt\nfrom IPython.display import clear_output\nimport numpy as np\n\nfrom stable_baselines import PPO2", "_____no_output_____" ], [ "#cargo el env original de OpenAI\nenv = gym.make('CarRacing-v0')", "_____no_output_____" ], [ "env.action_space", "_____no_output_____" ], [ "env.action_space.high, env.action_space.low", "_____no_output_____" ], [ "env.observation_space", "_____no_output_____" ] ], [ [ "## +++modelos propios single frame en rgb, no andan demasiado bien... +++\n\n#param comunes de imagenes usados en train\nC=3 #frames consecutivos x canales de rgb\nG=0 #rgb o grayscale o red\n\n#trained models segun el env de openAI, sacar el discretizador\ndiscre=False\n\n#model = PPO2.load('ppo2_cnn_agent.pkl')\n#model = PPO2.load('ppo2_cnn_agentII.pkl')\n#model = PPO2.load('ppo2_cnn_agent_short')\n#model = PPO2.load('ppo2_cnn_gymv0_bs256.pkl')\n#model = PPO2.load('ppo2_cnn_gymv0_bs256II.pkl')\n#model = PPO2.load('ppo2_cnn_gymv0_bs256III.pkl')\n \n\n#trained models segun env() de Mike: _v0 es como el de openAI, pero discretizado\ndiscre=True\n\n#model = PPO2.load('gym-master/ppo2_v0_mike_cnn')\n#model = PPO2.load('gym-master/ppo2_v0_mike_cnn_bs256.pkl')\n#model = PPO2.load('gym-master/ppo2_v0_mike_cnn_bs256II.pkl')\n#model = PPO2.load('gym-master/ppo2_v0_mike_cnn_clipvf.pkl') #no aprendio nada\n#model = PPO2.load('gym-master/ppo2_v0_mike_cnn_clipvfII.pkl') #no aprendio nada\n\n\n#trained models segun env() de Mike: _v1 es el avanzado\ndiscre=True\n\n#model = PPO2.load('gym-master/ppo2_v1_mike_cnn')\n#model = PPO2.load('gym-master/ppo2_v1_mike_cnn2')\n#model = PPO2.load('gym-master/ppo2_v1_mike_cnn_offtrack') #no aprendio nada\n#model = PPO2.load('gym-master/ppo2_v1_mike_cnn_tmout+.pkl')\n#model = PPO2.load('gym-master/ppo2_v1_mike_cnn_tmout-.pkl')\n#model = PPO2.load('gym-master/ppo2_v1_mike_cnn_tmout-II.pkl')\n#model = PPO2.load('gym-master/ppo2_v1_mike_cnn_tmout-III.pkl')\n\nacc_prev=0\nif discre:\n intro=45\nelse:\n intro=0\n", "_____no_output_____" ] ], [ [ "#modelos propios multi-frame, segun el env_v1 de Mike que lo permite\n\n#param comunes de imagenes usados en train\nG=1 #rgb o grayscale o green\ndiscre=True\nintro=45\nacc_prev=0\n\n#model = PPO2.load('gym-master/ppo2_v1_mike_cnn_gray.pkl') #cuidado 2 frames\nC=2\n\nmodel = PPO2.load('gym-master/ppo2_v1_mike_cnn_4f_green_bs1600.pkl')\nC=4\nG=2", "Loading a model without an environment, this model cannot be trained until it has a valid environment.\n" ] ], [ [ "#modelo publicado de Mike\n\nmodel = PPO2.load('gym-master/car_racing_weights.pkl')\nC=4 #4 frames sucesivos\nG=1 #grayscale\ndiscre=True\nintro=45\nacc_prev=0", "_____no_output_____" ] ], [ [ "#mike's training discretization\nact=[]\nact.append([0, 0, 0])\nact.append([-1,0, 0])\nact.append([1, 0, 
0])\nact.append([0,1,0])\nact.append([0, 0,0.8])", "_____no_output_____" ], [ "#setup\nplot=False #si plot no render grande\n\nif plot:\n display = Display(visible=0, size=(1024, 768))\n display.start()\n os.environ[\"DISPLAY\"] = \":\" + str(display.display) + \".\" + str(display.screen)\n\nrgb=env.reset()\n\n#salteo la intro si uso 45 frames, lo pongo a acelerar segun env entretanto\nfor i in range(intro):\n rgb, reward, done, info = env.step([0, acc_prev, 0])\n ", "Track generation: 1140..1429 -> 289-tiles track\n" ], [ "#armo array de obser inicial \n#--> hay sendos wrapper para esto, pero no puedo agregarle a TU entorno la\n# opcion de quedarme solo con el canal green de rgb (G=2)\n\nobser=np.zeros((96,96,C))\n\nfor i in range(C):\n if G==1: \n obs=np.average(rgb, weights=[0.299, 0.587, 0.114], axis=2)\n elif G==2:\n obs=rgb[:,:,1]\n else:\n obs=rgb[:,:,i]\n \n obser[:,:,i]=obs\n# rgb, reward, done, info = env.step([0, acc_prev, 0]) #acelero segun env estos C frames...\n\nenv.render()", "_____no_output_____" ], [ "obser.shape", "_____no_output_____" ], [ "#play\nepisode = intro\ntotal_reward = 0 \nrewards = []\ntotal_rewards = []\ndone = False\nnp.set_printoptions(precision=4)\n\n#si plot no render grande\n\nwhile not done and episode<1003: #a los mil corta el env() automaticamente\n mod_act, _states = model.predict(obser, deterministic=True)\n action=act[mod_act] if discre else mod_act\n \n rgb, reward, done, info = env.step(action)\n #obs=np.average(rgb, weights=[0.299, 0.587, 0.114], axis=2) if G else rgb\n if G==1: \n obs=np.average(rgb, weights=[0.299, 0.587, 0.114], axis=2)\n elif G==2:\n obs=rgb[:,:,1]\n else:\n obs=rgb\n \n if C>1:\n obser=np.roll(obser,1,2)\n obser[:,:,0]=obs\n else:\n obser=obs\n \n total_reward += reward\n rewards.append(reward)\n total_rewards.append(total_reward)\n \n if plot:\n clear_output(wait=True)\n plt.subplots(1,2,figsize=(15,10))\n plt.subplot(1,2, 1)\n plt.imshow(obs,cmap='gray') #imagen mas nueva que ve la red para decidir, guarda C anteriores\n plt.subplot(1,2,2)\n plt.imshow(rgb) #el juego en estado natural\n plt.show()\n #print(f'\\r{model.action_probability(obser):.4f}')\n print(model.action_probability(obser))\n else:\n env.render()\n \n episode+=1\n #print(mod_act, act[mod_act])\n print(f'\\r{reward:.2f}, {action}, {total_reward:.2f}, {episode}', end=' ')\n\nprint('')\nprint('')\nprint('env() lo dio por finalizado: ', done)\nprint(f'\\r{info}, {total_reward:.2f}, {episode}')\n\nenv.close()", "-0.10, [0, 1, 0], 869.78, 1000 \n\nenv() lo dio por finalizado: True\n{'TimeLimit.truncated': True}, 869.78, 1000\n" ], [ "plt.plot(rewards[:episode-1])\nplt.plot(total_rewards[:episode])", "_____no_output_____" ], [ "plt.plot(rewards, marker='.')", "_____no_output_____" ], [ "env.close()", "_____no_output_____" ] ] ]
[ "code", "raw", "code", "raw", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ], [ "raw" ], [ "code" ], [ "raw" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb15327b564b58c0699d6d2a3e12874055e5dde2
158,190
ipynb
Jupyter Notebook
Assignment4/hw4_Perceptrons.ipynb
vedantc6/CS536_MachineLearning
564ef54bb15d39de709d682e7b1699dbe0fe0e31
[ "MIT" ]
1
2019-03-02T11:51:05.000Z
2019-03-02T11:51:05.000Z
Assignment4/hw4_Perceptrons.ipynb
vedantc6/CS536_MachineLearning
564ef54bb15d39de709d682e7b1699dbe0fe0e31
[ "MIT" ]
null
null
null
Assignment4/hw4_Perceptrons.ipynb
vedantc6/CS536_MachineLearning
564ef54bb15d39de709d682e7b1699dbe0fe0e31
[ "MIT" ]
1
2020-10-10T11:31:35.000Z
2020-10-10T11:31:35.000Z
170.463362
30,144
0.872211
[ [ [ "## CS536: Perceptrons\n#### Done by - Vedant Choudhary, vc389\nIn the usual way, we need data that we can fit and analyze using perceptrons. Consider generating data points (X, Y) in the following way:\n- For $i = 1,....,k-1$, let $X_i ~ N(0, 1)$ (i.e. each $X_i$ is an i.i.d. standard normal)\n- For $i = k$, generate $X_k$ in the following way: let $D ~ Exp(1)$, and for a parameter $\\epsilon > 0$ take\n\n$X_k = (\\epsilon + D)$ with probability 1/2\n\n$X_k = -(\\epsilon + D)$ with probability 1/2\n\nThe effect of this is that while $X_1,...X_{k-1}$ are i.i.d. standard normals, $X_k$ is distributed randomly with some gap (of size $2\\epsilon$ around $X_k = 0$. We can then classify each point according to the following:\n\n$Y = 1$ if $X_k$ > 0\n\n$Y = -1$ if $X_k$ < 0\n\nWe see that the class of each data point is determined entirely by the value of the $X_k$ feature", "_____no_output_____" ], [ "#### 1. Show that there is a perceptron that correctly classifies this data. Is this perceptron unique? What is the ‘best’ perceptron for this data set, theoretically?", "_____no_output_____" ], [ "**Solution:** The perceptron generated when the data is linearly separable is unique. Best perceptron for a data would be the perceptron that relies heaviliy on the last feature of the dataset, as target value is governed by that.", "_____no_output_____" ] ], [ [ "# Importing required libraries\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport pprint\nfrom tqdm import tqdm\n\n%matplotlib inline", "_____no_output_____" ], [ "# Creating X (feature) vectors for the data\ndef create_data(k, m, D, epsilon):\n X_k_minus_1 = np.random.normal(0, 1, (m,k-1))\n X_k = []\n for i in range(m):\n temp = np.random.choice(2, 1, p=[0.5,0.5])\n# print(temp)\n if temp == 1:\n X_k.append(epsilon + D)\n else:\n X_k.append(-(epsilon + D))\n X_k = np.asarray(X_k).reshape((1,m))\n# print(X_k_minus_1)\n# print(X_k)\n return np.concatenate((X_k_minus_1, X_k.T), axis=1)\n\n# Creating target column for the data\ndef create_y(X, m):\n y = []\n for i in range(m):\n if X[i][-1] > 0:\n y.append(1)\n else:\n y.append(-1)\n return y\n\n# Combining all the sub data points into a dataframe\ndef create_dataset(k, m, epsilon, D):\n X = np.asarray(create_data(k, m, epsilon, D))\n y = np.asarray(create_y(X, m)).reshape((m,1))\n# print(X.shape,y.shape)\n\n # Training data is an appended version of X and y arrays\n data = pd.DataFrame(np.append(X, y, axis=1), columns=[\"X\" + str(i) for i in range(1,k+1)]+['Y'])\n return data", "_____no_output_____" ], [ "# Global Variables - k = 20, m = 100, epsilon = 1\nk, m, epsilon = 20, 100, 1\nD = float(np.random.exponential(1, 1))\n\ntrain_data = create_dataset(k, m, epsilon, D)\ntrain_data.head()", "_____no_output_____" ] ], [ [ "#### 2. We want to consider the problem of learning perceptrons from data sets. Generate a set of data of size m = 100 with k = 20, $\\epsilon$ = 1\n##### - Implement the perceptron learning algorithm. This data is separable, so the algorithm will terminate. 
How does the output perceptron compare to your theoretical answer in the previous problem?", "_____no_output_____" ] ], [ [ "# Class for Perceptron\nclass Perceptron():\n def __init__(self):\n pass\n \n '''\n Calculates the sign of the predicted value\n Input: dot product (X.w + b)\n Return: Predicted sign of f_x\n '''\n def sign_function(self, data_vec):\n return np.array([1 if val >= 1 else -1 for val in data_vec])[:, np.newaxis]\n \n '''\n Perceptron learning algorithm according to the notes posted\n Input: dataset\n Return: final weights and biases, along with number of steps for convergence \n and upper bound of theoretical convergence\n '''\n def pla(self, data):\n X = np.asarray(data.iloc[:,:-1])\n y = np.asarray(data.iloc[:,-1:])\n num_samples, num_features = X.shape\n \n# Initialize weight and bias parameters\n self.w = np.zeros(shape=(num_features, 1))\n self.bias = 0\n \n count_till_solution = 0\n f_x = [0]*num_samples\n i = 0\n theoretical_termination = []\n while True:\n mismatch = 0\n for i in range(num_samples):\n# Calculate the mapping function f(x)\n f_x[i] = float(self.sign_function(np.dot(X[i].reshape((num_features, 1)).T, self.w) + self.bias))\n\n# Compute weights if f_x != y\n if float(f_x[i]) != float(y[i]):\n mismatch += 1\n self.w += np.dot(X[i].reshape((num_features, 1)), y[i].reshape((1,1)))\n self.bias += y[i]\n \n count_till_solution += 1\n min_margin = 99999\n for i in range(num_samples):\n margin = abs(np.dot(self.w.T, X[i].reshape(-1,1))/(np.linalg.norm(self.w)))\n if margin < min_margin:\n min_margin = margin\n \n theoretical_termination.append(int(1/(min_margin**2)))\n \n f_x = np.asarray(f_x).reshape((num_samples, 1))\n i += 1\n if (np.array_equal(y, f_x)) or (mismatch >= 0.3*num_samples and count_till_solution >= 5000):\n break\n \n return self.w, self.bias, count_till_solution, max(theoretical_termination)\n \n '''\n Predicts the target value based on a data vector\n Input - a single row of dataset or a single X vector\n Return - predicted value\n '''\n def predict(self, instance_data):\n instance_data = np.asarray(instance_data)\n prediction = self.sign_function(np.dot(self.w.T, instance_data.reshape((len(instance_data),1))) + self.bias)\n return prediction \n \n '''\n Predicts the target value and then calculates error based on the predictions\n Input - dataset, decision tree built\n Return - error\n '''\n def fit(self, data):\n error = 0\n for i in range(len(data)):\n prediction = self.predict(data.iloc[i][:-1])\n if prediction != data.iloc[i][-1]:\n print(\"Not equal\")\n error += 1\n return error/len(data) ", "_____no_output_____" ], [ "perceptron = Perceptron()\nfinal_w, final_b, num_steps, theoretical_steps = perceptron.pla(train_data)\nprint(\"Final weights:\\n\",final_w)\nprint(\"Final bias:\\n\", final_b)\nprint(\"Number of steps till convergence: \\n\", num_steps)\nprint(\"Theoretical number of steps till convergence can be found for linear separation: \", theoretical_steps)", "Final weights:\n [[-1.52874374]\n [-0.95692616]\n [ 0.2175046 ]\n [-0.24172105]\n [-0.69067186]\n [ 1.30426699]\n [-0.52795995]\n [ 1.17166464]\n [ 1.65988943]\n [-0.9546229 ]\n [ 1.92738242]\n [ 0.09576421]\n [ 1.60728812]\n [ 0.42904243]\n [-0.16280013]\n [ 1.85345704]\n [-0.60308505]\n [-2.03354627]\n [ 0.5129562 ]\n [ 6.71430226]]\nFinal bias:\n [3.]\nNumber of steps till convergence: \n 2\nTheoretical number of steps till convergence can be found for linear separation: 15\n" ], [ "error = perceptron.fit(train_data)\nerror", "_____no_output_____" ], [ 
"plt.plot(np.linspace(0, 20, 20), list(final_w))\nplt.title(\"Weight vector by feature\")\nplt.xlabel(\"Feature number\")\nplt.ylabel(\"Weights\")\nplt.show()", "_____no_output_____" ] ], [ [ "**Solution:** On implementing the perceptron learning algorithm on the dataset provided, we see that it is similar to our theoretical answer. The last feature has highest weight associated to it (as can be seen from the graph generated above). This is so because the data is created such that the target value depends solely on the last feature value.", "_____no_output_____" ], [ "#### 3. For any given data set, there may be multiple separators with multiple margins - but for our data set, we can effectively control the size of the margin with the parameter $\\epsilon$ - the bigger this value, the bigger the margin of our separator.\n#### – For m = 100, k = 20, generate a data set for a given value of $\\epsilon$ and run the learning algorithm to completion. Plot, as a function of $\\epsilon$ ∈ [0, 1], the average or typical number of steps the algorithm needs to terminate. Characterize the dependence.", "_____no_output_____" ] ], [ [ "def varied_margin():\n k, m = 20, 100\n epsilon = list(np.arange(0, 1.05, 0.02))\n avg_steps = []\n for i in tqdm(range(len(epsilon))):\n steps = []\n for j in range(100):\n train_data = create_dataset(k, m, epsilon[i], D)\n perceptron = Perceptron()\n final_w, final_b, num_steps, theoretical_steps = perceptron.pla(train_data)\n steps.append(num_steps)\n \n avg_steps.append(sum(steps)/len(steps))\n \n plt.plot(epsilon, avg_steps)\n plt.title(\"Number of steps w.r.t. margin\")\n plt.xlabel(\"Margin value\")\n plt.ylabel(\"#Steps\")\n plt.show()", "_____no_output_____" ], [ "varied_margin()", "100%|██████████| 53/53 [00:43<00:00, 1.25it/s]\n" ] ], [ [ "**Solution:** On plotting average number of steps needed for termination of a linearly separable data w.r.t. $\\epsilon$, we observe that bigger the margin, lesser the number of steps are needed for the perceptron to terminate. This dependence can be proved by the Perceptron Convergence Theorem - If data is linearly separable, perceptron algorithm will find a linear classifier that classifies all data correctly, whose convergence is inversely proportional to the square of margin.\n\nThis means as the margin increases, the convergence steps decrease.", "_____no_output_____" ], [ "#### 4. One of the nice properties of the perceptron learning algorithm (and perceptrons generally) is that learning the weight vector w and bias value b is typically independent of the ambient dimension. To see this, consider the following experiment:\n#### – Fixing m = 100, $\\epsilon$ = 1, consider generating a data set on k features and running the learning algorithm on it. Plot, as a function k (for k = 2, . . . , 40), the typical number of steps to learn a perceptron on a data set of this size. How does the number of steps vary with k? Repeat for m = 1000.", "_____no_output_____" ] ], [ [ "def varied_features(m):\n epsilon = 1\n D = float(np.random.exponential(1, 1))\n k = list(np.arange(2, 40, 1))\n steps = []\n for i in range(len(k)):\n train_data = create_dataset(k[i], m, epsilon, D)\n perceptron = Perceptron()\n final_w, final_b, num_steps, theoretical_steps = perceptron.pla(train_data)\n steps.append(num_steps)\n \n plt.plot(k, steps)\n plt.title(\"Number of steps w.r.t. 
features\")\n plt.xlabel(\"#Features\")\n plt.ylabel(\"#Steps\")\n plt.show()", "_____no_output_____" ], [ "varied_features(100)", "_____no_output_____" ], [ "varied_features(1000)", "_____no_output_____" ] ], [ [ "**Solution:** The number of steps needed for convergence of a linearly separable data through perceptrons is usually independent of number of features the data has. This is shown through the above experiment too. For this case, I see no change in number of steps, but some different runs have shown very random change in number of steps, that also by just 1 step more or less. We cannot establish a trend of convergence w.r.t. the number of features.", "_____no_output_____" ], [ "#### 5. As shown in class, the perceptron learning algorithm always terminates in finite time - if there is a separator. Consider generating non-separable data in the following way: generate each $X_1, . . . , X_k$ as i.i.d. standard normals N(0, 1). Define Y by\n\n$$Y = 1 if \\sum_{i=1}^k{X_i^2} \\ge k $$\n$$Y = -1 else$$", "_____no_output_____" ] ], [ [ "def create_non_separable_data(k, m):\n X = np.random.normal(0, 1, (m,k))\n y = []\n for i in range(m):\n total = 0\n for j in range(k):\n total += X[i][j]**2\n \n if total >= k:\n y.append(1)\n else:\n y.append(-1)\n \n return X, y\n\ndef create_non_separable_dataset(k, m):\n X, y = create_non_separable_data(k, m)\n X = np.asarray(X)\n y = np.asarray(y).reshape((m,1))\n # Training data is an appended version of X and y arrays\n data = pd.DataFrame(np.append(X, y, axis=1), columns=[\"X\" + str(i) for i in range(1,k+1)]+['Y'])\n return data", "_____no_output_____" ], [ "k, m = 2, 100\ntrain_ns_data = create_non_separable_dataset(k, m)\ntrain_ns_data.head()", "_____no_output_____" ], [ "perceptron2 = Perceptron()\nfinal_w2, final_b2, num_steps2, theoretical_steps = perceptron2.pla(train_ns_data)", "_____no_output_____" ], [ "plt.scatter(X1, X2, c=y2)\nplt.title(\"Dataset\")\nplt.xlabel(\"First feature\")\nplt.ylabel(\"Second feature\")\nplt.show()", "_____no_output_____" ] ], [ [ "The data represented above is the data generated from the new rules of creating a non separable data. As can be seen, this data cannot be linearly separated through Perceptrons. A kernel method has to be applied to this data to find a separable hyper-plane. ", "_____no_output_____" ] ], [ [ "def plot_hyperplane(x1, x2, y, w, b):\n slope = -w[0]/w[1]\n intercept = -b/w[1]\n x_hyperplane = np.linspace(-3,3,20)\n y_hyperplane = slope*x_hyperplane + intercept\n plt.scatter(x1, x2, c=y)\n plt.plot(x_hyperplane, y_hyperplane, 'b-')\n plt.title(\"Dataset with fitted hyperplane\")\n plt.xlabel(\"First feature\")\n plt.ylabel(\"Second feature\")\n plt.show()", "_____no_output_____" ], [ "X2_1 = train_ns_data.iloc[:,:-2]\nX2_2 = train_ns_data.iloc[:,1:-1]\ny2 = train_ns_data.iloc[:,-1:]\nplot_hyperplane(X2_1, X2_2, y2, final_w2, final_b2)", "_____no_output_____" ] ], [ [ "**Solution:** For a linearly non-separable data, perceptron is not a good algorithm to use, because it will never converge. Theoretically, it is possible to find an upper bound on number of steps required to converge (if the data is linearly separable). But, it cannot be put into practice easily, as to compute that, we first need to find the weight vector. 
\n\nAnother thing to note is that, even if there is a convergence, the number of steps needed might be too large, which might bring the problem of computation power.\n\nFor this assignment, I have established a heurisitc that if the mismatch % is approximately 30% of the total number of samples and the iterations have been more than 10000, then that means that possibly the data is not separable linearly. My reasoning for this is very straight forward, if 30% of data is still mismatched, it is likely that the mismatch will continue to happen for long, which is not computationally feasible.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
cb1539bf262c656fef6867efaa482949d507555b
4,743
ipynb
Jupyter Notebook
pylondinium/notebooks/dashboard.ipynb
jtpio/quantstack-talks
092f93ddb9901cb614f428e13a0b1b1e3ffcc0ec
[ "BSD-3-Clause" ]
82
2017-04-14T20:18:55.000Z
2021-12-25T23:38:52.000Z
pylondinium/notebooks/dashboard.ipynb
jtpio/quantstack-talks
092f93ddb9901cb614f428e13a0b1b1e3ffcc0ec
[ "BSD-3-Clause" ]
3
2017-04-07T18:37:21.000Z
2020-07-11T09:37:53.000Z
pylondinium/notebooks/dashboard.ipynb
jtpio/quantstack-talks
092f93ddb9901cb614f428e13a0b1b1e3ffcc0ec
[ "BSD-3-Clause" ]
59
2017-04-07T11:16:56.000Z
2022-03-25T14:48:55.000Z
23.597015
239
0.539743
[ [ [ "This demo uses voila to render a notebook to a custom HTML page using gridstack.js for the layout of each output. In the cell metadata you can change the default cell with and height (in grid units between 1 and 12) by specifying.\n * `grid_row`\n * `grid_columns`", "_____no_output_____" ] ], [ [ "import numpy as np\nn = 200\n\nx = np.linspace(0.0, 10.0, n)\ny = np.cumsum(np.random.randn(n)*10).astype(int)\n", "_____no_output_____" ], [ "import ipywidgets as widgets", "_____no_output_____" ], [ "label_selected = widgets.Label(value=\"Selected: 0\")\nlabel_selected", "_____no_output_____" ], [ "import numpy as np\nfrom bqplot import pyplot as plt\nimport bqplot\n\nfig = plt.figure( title='Histogram')\nnp.random.seed(0)\nhist = plt.hist(y, bins=25)\nhist.scales['sample'].min = float(y.min())\nhist.scales['sample'].max = float(y.max())\ndisplay(fig)\nfig.layout.width = 'auto'\nfig.layout.height = 'auto'\nfig.layout.min_height = '300px' # so it shows nicely in the notebook\nfig.layout.flex = '1'", "_____no_output_____" ], [ "import numpy as np\nfrom bqplot import pyplot as plt\nimport bqplot\n\nfig = plt.figure( title='Line Chart')\nnp.random.seed(0)\nn = 200\np = plt.plot(x, y)\nfig", "_____no_output_____" ], [ "fig.layout.width = 'auto'\nfig.layout.height = 'auto'\nfig.layout.min_height = '300px' # so it shows nicely in the notebook\nfig.layout.flex = '1'", "_____no_output_____" ], [ "brushintsel = bqplot.interacts.BrushIntervalSelector(scale=p.scales['x'])", "_____no_output_____" ], [ "def update_range(*args):\n label_selected.value = \"Selected range {}\".format(brushintsel.selected)\n mask = (x > brushintsel.selected[0]) & (x < brushintsel.selected[1])\n hist.sample = y[mask]\n \nbrushintsel.observe(update_range, 'selected')\nfig.interaction = brushintsel", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb153acafe7729170390be9d87dac9ca87bb71f9
260,847
ipynb
Jupyter Notebook
project/MutliModalClassification.ipynb
vipul43/CS5007_DL
3661a872043446d530941571fafbbd3ca5ffba16
[ "MIT" ]
null
null
null
project/MutliModalClassification.ipynb
vipul43/CS5007_DL
3661a872043446d530941571fafbbd3ca5ffba16
[ "MIT" ]
null
null
null
project/MutliModalClassification.ipynb
vipul43/CS5007_DL
3661a872043446d530941571fafbbd3ca5ffba16
[ "MIT" ]
null
null
null
252.514037
131,870
0.867413
[ [ [ "# installs", "_____no_output_____" ], [ "# imports\nimport scipy.io\nimport cv2\nfrom google.colab.patches import cv2_imshow\nfrom skimage import io\nimport numpy as np\nimport pandas as pd\nfrom PIL import Image \nimport matplotlib.pylab as plt\nimport pickle\nfrom skimage import transform\nfrom sklearn.model_selection import train_test_split\nimport tensorflow as tf\nfrom nltk.tokenize import word_tokenize\nfrom nltk.stem import WordNetLemmatizer, LancasterStemmer\nimport spacy\nimport nltk\nimport keras.backend as K\nfrom keras.preprocessing.text import Tokenizer\nfrom keras.preprocessing.sequence import pad_sequences", "_____no_output_____" ], [ "nltk.download('punkt') #tokenizer\nnltk.download('wordnet') #lemmatization\n\nlemmatizer = WordNetLemmatizer() #lemmatizer\n\nsp = spacy.load('en_core_web_sm') #lexical importance find\n\nls = LancasterStemmer()", "[nltk_data] Downloading package punkt to /root/nltk_data...\n[nltk_data] Unzipping tokenizers/punkt.zip.\n[nltk_data] Downloading package wordnet to /root/nltk_data...\n[nltk_data] Unzipping corpora/wordnet.zip.\n" ], [ "# data loading\n\n!gdown --id 1mrjvJq6XNM8QAgajSgrVGpsj8Vrm3BEm #PASCAL50S\n\nmat = scipy.io.loadmat('/content/pascal50S.mat')\nprint(type(mat))", "Downloading...\nFrom: https://drive.google.com/uc?id=1mrjvJq6XNM8QAgajSgrVGpsj8Vrm3BEm\nTo: /content/pascal50S.mat\n\r 0% 0.00/1.12M [00:00<?, ?B/s]\r100% 1.12M/1.12M [00:00<00:00, 74.8MB/s]\n<class 'dict'>\n" ], [ "classes = ['person',\n 'bird', \n 'cat', \n 'cow', \n 'dog',\n 'horse',\n 'sheep', \n 'aeroplane', \n 'bicycle', \n 'boat', \n 'bus', \n 'car', \n 'motorbike', \n 'train', \n 'bottle', \n 'chair', \n 'dining table',\n 'potted plant',\n 'sofa', \n 'tv/monitor']\n\ndict_classes = {'person':0, 'man':0, 'human':0, 'people':0, 'men': 0, 'girl':0, 'boy':0, \n 'serviceman':0, 'homo':0, 'valet':0, 'child':0, 'family':0, 'group':0, \n 'woman':0, 'women':0, 'couple':0, 'her':0, 'his':0, 'rider':0, 'him':0, \n 'he':0, 'she':0, 'child':0, 'children':0, 'baby':0, 'guy':0, 'gentleman':0,\n 'lady':0, 'grandma':0, 'friend':0, 'mother':0, 'father':0, 'teen':0, 'kid':0,\n 'teenager':0, 'cowboy':0, 'daughter':0, 'dad':0, 'son':0,\n 'bird':1, 'penguin':1, 'parrot':1, 'sparrow':1, 'dame':1, 'boo':1, 'eagle':1, \n 'cockatoo':1, 'hummingbird':1, 'duck':1, 'goose':1, 'songbird':1, 'dove':1,\n 'chicken':1, 'rooster':1, 'chick':1, 'crow':1, 'hawk':1, 'canary':1, 'peacock':1,\n 'magpie':1, 'swan':1, 'kingfisher':1, 'kookaburra':1, 'owl':1, 'woodpecker':1,\n 'crane':1,\n 'cat':2, 'pussy':2, 'kitty':2, 'wildcat':2, 'kitten':2,\n 'cow':3, 'calf':3, 'bullock':3, 'bull':3, 'ox':3,\n 'dog':4, 'greyhound':4, 'pug':4, 'puppy':4, 'schnauzer':4, 'pooch':4, 'tyke':4,\n 'labrador':4, 'bulldog':4, 'chihuahua':4, 'pomeranian':4, 'bernard':4, 'bitch':4,\n 'horse':5, 'stallion':5, 'pony':5, 'mare':5,\n 'sheep':6, 'goat':6, 'ram':6, 'ewe':6, 'lamb':6,\n 'aeroplane':7, 'airplane':7, 'flight':7, 'plane':7, 'jet':7, 'aircraft':7, 'biplane':7,\n 'bicycle':8, 'cycle':8, 'bike':8, \n 'boat':9, 'ship':9, 'cruise':9, 'canoe':9, 'kayak':9, 'barge':9,\n 'bus':10, 'van': 10,\n 'car':11, 'corvette':11, 'truck':11, 'supercar':11, 'coupe':11, 'sedan':11, 'roadster':11,\n 'hatchback':11, 'minivan':11,\n 'motorbike':12, 'motorcycle':12,\n 'train':13, 'locomotive':13, 'freight':13,\n 'bottle':14, 'flask':14,\n 'chair':15, 'armchair':15, 'rocker':15, 'recliner':15,\n 'dining':16, 'table':16,\n 'plant':17, 'sapling':17, 'flowerpot':17, 'potted':17,\n 'sofa':18, 'couch':18, 'lounge':18,\n 'tv':19, 'monitor':19, 
'television':19, 'desktop':19, 'computer':19}\n\nrever_dict_classes = {\n 0: 'person',\n 1: 'bird',\n 2: 'cat',\n 3: 'cow',\n 4: 'dog',\n 5: 'horse',\n 6: 'sheep',\n 7: 'aeroplane',\n 8: 'bicycle',\n 9: 'boat',\n 10: 'bus',\n 11: 'car',\n 12: 'motorbike',\n 13: 'train',\n 14: 'bottle',\n 15: 'chair',\n 16: 'dining table',\n 17: 'potted plant',\n 18: 'sofa',\n 19: 'tv/monitor'}", "_____no_output_____" ], [ "count = {'0':0, #person\n '1':0, #bird\n '2':0, #cat\n '3':0, #cow\n '4':0, #dog\n '5':0, #horse\n '6':0, #sheep\n '7':0, #aeroplane\n '8':0, #bicycle\n '9':0, #boat\n '10':0, #bus\n '11':0, #car\n '12':0, #motorbike\n '13':0, #train\n '14':0, #bottle\n '15':0, #chair\n '16':0, #dining\n '17':0, #potted plant\n '18':0, #sofa\n '19':0} #tv/monitor", "_____no_output_____" ], [ "# observing data\ndata = []\nidx=0\nfor sample in mat[\"train_sent_final\"][0]:\n # image = io.imread(i[0][0])\n # cv2_imshow(image)\n link = [sample[0][0]] #image link\n cls = set()\n for k in sample[1]:\n for sent in k:\n # if idx==10:\n # break\n # idx+=1\n\n for word in sent[0].split():\n pre_word = lemmatizer.lemmatize(ls.stem(word.lower()))\n if(pre_word in dict_classes.keys()):\n cls.add(dict_classes[pre_word])\n for cl in cls:\n count[str(cl)]+=1\n data.append([link, list(cls)])", "_____no_output_____" ], [ "file = open(\"data.pkl\", \"wb\")\npickle.dump(data, file)\nfile.close()", "_____no_output_____" ], [ "# preprocessing the dataset\n'''\ndata -> url -> image -> array -> resized array\nTrainX = array of images resized to (224x224x3)\nTrainY = array of labels with size (20x1) in ones-zeros vector like [1, 1, 0, ....]\n'''\n# TrainX\nnew_shape = (224, 224, 3)\nTrainX1 = []\n\nfor point in data:\n photo = io.imread(point[0][0])\n photo = transform.resize(image=photo, output_shape=new_shape)\n TrainX1.append(photo)\n\nTrainX1 = np.array(TrainX1)", "_____no_output_____" ], [ "file = open(\"TrainX1.pkl\", \"wb\")\npickle.dump(TrainX1, file)\nfile.close()", "_____no_output_____" ] ], [ [ "**Loading the TrainX1 Pickle file**", "_____no_output_____" ] ], [ [ "pickle_in = open(\"TrainX1.pkl\",\"rb\")\nTrainX1 = pickle.load(pickle_in)", "_____no_output_____" ], [ "# TrainY\nTrainY = []\nfor points in data:\n full_label = np.zeros(shape=(20, ))\n for label in points[1]:\n full_label[label] = 1\n TrainY.append(full_label)\nTrainY = np.array(TrainY)", "_____no_output_____" ] ], [ [ "**Models**", "_____no_output_____" ] ], [ [ "# model making(Image to Vector)\n# input layer\ninput1 = tf.keras.Input(shape=(224, 224, 3), name='input1')\n\n# Transfer Learning with VGG16 model with weights as imagenet\nvgg16 = tf.keras.applications.VGG16(include_top=False, weights=\"imagenet\", classes=20)\nvgg16.trainable = False\nx = vgg16(input1)\n\n# Dense Layers\nx = tf.keras.layers.Flatten(name='flatten')(x)\nx = tf.keras.layers.BatchNormalization(name='norm1')(x)\nx = tf.keras.layers.Dense(192, activation='relu', name='dense1')(x)\nx = tf.keras.layers.BatchNormalization(name='norm2')(x)\nx = tf.keras.layers.Dense(84, activation='relu', name='dense2')(x)\nx = tf.keras.layers.BatchNormalization(name='norm3')(x)\nx = tf.keras.layers.Dense(64, activation='relu', name='dense3')(x)\nx = tf.keras.layers.BatchNormalization(name='norm4')(x)\n\n#Output layer\noutput = tf.keras.layers.Dense(500, activation=\"linear\", name='output')(x)\n\nmodel1 = tf.keras.models.Model(inputs=input1, outputs=output, name='model1')\n\nmodel1.summary()", "Downloading data from 
https://storage.googleapis.com/tensorflow/keras-applications/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5\n58892288/58889256 [==============================] - 0s 0us/step\nModel: \"model1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput1 (InputLayer) [(None, 224, 224, 3)] 0 \n_________________________________________________________________\nvgg16 (Functional) (None, None, None, 512) 14714688 \n_________________________________________________________________\nflatten (Flatten) (None, 25088) 0 \n_________________________________________________________________\nnorm1 (BatchNormalization) (None, 25088) 100352 \n_________________________________________________________________\ndense1 (Dense) (None, 192) 4817088 \n_________________________________________________________________\nnorm2 (BatchNormalization) (None, 192) 768 \n_________________________________________________________________\ndense2 (Dense) (None, 84) 16212 \n_________________________________________________________________\nnorm3 (BatchNormalization) (None, 84) 336 \n_________________________________________________________________\ndense3 (Dense) (None, 64) 5440 \n_________________________________________________________________\nnorm4 (BatchNormalization) (None, 64) 256 \n_________________________________________________________________\noutput (Dense) (None, 500) 32500 \n=================================================================\nTotal params: 19,687,640\nTrainable params: 4,922,096\nNon-trainable params: 14,765,544\n_________________________________________________________________\n" ] ], [ [ "**Text Model(Model2)**", "_____no_output_____" ], [ "#**Data Preprocessing**", "_____no_output_____" ] ], [ [ "# observing data for text Model\ndata2 = []\nstringX2 = []\nidx=0\nfor sample in mat[\"train_sent_final\"][0]:\n # image = io.imread(i[0][0])\n # cv2_imshow(image)\n link = [sample[0][0]] #image link\n cls = set()\n for k in sample[1]:\n for sent in k:\n # if idx==10:\n # break\n # idx+=1\n\n for word in sent[0].split():\n pre_word = lemmatizer.lemmatize(ls.stem(word.lower()))\n if(pre_word in dict_classes.keys()):\n cls.add(dict_classes[pre_word])\n for cl in cls:\n count[str(cl)]+=1\n\n for k in sample[1]:\n for sent in k:\n stringX2.append(sent[0])\n temp = np.zeros(shape=20)\n for ele in list(cls):\n temp[ele] = 1\n data2.append([sent, temp])", "_____no_output_____" ], [ "#Preparation of TrainX2 and Trainy2 for Text Model(Model2)\ntk = Tokenizer(filters='!\"#$%&()*+,-./:;<=>?@[\\]^`{|}~\\t\\n')\ntk.fit_on_texts(stringX2)\nX_seq = tk.texts_to_sequences(stringX2)\nX_pad = pad_sequences(X_seq, maxlen=100, padding='post')\nX_pad.shape\nTrainX2 = X_pad\nTrainy2 = np.zeros(shape=(len(data2), 20))\ni = 0\nfor d in data2:\n Trainy2[i] = np.array(d[1])\n i = i + 1", "_____no_output_____" ], [ "INP_LEN1 = 100 #(Text to Vector)\ninput2 = tf.keras.Input(shape=(INP_LEN1,), name='input')\nembed = tf.keras.layers.Embedding((len(tk.word_counts)+1),INP_LEN1)(input2)\nrnn1 = tf.keras.layers.GRU(192, return_sequences=True, dropout=0.3)(embed)\npool = tf.keras.layers.MaxPool1D()(rnn1)\nrnn2 = tf.keras.layers.GRU(128, dropout=0.2)(pool)\ndense1 = tf.keras.layers.Dense(84, activation='relu')(rnn2)\ndrop1 = tf.keras.layers.Dropout(0.2)(dense1)\nnorm1 = tf.keras.layers.BatchNormalization()(drop1)\noutput = tf.keras.layers.Dense(500, activation='linear')(norm1)\n\nmodel2 = 
tf.keras.models.Model(inputs=input2, outputs=output, name='model2')\nmodel2.summary()\n", "Model: \"model2\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput (InputLayer) [(None, 100)] 0 \n_________________________________________________________________\nembedding (Embedding) (None, 100, 100) 758600 \n_________________________________________________________________\ngru (GRU) (None, 100, 192) 169344 \n_________________________________________________________________\nmax_pooling1d (MaxPooling1D) (None, 50, 192) 0 \n_________________________________________________________________\ngru_1 (GRU) (None, 128) 123648 \n_________________________________________________________________\ndense (Dense) (None, 84) 10836 \n_________________________________________________________________\ndropout (Dropout) (None, 84) 0 \n_________________________________________________________________\nbatch_normalization (BatchNo (None, 84) 336 \n_________________________________________________________________\ndense_1 (Dense) (None, 500) 42500 \n=================================================================\nTotal params: 1,105,264\nTrainable params: 1,105,096\nNon-trainable params: 168\n_________________________________________________________________\n" ] ], [ [ "**Concatenating the Image Model(Model1) and Text Model(Model2)**", "_____no_output_____" ] ], [ [ "concate = tf.keras.layers.Concatenate(axis=-1)([model1.output, model2.output])\nfinal_dense = tf.keras.layers.Dense(256, activation='relu')(concate)\nOutput = tf.keras.layers.Dense(20, activation='sigmoid')(final_dense)\nfinalModel = tf.keras.models.Model(inputs=[input1,input2],outputs=Output)\n\nfinalModel.summary()\ntf.keras.utils.plot_model(finalModel,to_file=\"finalModel.png\")", "Model: \"model\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput1 (InputLayer) [(None, 224, 224, 3) 0 \n__________________________________________________________________________________________________\nvgg16 (Functional) (None, None, None, 5 14714688 input1[0][0] \n__________________________________________________________________________________________________\nflatten (Flatten) (None, 25088) 0 vgg16[0][0] \n__________________________________________________________________________________________________\ninput (InputLayer) [(None, 100)] 0 \n__________________________________________________________________________________________________\nnorm1 (BatchNormalization) (None, 25088) 100352 flatten[0][0] \n__________________________________________________________________________________________________\nembedding (Embedding) (None, 100, 100) 758600 input[0][0] \n__________________________________________________________________________________________________\ndense1 (Dense) (None, 192) 4817088 norm1[0][0] \n__________________________________________________________________________________________________\ngru (GRU) (None, 100, 192) 169344 embedding[0][0] \n__________________________________________________________________________________________________\nnorm2 (BatchNormalization) (None, 192) 768 dense1[0][0] \n__________________________________________________________________________________________________\nmax_pooling1d (MaxPooling1D) 
(None, 50, 192) 0 gru[0][0] \n__________________________________________________________________________________________________\ndense2 (Dense) (None, 84) 16212 norm2[0][0] \n__________________________________________________________________________________________________\ngru_1 (GRU) (None, 128) 123648 max_pooling1d[0][0] \n__________________________________________________________________________________________________\nnorm3 (BatchNormalization) (None, 84) 336 dense2[0][0] \n__________________________________________________________________________________________________\ndense (Dense) (None, 84) 10836 gru_1[0][0] \n__________________________________________________________________________________________________\ndense3 (Dense) (None, 64) 5440 norm3[0][0] \n__________________________________________________________________________________________________\ndropout (Dropout) (None, 84) 0 dense[0][0] \n__________________________________________________________________________________________________\nnorm4 (BatchNormalization) (None, 64) 256 dense3[0][0] \n__________________________________________________________________________________________________\nbatch_normalization (BatchNorma (None, 84) 336 dropout[0][0] \n__________________________________________________________________________________________________\noutput (Dense) (None, 500) 32500 norm4[0][0] \n__________________________________________________________________________________________________\ndense_1 (Dense) (None, 500) 42500 batch_normalization[0][0] \n__________________________________________________________________________________________________\nconcatenate (Concatenate) (None, 1000) 0 output[0][0] \n dense_1[0][0] \n__________________________________________________________________________________________________\ndense_2 (Dense) (None, 256) 256256 concatenate[0][0] \n__________________________________________________________________________________________________\ndense_3 (Dense) (None, 20) 5140 dense_2[0][0] \n==================================================================================================\nTotal params: 21,054,300\nTrainable params: 6,288,588\nNon-trainable params: 14,765,712\n__________________________________________________________________________________________________\n" ] ], [ [ "**Train the MultiModal**", "_____no_output_____" ] ], [ [ "finalModel.compile(optimizer=tf.keras.optimizers.Adam(lr = 0.0001), loss='binary_crossentropy',metrics=[tf.keras.metrics.BinaryAccuracy()])", "_____no_output_____" ], [ "finalModel.fit([TrainX1, TrainX2], TrainY, validation_split=0.2, epochs=100)", "Epoch 1/100\n25/25 [==============================] - 48s 279ms/step - loss: 0.6829 - binary_accuracy: 0.5630 - val_loss: 0.6503 - val_binary_accuracy: 0.7042\nEpoch 2/100\n25/25 [==============================] - 4s 169ms/step - loss: 0.5469 - binary_accuracy: 0.8200 - val_loss: 0.5941 - val_binary_accuracy: 0.8825\nEpoch 3/100\n25/25 [==============================] - 4s 170ms/step - loss: 0.4160 - binary_accuracy: 0.9188 - val_loss: 0.4966 - val_binary_accuracy: 0.9190\nEpoch 4/100\n25/25 [==============================] - 4s 169ms/step - loss: 0.2982 - binary_accuracy: 0.9355 - val_loss: 0.3852 - val_binary_accuracy: 0.9285\nEpoch 5/100\n25/25 [==============================] - 4s 171ms/step - loss: 0.2202 - binary_accuracy: 0.9382 - val_loss: 0.3177 - val_binary_accuracy: 0.9287\nEpoch 6/100\n25/25 [==============================] - 4s 171ms/step - loss: 0.1771 - binary_accuracy: 0.9427 - val_loss: 
0.2791 - val_binary_accuracy: 0.9268\nEpoch 7/100\n25/25 [==============================] - 4s 171ms/step - loss: 0.1555 - binary_accuracy: 0.9499 - val_loss: 0.2540 - val_binary_accuracy: 0.9262\nEpoch 8/100\n25/25 [==============================] - 4s 173ms/step - loss: 0.1309 - binary_accuracy: 0.9574 - val_loss: 0.2389 - val_binary_accuracy: 0.9267\nEpoch 9/100\n25/25 [==============================] - 4s 172ms/step - loss: 0.1149 - binary_accuracy: 0.9641 - val_loss: 0.2281 - val_binary_accuracy: 0.9270\nEpoch 10/100\n25/25 [==============================] - 4s 173ms/step - loss: 0.0955 - binary_accuracy: 0.9716 - val_loss: 0.2211 - val_binary_accuracy: 0.9285\nEpoch 11/100\n25/25 [==============================] - 4s 174ms/step - loss: 0.0857 - binary_accuracy: 0.9768 - val_loss: 0.2186 - val_binary_accuracy: 0.9277\nEpoch 12/100\n25/25 [==============================] - 4s 173ms/step - loss: 0.0765 - binary_accuracy: 0.9814 - val_loss: 0.2169 - val_binary_accuracy: 0.9265\nEpoch 13/100\n25/25 [==============================] - 4s 173ms/step - loss: 0.0655 - binary_accuracy: 0.9835 - val_loss: 0.2141 - val_binary_accuracy: 0.9280\nEpoch 14/100\n25/25 [==============================] - 4s 174ms/step - loss: 0.0573 - binary_accuracy: 0.9879 - val_loss: 0.2159 - val_binary_accuracy: 0.9277\nEpoch 15/100\n25/25 [==============================] - 4s 173ms/step - loss: 0.0537 - binary_accuracy: 0.9882 - val_loss: 0.2159 - val_binary_accuracy: 0.9270\nEpoch 16/100\n25/25 [==============================] - 4s 175ms/step - loss: 0.0419 - binary_accuracy: 0.9931 - val_loss: 0.2181 - val_binary_accuracy: 0.9260\nEpoch 17/100\n25/25 [==============================] - 4s 174ms/step - loss: 0.0405 - binary_accuracy: 0.9927 - val_loss: 0.2199 - val_binary_accuracy: 0.9265\nEpoch 18/100\n25/25 [==============================] - 4s 174ms/step - loss: 0.0347 - binary_accuracy: 0.9946 - val_loss: 0.2255 - val_binary_accuracy: 0.9275\nEpoch 19/100\n25/25 [==============================] - 4s 176ms/step - loss: 0.0301 - binary_accuracy: 0.9954 - val_loss: 0.2241 - val_binary_accuracy: 0.9300\nEpoch 20/100\n25/25 [==============================] - 4s 175ms/step - loss: 0.0261 - binary_accuracy: 0.9963 - val_loss: 0.2278 - val_binary_accuracy: 0.9292\nEpoch 21/100\n25/25 [==============================] - 4s 175ms/step - loss: 0.0232 - binary_accuracy: 0.9969 - val_loss: 0.2344 - val_binary_accuracy: 0.9275\nEpoch 22/100\n25/25 [==============================] - 4s 176ms/step - loss: 0.0209 - binary_accuracy: 0.9971 - val_loss: 0.2317 - val_binary_accuracy: 0.9275\nEpoch 23/100\n25/25 [==============================] - 4s 178ms/step - loss: 0.0187 - binary_accuracy: 0.9975 - val_loss: 0.2388 - val_binary_accuracy: 0.9277\nEpoch 24/100\n25/25 [==============================] - 4s 178ms/step - loss: 0.0162 - binary_accuracy: 0.9991 - val_loss: 0.2442 - val_binary_accuracy: 0.9285\nEpoch 25/100\n25/25 [==============================] - 4s 178ms/step - loss: 0.0135 - binary_accuracy: 0.9988 - val_loss: 0.2455 - val_binary_accuracy: 0.9282\nEpoch 26/100\n25/25 [==============================] - 4s 178ms/step - loss: 0.0150 - binary_accuracy: 0.9985 - val_loss: 0.2524 - val_binary_accuracy: 0.9277\nEpoch 27/100\n25/25 [==============================] - 4s 179ms/step - loss: 0.0121 - binary_accuracy: 0.9991 - val_loss: 0.2573 - val_binary_accuracy: 0.9293\nEpoch 28/100\n25/25 [==============================] - 4s 179ms/step - loss: 0.0116 - binary_accuracy: 0.9988 - val_loss: 0.2559 - val_binary_accuracy: 
0.9295\nEpoch 29/100\n25/25 [==============================] - 4s 178ms/step - loss: 0.0109 - binary_accuracy: 0.9991 - val_loss: 0.2580 - val_binary_accuracy: 0.9305\nEpoch 30/100\n25/25 [==============================] - 4s 180ms/step - loss: 0.0099 - binary_accuracy: 0.9990 - val_loss: 0.2576 - val_binary_accuracy: 0.9315\nEpoch 31/100\n25/25 [==============================] - 4s 179ms/step - loss: 0.0088 - binary_accuracy: 0.9994 - val_loss: 0.2654 - val_binary_accuracy: 0.9312\nEpoch 32/100\n25/25 [==============================] - 4s 180ms/step - loss: 0.0073 - binary_accuracy: 0.9998 - val_loss: 0.2713 - val_binary_accuracy: 0.9320\nEpoch 33/100\n25/25 [==============================] - 4s 180ms/step - loss: 0.0080 - binary_accuracy: 0.9994 - val_loss: 0.2758 - val_binary_accuracy: 0.9305\nEpoch 34/100\n25/25 [==============================] - 4s 181ms/step - loss: 0.0071 - binary_accuracy: 0.9995 - val_loss: 0.2799 - val_binary_accuracy: 0.9315\nEpoch 35/100\n25/25 [==============================] - 4s 180ms/step - loss: 0.0067 - binary_accuracy: 0.9997 - val_loss: 0.2846 - val_binary_accuracy: 0.9307\nEpoch 36/100\n25/25 [==============================] - 4s 180ms/step - loss: 0.0048 - binary_accuracy: 0.9998 - val_loss: 0.2889 - val_binary_accuracy: 0.9302\nEpoch 37/100\n25/25 [==============================] - 4s 180ms/step - loss: 0.0056 - binary_accuracy: 0.9998 - val_loss: 0.2927 - val_binary_accuracy: 0.9297\nEpoch 38/100\n25/25 [==============================] - 4s 180ms/step - loss: 0.0054 - binary_accuracy: 0.9999 - val_loss: 0.2934 - val_binary_accuracy: 0.9292\nEpoch 39/100\n25/25 [==============================] - 4s 179ms/step - loss: 0.0054 - binary_accuracy: 0.9997 - val_loss: 0.2991 - val_binary_accuracy: 0.9310\nEpoch 40/100\n25/25 [==============================] - 4s 180ms/step - loss: 0.0040 - binary_accuracy: 1.0000 - val_loss: 0.3038 - val_binary_accuracy: 0.9312\nEpoch 41/100\n25/25 [==============================] - 4s 181ms/step - loss: 0.0040 - binary_accuracy: 0.9999 - val_loss: 0.3089 - val_binary_accuracy: 0.9287\nEpoch 42/100\n25/25 [==============================] - 4s 180ms/step - loss: 0.0046 - binary_accuracy: 0.9998 - val_loss: 0.3136 - val_binary_accuracy: 0.9297\nEpoch 43/100\n25/25 [==============================] - 4s 181ms/step - loss: 0.0037 - binary_accuracy: 1.0000 - val_loss: 0.3141 - val_binary_accuracy: 0.9290\nEpoch 44/100\n25/25 [==============================] - 5s 182ms/step - loss: 0.0036 - binary_accuracy: 0.9999 - val_loss: 0.3201 - val_binary_accuracy: 0.9280\nEpoch 45/100\n25/25 [==============================] - 5s 182ms/step - loss: 0.0035 - binary_accuracy: 0.9999 - val_loss: 0.3210 - val_binary_accuracy: 0.9275\nEpoch 46/100\n25/25 [==============================] - 5s 182ms/step - loss: 0.0034 - binary_accuracy: 0.9998 - val_loss: 0.3228 - val_binary_accuracy: 0.9300\nEpoch 47/100\n25/25 [==============================] - 5s 182ms/step - loss: 0.0038 - binary_accuracy: 0.9994 - val_loss: 0.3285 - val_binary_accuracy: 0.9313\nEpoch 48/100\n25/25 [==============================] - 5s 182ms/step - loss: 0.0033 - binary_accuracy: 0.9997 - val_loss: 0.3345 - val_binary_accuracy: 0.9307\nEpoch 49/100\n25/25 [==============================] - 5s 182ms/step - loss: 0.0026 - binary_accuracy: 0.9997 - val_loss: 0.3356 - val_binary_accuracy: 0.9315\nEpoch 50/100\n25/25 [==============================] - 5s 183ms/step - loss: 0.0032 - binary_accuracy: 0.9995 - val_loss: 0.3361 - val_binary_accuracy: 0.9310\nEpoch 51/100\n25/25 
[==============================] - 5s 183ms/step - loss: 0.0028 - binary_accuracy: 0.9999 - val_loss: 0.3483 - val_binary_accuracy: 0.9283\nEpoch 52/100\n25/25 [==============================] - 5s 183ms/step - loss: 0.0027 - binary_accuracy: 0.9998 - val_loss: 0.3497 - val_binary_accuracy: 0.9305\nEpoch 53/100\n25/25 [==============================] - 5s 185ms/step - loss: 0.0042 - binary_accuracy: 0.9991 - val_loss: 0.3546 - val_binary_accuracy: 0.9312\nEpoch 54/100\n25/25 [==============================] - 5s 184ms/step - loss: 0.0027 - binary_accuracy: 0.9999 - val_loss: 0.3553 - val_binary_accuracy: 0.9297\nEpoch 55/100\n25/25 [==============================] - 5s 183ms/step - loss: 0.0024 - binary_accuracy: 0.9997 - val_loss: 0.3625 - val_binary_accuracy: 0.9288\nEpoch 56/100\n25/25 [==============================] - 5s 183ms/step - loss: 0.0025 - binary_accuracy: 0.9999 - val_loss: 0.3648 - val_binary_accuracy: 0.9305\nEpoch 57/100\n25/25 [==============================] - 5s 183ms/step - loss: 0.0024 - binary_accuracy: 0.9998 - val_loss: 0.3661 - val_binary_accuracy: 0.9287\nEpoch 58/100\n25/25 [==============================] - 5s 185ms/step - loss: 0.0021 - binary_accuracy: 1.0000 - val_loss: 0.3727 - val_binary_accuracy: 0.9283\nEpoch 59/100\n25/25 [==============================] - 5s 183ms/step - loss: 0.0023 - binary_accuracy: 1.0000 - val_loss: 0.3732 - val_binary_accuracy: 0.9317\nEpoch 60/100\n25/25 [==============================] - 5s 183ms/step - loss: 0.0027 - binary_accuracy: 0.9998 - val_loss: 0.3854 - val_binary_accuracy: 0.9297\nEpoch 61/100\n25/25 [==============================] - 5s 183ms/step - loss: 0.0033 - binary_accuracy: 0.9997 - val_loss: 0.3954 - val_binary_accuracy: 0.9265\nEpoch 62/100\n25/25 [==============================] - 5s 185ms/step - loss: 0.0038 - binary_accuracy: 0.9994 - val_loss: 0.3952 - val_binary_accuracy: 0.9272\nEpoch 63/100\n25/25 [==============================] - 5s 185ms/step - loss: 0.0024 - binary_accuracy: 0.9998 - val_loss: 0.3963 - val_binary_accuracy: 0.9280\nEpoch 64/100\n25/25 [==============================] - 5s 185ms/step - loss: 0.0026 - binary_accuracy: 0.9997 - val_loss: 0.3910 - val_binary_accuracy: 0.9292\nEpoch 65/100\n25/25 [==============================] - 5s 184ms/step - loss: 0.0035 - binary_accuracy: 0.9993 - val_loss: 0.3959 - val_binary_accuracy: 0.9287\nEpoch 66/100\n25/25 [==============================] - 5s 184ms/step - loss: 0.0026 - binary_accuracy: 0.9998 - val_loss: 0.3973 - val_binary_accuracy: 0.9277\nEpoch 67/100\n25/25 [==============================] - 5s 184ms/step - loss: 0.0027 - binary_accuracy: 0.9995 - val_loss: 0.3988 - val_binary_accuracy: 0.9295\nEpoch 68/100\n25/25 [==============================] - 5s 183ms/step - loss: 0.0026 - binary_accuracy: 0.9996 - val_loss: 0.3918 - val_binary_accuracy: 0.9297\nEpoch 69/100\n25/25 [==============================] - 5s 185ms/step - loss: 0.0016 - binary_accuracy: 0.9999 - val_loss: 0.3923 - val_binary_accuracy: 0.9280\nEpoch 70/100\n25/25 [==============================] - 5s 185ms/step - loss: 0.0018 - binary_accuracy: 0.9998 - val_loss: 0.3904 - val_binary_accuracy: 0.9285\nEpoch 71/100\n25/25 [==============================] - 5s 185ms/step - loss: 0.0020 - binary_accuracy: 0.9996 - val_loss: 0.3781 - val_binary_accuracy: 0.9295\nEpoch 72/100\n25/25 [==============================] - 5s 185ms/step - loss: 0.0021 - binary_accuracy: 0.9999 - val_loss: 0.4097 - val_binary_accuracy: 0.9297\nEpoch 73/100\n25/25 [==============================] - 
5s 185ms/step - loss: 0.0031 - binary_accuracy: 0.9994 - val_loss: 0.4095 - val_binary_accuracy: 0.9277\nEpoch 74/100\n25/25 [==============================] - 5s 184ms/step - loss: 0.0015 - binary_accuracy: 0.9999 - val_loss: 0.4097 - val_binary_accuracy: 0.9287\nEpoch 75/100\n25/25 [==============================] - 5s 184ms/step - loss: 0.0022 - binary_accuracy: 0.9998 - val_loss: 0.4165 - val_binary_accuracy: 0.9305\nEpoch 76/100\n25/25 [==============================] - 5s 185ms/step - loss: 0.0014 - binary_accuracy: 0.9999 - val_loss: 0.4225 - val_binary_accuracy: 0.9300\nEpoch 77/100\n25/25 [==============================] - 5s 185ms/step - loss: 0.0022 - binary_accuracy: 0.9996 - val_loss: 0.4270 - val_binary_accuracy: 0.9308\nEpoch 78/100\n25/25 [==============================] - 5s 185ms/step - loss: 0.0016 - binary_accuracy: 0.9999 - val_loss: 0.4287 - val_binary_accuracy: 0.9302\nEpoch 79/100\n25/25 [==============================] - 5s 185ms/step - loss: 0.0021 - binary_accuracy: 0.9997 - val_loss: 0.4318 - val_binary_accuracy: 0.9310\nEpoch 80/100\n25/25 [==============================] - 5s 185ms/step - loss: 0.0012 - binary_accuracy: 0.9999 - val_loss: 0.4219 - val_binary_accuracy: 0.9300\nEpoch 81/100\n25/25 [==============================] - 5s 185ms/step - loss: 0.0013 - binary_accuracy: 1.0000 - val_loss: 0.4169 - val_binary_accuracy: 0.9295\nEpoch 82/100\n25/25 [==============================] - 5s 185ms/step - loss: 0.0012 - binary_accuracy: 0.9999 - val_loss: 0.4225 - val_binary_accuracy: 0.9290\nEpoch 83/100\n25/25 [==============================] - 5s 185ms/step - loss: 0.0012 - binary_accuracy: 0.9998 - val_loss: 0.4216 - val_binary_accuracy: 0.9302\nEpoch 84/100\n25/25 [==============================] - 5s 185ms/step - loss: 0.0012 - binary_accuracy: 0.9998 - val_loss: 0.4281 - val_binary_accuracy: 0.9302\nEpoch 85/100\n25/25 [==============================] - 5s 185ms/step - loss: 0.0011 - binary_accuracy: 1.0000 - val_loss: 0.4353 - val_binary_accuracy: 0.9302\nEpoch 86/100\n25/25 [==============================] - 5s 185ms/step - loss: 8.9591e-04 - binary_accuracy: 0.9999 - val_loss: 0.4373 - val_binary_accuracy: 0.9295\nEpoch 87/100\n25/25 [==============================] - 5s 185ms/step - loss: 0.0012 - binary_accuracy: 0.9997 - val_loss: 0.4305 - val_binary_accuracy: 0.9297\nEpoch 88/100\n25/25 [==============================] - 5s 185ms/step - loss: 0.0013 - binary_accuracy: 0.9999 - val_loss: 0.4291 - val_binary_accuracy: 0.9305\nEpoch 89/100\n25/25 [==============================] - 5s 186ms/step - loss: 0.0012 - binary_accuracy: 0.9999 - val_loss: 0.4356 - val_binary_accuracy: 0.9300\nEpoch 90/100\n25/25 [==============================] - 5s 185ms/step - loss: 0.0011 - binary_accuracy: 0.9999 - val_loss: 0.4358 - val_binary_accuracy: 0.9300\nEpoch 91/100\n25/25 [==============================] - 5s 186ms/step - loss: 0.0010 - binary_accuracy: 0.9998 - val_loss: 0.4347 - val_binary_accuracy: 0.9290\nEpoch 92/100\n25/25 [==============================] - 5s 186ms/step - loss: 0.0013 - binary_accuracy: 0.9997 - val_loss: 0.4354 - val_binary_accuracy: 0.9297\nEpoch 93/100\n25/25 [==============================] - 5s 186ms/step - loss: 0.0010 - binary_accuracy: 0.9998 - val_loss: 0.4378 - val_binary_accuracy: 0.9295\nEpoch 94/100\n25/25 [==============================] - 5s 186ms/step - loss: 0.0012 - binary_accuracy: 0.9997 - val_loss: 0.4574 - val_binary_accuracy: 0.9295\nEpoch 95/100\n25/25 [==============================] - 5s 186ms/step - loss: 0.0012 - 
binary_accuracy: 0.9999 - val_loss: 0.4271 - val_binary_accuracy: 0.9300\nEpoch 96/100\n25/25 [==============================] - 5s 186ms/step - loss: 0.0012 - binary_accuracy: 0.9998 - val_loss: 0.4394 - val_binary_accuracy: 0.9303\nEpoch 97/100\n25/25 [==============================] - 5s 186ms/step - loss: 7.3250e-04 - binary_accuracy: 1.0000 - val_loss: 0.4488 - val_binary_accuracy: 0.9310\nEpoch 98/100\n25/25 [==============================] - 5s 186ms/step - loss: 7.6331e-04 - binary_accuracy: 1.0000 - val_loss: 0.4467 - val_binary_accuracy: 0.9302\nEpoch 99/100\n25/25 [==============================] - 5s 186ms/step - loss: 8.8239e-04 - binary_accuracy: 0.9998 - val_loss: 0.4476 - val_binary_accuracy: 0.9300\nEpoch 100/100\n25/25 [==============================] - 5s 186ms/step - loss: 6.6914e-04 - binary_accuracy: 1.0000 - val_loss: 0.4505 - val_binary_accuracy: 0.9318\n" ] ], [ [ "**Predictions**", "_____no_output_____" ] ], [ [ "rnd = np.random.randint(0, len(TrainX1))\nsampleX1 = np.expand_dims(TrainX1[rnd], axis=0)\nsampleX2 = np.expand_dims(TrainX2[rnd], axis=0)\nlabel = TrainY[rnd]\npred = finalModel.predict([sampleX1, sampleX2])[0]\npred = (pred > 0.5)\npred = pred.astype(int)\nplt.imshow(TrainX1[rnd])\ntrue = []\napprox = []\nfor i in range(20):\n    if label[i] == 1:\n        true.append(rever_dict_classes[i])\n    if pred[i] == 1:\n        approx.append(rever_dict_classes[i])\nprint(\"True Classes in the image: \", true)\nprint(\"Predicted Classes in the image: \", approx)", "True Classes in the image: ['chair', 'sofa']\nPredicted Classes in the image: ['chair', 'sofa']\n" ] ], [ [ "**Save the Model**", "_____no_output_____" ] ], [ [ "tf.keras.utils.plot_model(finalModel, to_file=\"finalModel.png\")", "_____no_output_____" ] ],
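[ [ "`plot_model` above only writes an architecture diagram to disk. To persist the trained weights as well, a standard Keras save/reload round trip can be used; the `.h5` filename here is arbitrary:", "_____no_output_____" ] ], [ [ "finalModel.save(\"finalModel.h5\")\n# Later, e.g. in a fresh session:\n# restored = tf.keras.models.load_model(\"finalModel.h5\")\n# restored.summary()", "_____no_output_____" ] ] ]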
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb153bb143fd60089d49702abb3cbe3da1d559b5
1,041,993
ipynb
Jupyter Notebook
Fashion MNIST/Linear Model/Fashion_MNIST_oneill_complete.ipynb
aj96oneill/ML-Beginner-Projects
b787343f37e931664beefd015ce8b0960b8c233f
[ "MIT" ]
null
null
null
Fashion MNIST/Linear Model/Fashion_MNIST_oneill_complete.ipynb
aj96oneill/ML-Beginner-Projects
b787343f37e931664beefd015ce8b0960b8c233f
[ "MIT" ]
null
null
null
Fashion MNIST/Linear Model/Fashion_MNIST_oneill_complete.ipynb
aj96oneill/ML-Beginner-Projects
b787343f37e931664beefd015ce8b0960b8c233f
[ "MIT" ]
null
null
null
564.154304
158,712
0.935237
[ [ [ "Fashion-MNIST is a dataset of Zalando's article images—consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. It shares the same image size and structure of training and testing splits.", "_____no_output_____" ], [ "- ## Try to build a classifier for the Fashion-MNIST dataset that achieves over 85% accuracy on the test set. \n- ## Use only classifiers that are used in the Chapter 3 of the textbook.\n- ## Do the error analysis following the textbook.", "_____no_output_____" ] ], [ [ "# Check GPU\nimport tensorflow as tf\ntf.test.gpu_device_name()", "_____no_output_____" ], [ "# !mkdir -p my_drive\n# !google-drive-ocamlfuse my_drive\n# #!mkdir -p my_drive/Fashion", "_____no_output_____" ], [ "# !python3 /content/my_drive/Fashion/mnist_reader.py", "_____no_output_____" ], [ "import sys\nsys.path.append('/content/my_drive/Fashion')", "_____no_output_____" ] ], [ [ "----------Everything above this is for Google Coloab----------", "_____no_output_____" ] ], [ [ "import numpy as np\nimport os\n\n# to make this notebook's output stable across runs\nnp.random.seed(42)\n\n# To plot pretty figures\n%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\nplt.rcParams['axes.labelsize'] = 14\nplt.rcParams['xtick.labelsize'] = 12\nplt.rcParams['ytick.labelsize'] = 12", "_____no_output_____" ], [ "import mnist_reader\nX_train, y_train = mnist_reader.load_mnist('/content/my_drive/Fashion', kind='train')\nX_test, y_test = mnist_reader.load_mnist('/content/my_drive/Fashion', kind='t10k')", "_____no_output_____" ], [ "X_train.shape", "_____no_output_____" ], [ "y_train.shape", "_____no_output_____" ] ], [ [ "### Labels\nEach training and test example is assigned to one of the following labels:\n\nLabel\tDescription\n- 0\tT-shirt/top\n- 1\tTrouser\n- 2\tPullover\n- 3\tDress\n- 4\tCoat\n- 5\tSandal\n- 6\tShirt\n- 7\tSneaker\n- 8\tBag\n- 9\tAnkle boot", "_____no_output_____" ] ], [ [ "def plot_digit(data):\n image = data.reshape(28, 28)\n plt.imshow(image, cmap = matplotlib.cm.binary, interpolation=\"nearest\")\n plt.axis(\"off\")", "_____no_output_____" ], [ "%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\n\nsome_digit = X_train[36001]\nsome_digit_image = some_digit.reshape(28, 28)\nplt.imshow(some_digit_image, cmap = matplotlib.cm.binary, interpolation=\"nearest\")\nplt.axis(\"off\");", "_____no_output_____" ], [ "y_train[36001]", "_____no_output_____" ], [ "plot_digit(X_train[40000])", "_____no_output_____" ], [ "y_train[40000]", "_____no_output_____" ], [ "def plot_digits(instances, images_per_row=10, **options):\n size = 28\n images_per_row = min(len(instances), images_per_row)\n images = [instance.reshape(size,size) for instance in instances]\n n_rows = (len(instances) - 1) // images_per_row + 1\n row_images = []\n n_empty = n_rows * images_per_row - len(instances)\n images.append(np.zeros((size, size * n_empty)))\n for row in range(n_rows):\n rimages = images[row * images_per_row : (row + 1) * images_per_row]\n row_images.append(np.concatenate(rimages, axis=1))\n image = np.concatenate(row_images, axis=0)\n plt.imshow(image, cmap = matplotlib.cm.binary, **options)\n plt.axis(\"off\")", "_____no_output_____" ], [ "plt.figure(figsize=(9,9))\nX_0 = X_train[(y_train == 0)]\nexample_images = X_0[:100]\nplot_digits(example_images, images_per_row=10)", "_____no_output_____" ], [ "plt.figure(figsize=(9,9))\nX_6 = X_train[(y_train == 6)]\nexample_images = 
X_6[:100]\nplot_digits(example_images, images_per_row=10)", "_____no_output_____" ], [ "plt.figure(figsize=(9,9))\nX_1 = X_train[(y_train == 1)]\nexample_images = X_1[:100]\nplot_digits(example_images, images_per_row=10)", "_____no_output_____" ], [ "plt.figure(figsize=(9,9))\nX_2 = X_train[(y_train == 2)]\nexample_images = X_2[:100]\nplot_digits(example_images, images_per_row=10)", "_____no_output_____" ], [ "plt.figure(figsize=(9,9))\nX_3 = X_train[(y_train == 3)]\nexample_images = X_3[:100]\nplot_digits(example_images, images_per_row=10)", "_____no_output_____" ], [ "some_article = X_train[1014]\nplot_digit(some_article)", "_____no_output_____" ], [ "y_train[1014]", "_____no_output_____" ] ], [ [ "# Training a Binary classifier to identify the shirt", "_____no_output_____" ] ], [ [ "y_train_shirt = (y_train == 6)\ny_test_shirt = (y_test == 6)", "_____no_output_____" ], [ "from sklearn.linear_model import SGDClassifier\n\nsgd_clf = SGDClassifier(max_iter=1000, tol=1e-3, random_state=42)\nsgd_clf.fit(X_train, y_train_shirt)", "_____no_output_____" ] ], [ [ "# Performance measures", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import cross_val_score\n\ncross_val_score(sgd_clf, X_train, y_train_shirt, cv=3, scoring=\"accuracy\")", "_____no_output_____" ], [ "# from sklearn.model_selection import StratifiedKFold\n# from sklearn.base import clone\n\n# skfolds = StratifiedKFold(n_splits=3, random_state=42)\n\n# for train_index, test_index in skfolds.split(X_train, y_train_shirt):\n#     clone_clf = clone(sgd_clf)\n#     X_train_folds = X_train[train_index]\n#     y_train_folds = y_train_shirt[train_index]\n#     X_test_fold = X_train[test_index]\n#     y_test_fold = y_train_shirt[test_index]\n\n#     clone_clf.fit(X_train_folds, y_train_folds)\n#     y_pred = clone_clf.predict(X_test_fold)\n#     n_correct = sum(y_pred == y_test_fold)\n#     print(n_correct / len(y_pred))", "_____no_output_____" ], [ "from sklearn.model_selection import cross_val_predict\n\ny_train_pred = cross_val_predict(sgd_clf, X_train, y_train_shirt, cv=3)", "_____no_output_____" ], [ "from sklearn.metrics import confusion_matrix\n\nconfusion_matrix(y_train_shirt, y_train_pred)", "_____no_output_____" ], [ "y_train_perfect_predictions = y_train_shirt # pretend we reached perfection\nconfusion_matrix(y_train_shirt, y_train_perfect_predictions)", "_____no_output_____" ], [ "from sklearn.metrics import precision_score, recall_score\n\nprecision_score(y_train_shirt, y_train_pred)", "_____no_output_____" ], [ "recall_score(y_train_shirt, y_train_pred)", "_____no_output_____" ], [ "from sklearn.metrics import f1_score\n\nf1_score(y_train_shirt, y_train_pred)", "_____no_output_____" ], [ "y_scores = cross_val_predict(sgd_clf, X_train, y_train_shirt, cv=3, method=\"decision_function\")", "_____no_output_____" ], [ "from sklearn.metrics import precision_recall_curve\n\nprecisions, recalls, thresholds = precision_recall_curve(y_train_shirt, y_scores)", "_____no_output_____" ],
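[ "# A standard follow-up from the textbook: pick the lowest threshold that\n# yields at least 90% precision (the 0.90 target itself is arbitrary).\nthreshold_90_precision = thresholds[np.argmax(precisions >= 0.90)]\ny_train_pred_90 = (y_scores >= threshold_90_precision)\nprint(\"precision:\", precision_score(y_train_shirt, y_train_pred_90))\nprint(\"recall:\", recall_score(y_train_shirt, y_train_pred_90))", "_____no_output_____" ], [ "def plot_precision_recall_vs_threshold(precisions, recalls, thresholds):\n    plt.plot(thresholds, precisions[:-1], \"b--\", label=\"Precision\", linewidth=2)\n    plt.plot(thresholds, recalls[:-1], \"g-\", label=\"Recall\", linewidth=2)\n    plt.legend(loc=\"center right\", fontsize=16) # Not shown in the book\n    plt.xlabel(\"Threshold\", fontsize=16) # Not shown\n    plt.grid(True) # Not shown\n    plt.axis([-50000, 50000, 0, 1]) # Not shown\n\nplt.figure(figsize=(8, 4)) # Not shown\nplot_precision_recall_vs_threshold(precisions, recalls, 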
thresholds)\n\n#save_fig(\"precision_recall_vs_threshold_plot\") # Not shown\nplt.show()", "_____no_output_____" ], [ "def plot_precision_vs_recall(precisions, recalls):\n plt.plot(recalls, precisions, \"b-\", linewidth=2)\n plt.xlabel(\"Recall\", fontsize=16)\n plt.ylabel(\"Precision\", fontsize=16)\n plt.axis([0, 1, 0, 1])\n plt.grid(True)\n\nplt.figure(figsize=(8, 6))\nplot_precision_vs_recall(precisions, recalls)\n\n#save_fig(\"precision_vs_recall_plot\")\nplt.show()", "_____no_output_____" ] ], [ [ "Sneaker Binary Classifier for comparison", "_____no_output_____" ] ], [ [ "y_train_sneaker = (y_train == 7)\ny_scores_sneaker = cross_val_predict(sgd_clf, X_train, y_train_sneaker, cv=3, method=\"decision_function\")\nprecisions, recalls, thresholds = precision_recall_curve(y_train_sneaker, y_scores_sneaker)", "_____no_output_____" ], [ "plt.figure(figsize=(8, 4)) # Not shown\nplot_precision_recall_vs_threshold(precisions, recalls, thresholds)\n#save_fig(\"precision_recall_vs_threshold_plot\") # Not shown\nplt.show()", "_____no_output_____" ], [ "plt.figure(figsize=(8, 6))\nplot_precision_vs_recall(precisions, recalls)\n\nplt.show()", "_____no_output_____" ] ], [ [ "# ROC Curves", "_____no_output_____" ] ], [ [ "from sklearn.metrics import roc_curve\n\nfpr, tpr, thresholds = roc_curve(y_train_shirt, y_scores)", "_____no_output_____" ], [ "def plot_roc_curve(fpr, tpr, label=None):\n plt.plot(fpr, tpr, linewidth=2, label=label)\n plt.plot([0, 1], [0, 1], 'k--') # dashed diagonal\n plt.axis([0, 1, 0, 1]) # Not shown in the book\n plt.xlabel('False Positive Rate (Fall-Out)', fontsize=16) # Not shown\n plt.ylabel('True Positive Rate (Recall)', fontsize=16) # Not shown\n plt.grid(True) # Not shown\n\nplt.figure(figsize=(8, 6)) # Not shown\nplot_roc_curve(fpr, tpr)\n\n#save_fig(\"roc_curve_plot\") # Not shown\nplt.show()", "_____no_output_____" ], [ "from sklearn.metrics import roc_auc_score\n\nroc_auc_score(y_train_shirt, y_scores)", "_____no_output_____" ], [ "from sklearn.ensemble import RandomForestClassifier\nforest_clf = RandomForestClassifier(n_estimators=100, random_state=42)\ny_probas_forest = cross_val_predict(forest_clf, X_train, y_train_shirt, cv=3, method=\"predict_proba\")", "_____no_output_____" ], [ "y_scores_forest = y_probas_forest[:, 1] # score = proba of positive class\nfpr_forest, tpr_forest, thresholds_forest = roc_curve(y_train_shirt,y_scores_forest)", "_____no_output_____" ], [ "plt.figure(figsize=(8, 6))\nplt.plot(fpr, tpr, \"b:\", linewidth=2, label=\"SGD\")\nplot_roc_curve(fpr_forest, tpr_forest, \"Random Forest\")\n\nplt.grid(True)\nplt.legend(loc=\"lower right\", fontsize=16)\n#save_fig(\"roc_curve_comparison_plot\")\nplt.show()", "_____no_output_____" ], [ "roc_auc_score(y_train_shirt, y_scores_forest)", "_____no_output_____" ], [ "y_train_pred_forest = cross_val_predict(forest_clf, X_train, y_train_shirt, cv=3)\nprecision_score(y_train_shirt, y_train_pred_forest)", "_____no_output_____" ], [ "recall_score(y_train_shirt, y_train_pred_forest)", "_____no_output_____" ] ], [ [ "# Multiclass classification", "_____no_output_____" ] ], [ [ "#from sklearn.svm import SVC", "_____no_output_____" ], [ "#from sklearn.multiclass import OneVsRestClassifier\n#ovr_clf = OneVsRestClassifier(SVC(gamma=\"auto\", random_state=42))\n#ovr_clf.fit(X_train, y_train)", "_____no_output_____" ], [ "forest_clf = RandomForestClassifier(n_estimators=100, random_state=42)\nforest_clf.fit(X_train, y_train)", "_____no_output_____" ], [ "from sklearn.preprocessing import StandardScaler\nscaler = 
StandardScaler()\nX_train_scaled = scaler.fit_transform(X_train.astype(np.float64))", "_____no_output_____" ], [ "y_train_pred_forest = cross_val_predict(forest_clf, X_train_scaled, y_train, cv=3)", "_____no_output_____" ], [ "from sklearn.metrics import accuracy_score\naccuracy_score(y_train, y_train_pred_forest)", "_____no_output_____" ] ], [ [ "88% Accuracy", "_____no_output_____" ] ], [ [ "#cross_val_score(forest_clf, X_train_scaled, y_train, cv=3, scoring=\"accuracy\")", "_____no_output_____" ] ], [ [ "# Error Analysis", "_____no_output_____" ] ], [ [ "conf_mx = confusion_matrix(y_train, y_train_pred_forest)\nconf_mx", "_____no_output_____" ], [ "plt.matshow(conf_mx, cmap=plt.cm.gray)\nplt.show()", "_____no_output_____" ], [ "row_sums = conf_mx.sum(axis=1, keepdims=True)\nnorm_conf_mx = conf_mx / row_sums", "_____no_output_____" ], [ "np.fill_diagonal(norm_conf_mx, 0)\nplt.matshow(norm_conf_mx, cmap=plt.cm.gray)\nplt.show()", "_____no_output_____" ] ], [ [ "Class 6 (shirt) is confused with class 0 (T-shirt/top) the most", "_____no_output_____" ] ], [ [ "cl_a, cl_b = 6, 0\nX_aa = X_train[(y_train == cl_a) & (y_train_pred_forest == cl_a)]\nX_ab = X_train[(y_train == cl_a) & (y_train_pred_forest == cl_b)]\nX_ba = X_train[(y_train == cl_b) & (y_train_pred_forest == cl_a)]\nX_bb = X_train[(y_train == cl_b) & (y_train_pred_forest == cl_b)]\n\nplt.figure(figsize=(8,8))\nplt.subplot(221); plot_digits(X_aa[:25], images_per_row=5)\nplt.subplot(222); plot_digits(X_ab[:25], images_per_row=5)\nplt.subplot(223); plot_digits(X_ba[:25], images_per_row=5)\nplt.subplot(224); plot_digits(X_bb[:25], images_per_row=5)\nplt.show()", "_____no_output_____" ] ], [ [ "The loss in accuracy mostly comes from shirts being confused with T-shirts/tops. The confusion comes down to sleeve length, which is hard to tell apart at this resolution.", "_____no_output_____" ], [ "# Testing on the test set", "_____no_output_____" ] ], [ [ "y_test_pred = forest_clf.predict(X_test)", "_____no_output_____" ], [ "accuracy_score(y_test, y_test_pred)", "_____no_output_____" ] ], [ [ "87% Accuracy", "_____no_output_____" ] ],
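[ [ "As a final check, per-class precision and recall make the error analysis concrete; given the confusion matrix above, the shirt class should stand out as the weakest. The label names are taken from the table at the top of the notebook:", "_____no_output_____" ] ], [ [ "from sklearn.metrics import classification_report\n\nclass_names = [\"T-shirt/top\", \"Trouser\", \"Pullover\", \"Dress\", \"Coat\",\n               \"Sandal\", \"Shirt\", \"Sneaker\", \"Bag\", \"Ankle boot\"]\nprint(classification_report(y_test, y_test_pred, target_names=class_names))", "_____no_output_____" ] ] ]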
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ] ]
cb154e10e293cb0d2bd6ec15d05331603366ee89
56,041
ipynb
Jupyter Notebook
examples/notebooks/Companion.ipynb
FrancescoPinto/edward2
94c5ddbcbd08f9c8643dc8fb52672acb731eda6e
[ "Apache-2.0" ]
null
null
null
examples/notebooks/Companion.ipynb
FrancescoPinto/edward2
94c5ddbcbd08f9c8643dc8fb52672acb731eda6e
[ "Apache-2.0" ]
null
null
null
examples/notebooks/Companion.ipynb
FrancescoPinto/edward2
94c5ddbcbd08f9c8643dc8fb52672acb731eda6e
[ "Apache-2.0" ]
null
null
null
38.123129
1,440
0.51018
[ [ [ "##### Copyright 2018 Google LLC.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");", "_____no_output_____" ] ], [ [ "#@title Default title text\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "_____no_output_____" ] ], [ [ "# Simple, Distributed, and Accelerated Probabilistic Programming\n\nThis notebook is a companion webpage for the NIPS 2018 paper, [\"Simple, Distributed, and Accelerated Probabilistic Programming\"](https://arxiv.org/abs/1811.02091) (Tran et al., 2018). See the [README.md](https://github.com/google/edward2) for details on how to interact with data, models, probabilistic inference, and more. It assumes the following dependencies:", "_____no_output_____" ] ], [ [ "!pip install scipy==1.0.0\n!pip install tensor2tensor==1.9.0\n!pip install tensorflow==1.12.0rc2 # alternatively, tensorflow-gpu==1.12.0rc2", "_____no_output_____" ], [ "import numpy as np\nimport tensorflow as tf\n\nfrom contextlib import contextmanager\nfrom scipy import stats\nfrom tensor2tensor.layers import common_attention\nfrom tensor2tensor.layers import common_image_attention as cia\nfrom tensor2tensor.models import image_transformer as imgtransformer\nfrom tensor2tensor.models import transformer\ntfb = tf.contrib.distributions.bijectors\ntfe = tf.contrib.eager", "_____no_output_____" ] ], [ [ "This notebook also requires importing files in this Github repository:", "_____no_output_____" ] ], [ [ "import edward2 as ed\nimport no_u_turn_sampler # local file import", "_____no_output_____" ] ], [ [ "Certain snippets require eager execution. This is run with the command below.", "_____no_output_____" ] ], [ [ "tf.enable_eager_execution()", "_____no_output_____" ] ], [ [ "## Section 2. Random Variables Are All You Need\n\n__Figure 1__. Beta-Bernoulli program. In eager mode, `model()` generates a binary vector of $50$ elements. In graph mode, `model()` returns an op to be evaluated in a TensorFlow session.", "_____no_output_____" ] ], [ [ "def model():\n p = ed.Beta(1., 1., name=\"p\")\n x = ed.Bernoulli(probs=p, \n sample_shape=50, \n name=\"x\")\n return x", "_____no_output_____" ], [ "x = model()\nprint(x)", "RandomVariable(\"\n[0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0\n 0 0 0 0 0 0 0 0 0 0 0 0 0]\", shape=(50,), dtype=int32, device=/job:localhost/replica:0/task:0/device:CPU:0)\n" ] ], [ [ "__Figure 2__. Variational program (Ranganath et al., 2016), available in eager mode. Python control flow is applicable to generative processes: given a coin flip, the program generates from one of two neural nets. Their outputs can have differing shape (and structure).", "_____no_output_____" ] ], [ [ "def neural_net_negative(noise, inputs):\n net = noise + inputs\n net = tf.layers.dense(net, 512, activation=tf.nn.relu)\n net = tf.layers.dense(net, 64, activation=None)\n return net\n\ndef neural_net_positive(noise, inputs):\n del noise, inputs # unused\n return \"Hello. 
I'm a different output type.\"\n\ndef variational(x):\n    eps = ed.Normal(0., 1., sample_shape=2)\n    if eps[0] > 0:\n        return neural_net_positive(eps[1], x)\n    else:\n        return neural_net_negative(eps[1], x)", "_____no_output_____" ], [ "if not tf.executing_eagerly():\n    raise ValueError(\"This code snippet requires eager execution.\")\n\nx = tf.random_normal([4, 64, 64, 3])  # batch of, e.g., 64x64x3 images\nz = variational(x)\nif isinstance(z, tf.Tensor):\n    print(type(z), z.shape)  # to avoid printing a huge Tensor\nelse:\n    print(z)", "(<type 'EagerTensor'>, TensorShape([Dimension(4), Dimension(64), Dimension(64), Dimension(64)]))\n" ] ], [ [ "__Figure 3.__ Distributed autoregressive flows. The default length is 8, each with 4 independent flows (Papamakarios et al., 2017). Each flow transforms inputs via layers respecting autoregressive ordering. Flows are partitioned across a virtual topology of 4x4 cores (rectangles); each core computes 2 flows and is locally connected; a final core aggregates. The virtual topology aligns with the physical TPU topology: for 4x4 TPUs, it is exact; for 16x16 TPUs, it is duplicated for data parallelism.", "_____no_output_____" ] ], [ [ "class DistributedAutoregressiveFlow(tfb.Bijector):\n    # `masked_network` and `SplitAutoregressiveFlow` are assumed to be defined\n    # elsewhere; see the sketch below for the idea behind the split flows.\n    def __init__(self, flow_size=[4]*8):\n        self.flows = []\n        for num_splits in flow_size:\n            flow = SplitAutoregressiveFlow(masked_network, num_splits)\n            self.flows.append(flow)\n        self.flows.append(SplitAutoregressiveFlow(masked_network, 1))\n        super(DistributedAutoregressiveFlow, self).__init__()\n\n    def _forward(self, x):\n        for l, flow in enumerate(self.flows):\n            with tf.device(tf.contrib.tpu.core(l % 4)):\n                x = flow.forward(x)\n        return x\n\n    def _inverse_and_log_det_jacobian(self, y):\n        ldj = 0.\n        for l, flow in enumerate(self.flows[::-1]):\n            with tf.device(tf.contrib.tpu.core(l % 4)):\n                y, new_ldj = flow.inverse_and_log_det_jacobian(y)\n                ldj += new_ldj\n        return y, ldj", "_____no_output_____" ] ],
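[ [ "`SplitAutoregressiveFlow` and `masked_network` are left unspecified above (the paper's figure omits them as well). As a rough, self-contained sketch of the split idea only, rather than the paper's implementation, each block of the input can be shifted and scaled by a statistic of the previous, already-transformed block; the class name and shapes below are purely illustrative:", "_____no_output_____" ] ], [ [ "class ToySplitFlow(object):\n    \"\"\"Toy stand-in for a split autoregressive flow (illustration only).\"\"\"\n\n    def __init__(self, num_splits):\n        self.num_splits = num_splits\n\n    def forward(self, x):\n        # Transform each block conditioned on the previous transformed block.\n        blocks = tf.split(x, self.num_splits, axis=-1)\n        ys = [blocks[0]]\n        for b in blocks[1:]:\n            c = tf.reduce_mean(ys[-1], axis=-1, keepdims=True)\n            ys.append(b * tf.exp(c) + c)\n        return tf.concat(ys, axis=-1)\n\n    def inverse_and_log_det_jacobian(self, y):\n        # Invert block by block; the conditioning only needs y, so all blocks\n        # can in principle be inverted in parallel.\n        ys = tf.split(y, self.num_splits, axis=-1)\n        xs = [ys[0]]\n        ldj = 0.\n        for i in range(1, self.num_splits):\n            c = tf.reduce_mean(ys[i - 1], axis=-1, keepdims=True)\n            xs.append((ys[i] - c) * tf.exp(-c))\n            ldj -= tf.reduce_sum(c + tf.zeros_like(ys[i]), axis=-1)\n        return tf.concat(xs, axis=-1), ldj\n\nflow = ToySplitFlow(num_splits=4)\nx = tf.random_normal([2, 8])\nx_recovered, _ = flow.inverse_and_log_det_jacobian(flow.forward(x))\nprint(tf.reduce_max(tf.abs(x - x_recovered)))  # ~0: invertible by construction", "_____no_output_____" ] ], [ [ "__Figure 4.__\nModel-parallel VAE with TPUs, generating 16-bit audio from 8-bit latents. The prior and decoder split computation according to distributed autoregressive flows. 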
The encoder may split computation according to `compressor`; we omit it for space.", "_____no_output_____" ] ], [ [ "def prior():\n    \"\"\"Uniform noise to 8-bit latent, [u1,...,u(T/2)] -> [z1,...,z(T/2)]\"\"\"\n    dist = ed.Independent(ed.Uniform(low=tf.zeros([batch_size, T/2])))\n    return ed.TransformedDistribution(dist, DistributedAutoregressiveFlow(flow_size))\n\ndef decoder(z):\n    \"\"\"Uniform noise + latent to 16-bit audio, [u1,...,uT], [z1,...,z(T/2)] -> [x1,...,xT]\"\"\"\n    dist = ed.Independent(ed.Uniform(low=tf.zeros([batch_size, T])))\n    dist = ed.TransformedDistribution(dist, tfb.Affine(shift=decompressor(z)))\n    return ed.TransformedDistribution(dist, DistributedAutoregressiveFlow(flow_size))\n\ndef encoder(x):\n    \"\"\"16-bit audio to 8-bit latent, [x1,...,xT] -> [z1,...,z(T/2)]\"\"\"\n    loc, log_scale = tf.split(compressor(x), 2, axis=-1)\n    return ed.MultivariateNormalDiag(loc=loc, scale_diag=tf.exp(log_scale))", "_____no_output_____" ] ], [ [ "__Figure 5__. Edward2's core.\n`trace` defines a context; any traceable ops executed during it are replaced by calls to `tracer`. `traceable` registers these ops; we register Edward random variables.", "_____no_output_____" ] ], [ [ "STACK = [lambda f, *a, **k: f(*a, **k)]\n\n@contextmanager\ndef trace(tracer):\n    STACK.append(tracer)\n    yield\n    STACK.pop()\n\ndef traceable(f):\n    def f_wrapped(*a, **k):\n        return STACK[-1](f, *a, **k)  # dispatch to the innermost tracer\n    return f_wrapped", "_____no_output_____" ] ], [ [ "__Figure 7__. A higher-order function which takes a `model` program as input and returns its log-joint density function.", "_____no_output_____" ] ], [ [ "def make_log_joint_fn(model):\n    def log_joint_fn(**model_kwargs):\n        def tracer(rv_call, *args, **kwargs):\n            name = kwargs.get(\"name\")\n            kwargs[\"value\"] = model_kwargs.get(name)\n            rv = rv_call(*args, **kwargs)\n            log_probs.append(tf.reduce_sum(rv.distribution.log_prob(rv)))\n            return rv\n        log_probs = []\n        with ed.trace(tracer):\n            model()\n        return sum(log_probs)\n    return log_joint_fn", "_____no_output_____" ], [ "try:\n    model\nexcept NameError:\n    raise NameError(\"This code snippet requires `model` from above.\")\n\nlog_joint = make_log_joint_fn(model)\np = np.random.uniform()\nx = np.round(np.random.normal(size=[50])).astype(np.int32)\nout = log_joint(p=p, x=x)\nprint(out)", "tf.Tensor(-23.1994, shape=(), dtype=float32)\n" ] ], [ [ "__Figure 8__. A higher-order function which takes a `model` program as input and returns its causally intervened program. Intervention differs from conditioning: it does not change the sampled value but the distribution.", "_____no_output_____" ] ], [ [ "def mutilate(model, **do_kwargs):\n    def mutilated_model(*args, **kwargs):\n        def tracer(rv_call, *args, **kwargs):\n            name = kwargs.get(\"name\")\n            if name in do_kwargs:\n                return do_kwargs[name]\n            return rv_call(*args, **kwargs)\n        with ed.trace(tracer):\n            return model(*args, **kwargs)\n    return mutilated_model", "_____no_output_____" ], [ "try:\n    model\nexcept NameError:\n    raise NameError(\"This code snippet requires `model` from above.\")\n\nmutilated_model = mutilate(model, p=0.999)\nx = mutilated_model()\nprint(x)", "RandomVariable(\"\n[1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 1 1 1 1 1 1]\", shape=(50,), dtype=int32, device=/job:localhost/replica:0/task:0/device:CPU:0)\n" ] ],
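[ [ "A quick way to see the tracing machinery of Figures 5, 7, and 8 in action is a tracer that merely logs each random variable as the program runs, passing every call through unchanged (this snippet reuses `model` from above):", "_____no_output_____" ] ], [ [ "def logging_tracer(rv_call, *args, **kwargs):\n    # Record which variable is being constructed, then pass through unchanged.\n    print(\"tracing:\", kwargs.get(\"name\"))\n    return rv_call(*args, **kwargs)\n\nwith ed.trace(logging_tracer):\n    model()  # traces \"p\" and then \"x\"", "_____no_output_____" ] ], [ [ "## Section 3. Learning with Low-Level Functions\n\n__Figure 9__. Data-parallel Image Transformer with TPUs (Parmar et al., 2018). 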
It is a neural autoregressive model which computes the log-probability of a batch of images with self-attention. Edward2 enables representing and training the model as a log-probability function; this is more efficient than the typical representation of programs as a generative process.", "_____no_output_____" ] ], [ [ "get_channel_embeddings = cia.get_channel_embeddings\nadd_positional_embedding = common_attention.add_positional_embedding\nlocal_attention_1d = cia.local_attention_1d\n\ndef image_transformer(inputs, hparams):\n x = get_channel_embeddings(3, inputs, hparams.hidden_size)\n x = tf.reshape(x, [-1, 32*32*3, hparams.hidden_size])\n x = tf.pad(x, [[0, 0], [1, 0], [0, 0]])[:, :-1, :] # shift pixels right\n x = add_positional_embedding(x, max_length=32*32*3+3, name=\"pos_embed\")\n x = tf.nn.dropout(x, keep_prob=0.7)\n for _ in range(hparams.num_decoder_layers):\n with tf.variable_scope(None, default_name=\"decoder_layer\"):\n y = local_attention_1d(x, hparams, attention_type=\"local_mask_right\", \n q_padding=\"LEFT\", kv_padding=\"LEFT\")\n x = tf.contrib.layers.layer_norm(tf.nn.dropout(y, keep_prob=0.7) + x, begin_norm_axis=-1)\n y = tf.layers.dense(x, hparams.filter_size, activation=tf.nn.relu)\n y = tf.layers.dense(y, hparams.hidden_size, activation=None)\n x = tf.contrib.layers.layer_norm(tf.nn.dropout(y, keep_prob=0.7) + x, begin_norm_axis=-1)\n x = tf.reshape(x, [-1, 32, 32, 3, hparams.hidden_size])\n logits = tf.layers.dense(x, 256, activation=None)\n return ed.Categorical(logits=logits).distribution.log_prob(inputs)", "_____no_output_____" ], [ "if tf.executing_eagerly():\n raise ValueError(\"This code snippet does not support eager execution.\")\n\nbatch_size = 4\ninputs = tf.random_uniform([batch_size, 32, 32, 3], minval=0, maxval=256, dtype=tf.int32)\nhparams = imgtransformer.imagetransformer_cifar10_base()\nloss = -tf.reduce_sum(image_transformer(inputs, hparams))\ntrain_op = tf.contrib.tpu.CrossShardOptimizer(tf.train.AdamOptimizer()).minimize(loss)\nprint(loss)", "Tensor(\"Neg:0\", shape=(), dtype=float32)\n" ] ], [ [ "__Figure 10__. Core logic in No-U-Turn Sampler (Hoffman and Gelman, 2014). This algorithm has data-dependent non-tail recursion.\n\nSee [`no_u_turn_sampler/`](https://github.com/google/edward2/tree/master/examples/no_u_turn_sampler/) in the Github repository for its full implementation.", "_____no_output_____" ], [ "__Figure 11__. Variational inference with preconditioned gradient descent. 
Edward2 offers writing the probabilistic program and performing arbitrary TensorFlow computation for learning.", "_____no_output_____" ] ], [ [ "try:\n model\n make_log_joint_fn\nexcept NameError:\n raise NameError(\"This code snippet requires `model`, `make_log_joint_fn` \"\n \" from above.\")\n\nclass Variational(object):\n def __init__(self):\n self.parameters = tf.random_normal([2])\n \n def __call__(self, x):\n del x # unused; it is a non-amortized approximation\n return ed.Deterministic(loc=tf.sigmoid(self.parameters[0]), name=\"qp\")\n\nvariational = Variational()\nx = tf.random_uniform([50], minval=0, maxval=2, dtype=tf.int32)\nalignment = {\"qp\": \"p\"}\n\ndef loss(x):\n qz = variational(x)\n log_joint_fn = make_log_joint_fn(model)\n kwargs = {alignment[rv.distribution.name]: rv.value\n for rv in [qz]}\n energy = log_joint_fn(x=x, **kwargs)\n entropy = sum([rv.distribution.entropy() for rv in [qz]])\n return -energy - entropy\n\ndef grad():\n with tf.GradientTape() as tape:\n tape.watch(variational.parameters)\n loss_value = loss(x)\n return tape.gradient(loss_value, variational.parameters)\n\ndef train(precond):\n for _ in range(5):\n grads = tf.tensordot(precond, grad(), [[1], [0]])\n variational.parameters -= 0.1 * grads\n return loss(x)", "_____no_output_____" ], [ "if not tf.executing_eagerly():\n raise ValueError(\"This code snippet requires eager execution.\")\n\nprecond = tf.eye(2)\nloss_value = train(precond)\nprint(loss_value)", "tf.Tensor(34.2965, shape=(), dtype=float32)\n" ] ], [ [ "__Figure 12__. Learning-to-learn. It finds the optimal preconditioner for `train` (__Figure 11__) by differentiating the entire learning algorithm with respect to the preconditioner.", "_____no_output_____" ] ], [ [ "if not tf.executing_eagerly():\n raise ValueError(\"This code snippet requires eager execution.\")\n \nprecond = tfe.Variable(tf.random_normal([2, 2]))\noptimizer = tf.train.AdamOptimizer(1.)\nfor _ in range(10):\n with tf.GradientTape() as tape:\n loss_value = train(precond)\n grads = tape.gradient(loss_value, [precond])\n optimizer.apply_gradients(zip(grads, [precond]))\n print(loss_value.numpy(), precond.numpy())", "(34.296486, array([[ 0.67495859, 0.81464583],\n [-0.18986063, -0.0458361 ]], dtype=float32))\n(34.296494, array([[ 0.04607767, 0.81464583],\n [-0.18986063, -0.0458361 ]], dtype=float32))\n(34.296486, array([[-0.44003215, 0.81464583],\n [-0.18986063, -0.0458361 ]], dtype=float32))\n(34.296486, array([[-0.83821177, 0.81464583],\n [-0.18986063, -0.0458361 ]], dtype=float32))\n(34.296486, array([[-1.1741091 , 0.81464583],\n [-0.18986063, -0.0458361 ]], dtype=float32))\n(34.296501, array([[-0.69452661, 0.81464583],\n [-0.18986063, -0.0458361 ]], dtype=float32))\n(34.300507, array([[-0.18749332, 0.81464583],\n [-0.18986063, -0.0458361 ]], dtype=float32))\n(34.328743, array([[ 0.34323317, 0.81464583],\n [-0.18986063, -0.0458361 ]], dtype=float32))\n(34.296623, array([[ 0.81854713, 0.81464583],\n [-0.18986063, -0.0458361 ]], dtype=float32))\n(34.296486, array([[ 1.24275446, 0.81464583],\n [-0.18986063, -0.0458361 ]], dtype=float32))\n" ] ], [ [ "## Appendix A. 
Edward2 on SciPy\n\nWe illustrate the broad applicability of Edward2’s tracing by implementing Edward2 on top of SciPy.\n\nFor this notebook, we mimick a namespace using a struct so that one can play with the traceable scipy stats here.", "_____no_output_____" ] ], [ [ "class FakeEdward2ScipyNamespace(object):\n pass\n\nfor _name in sorted(dir(stats)):\n _candidate = getattr(stats, _name)\n if isinstance(_candidate, (stats._multivariate.multi_rv_generic,\n stats.rv_continuous,\n stats.rv_discrete,\n stats.rv_histogram)):\n _candidate.rvs = ed.traceable(_candidate.rvs)\n setattr(FakeEdward2ScipyNamespace, _name, _candidate)\n del _candidate\n \nscipy_stats = FakeEdward2ScipyNamespace()\nprint([name for name in dir(scipy_stats) if not name.startswith(\"__\")])", "['alpha', 'anglit', 'arcsine', 'argus', 'bernoulli', 'beta', 'betaprime', 'binom', 'boltzmann', 'bradford', 'burr', 'burr12', 'cauchy', 'chi', 'chi2', 'cosine', 'crystalball', 'dgamma', 'dirichlet', 'dlaplace', 'dweibull', 'erlang', 'expon', 'exponnorm', 'exponpow', 'exponweib', 'f', 'fatiguelife', 'fisk', 'foldcauchy', 'foldnorm', 'frechet_l', 'frechet_r', 'gamma', 'gausshyper', 'genexpon', 'genextreme', 'gengamma', 'genhalflogistic', 'genlogistic', 'gennorm', 'genpareto', 'geom', 'gilbrat', 'gompertz', 'gumbel_l', 'gumbel_r', 'halfcauchy', 'halfgennorm', 'halflogistic', 'halfnorm', 'hypergeom', 'hypsecant', 'invgamma', 'invgauss', 'invweibull', 'invwishart', 'johnsonsb', 'johnsonsu', 'kappa3', 'kappa4', 'ksone', 'kstwobign', 'laplace', 'levy', 'levy_l', 'levy_stable', 'loggamma', 'logistic', 'loglaplace', 'lognorm', 'logser', 'lomax', 'matrix_normal', 'maxwell', 'mielke', 'multinomial', 'multivariate_normal', 'nakagami', 'nbinom', 'ncf', 'nct', 'ncx2', 'norm', 'ortho_group', 'pareto', 'pearson3', 'planck', 'poisson', 'powerlaw', 'powerlognorm', 'powernorm', 'randint', 'random_correlation', 'rayleigh', 'rdist', 'recipinvgauss', 'reciprocal', 'rice', 'semicircular', 'skellam', 'skewnorm', 'special_ortho_group', 't', 'trapz', 'triang', 'truncexpon', 'truncnorm', 'tukeylambda', 'uniform', 'unitary_group', 'vonmises', 'vonmises_line', 'wald', 'weibull_max', 'weibull_min', 'wishart', 'wrapcauchy', 'zipf']\n" ] ], [ [ "Below is an Edward2 linear regression program on SciPy.", "_____no_output_____" ] ], [ [ "def make_log_joint_fn(model):\n def log_joint_fn(*model_args, **model_kwargs):\n def tracer(rv_call, *args, **kwargs):\n name = kwargs.pop(\"name\", None)\n kwargs.pop(\"size\", None)\n kwargs.pop(\"random_state\", None)\n value = model_kwargs.get(name)\n log_prob_fn = getattr(scipy_stats, rv_call.im_class.__name__[:-4]).logpdf\n log_prob = np.sum(log_prob_fn(value, *args, **kwargs))\n log_probs.append(log_prob)\n return value\n log_probs = []\n with ed.trace(tracer):\n model(*model_args)\n return sum(log_probs)\n return log_joint_fn\n\ndef linear_regression(X):\n beta = scipy_stats.norm.rvs(loc=0.0, scale=0.1, size=X.shape[1], name=\"beta\")\n loc = np.einsum('ij,j->i', X, beta)\n y = scipy_stats.norm.rvs(loc=loc, scale=1., size=1, name=\"y\")\n return y", "_____no_output_____" ], [ "log_joint = make_log_joint_fn(linear_regression)\n\nX = np.random.normal(size=[3, 2])\nbeta = np.random.normal(size=[2])\ny = np.random.normal(size=[3])\nout = log_joint(X, beta=beta, y=y)\nprint(out)", "-1.0854508813\n" ] ], [ [ "## Appendix B. 
Grammar Variational Auto-Encoder\n\nThe grammar variational auto-encoder (VAE) (Kusner et al., 2017) posits a generative model over\nproductions from a context-free grammar, and it posits an amortized variational\napproximation for efficient posterior inference. We train the grammar VAE\non synthetic data using the grammar from Kusner et al. (2017; Figure 1).\n\nThis example showcases eager execution in order to train the model where data\npoints have a variable number of time steps. However, note that this requires a\nbatch size of 1. In this example, we assume data points arrive in a stream, one\nat a time. Such a setting requires handling a variable number of time steps\nas the maximum length is unbounded.", "_____no_output_____" ] ], [ [ "class SmilesGrammar(object):\n \"\"\"Context-free grammar for SMILES strings.\"\"\"\n nonterminal_symbols = {\"smiles\", \"chain\", \"branched atom\", \"atom\", \"ringbond\",\n \"aromatic organic\", \"aliphatic organic\", \"digit\"}\n alphabet = {\"c\", \"C\", \"N\", \"1\", \"2\"}\n production_rules = [\n (\"smiles\", [\"chain\"]),\n (\"chain\", [\"chain\", \"branched atom\"]),\n (\"chain\", [\"branched atom\"]),\n (\"branched atom\", [\"atom\", \"ringbond\"]),\n (\"branched atom\", [\"atom\"]),\n (\"atom\", [\"aromatic organic\"]),\n (\"atom\", [\"aliphatic organic\"]),\n (\"ringbond\", [\"digit\"]),\n (\"aromatic organic\", [\"c\"]),\n (\"aliphatic organic\", [\"C\"]),\n (\"aliphatic organic\", [\"N\"]),\n (\"digit\", [\"1\"]),\n (\"digit\", [\"2\"]),\n ]\n start_symbol = \"smiles\"\n\n def mask(self, symbol, on_value=0., off_value=-1e9):\n \"\"\"Produces a masking tensor for (in)valid production rules.\"\"\"\n mask_values = []\n for lhs, _ in self.production_rules:\n if symbol in lhs:\n mask_value = on_value\n else:\n mask_value = off_value\n mask_values.append(mask_value)\n mask_values = tf.reshape(mask_values, [1, len(self.production_rules)])\n return mask_values\n\nclass ProbabilisticGrammar(tf.python.keras.Model):\n \"\"\"Deep generative model over productions which follow a grammar.\"\"\"\n\n def __init__(self, grammar, latent_size, num_units):\n \"\"\"Constructs a probabilistic grammar.\"\"\"\n super(ProbabilisticGrammar, self).__init__()\n self.grammar = grammar\n self.latent_size = latent_size\n self.lstm = tf.nn.rnn_cell.LSTMCell(num_units)\n self.output_layer = tf.python.keras.layers.Dense(len(grammar.production_rules))\n\n def call(self, inputs):\n \"\"\"Runs the model forward to generate a sequence of productions.\"\"\"\n del inputs # unused\n latent_code = ed.MultivariateNormalDiag(loc=tf.zeros(self.latent_size),\n sample_shape=1,\n name=\"latent_code\")\n state = self.lstm.zero_state(1, dtype=tf.float32)\n t = 0\n productions = []\n stack = [self.grammar.start_symbol]\n while stack:\n symbol = stack.pop()\n net, state = self.lstm(latent_code, state)\n logits = self.output_layer(net) + self.grammar.mask(symbol)\n production = ed.OneHotCategorical(logits=logits,\n name=\"production_\" + str(t))\n _, rhs = self.grammar.production_rules[tf.argmax(production, axis=1)]\n for symbol in rhs:\n if symbol in self.grammar.nonterminal_symbols:\n stack.append(symbol)\n productions.append(production)\n t += 1\n return tf.stack(productions, axis=1)\n\nclass ProbabilisticGrammarVariational(tf.python.keras.Model):\n \"\"\"Amortized variational posterior for a probabilistic grammar.\"\"\"\n\n def __init__(self, latent_size):\n \"\"\"Constructs a variational posterior for a probabilistic grammar.\"\"\"\n super(ProbabilisticGrammarVariational, 
self).__init__()\n self.latent_size = latent_size\n self.encoder_net = tf.python.keras.Sequential([\n tf.python.keras.layers.Conv1D(64, 3, padding=\"SAME\"),\n tf.python.keras.layers.BatchNormalization(),\n tf.python.keras.layers.Activation(tf.nn.elu),\n tf.python.keras.layers.Conv1D(128, 3, padding=\"SAME\"),\n tf.python.keras.layers.BatchNormalization(),\n tf.python.keras.layers.Activation(tf.nn.elu),\n tf.python.keras.layers.Dropout(0.1),\n tf.python.keras.layers.GlobalAveragePooling1D(),\n tf.python.keras.layers.Dense(latent_size * 2, activation=None),\n ])\n\n def call(self, inputs):\n \"\"\"Runs the model forward to return a stochastic encoding.\"\"\"\n net = self.encoder_net(tf.cast(inputs, tf.float32))\n return ed.MultivariateNormalDiag(\n loc=net[..., :self.latent_size],\n scale_diag=tf.nn.softplus(net[..., self.latent_size:]),\n name=\"latent_code_posterior\")", "_____no_output_____" ], [ "if not tf.executing_eagerly():\n raise ValueError(\"This code snippet requires eager execution.\")\n\ngrammar = SmilesGrammar()\nprobabilistic_grammar = ProbabilisticGrammar(\n grammar=grammar, latent_size=8, num_units=128)\nprobabilistic_grammar_variational = ProbabilisticGrammarVariational(\n latent_size=8)\n\nfor _ in range(5):\n productions = probabilistic_grammar(_)\n print(\"Production Shape: {}\".format(productions.shape))\n\n string = grammar.convert_to_string(productions)\n print(\"String: {}\".format(string))\n\n encoded_production = probabilistic_grammar_variational(productions)\n print(\"Encoded Productions: {}\".format(encoded_production.numpy()))", "Production Shape: (1, 5, 13)\nString: N\nEncoded Productions: [[-0.20581727 0.92976463 -0.38938859 0.19347586 -0.74668086 -0.39914176\n 0.93074167 0.36102515]]\nProduction Shape: (1, 8, 13)\nString: 2N\nEncoded Productions: [[-1.10246503 0.50674993 0.97561336 -1.5284797 -0.20616516 0.84167266\n 0.78415519 -0.26026636]]\nProduction Shape: (1, 13, 13)\nString: 2N2C\nEncoded Productions: [[ 0.67856091 1.32186925 -0.84166789 0.60194713 -0.94573712 -0.85340422\n 0.36050722 -0.9566167 ]]\nProduction Shape: (1, 24, 13)\nString: 1N21c2N\nEncoded Productions: [[-0.80259132 0.27463835 -0.45711571 0.42127368 -0.3014937 -0.09925826\n 0.34425485 0.13968848]]\nProduction Shape: (1, 13, 13)\nString: 221N\nEncoded Productions: [[ 0.06864282 0.02749904 0.06260949 0.24946308 0.1729231 0.04198643\n 0.26645684 0.8167249 ]]\n" ] ], [ [ "See [`tensorflow_probability/examples/grammar_vae.py`](https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/grammar_vae.py) for the full example.", "_____no_output_____" ], [ "## Appendix C. Markov chain Monte Carlo within Variational Inference\n\n\nWe demonstrate another level of composability: inference within a probabilistic program. Namely, we apply MCMC to construct a flexible family of distributions for variational inference\n(Salimans et al., 2015; Hoffman, 2017). 
We apply a chain of transition kernels specified by NUTS (`nuts`) in Section 3.3 and the variational inference algorithm specified by `train` in __Figure 12__.", "_____no_output_____" ] ], [ [ "class DeepLatentGaussianModel(tf.python.keras.Model):\n  \"\"\"Deep generative model.\"\"\"\n  def __init__(self, latent_size, data_shape, batch_size):\n    super(DeepLatentGaussianModel, self).__init__()\n    self.latent_size = latent_size\n    self.data_shape = data_shape\n    self.batch_size = batch_size\n    self.decoder_net = tf.python.keras.Sequential([\n        tf.python.keras.layers.Dense(512, activation=tf.nn.relu),\n        tf.python.keras.layers.Dense(np.prod(data_shape), activation=None),\n        tf.python.keras.layers.Reshape(data_shape),\n    ])\n\n  def call(self, inputs):\n    del inputs  # unused\n    latent_code = ed.MultivariateNormalDiag(\n        loc=tf.zeros([self.batch_size, self.latent_size]),\n        scale_diag=tf.ones([self.batch_size, self.latent_size]),\n        name=\"latent_code\")\n    data = ed.Categorical(logits=self.decoder_net(latent_code), name=\"data\")\n    return data\n\nclass DeepLatentGaussianModelVariational(tf.python.keras.Model):\n  \"\"\"Amortized variational posterior.\"\"\"\n  def __init__(self,\n               latent_size,\n               data_shape,\n               num_transitions,\n               target_log_prob_fn,\n               step_size):\n    super(DeepLatentGaussianModelVariational, self).__init__()\n    self.latent_size = latent_size\n    self.data_shape = data_shape\n    self.num_transitions = num_transitions\n    self.target_log_prob_fn = target_log_prob_fn\n    self.step_size = step_size\n    self.encoder_net = tf.python.keras.Sequential([\n        tf.python.keras.layers.Reshape((np.prod(data_shape),)),\n        tf.python.keras.layers.Dense(512, activation=tf.nn.relu),\n        tf.python.keras.layers.Dense(latent_size * 2, activation=None),\n    ])\n  \n  def call(self, inputs):\n    net = self.encoder_net(inputs)\n    qz = ed.MultivariateNormalDiag(\n        loc=net[..., :self.latent_size],\n        scale_diag=tf.nn.softplus(net[..., self.latent_size:]),\n        name=\"latent_code_posterior\")\n    current_target_log_prob = None\n    current_grads_target_log_prob = None\n    for _ in range(self.num_transitions):\n      [\n          [qz],\n          current_target_log_prob,\n          current_grads_target_log_prob,\n      ] = self._kernel(\n          current_state=[qz],\n          current_target_log_prob=current_target_log_prob,\n          current_grads_target_log_prob=current_grads_target_log_prob)\n    return qz\n  \n  def _kernel(self, current_state, current_target_log_prob,\n              current_grads_target_log_prob):\n    return no_u_turn_sampler.kernel(\n        current_state=current_state,\n        target_log_prob_fn=self.target_log_prob_fn,\n        step_size=self.step_size,\n        current_target_log_prob=current_target_log_prob,\n        current_grads_target_log_prob=current_grads_target_log_prob)", "_____no_output_____" ], [ "latent_size = 50\ndata_shape = [32, 32, 3, 256]\nbatch_size = 4\n\nfeatures = tf.random_normal([batch_size] + data_shape)\nmodel = DeepLatentGaussianModel(\n    latent_size=latent_size,\n    data_shape=data_shape,\n    batch_size=batch_size)\nvariational = DeepLatentGaussianModelVariational(\n    latent_size=latent_size,\n    data_shape=data_shape,\n    step_size=[0.1],\n    target_log_prob_fn=lambda z: ed.make_log_joint_fn(model)(data=features, latent_code=z),\n    num_transitions=10)\nalignment = {\"latent_code_posterior\": \"latent_code\"}\noptimizer = tf.train.AdamOptimizer(1e-2)\n\nfor step in range(10):\n  with tf.GradientTape() as tape:\n    with ed.trace() as variational_tape:\n      _ = variational(features)\n    log_joint_fn = ed.make_log_joint_fn(model)\n    kwargs = {alignment[rv.distribution.name]: rv.value\n              for rv in variational_tape.values()}\n    energy = log_joint_fn(data=features, **kwargs)\n    entropy = 
sum([rv.distribution.entropy()\n                   for rv in variational_tape.values()])\n    loss_value = -energy - entropy\n  grads = tape.gradient(loss_value, variational.variables)\n  optimizer.apply_gradients(zip(grads, variational.variables))\n  print(\"Step: {:>3d} Loss: {:.3f}\".format(step, loss_value))", "_____no_output_____" ] ], [ [ "## Appendix D. No-U-Turn Sampler\n\nWe implement an Edward program for Bayesian logistic regression with NUTS (Hoffman and Gelman, 2014).", "_____no_output_____" ] ], [ [ "def logistic_regression(features):\n  \"\"\"Bayesian logistic regression, which returns labels given features.\"\"\"\n  coeffs = ed.MultivariateNormalDiag(\n      loc=tf.zeros(features.shape[1]), name=\"coeffs\")\n  labels = ed.Bernoulli(\n      logits=tf.tensordot(features, coeffs, [[1], [0]]), name=\"labels\")\n  return labels\n\nfeatures = tf.random_uniform([500, 55])\ntrue_coeffs = 5. * tf.random_normal([55])\nlabels = tf.cast(tf.tensordot(features, true_coeffs, [[1], [0]]) > 0,\n                 dtype=tf.int32)\n\nlog_joint = ed.make_log_joint_fn(logistic_regression)\ndef target_log_prob_fn(coeffs):\n  return log_joint(features=features, coeffs=coeffs, labels=labels)", "_____no_output_____" ], [ "if not tf.executing_eagerly():\n  raise ValueError(\"This code snippet requires eager execution.\")\n\ncoeffs = tf.zeros([55])  # initial state of the chain (assumed here; see the full example)\ncoeffs_samples = []\ntarget_log_prob = None\ngrads_target_log_prob = None\nfor step in range(500):\n  [\n      [coeffs],\n      target_log_prob,\n      grads_target_log_prob,\n  ] = no_u_turn_sampler.kernel(target_log_prob_fn=target_log_prob_fn,\n             current_state=[coeffs],\n             step_size=[0.1],\n             current_target_log_prob=target_log_prob,\n             current_grads_target_log_prob=grads_target_log_prob)\n  coeffs_samples.append(coeffs)\n\nfor coeffs_sample in coeffs_samples:\n  plt.plot(coeffs_sample.numpy())\n\nplt.show()", "_____no_output_____" ] ], [ [ "See [`no_u_turn_sampler/logistic_regression.py`](https://github.com/google/edward2/tree/master/examples/no_u_turn_sampler/logistic_regression.py) for the full example.\n\n## References\n\n1. Hoffman, M. D. (2017). Learning deep latent Gaussian models with Markov chain Monte Carlo. In _International Conference on Machine Learning_.\n2. Hoffman, M. D. and Gelman, A. (2014). The No-U-turn sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. _Journal of Machine Learning Research_, 15(1):1593–1623.\n3. Kusner, M. J., Paige, B., and Hernández-Lobato, J. M. (2017). Grammar variational auto-encoder. In _International Conference on Machine Learning_.\n4. Papamakarios, G., Murray, I., and Pavlakou, T. (2017). Masked autoregressive flow for density estimation. In _Neural Information Processing Systems_.\n5. Parmar, N., Vaswani, A., Uszkoreit, J., Kaiser, Ł., Shazeer, N., Ku, A., and Tran, D. (2018). Image transformer. In _International Conference on Machine Learning_.\n6. Ranganath, R., Altosaar, J., Tran, D., and Blei, D. M. (2016). Operator variational inference. In _Neural Information Processing Systems_.\n7. Salimans, T., Kingma, D., and Welling, M. (2015). Markov chain Monte Carlo and variational inference: Bridging the gap. In _International Conference on Machine Learning_.\n8. Tran, D., Hoffman, M. D., Moore, D., Suter, C., Vasudevan, S., Radul, A., Johnson, M., and Saurous, R. A. (2018). Simple, Distributed, and Accelerated Probabilistic Programming. In _Neural Information Processing Systems_.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
cb15552d7605d1ec2c738720f8e22ac0cc82dcb5
27,456
ipynb
Jupyter Notebook
notebooks/ch05_regression.ipynb
mertcookimg/pytorch_book_info
225c5f63c82a4a984b5ca5a1cd98f536219b4e17
[ "Apache-2.0" ]
52
2021-08-05T00:08:32.000Z
2022-03-30T05:08:47.000Z
notebooks/ch05_regression.ipynb
mertcookimg/pytorch_book_info
225c5f63c82a4a984b5ca5a1cd98f536219b4e17
[ "Apache-2.0" ]
9
2021-09-23T13:12:09.000Z
2022-03-01T13:41:57.000Z
notebooks/ch05_regression.ipynb
mertcookimg/pytorch_book_info
225c5f63c82a4a984b5ca5a1cd98f536219b4e17
[ "Apache-2.0" ]
18
2021-09-03T03:18:44.000Z
2022-03-18T03:54:03.000Z
22.616145
83
0.396671
[ [ [ "# 5章 線形回帰", "_____no_output_____" ] ], [ [ "# 必要ライブラリの導入\n\n!pip install japanize_matplotlib | tail -n 1\n!pip install torchviz | tail -n 1\n!pip install torchinfo | tail -n 1", "_____no_output_____" ], [ "# 必要ライブラリのインポート\n\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport japanize_matplotlib\nfrom IPython.display import display", "_____no_output_____" ], [ "import torch\nimport torch.nn as nn\nimport torch.optim as optim\nfrom torchviz import make_dot", "_____no_output_____" ], [ "# デフォルトフォントサイズ変更\nplt.rcParams['font.size'] = 14\n\n# デフォルトグラフサイズ変更\nplt.rcParams['figure.figsize'] = (6,6)\n\n# デフォルトで方眼表示ON\nplt.rcParams['axes.grid'] = True\n\n# numpyの浮動小数点の表示精度\nnp.set_printoptions(suppress=True, precision=4)", "_____no_output_____" ] ], [ [ "## 5.3 線形関数(nn.Linear)", "_____no_output_____" ], [ "### 入力:1 出力:1 の線形関数", "_____no_output_____" ] ], [ [ "# 乱数の種固定\ntorch.manual_seed(123)\n\n# 入力:1 出力:1 の線形関数の定義\nl1 = nn.Linear(1, 1)\n\n# 線形関数の表示\nprint(l1)", "_____no_output_____" ], [ "# パラメータ名、パラメータ値、shapeの表示\n\nfor param in l1.named_parameters():\n print('name: ', param[0])\n print('tensor: ', param[1])\n print('shape: ', param[1].shape)", "_____no_output_____" ], [ "# 初期値設定\nnn.init.constant_(l1.weight, 2.0)\nnn.init.constant_(l1.bias, 1.0)\n\n# 結果確認\nprint(l1.weight)\nprint(l1.bias)", "_____no_output_____" ], [ "# テスト用データ生成\n\n# x_npをnumpy配列で定義\nx_np = np.arange(-2, 2.1, 1)\n\n# Tensor化\nx = torch.tensor(x_np).float()\n\n# サイズを(N,1)に変更\nx = x.view(-1,1)\n\n# 結果確認\nprint(x.shape)\nprint(x)", "_____no_output_____" ], [ "# 1次関数のテスト\n\ny = l1(x)\n\nprint(y.shape)\nprint(y.data)", "_____no_output_____" ] ], [ [ "### 入力:2 出力:1 の線形関数", "_____no_output_____" ] ], [ [ "# 入力:2 出力:1 の線形関数の定義\nl2 = nn.Linear(2, 1)\n\n# 初期値設定\nnn.init.constant_(l2.weight, 1.0)\nnn.init.constant_(l2.bias, 2.0)\n\n# 結果確認\nprint(l2.weight)\nprint(l2.bias)", "_____no_output_____" ], [ "# 2次元numpy配列\nx2_np = np.array([[0, 0], [0, 1], [1, 0], [1,1]])\n\n# Tensor化\nx2 = torch.tensor(x2_np).float()\n\n# 結果確認\nprint(x2.shape)\nprint(x2)", "_____no_output_____" ], [ "\n# 関数値計算\ny2 = l2(x2)\n\n# shape確認\nprint(y2.shape)\n\n# 値確認\nprint(y2.data)", "_____no_output_____" ] ], [ [ "### 入力:2 出力:3 の線形関数", "_____no_output_____" ] ], [ [ "# 入力:2 出力:3 の線形関数の定義\n\nl3 = nn.Linear(2, 3)\n\n# 初期値設定\nnn.init.constant_(l3.weight[0,:], 1.0)\nnn.init.constant_(l3.weight[1,:], 2.0)\nnn.init.constant_(l3.weight[2,:], 3.0)\nnn.init.constant_(l3.bias, 2.0)\n\n# 結果確認\nprint(l3.weight)\nprint(l3.bias)", "_____no_output_____" ], [ "# 関数値計算\ny3 = l3(x2)\n\n# shape確認\nprint(y3.shape)\n\n# 値確認\nprint(y3.data)", "_____no_output_____" ] ], [ [ "## 5.4 カスタムクラスを利用したモデル定義", "_____no_output_____" ] ], [ [ "# モデルのクラス定義\n\nclass Net(nn.Module):\n def __init__(self, n_input, n_output):\n # 親クラスnn.Modulesの初期化呼び出し\n super().__init__()\n\n # 出力層の定義\n self.l1 = nn.Linear(n_input, n_output) \n \n # 予測関数の定義\n def forward(self, x):\n x1 = self.l1(x) # 線形回帰\n return x1", "_____no_output_____" ], [ "# ダミー入力\ninputs = torch.ones(100,1)\n\n# インスタンスの生成 (1入力1出力の線形モデル)\nn_input = 1\nn_output = 1\nnet = Net(n_input, n_output)\n\n# 予測\noutputs = net(inputs)", "_____no_output_____" ] ], [ [ "\n## 5.6 データ準備\nUCI公開データセットのうち、回帰でよく使われる「ボストン・データセット」を用いる。 \n\nhttps://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html\n\nオリジナルのデーセットは、17項目の入力値から、不動産価格を予測する目的のものだが、\n一番単純な「単回帰モデル」(1入力)のモデルを作るため、このうち``RM``の1項目だけを抽出する。\n", "_____no_output_____" ] ], [ [ "# 学習用データ準備\n\n# ライブラリのインポート\nfrom sklearn.datasets import load_boston\n\n# データ読み込み\nboston = 
load_boston()\n\n# Get the input data and the target data\nx_org, yt = boston.data, boston.target\n\n# Get the list of feature names\nfeature_names = boston.feature_names\n\n# Check the result\nprint('original data', x_org.shape, yt.shape)\nprint('feature names: ', feature_names)", "_____no_output_____" ], [ "# Narrow down the data (feature RM only)\nx = x_org[:,feature_names == 'RM']\nprint('after filtering', x.shape)\nprint(x[:5,:])\n\n# Display the target data y\nprint('target data')\nprint(yt[:5])", "_____no_output_____" ], [ "# Show a scatter plot\n\nplt.scatter(x, yt, s=10, c='b')\nplt.xlabel('number of rooms')\nplt.ylabel('price')\nplt.title('scatter plot of rooms vs. price')\nplt.show()", "_____no_output_____" ] ], [ [ "## 5.7 Model definition", "_____no_output_____" ] ], [ [ "# Variable definitions\n\n# Number of input dimensions\nn_input = x.shape[1]\n\n# Number of output dimensions\nn_output = 1\n\nprint(f'input dimensions: {n_input} output dimensions: {n_output}')", "_____no_output_____" ], [ "# Machine learning model (prediction model) class definition\n\nclass Net(nn.Module):\n    def __init__(self, n_input, n_output):\n        # Call the initializer of the parent class nn.Module\n        super().__init__()\n\n        # Define the output layer\n        self.l1 = nn.Linear(n_input, n_output)\n\n        # Set all initial values to 1\n        # (to match the conditions in the book 「ディープラーニングの数学」)\n        nn.init.constant_(self.l1.weight, 1.0)\n        nn.init.constant_(self.l1.bias, 1.0)\n\n    # Define the prediction function\n    def forward(self, x):\n        x1 = self.l1(x) # linear regression\n        return x1", "_____no_output_____" ], [ "# Create an instance\n# (linear model with 1 input and 1 output)\n\nnet = Net(n_input, n_output)", "_____no_output_____" ], [ "# Check the parameters inside the model\n# Use the named_parameters function to get the model variables\n# The first element of each result is the name, the second is the value\n# \n# We can see that l1.weight and l1.bias exist\n# Both initial values are 1.0\n\nfor parameter in net.named_parameters():\n    print(f'variable name: {parameter[0]}')\n    print(f'variable value: {parameter[1].data}')", "_____no_output_____" ], [ "# Use the parameters function to get the list of parameters\n\nfor parameter in net.parameters():\n    print(parameter)", "_____no_output_____" ] ], [ [ "### Model check", "_____no_output_____" ] ], [ [ "# Show the model overview\n\nprint(net)", "_____no_output_____" ], [ "# Show the model summary\n\nfrom torchinfo import summary\nsummary(net, (1,))", "_____no_output_____" ] ], [ [ "### Loss function and optimizer", "_____no_output_____" ] ], [ [ "# Loss function: mean squared error\ncriterion = nn.MSELoss()\n\n# Learning rate\nlr = 0.01\n\n# Optimizer: gradient descent\noptimizer = optim.SGD(net.parameters(), lr=lr)\n", "_____no_output_____" ] ], [ [ "## 5.8 Gradient descent", "_____no_output_____" ] ], [ [ "# Convert the input variable x and the target yt to tensors\n\ninputs = torch.tensor(x).float()\nlabels = torch.tensor(yt).float()\n\n# Check the dimensions\n\nprint(inputs.shape)\nprint(labels.shape)", "_____no_output_____" ], [ "# Convert the labels variable to an (N,1) matrix for the loss computation\n\nlabels1 = labels.view((-1, 1))\n\n# Check the dimensions\nprint(labels1.shape)", "_____no_output_____" ], [ "# Prediction\n\noutputs = net(inputs)", "_____no_output_____" ], [ "\n# Loss computation\nloss = criterion(outputs, labels1)\n\n# Get the loss value\nprint(f'{loss.item():.5f}')", "_____no_output_____" ], [ "\n# Visualize the computation graph of the loss\n\ng = make_dot(loss, params=dict(net.named_parameters()))\ndisplay(g)", "_____no_output_____" ], [ "# Prediction\noutputs = net(inputs)\n\n# Loss computation\nloss = criterion(outputs, labels1)\n\n# Gradient computation\nloss.backward()\n\n# The gradients can now be retrieved\nprint(net.l1.weight.grad)\nprint(net.l1.bias.grad)", "_____no_output_____" ], [ "# Parameter update\noptimizer.step()\n\n# The parameter values change\nprint(net.l1.weight)\nprint(net.l1.bias)", "_____no_output_____" ], [ "# Reset the gradients\noptimizer.zero_grad()\n\n# All gradient values are now zero\nprint(net.l1.weight.grad)\nprint(net.l1.bias.grad)", "_____no_output_____" ] ], [ [ "### Iterative computation", "_____no_output_____" ] ], [ [ "# Learning rate\nlr = 0.01\n\n# Create an instance (initializes the parameter values)\nnet = Net(n_input, n_output)\n\n# Loss function: mean squared error\ncriterion = nn.MSELoss()\n\n# Optimizer: gradient descent\noptimizer = optim.SGD(net.parameters(), lr=lr)\n\n# Number of iterations\nnum_epochs = 50000\n\n# For recording evaluation results (loss values only)\nhistory = np.zeros((0,2))", "_____no_output_____" ], [ "# Main loop of the iterative computation\n\nfor 
epoch in range(num_epochs):\n    \n    # Reset the gradients\n    optimizer.zero_grad()\n\n    # Prediction\n    outputs = net(inputs)\n    \n    # Loss computation\n    # (divided by 2 to match the loss used in 「ディープラーニングの数学」)\n    loss = criterion(outputs, labels1) / 2.0\n\n    # Gradient computation\n    loss.backward()\n\n    # Parameter update\n    optimizer.step()\n\n    # Record the progress every 100 iterations\n    if ( epoch % 100 == 0):\n        history = np.vstack((history, np.array([epoch, loss.item()])))\n        print(f'Epoch {epoch} loss: {loss.item():.5f}')", "_____no_output_____" ] ], [ [ "## 5.9 Checking the results", "_____no_output_____" ] ], [ [ "# Initial and final loss values\n\nprint(f'initial loss: {history[0,1]:.5f}')\nprint(f'final loss: {history[-1,1]:.5f}')", "_____no_output_____" ], [ "\n# Plot the learning curve (loss)\n# excluding the first value\n\nplt.plot(history[1:,0], history[1:,1], 'b')\nplt.xlabel('iterations')\nplt.ylabel('loss')\nplt.title('learning curve (loss)')\nplt.show()", "_____no_output_____" ], [ "# Compute the regression line\n\n# Minimum and maximum of x\nxse = np.array((x.min(), x.max())).reshape(-1,1)\nXse = torch.tensor(xse).float()\n\nwith torch.no_grad():\n    Yse = net(Xse)\n\nprint(Yse.numpy())", "_____no_output_____" ], [ "# Draw the scatter plot and the regression line\n\nplt.scatter(x, yt, s=10, c='b')\nplt.xlabel('number of rooms')\nplt.ylabel('price')\nplt.plot(Xse.data, Yse.data, c='k')\nplt.title('scatter plot and regression line')\nplt.show()", "_____no_output_____" ] ], [ [ "## 5.10 Extension to a multiple regression model", "_____no_output_____" ] ], [ [ "# Add a column (LSTAT: low-income rate)\n\nx_add = x_org[:,feature_names == 'LSTAT']\nx2 = np.hstack((x, x_add))\n\n# Show the shape\nprint(x2.shape)\n\n# Show the input data x\nprint(x2[:5,:])", "_____no_output_____" ], [ "# This time the number of input dimensions is 2\n\nn_input = x2.shape[1]\nprint(n_input)\n\n# Create a model instance\nnet = Net(n_input, n_output)", "_____no_output_____" ], [ "# Check the parameters inside the model\n# l1.weight is now 2-dimensional\n\nfor parameter in net.named_parameters():\n    print(f'variable name: {parameter[0]}')\n    print(f'variable value: {parameter[1].data}')", "_____no_output_____" ], [ "# Show the model overview\n\nprint(net)", "_____no_output_____" ], [ "# Show the model summary\n\nfrom torchinfo import summary\nsummary(net, (2,))", "_____no_output_____" ], [ "# Convert the input variable x2 to a tensor\n# labels and labels1 are reused from before\n\ninputs = torch.tensor(x2).float()", "_____no_output_____" ] ], [ [ "### Iterative computation", "_____no_output_____" ] ], [ [ "# Initialization\n\n# Learning rate\nlr = 0.01\n\n# Create an instance (initializes the parameter values)\nnet = Net(n_input, n_output)\n\n# Loss function: mean squared error\ncriterion = nn.MSELoss()\n\n# Optimizer: gradient descent\noptimizer = optim.SGD(net.parameters(), lr=lr)\n\n# Number of iterations\nnum_epochs = 50000\n\n# For recording evaluation results (loss values only)\nhistory = np.zeros((0,2))", "_____no_output_____" ], [ "# Main loop of the iterative computation\n\nfor epoch in range(num_epochs):\n    \n    # Reset the gradients\n    optimizer.zero_grad()\n\n    # Prediction\n    outputs = net(inputs)\n    \n    # Loss computation\n    # (divided by 2 to match the loss used in 「ディープラーニングの数学」)\n    loss = criterion(outputs, labels1) / 2.0\n\n    # Gradient computation\n    loss.backward()\n\n    # Parameter update\n    optimizer.step()\n\n    # Record the progress every 100 iterations\n    if ( epoch % 100 == 0):\n        history = np.vstack((history, 
np.array([epoch, loss.item()])))\n        print(f'Epoch {epoch} loss: {loss.item():.5f}')", "_____no_output_____" ] ], [ [ "## 5.11 Changing the learning rate", "_____no_output_____" ] ], [ [ "# Number of iterations\n#num_epochs = 50000\nnum_epochs = 2000\n\n# Learning rate\n#lr = 0.01\nlr = 0.001\n\n# Create a model instance\nnet = Net(n_input, n_output)\n\n# Loss function: mean squared error\ncriterion = nn.MSELoss()\n\n# Optimizer: gradient descent\noptimizer = optim.SGD(net.parameters(), lr=lr)", "_____no_output_____" ], [ "# Main loop of the iterative computation\n\n# For recording evaluation results (loss values only)\nhistory = np.zeros((0,2))\n\nfor epoch in range(num_epochs):\n    \n    # Reset the gradients\n    optimizer.zero_grad()\n\n    # Prediction\n    outputs = net(inputs)\n    \n    # Loss computation\n    loss = criterion(outputs, labels1) / 2.0\n\n    # Gradient computation\n    loss.backward()\n\n    # Parameter update\n    optimizer.step()\n\n    # Record the progress every 100 iterations\n    if ( epoch % 100 == 0):\n        history = np.vstack((history, np.array([epoch, loss.item()])))\n        print(f'Epoch {epoch} loss: {loss.item():.5f}')", "_____no_output_____" ], [ "# Initial and final loss values\n\nprint(f'initial loss: {history[0,1]:.5f}')\nprint(f'final loss: {history[-1,1]:.5f}')", "_____no_output_____" ], [ "# Plot the learning curve (loss)\n\nplt.plot(history[:,0], history[:,1], 'b')\nplt.xlabel('iterations')\nplt.ylabel('loss')\nplt.title('learning curve (loss)')\nplt.show()", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
cb156988af6cd7a5936e03059d7d8ab9d6503e3c
7,167
ipynb
Jupyter Notebook
Chapter 2 - Data Processing.ipynb
andreamartinelli1974/CorsoML
bd8785a3f1a1e6729085588547b049da7d6d99ac
[ "MIT" ]
null
null
null
Chapter 2 - Data Processing.ipynb
andreamartinelli1974/CorsoML
bd8785a3f1a1e6729085588547b049da7d6d99ac
[ "MIT" ]
null
null
null
Chapter 2 - Data Processing.ipynb
andreamartinelli1974/CorsoML
bd8785a3f1a1e6729085588547b049da7d6d99ac
[ "MIT" ]
null
null
null
22.824841
216
0.487233
[ [ [ "# Chapter 2: Processing data for machine learning\n\nTo simplify the code examples in these notebooks, we populate the namespace with functions from numpy and matplotlib:", "_____no_output_____" ] ], [ [ "%pylab inline", "Populating the interactive namespace from numpy and matplotlib\n" ], [ "# Import Pixie Debugger\nimport pixiedust", "Pixiedust database opened successfully\n" ] ], [ [ "### Converting categorical data to numerical features", "_____no_output_____" ] ], [ [ "cat_data = array(['male', 'female', 'male', 'male', 'female', 'male', 'female', 'female'])", "_____no_output_____" ], [ "def cat_to_num(data):\n categories = unique(data)\n features = []\n for cat in categories:\n binary = (data == cat)\n features.append(binary.astype(\"int\"))\n return features", "_____no_output_____" ], [ "%%pixie_debugger\ncat_to_num(cat_data)", "_____no_output_____" ] ], [ [ "### Simple feature engineering of the Titanic dataset", "_____no_output_____" ] ], [ [ "cabin_data = array([\"C65\", \"\", \"E36\", \"C54\", \"B57 B59 B63 B66\"])", "_____no_output_____" ], [ "def cabin_features(data):\n features = []\n for cabin in data:\n cabins = cabin.split(\" \")\n n_cabins = len(cabins)\n # First char is the cabin_char\n try:\n cabin_char = cabins[0][0]\n except IndexError:\n cabin_char = \"X\"\n n_cabins = 0\n # The rest is the cabin number\n try:\n cabin_num = int(cabins[0][1:]) \n except:\n cabin_num = -1\n # Add 3 features for each passanger\n features.append( [cabin_char, cabin_num, n_cabins] )\n return features", "_____no_output_____" ], [ "cabin_features(cabin_data)", "_____no_output_____" ] ], [ [ "### Feature normalization", "_____no_output_____" ] ], [ [ "num_data = array([1, 10, 0.5, 43, 0.12, 8])", "_____no_output_____" ], [ "def normalize_feature(data, f_min=-1, f_max=1):\n d_min, d_max = min(data), max(data)\n factor = (f_max - f_min) / (d_max - d_min)\n normalized = f_min + data*factor\n return normalized, factor\n", "_____no_output_____" ], [ "normalize_feature(num_data)", "_____no_output_____" ], [ "# Alternative: Convert to Pandas Dataframe\nimport pandas as pd\nimport numpy as np\nnum_data2 = np.array([1, 10, 0.5, 43, 0.12, 8])\ndf = pd.Series(num_data2)\nnormalized_df=(df-df.min())/(df.max()-df.min()) # normalize 0 - 1\nnormalized_df\nnormalized_df_mean=(df-df.mean())/df.std() # normalize around mean\nnormalized_df_mean", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
cb1577d84f1216ca651378d3c416b8360140d09f
114,893
ipynb
Jupyter Notebook
2_weight_label_maker.ipynb
ataraxno/weighing_dev
d4ea91645435bef2656d6ed3235888ae90d0ad59
[ "Apache-2.0" ]
null
null
null
2_weight_label_maker.ipynb
ataraxno/weighing_dev
d4ea91645435bef2656d6ed3235888ae90d0ad59
[ "Apache-2.0" ]
null
null
null
2_weight_label_maker.ipynb
ataraxno/weighing_dev
d4ea91645435bef2656d6ed3235888ae90d0ad59
[ "Apache-2.0" ]
null
null
null
213.953445
50,248
0.912536
[ [ [ "import os\n\nimport numpy as np\nnp.set_printoptions(suppress=True)\nimport pandas as pd\n\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\nfrom matplotlib.ticker import LinearLocator\nfrom matplotlib import gridspec\nfrom pandas.plotting import register_matplotlib_converters\nregister_matplotlib_converters()\n%matplotlib inline\n\nprint(\"Package is ready.\")", "Package is ready.\n" ], [ "plt.rcParams['figure.figsize'] = ((8/2.54), (6/2.54))\nplt.rcParams[\"font.family\"] = \"Arial\"\nplt.rcParams[\"mathtext.default\"] = \"rm\"\nplt.rcParams.update({'font.size': 11})\nMARKER_SIZE = 15\ncmap_m = [\"#f4a6ad\", \"#f6957e\", \"#fccfa2\", \"#8de7be\", \"#86d6f2\", \"#24a9e4\", \"#b586e0\", \"#d7f293\"]\ncmap = [\"#e94d5b\", \"#ef4d28\", \"#f9a54f\", \"#25b575\", \"#1bb1e7\", \"#1477a2\", \"#a662e5\", \"#c2f442\"]\n\nplt.rcParams['axes.spines.top'] = False\n# plt.rcParams['axes.edgecolor'] = \nplt.rcParams['axes.linewidth'] = 1\nplt.rcParams['lines.linewidth'] = 1.5\nplt.rcParams['xtick.major.width'] = 1\nplt.rcParams['xtick.minor.width'] = 1\nplt.rcParams['ytick.major.width'] = 1\nplt.rcParams['ytick.minor.width'] = 1", "_____no_output_____" ] ], [ [ "# 2020 Summer", "_____no_output_____" ], [ "## Control weight", "_____no_output_____" ] ], [ [ "SW2_df = pd.read_csv('./results/2020_S/SW2_greenhouse.csv', index_col='Unnamed: 0')\nSW2_df.index = pd.DatetimeIndex(SW2_df.index)", "_____no_output_____" ] ], [ [ "### Cultivation period", "_____no_output_____" ] ], [ [ "SW2_df = SW2_df.loc['2020-03-05 00:00:00': '2020-07-03 23:59:00']\nSW2_df = SW2_df.interpolate()", "_____no_output_____" ] ], [ [ "### Rockwool weight", "_____no_output_____" ] ], [ [ "rockwool_mean = 656.50/1000", "_____no_output_____" ] ], [ [ "### water weight", "_____no_output_____" ] ], [ [ "substrate_volume = (120*12*7.5 + 10*10*6.5*4)/1000\nwater_w_df = substrate_volume*SW2_df['subs_VWC']/100\nSW2_df['water'] = water_w_df", "_____no_output_____" ] ], [ [ "### Calculating aerial weight", "_____no_output_____" ] ], [ [ "SW2_df.loc[:, 'loadcell_1'] = SW2_df.loc[:, 'loadcell_1'] - rockwool_mean\nSW2_df.loc[:, 'loadcell_2'] = SW2_df.loc[:, 'loadcell_2'] - rockwool_mean\nSW2_df.loc[:, 'loadcell_3'] = SW2_df.loc[:, 'loadcell_3'] - rockwool_mean", "_____no_output_____" ] ], [ [ "### Destructive crop weight", "_____no_output_____" ] ], [ [ "weight_df = pd.read_csv('./results/2020_S/weight_ct.csv', index_col='Unnamed: 0')\nweight_df.index = pd.DatetimeIndex(weight_df.index)\nweight_df.index = np.append(weight_df.index[:-20], pd.DatetimeIndex(['2020-07-03']*20))\nwweight_df = weight_df[['Stem FW', 'Leaf FW', 'petiole FW', 'Idv fruit FW']].sum(axis=1)", "_____no_output_____" ] ], [ [ "### Root DW to FW", "_____no_output_____" ] ], [ [ "roots_DW_mean = 297.27\nDW_sum_df = weight_df.loc[:, [_ for _ in weight_df.columns if _.endswith('DW')]].sum(axis=1)\nrs_ratio_df = (roots_DW_mean/(DW_sum_df.loc['2020-07-03']*4)).mean()", "_____no_output_____" ], [ "roots_df = pd.DataFrame(DW_sum_df * rs_ratio_df)\nroots_df.columns = ['root DW']\nroots_df['root FW'] = roots_df['root DW']/0.1325\nroots_df.index = pd.DatetimeIndex(roots_df.index)\nwweight_wr_df = wweight_df.add(roots_df['root FW'])", "_____no_output_____" ] ], [ [ "### Excepting irrigation disturbance", "_____no_output_____" ] ], [ [ "night_df = SW2_df.loc[SW2_df['rad'] <= 0.2, 'loadcell_1':'loadcell_3']", "_____no_output_____" ], [ "fig = plt.figure(figsize=((8/2.54*2.5), (6/2.54*1.2)))\nax0 = 
plt.subplot()\n\nax0.spines['right'].set_visible(False)\nax0.spines['left'].set_position(('outward', 5))\nax0.spines['bottom'].set_position(('outward', 5))\n\nax0.plot(SW2_df.index, SW2_df['loadcell_1']/4, c=cmap[3], alpha=0.5)\nax0.plot(SW2_df.index, SW2_df['loadcell_2']/4, c=cmap[0], alpha=0.5)\nax0.plot(SW2_df.index, SW2_df['loadcell_3']/4, c=cmap[4], alpha=0.5)\n\nax0.plot(night_df.resample('1d').mean().index, ((night_df.resample('1d').mean()['loadcell_1'] - SW2_df['water'].resample('1d').mean())/4), '-o', ms=5, mec='k', mew=0.5, c=cmap[3])\nax0.plot(night_df.resample('1d').mean().index, ((night_df.resample('1d').mean()['loadcell_2'] - SW2_df['water'].resample('1d').mean())/4), '-o', ms=5, mec='k', mew=0.5, c=cmap[0])\nax0.plot(night_df.resample('1d').mean().index, ((night_df.resample('1d').mean()['loadcell_3'] - SW2_df['water'].resample('1d').mean())/4), '-o', ms=5, mec='k', mew=0.5, c=cmap[4])\n\nax0.plot(wweight_df.index, wweight_wr_df/1000, 'o', ms=5, c='k')\n\nax0.set_xbound(SW2_df.index.min(), SW2_df.index.max())\nax0.xaxis.set_major_locator(LinearLocator(10))\nax0.xaxis.set_major_formatter(mdates.DateFormatter('%m-%d'))\n\nax0.yaxis.set_major_locator(LinearLocator(6))\nax0.set_ybound(0, 5)\n\nax0.set_xlabel('Date')\nax0.set_ylabel('Weight (kg)')\n\nfig.tight_layout()\nplt.show()", "_____no_output_____" ], [ "fw_labels = pd.concat([(night_df.resample('1d').mean()['loadcell_1'] - SW2_df['water'].resample('1d').mean())/4,\n (night_df.resample('1d').mean()['loadcell_2'] - SW2_df['water'].resample('1d').mean())/4,\n (night_df.resample('1d').mean()['loadcell_3'] - SW2_df['water'].resample('1d').mean())/4], axis=1)\nfw_labels.columns = ['CT_1', 'CT_2', 'CT_3']", "_____no_output_____" ], [ "fw_labels.to_csv('./results/2020_S/ct_fw_labels.csv')", "_____no_output_____" ] ], [ [ "# 2020 Winter", "_____no_output_____" ], [ "## Control weight", "_____no_output_____" ] ], [ [ "SW2_df = pd.read_csv('./results/2020_W/SW_CT_greenhouse.csv', index_col='Unnamed: 0')\nSW2_df.index = pd.DatetimeIndex(SW2_df.index)", "_____no_output_____" ] ], [ [ "### Cultivation period", "_____no_output_____" ] ], [ [ "SW2_df = SW2_df.loc['2020-08-26 00:00:00': '2021-01-25 23:59:00']\nSW2_df = SW2_df.interpolate()", "_____no_output_____" ] ], [ [ "### Rockwool weight", "_____no_output_____" ] ], [ [ "rockwool_mean = 887.20/1000", "_____no_output_____" ] ], [ [ "### water weight", "_____no_output_____" ] ], [ [ "substrate_volume = (120*12*7.5 + 10*10*6.5*3)/1000\nwater_w_df = substrate_volume*SW2_df['subs_VWC']/100\nSW2_df['water'] = water_w_df", "_____no_output_____" ] ], [ [ "### Calculating aerial weight", "_____no_output_____" ] ], [ [ "SW2_df.loc[:, 'loadcell_1'] = SW2_df.loc[:, 'loadcell_1'] - rockwool_mean\nSW2_df.loc[:, 'loadcell_2'] = SW2_df.loc[:, 'loadcell_2'] - rockwool_mean\nSW2_df.loc[:, 'loadcell_3'] = SW2_df.loc[:, 'loadcell_3'] - rockwool_mean", "_____no_output_____" ] ], [ [ "### Destructive crop weight", "_____no_output_____" ] ], [ [ "weight_df = pd.read_csv('./results/2020_W/weight_ct.csv', index_col='Unnamed: 0')\nweight_df.index = pd.DatetimeIndex(weight_df.index)\nwweight_df = weight_df[['Stem FW', 'Leaf FW', 'petiole FW', 'Idv fruit FW']].sum(axis=1)", "_____no_output_____" ] ], [ [ "### Root DW to FW", "_____no_output_____" ] ], [ [ "roots_DW_mean = 355.37\nDW_sum_df = weight_df.loc[:, [_ for _ in weight_df.columns if _.endswith('DW')]].sum(axis=1)\nrs_ratio_df = (roots_DW_mean/(DW_sum_df.loc['2021-01-25']*3)).mean()", "_____no_output_____" ], [ "roots_df = pd.DataFrame(DW_sum_df * 
rs_ratio_df)\nroots_df.columns = ['root DW']\nroots_df['root FW'] = roots_df['root DW']/0.1325\nroots_df.index = pd.DatetimeIndex(roots_df.index)\nwweight_wr_df = wweight_df.add(roots_df['root FW'])", "_____no_output_____" ] ], [ [ "### Excepting irrigation disturbance", "_____no_output_____" ] ], [ [ "night_df = SW2_df.loc[SW2_df['rad'] <= 0.2, 'loadcell_1':'loadcell_3']", "_____no_output_____" ], [ "fig = plt.figure(figsize=((8/2.54*2.5), (6/2.54*1.2)))\nax0 = plt.subplot()\n\nax0.spines['right'].set_visible(False)\nax0.spines['left'].set_position(('outward', 5))\nax0.spines['bottom'].set_position(('outward', 5))\n\nax0.plot(SW2_df.index, SW2_df['loadcell_1']/3, c=cmap[3], alpha=0.5)\nax0.plot(SW2_df.index, SW2_df['loadcell_2']/3, c=cmap[0], alpha=0.5)\nax0.plot(SW2_df.index, SW2_df['loadcell_3']/3, c=cmap[4], alpha=0.5)\n\nax0.plot(night_df.resample('1d').mean().index, ((night_df.resample('1d').mean()['loadcell_1'] - SW2_df['water'].resample('1d').mean())/3), '-o', ms=5, mec='k', mew=0.5, c=cmap[3])\nax0.plot(night_df.resample('1d').mean().index, ((night_df.resample('1d').mean()['loadcell_2'] - SW2_df['water'].resample('1d').mean())/3), '-o', ms=5, mec='k', mew=0.5, c=cmap[0])\nax0.plot(night_df.resample('1d').mean().index, ((night_df.resample('1d').mean()['loadcell_3'] - SW2_df['water'].resample('1d').mean())/3), '-o', ms=5, mec='k', mew=0.5, c=cmap[4])\n\nax0.plot(wweight_df.index, wweight_wr_df/1000, 'o', ms=5, c='k')\n\nax0.set_xbound(SW2_df.index.min(), SW2_df.index.max())\nax0.xaxis.set_major_locator(LinearLocator(10))\nax0.xaxis.set_major_formatter(mdates.DateFormatter('%m-%d'))\n\nax0.yaxis.set_major_locator(LinearLocator(6))\nax0.set_ybound(0, 6)\n\nax0.set_xlabel('Date')\nax0.set_ylabel('Weight (kg)')\n\nfig.tight_layout()\nplt.show()", "_____no_output_____" ], [ "fw_labels = pd.concat([(night_df.resample('1d').mean()['loadcell_1'] - SW2_df['water'].resample('1d').mean())/3,\n (night_df.resample('1d').mean()['loadcell_2'] - SW2_df['water'].resample('1d').mean())/3,\n (night_df.resample('1d').mean()['loadcell_3'] - SW2_df['water'].resample('1d').mean())/3], axis=1)\nfw_labels.columns = ['CT_1', 'CT_2', 'CT_3']", "_____no_output_____" ], [ "fw_labels.to_csv('./results/2020_W/ct_fw_labels.csv')", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
cb157bb0306863085ded1967573f157bfeed1130
944
ipynb
Jupyter Notebook
kubeflow/notebooks/05_Airflow_ML_Pipelines.ipynb
ThomasHenckel/pipeline
6302abd631bc8461cc92c48c135f285ab866a9f8
[ "Apache-2.0" ]
null
null
null
kubeflow/notebooks/05_Airflow_ML_Pipelines.ipynb
ThomasHenckel/pipeline
6302abd631bc8461cc92c48c135f285ab866a9f8
[ "Apache-2.0" ]
null
null
null
kubeflow/notebooks/05_Airflow_ML_Pipelines.ipynb
ThomasHenckel/pipeline
6302abd631bc8461cc92c48c135f285ab866a9f8
[ "Apache-2.0" ]
1
2019-06-30T09:56:38.000Z
2019-06-30T09:56:38.000Z
19.265306
121
0.54661
[ [ [ "# Run Airflow `taxi` Pipeline", "_____no_output_____" ], [ "# Make sure you unpause the DAG before you start the run!!!\n\n![Unpause Airflow DAG](https://raw.githubusercontent.com/PipelineAI/site/master/assets/img/airflow-dag-unpause.png)", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown" ] ]
cb158720b4d57907286087288a5e258e10f59761
2,165
ipynb
Jupyter Notebook
00_core.ipynb
kenshih/nbdev_ken
69f14304023d94888fa464b3e105d3d2d96080ef
[ "Apache-2.0" ]
null
null
null
00_core.ipynb
kenshih/nbdev_ken
69f14304023d94888fa464b3e105d3d2d96080ef
[ "Apache-2.0" ]
null
null
null
00_core.ipynb
kenshih/nbdev_ken
69f14304023d94888fa464b3e105d3d2d96080ef
[ "Apache-2.0" ]
null
null
null
17.18254
86
0.464203
[ [ [ "# default_exp core", "_____no_output_____" ] ], [ [ "# API detils", "_____no_output_____" ] ], [ [ "#hide\nfrom nbdev.showdoc import *", "_____no_output_____" ], [ "#export\ndef say_hello(to):\n \"Say hello to somebody\"\n return f'Hello {to}!'", "_____no_output_____" ], [ "say_hello(\"george\")", "_____no_output_____" ] ], [ [ "## assertion", "_____no_output_____" ] ], [ [ "assert say_hello(\"Jeremy\")==\"Hello Jeremy!\"", "_____no_output_____" ] ], [ [ "# Testing out SVG", "_____no_output_____" ] ], [ [ "from IPython.display import display,SVG", "_____no_output_____" ], [ "display(SVG('<svg height=\"100\"><circle cx=\"50\" cy=\"50\" r=\"40\"/></svg>'))", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
cb15934669434de25490e9f2240fe6325f16da2f
193,046
ipynb
Jupyter Notebook
3rdparty/fast_retraining/experiments/XGBoostvsLightGBM.ipynb
JohnZed/gbm-bench
04f052febb95436762c67c59eaa33d6cd3ebcdbc
[ "BSD-3-Clause" ]
56
2017-05-23T10:54:24.000Z
2021-08-01T08:14:41.000Z
3rdparty/fast_retraining/experiments/XGBoostvsLightGBM.ipynb
trivialfis/gbm-bench
24665667369d6b04d1c81ef444b508e5f0dac5a2
[ "BSD-3-Clause" ]
29
2017-05-23T15:52:39.000Z
2019-09-19T05:39:42.000Z
3rdparty/fast_retraining/experiments/XGBoostvsLightGBM.ipynb
trivialfis/gbm-bench
24665667369d6b04d1c81ef444b508e5f0dac5a2
[ "BSD-3-Clause" ]
23
2019-08-01T15:49:53.000Z
2022-02-17T07:01:04.000Z
117.710976
24,874
0.586943
[ [ [ "# XGBoost vs LightGBM\n\nIn this notebook we collect the results from all the experiments and reports the comparative difference between XGBoost and LightGBM", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nimport nbformat\nimport json\nfrom toolz import pipe, juxt\nimport pandas as pd\nimport seaborn\nfrom toolz import curry\n\nfrom bokeh.io import show, output_notebook\nfrom bokeh.charts import Bar\nfrom bokeh.models.renderers import GlyphRenderer\nfrom bokeh.models.glyphs import Rect\nfrom bokeh.models import Range1d\nfrom toolz import curry\nfrom bokeh.io import export_svgs\nfrom IPython.display import SVG, display\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n%matplotlib inline ", "/anaconda/envs/strata/lib/python3.5/site-packages/bokeh/util/deprecation.py:34: BokehDeprecationWarning: \nThe bokeh.charts API has moved to a separate 'bkcharts' package.\n\nThis compatibility shim will remain until Bokeh 1.0 is released.\nAfter that, if you want to use this API you will have to install\nthe bkcharts package explicitly.\n\n warn(message)\n" ], [ "output_notebook()", "_____no_output_____" ] ], [ [ "We are going to read the results from the following notebooks", "_____no_output_____" ] ], [ [ "notebooks = {\n 'Airline':'01_airline.ipynb',\n 'Airline_GPU': '01_airline_GPU.ipynb',\n 'BCI': '02_BCI.ipynb',\n 'BCI_GPU': '02_BCI_GPU.ipynb',\n 'Football': '03_football.ipynb',\n 'Football_GPU': '03_football_GPU.ipynb',\n 'Planet': '04_PlanetKaggle.ipynb',\n 'Plannet_GPU': '04_PlanetKaggle_GPU.ipynb',\n 'Fraud': '05_FraudDetection.ipynb',\n 'Fraud_GPU': '05_FraudDetection_GPU.ipynb',\n 'HIGGS': '06_HIGGS.ipynb',\n 'HIGGS_GPU': '06_HIGGS_GPU.ipynb'\n}", "_____no_output_____" ], [ "def read_notebook(notebook_name):\n with open(notebook_name) as f:\n return nbformat.read(f, as_version=4)", "_____no_output_____" ], [ "def results_cell_from(nb):\n for cell in nb.cells:\n if cell['cell_type']=='code' and cell['source'].startswith('# Results'):\n return cell", "_____no_output_____" ], [ "def extract_text(cell):\n return cell['outputs'][0]['text']", "_____no_output_____" ], [ "@curry\ndef remove_line_with(match_str, json_string):\n return '\\n'.join(filter(lambda x: match_str not in x, json_string.split('\\n')))", "_____no_output_____" ], [ "def process_nb(notebook_name):\n return pipe(notebook_name,\n read_notebook,\n results_cell_from,\n extract_text,\n remove_line_with('total RAM usage'),\n json.loads)", "_____no_output_____" ] ], [ [ "Here we collect the results from all the exeperiment notebooks. The method simply searches the notebooks for a cell that starts with # Results. 
It then reads that cell's output as JSON.", "_____no_output_____" ] ], [ [ "results = {nb_key:process_nb(nb_name) for nb_key, nb_name in notebooks.items()}", "_____no_output_____" ], [ "results", "_____no_output_____" ], [ "datasets = [k for k in results.keys()]\nprint(datasets)\nalgos = [a for a in results[datasets[0]].keys()]\nprint(algos)", "['Football_GPU', 'Airline_GPU', 'Planet', 'Airline', 'HIGGS_GPU', 'HIGGS', 'BCI_GPU', 'Plannet_GPU', 'Football', 'Fraud', 'Fraud_GPU', 'BCI']\n['xgb', 'lgbm', 'xgb_hist']\n" ] ], [ [ "We wish to compare LightGBM and XGBoost both in terms of performance as well as how long they took to train.", "_____no_output_____" ] ], [ [ "def average_performance_diff(dataset):\n    lgbm_series = pd.Series(dataset['lgbm']['performance'])\n    try:\n        perf = 100*((lgbm_series-pd.Series(dataset['xgb']['performance']))/lgbm_series).mean()\n    except KeyError:\n        perf = None\n    return perf", "_____no_output_____" ], [ "def train_time_ratio(dataset):\n    try: \n        val = dataset['xgb']['train_time']/dataset['lgbm']['train_time']\n    except KeyError:\n        val = None\n    return val\n\ndef train_time_ratio_hist(dataset):\n    try: \n        val = dataset['xgb_hist']['train_time']/dataset['lgbm']['train_time']\n    except KeyError:\n        val = None\n    return val\n\ndef test_time_ratio(dataset):\n    try: \n        val = dataset['xgb']['test_time']/dataset['lgbm']['test_time']\n    except KeyError:\n        val = None\n    return val", "_____no_output_____" ], [ "metrics = juxt(average_performance_diff, train_time_ratio, train_time_ratio_hist, test_time_ratio)\nres_per_dataset = {dataset_key:metrics(dataset) for dataset_key, dataset in results.items()}", "_____no_output_____" ], [ "results_df = pd.DataFrame(res_per_dataset, index=['Perf. Difference(%)', \n                                                  'Train Time Ratio',\n                                                  'Train Time Ratio Hist',\n                                                  'Test Time Ratio']).T", "_____no_output_____" ], [ "results_df", "_____no_output_____" ], [ "results_gpu = results_df.ix[[idx for idx in results_df.index if idx.endswith('GPU')]]\nresults_cpu = results_df.ix[~results_df.index.isin(results_gpu.index)]", "_____no_output_____" ] ], [ [ "Plot of train time ratio for CPU experiments.", "_____no_output_____" ] ], [ [ "data = {\n    'Ratio': results_cpu['Train Time Ratio'].values.tolist() + results_cpu['Train Time Ratio Hist'].values.tolist(),\n    'label': results_cpu.index.values.tolist()*2,\n    'group': ['xgb/lgb']*len(results_cpu.index.values) + ['xgb_hist/lgb']*len(results_cpu.index.values)\n}", "_____no_output_____" ], [ "bar = Bar(data, values='Ratio', agg='mean', label='label', group='group', \n          plot_width=600, 
plot_height=400, bar_width=0.5, color=['#ff8533','#ffd1b3'], legend='top_right')\nbar.axis[0].axis_label=''\nbar.y_range = Range1d(0, 30)\nbar.axis[1].axis_label='Train Time Ratio (XGBoost/LightGBM)'\nbar.axis[1].axis_label_text_font_size='12pt'\nbar.toolbar_location='above'\nbar.legend[0].visible=True\nshow(bar)", "_____no_output_____" ], [ "bar.output_backend = \"svg\"\nexport_svgs(bar, filename=\"xgb_vs_lgbm_train_time_gpu.svg\")\ndisplay(SVG('xgb_vs_lgbm_train_time_gpu.svg'))", "WARNING:bokeh.io:The webdriver raised a TimeoutException while waiting for a 'bokeh:idle' event to signify that the layout has rendered. Something may have gone wrong.\n" ], [ "data = {\n 'Perf. Difference(%)': results_df['Perf. Difference(%)'].values,\n 'label': results_df.index.values\n}", "_____no_output_____" ], [ "bar = Bar(data, values='Perf. Difference(%)', agg='mean', label=['label'], \n plot_width=600, plot_height=400, bar_width=0.7, color='#5975a4')\nbar.axis[0].axis_label=''\nbar.axis[1].axis_label='Perf. Difference(%)'\nbar.toolbar_location='above'\nbar.legend[0].visible=False\nshow(bar)", "_____no_output_____" ], [ "bar.output_backend = \"svg\"\nexport_svgs(bar, filename=\"xgb_vs_lgbm_performance.svg\")\ndisplay(SVG('xgb_vs_lgbm_performance.svg'))", "WARNING:bokeh.io:The webdriver raised a TimeoutException while waiting for a 'bokeh:idle' event to signify that the layout has rendered. Something may have gone wrong.\n" ] ], [ [ "For the speed results we can see that LightGBM is on average 5 times faster than the CPU and GPU versions of XGBoost and XGBoost histogram. In regards to the performance, we can see that LightGBM is sometimes better and sometimes worse. \n\nAnalyzing the results of XGBoost in CPU we can see that XGBoost histogram is faster than XGBoost in the Airline, Fraud and HIGGS datasets, but much slower in Planet and BCI dataset. In these two cases there is a memory overhead due to the high number of features. In the case of football dataset, the histogram implementation is slightly slower, we believe that there could be a slight principle of memory overhead.\n\nFinally, if we look at the results of XGBoost in GPU we see that there are several values missing. This is due to an out of memory of the standard version. In our experiments we observed that XGBoost's memory consumption is around 10 times higher than LightGBM and 5 times higher than XGBoost histogram. We see that the histogram version is faster except in the BCI dataset, where there could be a memory overhead like in the CPU version. ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
cb15b2037c1839912774590ccf736643114b80ac
3,216
ipynb
Jupyter Notebook
13enero.ipynb
pandemicbat801/daa_2021_1
4b912d0ca5631882f8137583dfbc25280ec8c574
[ "MIT" ]
null
null
null
13enero.ipynb
pandemicbat801/daa_2021_1
4b912d0ca5631882f8137583dfbc25280ec8c574
[ "MIT" ]
null
null
null
13enero.ipynb
pandemicbat801/daa_2021_1
4b912d0ca5631882f8137583dfbc25280ec8c574
[ "MIT" ]
null
null
null
30.923077
231
0.494714
[ [ [ "<a href=\"https://colab.research.google.com/github/pandemicbat801/daa_2021_1/blob/master/13enero.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "class NodoArbol:\r\n def __init__(self, value, left=None,right=None):\r\n self.data=value\r\n self.left=left\r\n self.right=right", "_____no_output_____" ] ], [ [ "#Arbol Binario de búsqueda\r\nLos nodos a la izq son menores a la raiz y los nodods a la derecha son mayores a la raiz.\r\nPueden ser recorridos en: pre-orden, in-orden y postt-orden.", "_____no_output_____" ] ], [ [ "class BinarySearchTree:\r\n def __init__ (self):\r\n self.__root__=None\r\n\r\n def insert (self, value):\r\n if self.__root__ == None:\r\n self.__root__ = NodoArbol(value,None,None)\r\n else:\r\n #preguntar si value es menor que root, de ser el caso \r\n #insertar a la izquierda. PERO ... Puede ser el caso que el \r\n #subarbol izq.. tenga muchos elementos.0\r\n self.__insert_nodo__(self.__root__,value)\r\n\r\n def __insert_nodo__(self,nodo,value):\r\n if nodo.data == value:\r\n pass\r\n elif value < nodo.data: #TRUE va a la izq\r\n if nodo.left == None:#Si hay espacio a la izq ahi va\r\n nodo.left=NodoArbol(value,None,None)#incertamos el nodo\r\n else:\r\n self.__insert_nodo__(nodo.left,value)\r\n else:\r\n if nodo.right== None:#Si hay espacio a la izq ahi va\r\n nodo.right=NodoArbol(value,None,None)#incertamos el nodo\r\n else:\r\n self.__insert_nodo__(nodo.right,value)\r\n", "_____no_output_____" ], [ "bst=BinarySearchTree()\r\nbst.insert(50)\r\nbst.insert(30)\r\nbst.insert(20)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
cb15bda71cc959d22625a6e46a39dd03c96205ef
37,318
ipynb
Jupyter Notebook
tutorials/W3D3_NetworkCausality/W3D3_Tutorial3.ipynb
mmyros/course-content
6b3751fa2aea1c101a9213c1fce2d0832231fa76
[ "CC-BY-4.0" ]
2
2021-05-12T02:19:05.000Z
2021-05-12T13:49:29.000Z
tutorials/W3D3_NetworkCausality/W3D3_Tutorial3.ipynb
pattanaikay/course-content
b9c79974109a279121e6875cdcd2e69f39aeb2fb
[ "CC-BY-4.0", "BSD-3-Clause" ]
1
2020-08-26T10:44:11.000Z
2020-08-26T10:44:11.000Z
tutorials/W3D3_NetworkCausality/W3D3_Tutorial3.ipynb
pattanaikay/course-content
b9c79974109a279121e6875cdcd2e69f39aeb2fb
[ "CC-BY-4.0", "BSD-3-Clause" ]
1
2021-05-02T10:03:07.000Z
2021-05-02T10:03:07.000Z
37.206381
441
0.568921
[ [ [ "<a href=\"https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D3_NetworkCausality/W3D3_Tutorial3.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Neuromatch Academy 2020 -- Week 3 Day 3 Tutorial 3\n# Causality Day - Simultaneous fitting/regression\n\n**Content creators**: Ari Benjamin, Tony Liu, Konrad Kording\n\n**Content reviewers**: Mike X Cohen, Madineh Sarvestani, Ella Batty, Michael Waskom", "_____no_output_____" ], [ "---\n# Tutorial objectives\n\nThis is tutorial 3 on our day of examining causality. Below is the high level outline of what we'll cover today, with the sections we will focus on in this notebook in bold:\n\n1. Master definitions of causality\n2. Understand that estimating causality is possible\n3. Learn 4 different methods and understand when they fail\n 1. perturbations\n 2. correlations\n 3. **simultaneous fitting/regression**\n 4. instrumental variables\n\n### Notebook 3 objectives\n\nIn tutorial 2 we explored correlation as an approximation for causation and learned that correlation $\\neq$ causation for larger networks. However, computing correlations is a rather simple approach, and you may be wondering: will more sophisticated techniques allow us to better estimate causality? Can't we control for things? \n\nHere we'll use some common advanced (but controversial) methods that estimate causality from observational data. These methods rely on fitting a function to our data directly, instead of trying to use perturbations or correlations. Since we have the full closed-form equation of our system, we can try these methods and see how well they work in estimating causal connectivity when there are no perturbations. 
Specifically, we will:\n\n- Learn about more advanced (but also controversial) techniques for estimating causality\n    - conditional probabilities (**regression**)\n- Explore limitations and failure modes\n    - understand the problem of **omitted variable bias**\n", "_____no_output_____" ] ], [ [ "---\n# Setup", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\n\nfrom sklearn.multioutput import MultiOutputRegressor\nfrom sklearn.linear_model import Lasso", "_____no_output_____" ], [ "#@title Figure settings\nimport ipywidgets as widgets  # interactive display\n%config InlineBackend.figure_format = 'retina'\nplt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle\")", "_____no_output_____" ], [ "# @title Helper functions\n\n\ndef sigmoid(x):\n  \"\"\"\n  Compute sigmoid nonlinearity element-wise on x.\n\n  Args:\n    x (np.ndarray): the numpy data array we want to transform\n  Returns\n    (np.ndarray): x with sigmoid nonlinearity applied\n  \"\"\"\n  return 1 / (1 + np.exp(-x))\n\n\ndef logit(x):\n  \"\"\"\n\n  Applies the logit (inverse sigmoid) transformation\n\n  Args:\n    x (np.ndarray): the numpy data array we want to transform\n  Returns\n    (np.ndarray): x with logit nonlinearity applied\n  \"\"\"\n  return np.log(x/(1-x))\n\n\ndef create_connectivity(n_neurons, random_state=42, p=0.9):\n  \"\"\"\n  Generate our nxn causal connectivity matrix.\n\n  Args:\n    n_neurons (int): the number of neurons in our system.\n    random_state (int): random seed for reproducibility\n\n  Returns:\n    A (np.ndarray): our 0.1 sparse connectivity matrix\n  \"\"\"\n  np.random.seed(random_state)\n  A_0 = np.random.choice([0, 1], size=(n_neurons, n_neurons), p=[p, 1 - p])\n\n  # set the timescale of the dynamical system to about 100 steps\n  _, s_vals, _ = np.linalg.svd(A_0)\n  A = A_0 / (1.01 * s_vals[0])\n\n  # _, s_val_test, _ = np.linalg.svd(A)\n  # assert s_val_test[0] < 1, \"largest singular value >= 1\"\n\n  return A\n\n\ndef get_regression_estimate_full_connectivity(X):\n  \"\"\"\n  Estimates the connectivity matrix using lasso regression.\n\n  Args:\n    X (np.ndarray): our simulated system of shape (n_neurons, timesteps)\n  Returns:\n    V (np.ndarray): estimated connectivity matrix of shape (n_neurons, n_neurons).\n  \"\"\"\n  n_neurons = X.shape[0]\n\n  # Extract Y and W as defined above\n  W = X[:, :-1].transpose()\n  Y = X[:, 1:].transpose()\n\n  # apply inverse sigmoid transformation\n  Y = logit(Y)\n\n  # fit multioutput regression\n  reg = MultiOutputRegressor(Lasso(fit_intercept=False,\n                                   alpha=0.01, max_iter=250), n_jobs=-1)\n  reg.fit(W, Y)\n\n  V = np.zeros((n_neurons, n_neurons))\n  for i, estimator in enumerate(reg.estimators_):\n    V[i, :] = estimator.coef_\n\n  return V\n\n\ndef get_regression_corr_full_connectivity(n_neurons, A, X, observed_ratio, regression_args):\n  \"\"\"\n  A wrapper function for our correlation calculations between A and the V estimated\n  from regression.\n\n  Args:\n    n_neurons (int): number of neurons\n    A (np.ndarray): connectivity matrix\n    X (np.ndarray): dynamical system\n    observed_ratio (float): the proportion of n_neurons observed, must be between 0 and 1.\n    regression_args (dict): dictionary of lasso regression arguments and hyperparameters\n\n  Returns:\n    A single float correlation value representing the similarity between A and R\n  \"\"\"\n  assert (observed_ratio > 0) and (observed_ratio <= 1)\n\n  sel_idx = 
np.clip(int(n_neurons*observed_ratio), 1, n_neurons)\n\n sel_X = X[:sel_idx, :]\n sel_A = A[:sel_idx, :sel_idx]\n\n sel_V = get_regression_estimate_full_connectivity(sel_X)\n return np.corrcoef(sel_A.flatten(), sel_V.flatten())[1,0], sel_V\n\n\ndef see_neurons(A, ax, ratio_observed=1, arrows=True):\n \"\"\"\n Visualizes the connectivity matrix.\n\n Args:\n A (np.ndarray): the connectivity matrix of shape (n_neurons, n_neurons)\n ax (plt.axis): the matplotlib axis to display on\n\n Returns:\n Nothing, but visualizes A.\n \"\"\"\n n = len(A)\n\n ax.set_aspect('equal')\n thetas = np.linspace(0, np.pi * 2, n, endpoint=False)\n x, y = np.cos(thetas), np.sin(thetas),\n if arrows:\n for i in range(n):\n for j in range(n):\n if A[i, j] > 0:\n ax.arrow(x[i], y[i], x[j] - x[i], y[j] - y[i], color='k', head_width=.05,\n width = A[i, j] / 25,shape='right', length_includes_head=True,\n alpha = .2)\n if ratio_observed < 1:\n nn = int(n * ratio_observed)\n ax.scatter(x[:nn], y[:nn], c='r', s=150, label='Observed')\n ax.scatter(x[nn:], y[nn:], c='b', s=150, label='Unobserved')\n ax.legend(fontsize=15)\n else:\n ax.scatter(x, y, c='k', s=150)\n ax.axis('off')\n\n\ndef simulate_neurons(A, timesteps, random_state=42):\n \"\"\"\n Simulates a dynamical system for the specified number of neurons and timesteps.\n\n Args:\n A (np.array): the connectivity matrix\n timesteps (int): the number of timesteps to simulate our system.\n random_state (int): random seed for reproducibility\n\n Returns:\n - X has shape (n_neurons, timesteps).\n \"\"\"\n np.random.seed(random_state)\n\n\n n_neurons = len(A)\n X = np.zeros((n_neurons, timesteps))\n\n for t in range(timesteps - 1):\n # solution\n epsilon = np.random.multivariate_normal(np.zeros(n_neurons), np.eye(n_neurons))\n X[:, t + 1] = sigmoid(A.dot(X[:, t]) + epsilon)\n\n assert epsilon.shape == (n_neurons,)\n return X\n\n\ndef correlation_for_all_neurons(X):\n \"\"\"Computes the connectivity matrix for all neurons using correlations\n\n Args:\n X: the matrix of activities\n\n Returns:\n estimated_connectivity (np.ndarray): estimated connectivity for the selected neuron, of shape (n_neurons,)\n \"\"\"\n n_neurons = len(X)\n S = np.concatenate([X[:, 1:], X[:, :-1]], axis=0)\n R = np.corrcoef(S)[:n_neurons, n_neurons:]\n return R\n\n\ndef get_sys_corr(n_neurons, timesteps, random_state=42, neuron_idx=None):\n \"\"\"\n A wrapper function for our correlation calculations between A and R.\n\n Args:\n n_neurons (int): the number of neurons in our system.\n timesteps (int): the number of timesteps to simulate our system.\n random_state (int): seed for reproducibility\n neuron_idx (int): optionally provide a neuron idx to slice out\n\n Returns:\n A single float correlation value representing the similarity between A and R\n \"\"\"\n\n A = create_connectivity(n_neurons, random_state)\n X = simulate_neurons(A, timesteps)\n\n R = correlation_for_all_neurons(X)\n\n return np.corrcoef(A.flatten(), R.flatten())[0, 1]\n\n\ndef get_regression_corr(n_neurons, A, X, observed_ratio, regression_args, neuron_idx=None):\n \"\"\"\n\n A wrapper function for our correlation calculations between A and the V estimated\n from regression.\n\n Args:\n n_neurons (int): the number of neurons in our system.\n A (np.array): the true connectivity\n X (np.array): the simulated system\n observed_ratio (float): the proportion of n_neurons observed, must be between 0 and 1.\n regression_args (dict): dictionary of lasso regression arguments and hyperparameters\n neuron_idx (int): optionally provide a neuron 
idx to compute connectivity for\n\n Returns:\n A single float correlation value representing the similarity between A and R\n \"\"\"\n assert (observed_ratio > 0) and (observed_ratio <= 1)\n\n sel_idx = np.clip(int(n_neurons * observed_ratio), 1, n_neurons)\n selected_X = X[:sel_idx, :]\n selected_connectivity = A[:sel_idx, :sel_idx]\n\n estimated_selected_connectivity = get_regression_estimate(selected_X, neuron_idx=neuron_idx)\n if neuron_idx is None:\n return np.corrcoef(selected_connectivity.flatten(),\n estimated_selected_connectivity.flatten())[1, 0], estimated_selected_connectivity\n else:\n return np.corrcoef(selected_connectivity[neuron_idx, :],\n estimated_selected_connectivity)[1, 0], estimated_selected_connectivity\n\n\ndef plot_connectivity_matrix(A, ax=None):\n \"\"\"Plot the (weighted) connectivity matrix A as a heatmap\n\n Args:\n A (ndarray): connectivity matrix (n_neurons by n_neurons)\n ax: axis on which to display connectivity matrix\n \"\"\"\n if ax is None:\n ax = plt.gca()\n lim = np.abs(A).max()\n ax.imshow(A, vmin=-lim, vmax=lim, cmap=\"coolwarm\")", "_____no_output_____" ] ], [ [ "---\n# Section 1: Regression", "_____no_output_____" ] ], [ [ "#@title Video 1: Regression approach\n# Insert the ID of the corresponding youtube video\nfrom IPython.display import YouTubeVideo\nvideo = YouTubeVideo(id=\"Av4LaXZdgDo\", width=854, height=480, fs=1)\nprint(\"Video available at https://youtu.be/\" + video.id)\nvideo", "_____no_output_____" ] ], [ [ "You may be familiar with the idea that correlation only implies causation when there are no hidden *confounders*. This aligns with our intuition that correlation only implies causality when no alternative variables could explain away a correlation.\n\n**A confounding example**:\nSuppose you observe that people who sleep more do better in school. It's a nice correlation. But what else could explain it? Maybe people who sleep more are richer, don't work a second job, and have time to actually do homework. If you want to ask if sleep *causes* better grades, and want to answer that with correlations, you have to control for all possible confounds.\n\nA confound is any variable that affects both the outcome and your original covariate. In our example, confounds are things that affect both sleep and grades. \n\n**Controlling for a confound**: \nConfounds can be controlled for by adding them as covariates in a regression. But for your coefficients to be causal effects, you need three things:\n \n1. **All** confounds are included as covariates\n2. Your regression assumes the same mathematical form of how covariates relate to outcomes (linear, GLM, etc.)\n3. No covariates are caused *by* both the treatment (original variable) and the outcome. These are [colliders](https://en.wikipedia.org/wiki/Collider_(statistics)); we won't introduce them today (but Google it on your own time! Colliders are very counterintuitive.)\n\nIn the real world it is very hard to guarantee these conditions are met. In the brain it's even harder (as we can't measure all neurons). 
Luckily today we simulated the system ourselves.", "_____no_output_____" ] ], [ [ "#@title Video 2: Fitting a GLM\n# Insert the ID of the corresponding youtube video\nfrom IPython.display import YouTubeVideo\nvideo = YouTubeVideo(id=\"GvMj9hRv5Ak\", width=854, height=480, fs=1)\nprint(\"Video available at https://youtu.be/\" + video.id)\nvideo", "_____no_output_____" ] ], [ [ "## Section 1.1: Recovering connectivity by model fitting\n\nRecall that in our system each neuron affects every other via:\n\n$$\n\\vec{x}_{t+1} = \\sigma(A\\vec{x}_t + \\epsilon_t), \n$$\n\nwhere $\\sigma$ is our sigmoid nonlinearity from before: $\\sigma(x) = \\frac{1}{1 + e^{-x}}$\n\nOur system is a closed system, too, so there are no omitted variables. The regression coefficients should be the causal effect. Are they?", "_____no_output_____", "We will use a regression approach to estimate the causal influence of all neurons on neuron #1. Specifically, we will use linear regression to determine the $A$ in:\n\n$$\n\\sigma^{-1}(\\vec{x}_{t+1}) = A\\vec{x}_t + \\epsilon_t ,\n$$\n\nwhere $\\sigma^{-1}$ is the inverse sigmoid transformation, also sometimes referred to as the **logit** transformation: $\\sigma^{-1}(x) = \\log(\\frac{x}{1-x})$.\n\nLet $W$ be the $\\vec{x}_t$ values, up to the second-to-last timestep $T-1$:\n\n$$\nW = \n\\begin{bmatrix}\n\\mid & \\mid & ... & \\mid \\\\ \n\\vec{x}_0 & \\vec{x}_1 & ... & \\vec{x}_{T-1} \\\\ \n\\mid & \\mid & ... & \\mid\n\\end{bmatrix}_{n \\times (T-1)}\n$$\n\nLet $Y$ be the $\\vec{x}_{t+1}$ values for a selected neuron, indexed by $i$, starting from the second timestep up to the last timestep $T$:\n\n$$\nY = \n\\begin{bmatrix}\nx_{i,1} & x_{i,2} & ... & x_{i, T} \\\\ \n\\end{bmatrix}_{1 \\times (T-1)}\n$$\n\nYou will then fit the following model:\n\n$$\n\\sigma^{-1}(Y^T) = W^TV\n$$\n\nwhere $V$ is the $n \\times 1$ coefficient matrix of this regression, which will be the estimated connectivity matrix between the selected neuron and the rest of the neurons.\n\n**Review**: As you learned Friday of Week 1, *lasso* a.k.a. **$L_1$ regularization** causes the coefficients to be sparse, containing mostly zeros. Think about why we want this here.", "_____no_output_____", "## Exercise 1: Use linear regression plus lasso to estimate causal connectivities\n\nYou will now create a function to fit the above regression model and estimate V. We will then call this function to examine how close the regression vs the correlation is to true causality.\n\n**Code**:\n\nYou'll notice that we've transposed both $Y$ and $W$ here and in the code we've already provided below. Why is that? \n\nThis is because the machine learning models provided in scikit-learn expect the *rows* of the input data to be the observations, while the *columns* are the variables. We have that inverted in our definitions of $Y$ and $W$, with the timesteps of our system (the observations) as the columns. 
So we transpose both matrices to make the matrix orientation correct for scikit-learn.\n\n\n- Because of the abstraction provided by scikit-learn, fitting this regression will just be a call to initialize the `Lasso()` estimator and a call to the `fit()` function\n- Use the following hyperparameters for the `Lasso` estimator:\n - `alpha = 0.01`\n - `fit_intercept = False`\n- How do we obtain $V$ from the fitted model?\n", "_____no_output_____" ] ], [ [ "def get_regression_estimate(X, neuron_idx):\n \"\"\"\n Estimates the connectivity matrix using lasso regression.\n\n Args:\n X (np.ndarray): our simulated system of shape (n_neurons, timesteps)\n neuron_idx (int): a neuron index to compute connectivity for\n\n Returns:\n V (np.ndarray): estimated connectivity matrix of shape (n_neurons, n_neurons).\n if neuron_idx is specified, V is of shape (n_neurons,).\n \"\"\"\n # Extract Y and W as defined above\n W = X[:, :-1].transpose()\n Y = X[[neuron_idx], 1:].transpose()\n\n # Apply inverse sigmoid transformation\n Y = logit(Y)\n\n ############################################################################\n ## TODO: Insert your code here to fit a regressor with Lasso. Lasso captures\n ## our assumption that most connections are precisely 0.\n ## Fill in function and remove\n raise NotImplementedError(\"Please complete the regression exercise\")\n ############################################################################\n\n # Initialize regression model with no intercept and alpha=0.01\n regression = ...\n\n # Fit regression to the data\n regression.fit(...)\n\n V = regression.coef_\n\n return V\n\n# Parameters\nn_neurons = 50 # the size of our system\ntimesteps = 10000 # the number of timesteps to take\nrandom_state = 42\nneuron_idx = 1\n\nA = create_connectivity(n_neurons, random_state)\nX = simulate_neurons(A, timesteps)\n\n\n# Uncomment below to test your function\n# V = get_regression_estimate(X, neuron_idx)\n\n#print(\"Regression: correlation of estimated connectivity with true connectivity: {:.3f}\".format(np.corrcoef(A[neuron_idx, :], V)[1, 0]))\n\n#print(\"Lagged correlation of estimated connectivity with true connectivity: {:.3f}\".format(get_sys_corr(n_neurons, timesteps, random_state, neuron_idx=neuron_idx)))", "_____no_output_____" ], [ "# to_remove solution\ndef get_regression_estimate(X, neuron_idx):\n \"\"\"\n Estimates the connectivity matrix using lasso regression.\n\n Args:\n X (np.ndarray): our simulated system of shape (n_neurons, timesteps)\n neuron_idx (int): a neuron index to compute connectivity for\n\n Returns:\n V (np.ndarray): estimated connectivity matrix of shape (n_neurons, n_neurons).\n if neuron_idx is specified, V is of shape (n_neurons,).\n \"\"\"\n # Extract Y and W as defined above\n W = X[:, :-1].transpose()\n Y = X[[neuron_idx], 1:].transpose()\n\n # Apply inverse sigmoid transformation\n Y = logit(Y)\n\n # Initialize regression model with no intercept and alpha=0.01\n regression = Lasso(fit_intercept=False, alpha=0.01)\n\n # Fit regression to the data\n regression.fit(W, Y)\n\n V = regression.coef_\n\n return V\n\n# Parameters\nn_neurons = 50 # the size of our system\ntimesteps = 10000 # the number of timesteps to take\nrandom_state = 42\nneuron_idx = 1\n\nA = create_connectivity(n_neurons, random_state)\nX = simulate_neurons(A, timesteps)\n\n\n# Uncomment below to test your function\nV = get_regression_estimate(X, neuron_idx)\n\nprint(\"Regression: correlation of estimated connectivity with true connectivity: {:.3f}\".format(np.corrcoef(A[neuron_idx, 
:], V)[1, 0]))\n\nprint(\"Lagged correlation of estimated connectivity with true connectivity: {:.3f}\".format(get_sys_corr(n_neurons, timesteps, random_state, neuron_idx=neuron_idx)))", "_____no_output_____" ] ], [ [ "You should find that using regression, our estimated connectivity matrix has a correlation of 0.865 with the true connectivity matrix. With correlation, our estimated connectivity matrix has a correlation of 0.703 with the true connectivity matrix.\n\nWe can see from these numbers that multiple regression is better than simple correlation for estimating connectivity.", "_____no_output_____", "---\n# Section 2: Omitted Variable Bias\n\nIf we are unable to observe the entire system, **omitted variable bias** becomes a problem. If we don't have access to all the neurons, and therefore can't control for them, can we still estimate the causal effect accurately?\n\n", "_____no_output_____", "## Section 2.1: Visualizing subsets of the connectivity matrix\n\nWe first visualize different subsets of the connectivity matrix when we observe 75% of the neurons vs 25%.\n\nRecall the meaning of entries in our connectivity matrix: $A[i,j] = 1$ means a connectivity **from** neuron $i$ **to** neuron $j$ with strength $1$.\n", "_____no_output_____" ] ], [ [ "#@markdown Execute this cell to visualize subsets of connectivity matrix\n\n# Run this cell to visualize the subsets of variables we observe\nn_neurons = 25\nA = create_connectivity(n_neurons)\n\nfig, axs = plt.subplots(2, 2, figsize=(10, 10))\nratio_observed = [0.75, 0.25] # the proportion of neurons observed in our system\n\nfor i, ratio in enumerate(ratio_observed):\n sel_idx = int(n_neurons * ratio)\n\n offset = np.zeros((n_neurons, n_neurons))\n axs[i,1].title.set_text(\"{}% neurons observed\".format(int(ratio * 100)))\n offset[:sel_idx, :sel_idx] = 1 + A[:sel_idx, :sel_idx]\n im = axs[i, 1].imshow(offset, cmap=\"coolwarm\", vmin=0, vmax=A.max() + 1)\n axs[i, 1].set_xlabel(\"Connectivity from\")\n axs[i, 1].set_ylabel(\"Connectivity to\")\n plt.colorbar(im, ax=axs[i, 1], fraction=0.046, pad=0.04)\n see_neurons(A,axs[i, 0],ratio)\n\nplt.suptitle(\"Visualizing subsets of the connectivity matrix\", y = 1.05)\nplt.show()", "_____no_output_____" ] ], [ [ "## Section 2.2: Effects of partial observability", "_____no_output_____" ] ], [ [ "#@title Video 3: Omitted variable bias\n# Insert the ID of the corresponding youtube video\nfrom IPython.display import YouTubeVideo\nvideo = YouTubeVideo(id=\"5CCib6CTMac\", width=854, height=480, fs=1)\nprint(\"Video available at https://youtu.be/\" + video.id)\nvideo", "_____no_output_____" ] ], [ [ "**Video correction**: the labels \"connectivity from\"/\"connectivity to\" are swapped in the video but fixed in the figures/demos below", "_____no_output_____", "### Interactive Demo: Regression performance as a function of the number of observed neurons\n\nWe will first change the number of observed neurons in the network and inspect the resulting estimates of connectivity in this interactive demo. 
How does the estimated connectivity differ?\n\n**Note:** the plots will take a moment or so to update after moving the slider.", "_____no_output_____" ] ], [ [ "#@markdown Execute this cell to enable demo\nn_neurons = 50\nA = create_connectivity(n_neurons, random_state=42)\nX = simulate_neurons(A, 4000, random_state=42)\n\nreg_args = {\n \"fit_intercept\": False,\n \"alpha\": 0.001\n}\n\[email protected]\ndef plot_observed(n_observed=(5, 45, 5)):\n to_neuron = 0\n fig, axs = plt.subplots(1, 3, figsize=(15, 5))\n sel_idx = n_observed\n ratio = (n_observed) / n_neurons\n offset = np.zeros((n_neurons, n_neurons))\n axs[0].title.set_text(\"{}% neurons observed\".format(int(ratio * 100)))\n offset[:sel_idx, :sel_idx] = 1 + A[:sel_idx, :sel_idx]\n im = axs[1].imshow(offset, cmap=\"coolwarm\", vmin=0, vmax=A.max() + 1)\n plt.colorbar(im, ax=axs[1], fraction=0.046, pad=0.04)\n\n see_neurons(A,axs[0], ratio, False)\n corr, R = get_regression_corr_full_connectivity(n_neurons,\n A,\n X,\n ratio,\n reg_args)\n\n #rect = patches.Rectangle((-.5,to_neuron-.5),n_observed,1,linewidth=2,edgecolor='k',facecolor='none')\n #axs[1].add_patch(rect)\n big_R = np.zeros(A.shape)\n big_R[:sel_idx, :sel_idx] = 1 + R\n #big_R[to_neuron, :sel_idx] = 1 + R\n im = axs[2].imshow(big_R, cmap=\"coolwarm\", vmin=0, vmax=A.max() + 1)\n plt.colorbar(im, ax=axs[2],fraction=0.046, pad=0.04)\n c = 'w' if n_observed<(n_neurons-3) else 'k'\n axs[2].text(0,n_observed+3,\"Correlation : {:.2f}\".format(corr), color=c, size=15)\n #axs[2].axis(\"off\")\n axs[1].title.set_text(\"True connectivity\")\n axs[1].set_xlabel(\"Connectivity from\")\n axs[1].set_ylabel(\"Connectivity to\")\n\n axs[2].title.set_text(\"Estimated connectivity\")\n axs[2].set_xlabel(\"Connectivity from\")\n #axs[2].set_ylabel(\"Connectivity to\")", "_____no_output_____" ] ], [ [ "Next, we will inspect a plot of the correlation between true and estimated connectivity matrices vs the percent of neurons observed over multiple trials.\nWhat is the relationship that you see between performance and the number of neurons observed?\n\n**Note:** the cell below will take about 25-30 seconds to run.\n", "_____no_output_____" ] ], [ [ "#@title\n#@markdown Plot correlation vs. 
subsampling\nimport warnings\nwarnings.filterwarnings('ignore')\n\n# we'll simulate many systems for various ratios of observed neurons\nn_neurons = 50\ntimesteps = 5000\nratio_observed = [1, 0.75, 0.5, .25, .12] # the proportion of neurons observed in our system\nn_trials = 3 # run it this many times to get variability in our results\n\nreg_args = {\n \"fit_intercept\": False,\n \"alpha\": 0.001\n}\n\ncorr_data = np.zeros((n_trials, len(ratio_observed)))\nfor trial in range(n_trials):\n\n A = create_connectivity(n_neurons, random_state=trial)\n X = simulate_neurons(A, timesteps)\n print(\"simulating trial {} of {}\".format(trial + 1, n_trials))\n\n\n for j, ratio in enumerate(ratio_observed):\n result,_ = get_regression_corr_full_connectivity(n_neurons,\n A,\n X,\n ratio,\n reg_args)\n corr_data[trial, j] = result\n\ncorr_mean = np.nanmean(corr_data, axis=0)\ncorr_std = np.nanstd(corr_data, axis=0)\n\nplt.plot(np.asarray(ratio_observed) * 100, corr_mean)\nplt.fill_between(np.asarray(ratio_observed) * 100,\n corr_mean - corr_std,\n corr_mean + corr_std,\n alpha=.2)\nplt.xlim([100, 10])\nplt.xlabel(\"Percent of neurons observed\")\nplt.ylabel(\"connectivity matrices correlation\")\nplt.title(\"Performance of regression as a function of the number of neurons observed\");", "_____no_output_____" ] ], [ [ "---\n# Summary\n", "_____no_output_____" ] ], [ [ "#@title Video 4: Summary\n# Insert the ID of the corresponding youtube video\nfrom IPython.display import YouTubeVideo\nvideo = YouTubeVideo(id=\"T1uGf1H31wE\", width=854, height=480, fs=1)\nprint(\"Video available at https://youtu.be/\" + video.id)\nvideo", "_____no_output_____" ] ], [ [ "In this tutorial, we explored:\n\n1) Using regression for estimating causality\n\n2) The problem of omitted variable bias, and how it arises in practice", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cb15c59ce1002e5397f1460a7e4561b246173f63
818
ipynb
Jupyter Notebook
docs/_build/html/notebooks/.ipynb_checkpoints/test.py-checkpoint.ipynb
mr-haseeb/Artificial-Intelligence
94f5956520852b7454a7837631be4ddca1031610
[ "MIT" ]
null
null
null
docs/_build/html/notebooks/.ipynb_checkpoints/test.py-checkpoint.ipynb
mr-haseeb/Artificial-Intelligence
94f5956520852b7454a7837631be4ddca1031610
[ "MIT" ]
13
2021-03-19T11:38:41.000Z
2022-03-12T00:52:00.000Z
docs/_build/doctrees/nbsphinx/jupyter notebooks/test.ipynb
mr-haseeb/Artificial-Intelligence
94f5956520852b7454a7837631be4ddca1031610
[ "MIT" ]
1
2021-12-27T12:44:08.000Z
2021-12-27T12:44:08.000Z
16.36
34
0.492665
[ [ [ "print(\"Hi from haseeb\")", "Hi from haseeb\n" ] ] ]
[ "code" ]
[ [ "code" ] ]
cb15cbefafa1870a876e701da000391a4378f6d9
6,710
ipynb
Jupyter Notebook
code/python-workshop/Solutions/Ex05_PowerPlanningModel.ipynb
houstonhaynes/MathematicalPlanning101
ed6a4aa22bbec85f82a13c96ae7420e1fb32944d
[ "MIT" ]
18
2020-12-28T08:56:20.000Z
2022-03-30T16:25:30.000Z
code/python-workshop/Solutions/Ex05_PowerPlanningModel.ipynb
houstonhaynes/MathematicalPlanning101
ed6a4aa22bbec85f82a13c96ae7420e1fb32944d
[ "MIT" ]
1
2021-03-05T18:38:02.000Z
2021-03-05T18:38:02.000Z
code/python-workshop/Solutions/Ex05_PowerPlanningModel.ipynb
houstonhaynes/MathematicalPlanning101
ed6a4aa22bbec85f82a13c96ae7420e1fb32944d
[ "MIT" ]
4
2020-12-28T17:01:03.000Z
2021-09-29T14:09:17.000Z
26.733068
139
0.474963
[ [ [ "# Setup Sets\ncities = [\"C1\", \"C2\", \"C3\", \"C4\", \"C5\", \"C6\", \"C7\", \"C8\", \"C9\"]\n\npower_plants = [\"P1\", \"P2\", \"P3\", \"P4\", \"P5\", \"P6\"]\n\nconnections = [(\"C1\", \"P1\"), (\"C1\", \"P3\"), (\"C1\",\"P5\"), \\\n (\"C2\", \"P1\"), (\"C2\", \"P2\"), (\"C2\",\"P4\"), \\\n (\"C3\", \"P2\"), (\"C3\", \"P3\"), (\"C3\",\"P4\"), \\\n (\"C4\", \"P2\"), (\"C4\", \"P4\"), (\"C4\",\"P6\"), \\\n (\"C5\", \"P2\"), (\"C5\", \"P5\"), (\"C5\",\"P6\"), \\\n (\"C6\", \"P3\"), (\"C6\", \"P4\"), (\"C6\",\"P6\"), \\\n (\"C7\", \"P1\"), (\"C7\", \"P3\"), (\"C7\",\"P6\"), \\\n (\"C8\", \"P2\"), (\"C8\", \"P3\"), (\"C8\",\"P4\"), \\\n (\"C9\", \"P3\"), (\"C9\", \"P5\"), (\"C9\",\"P6\")]", "_____no_output_____" ], [ "# Setup Parameters\nmax_power_generation = {\"P1\":100, \"P2\":150, \"P3\":250, \"P4\":125, \"P5\": 175, \"P6\":165}\n\nstartup_cost = {\"P1\":50, \"P2\":80, \"P3\":90, \"P4\":60, \"P5\": 60, \"P6\":70}\n\npower_cost = {\"P1\":2, \"P2\":1.5, \"P3\":1.2, \"P4\":1.8, \"P5\": 0.8, \"P6\":1.1}\n\npower_required = {\"C1\":25, \"C2\":35, \"C3\":30, \"C4\":29, \"C5\":40, \"C6\":35, \"C7\":50, \"C8\":45, \"C9\":38}", "_____no_output_____" ], [ "# Import PuLP Library\nfrom pulp import *", "_____no_output_____" ], [ "# Create Decision Variables\nrun_power_plant = LpVariable.dicts(\"StartPlant\", power_plants, 0, 1, LpInteger)\n\npower_generation = LpVariable.dicts(\"PowerGeneration\", power_plants, 0, None, LpContinuous)\n\npower_sent = LpVariable.dicts(\"PowerSent\", connections, 0, None, LpContinuous)", "_____no_output_____" ], [ "# Create Problem object\nproblem = LpProblem(\"PowerPlanning\", LpMinimize)", "_____no_output_____" ], [ "# Add the Objective Function\nproblem += lpSum([run_power_plant[p] * startup_cost[p] + power_generation[p] * power_cost[p] for p in power_plants])", "_____no_output_____" ], [ "# Add Power Capacity Constraints\nfor p in power_plants:\n problem += power_generation[p] <= max_power_generation[p] * run_power_plant[p], f\"PowerCapacity_{p}\"", "_____no_output_____" ], [ "# Add Power Balance Constraints\nfor p in power_plants:\n problem += power_generation[p] == lpSum([power_sent[(c,p)] for c in cities if (c, p) in connections]), f\"PowerSent_{p}\"", "_____no_output_____" ], [ "# Add Cities Powered Constraints\nfor c in cities:\n problem += power_required[c] == lpSum([power_sent[(c,p)] for p in power_plants if (c, p) in connections]), f\"PowerRequired_{c}\"", "_____no_output_____" ], [ "# Solve the problem\nproblem.solve()", "_____no_output_____" ], [ "# Check the status of the solution\nstatus = LpStatus[problem.status]\nprint(status)", "Optimal\n" ], [ "# Print the results\nfor v in problem.variables():\n if v.varValue != 0:\n print(v.name, \"=\", v.varValue)", "PowerGeneration_P2 = 110.0\nPowerGeneration_P5 = 103.0\nPowerGeneration_P6 = 114.0\nPowerSent_('C1',_'P5') = 25.0\nPowerSent_('C2',_'P2') = 35.0\nPowerSent_('C3',_'P2') = 30.0\nPowerSent_('C4',_'P6') = 29.0\nPowerSent_('C5',_'P5') = 40.0\nPowerSent_('C6',_'P6') = 35.0\nPowerSent_('C7',_'P6') = 50.0\nPowerSent_('C8',_'P2') = 45.0\nPowerSent_('C9',_'P5') = 38.0\nStartPlant_P2 = 1.0\nStartPlant_P5 = 1.0\nStartPlant_P6 = 1.0\n" ], [ "# Let's look at the Plant Utilization\nfor p in power_plants:\n if power_generation[p].varValue > 0:\n utilization = (power_generation[p].varValue / max_power_generation[p]) * 100\n print(f\"Plant: {p} Generation: {power_generation[p].varValue} Utilization: {utilization:.2f}%\")", "Plant: P2 Generation: 110.0 Utilization: 73.33%\nPlant: P5 Generation: 103.0 
Utilization: 58.86%\nPlant: P6 Generation: 114.0 Utilization: 69.09%\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb15de514d6abfcd8bbf5813c54644846aa0046a
10,792
ipynb
Jupyter Notebook
content/lessons/05/Class-Coding-Lab/CCL-Iterations.ipynb
MahopacHS/spring2019-mollea1213
c42e0f49bd2d9965ea1a58db7f72784fea167110
[ "MIT" ]
null
null
null
content/lessons/05/Class-Coding-Lab/CCL-Iterations.ipynb
MahopacHS/spring2019-mollea1213
c42e0f49bd2d9965ea1a58db7f72784fea167110
[ "MIT" ]
null
null
null
content/lessons/05/Class-Coding-Lab/CCL-Iterations.ipynb
MahopacHS/spring2019-mollea1213
c42e0f49bd2d9965ea1a58db7f72784fea167110
[ "MIT" ]
1
2019-02-05T12:52:27.000Z
2019-02-05T12:52:27.000Z
52.135266
1,200
0.584136
[ [ [ "# In-Class Coding Lab: Iterations\n\nThe goals of this lab are to help you to understand:\n\n- How loops work.\n- The difference between definite and indefinite loops, and when to use each.\n- How to build an indefinite loop with complex exit conditions.\n- How to create a program from a complex idea.\n", "_____no_output_____" ], [ "# Understanding Iterations\n\nIterations permit us to repeat code until a Boolean expression is `False`. Iterations or **loops** allow us to write succint, compact code. Here's an example, which counts to 3 before [Blitzing the Quarterback in backyard American Football](https://www.quora.com/What-is-the-significance-of-counting-one-Mississippi-two-Mississippi-and-so-on):", "_____no_output_____" ] ], [ [ "i = 1\nwhile i <= 3:\n print(i,\"Mississippi...\")\n i=i+1\nprint(\"Blitz!\")", "1 Mississippi...\n2 Mississippi...\n3 Mississippi...\nBlitz!\n" ] ], [ [ "## Breaking it down...\n\nThe `while` statement on line 2 starts the loop. The code indented beneath it (lines 3-4) will repeat, in a linear fashion until the Boolean expression on line 2 `i <= 3` is `False`, at which time the program continues with line 5.\n\n### Some Terminology\n\nWe call `i <=3` the loop's **exit condition**. The variable `i` inside the exit condition is the only thing that we can change to make the exit condition `False`, therefore it is the **loop control variable**. On line 4 we change the loop control variable by adding one to it, this is called an **increment**.\n\nFurthermore, we know how many times this loop will execute before it actually runs: 3. Even if we allowed the user to enter a number, and looped that many times, we would still know. We call this a **definite loop**. Whenever we iterate over a fixed number of values, regardless of whether those values are determined at run-time or not, we're using a definite loop.\n\nIf the loop control variable never forces the exit condition to be `False`, we have an **infinite loop**. As the name implies, an Infinite loop never ends and typically causes our computer to crash or lock up. ", "_____no_output_____" ] ], [ [ "## WARNING!!! INFINITE LOOP AHEAD\n## IF YOU RUN THIS CODE YOU WILL NEED TO KILL YOUR BROWSER AND SHUT DOWN JUPYTER NOTEBOOK\n\ni = 1\nwhile i <= 3:\n print(i,\"Mississippi...\")\n# i=i+1\nprint(\"Blitz!\")", "_____no_output_____" ] ], [ [ "### For loops\n\nTo prevent an infinite loop when the loop is definite, we use the `for` statement. Here's the same program using `for`:", "_____no_output_____" ] ], [ [ "for i in range(1,4):\n print(i,\"Mississippi...\")\nprint(\"Blitz!\")", "1 Mississippi...\n2 Mississippi...\n3 Mississippi...\nBlitz!\n" ] ], [ [ "One confusing aspect of this loop is `range(1,4)` why does this loop from 1 to 3? Why not 1 to 4? Well it has to do with the fact that computers start counting at zero. The easier way to understand it is if you subtract the two numbers you get the number of times it will loop. So for example, 4-1 == 3.\n\n### Now Try It\n\nIn the space below, Re-Write the above program to count from 10 to 15. Note: How many times will that loop?", "_____no_output_____" ] ], [ [ "# TODO Write code here\nfor i in range(10,16):\n print(i,\"Mississippi...\")\nprint(\"Blitz!\")", "10 Mississippi...\n11 Mississippi...\n12 Mississippi...\n13 Mississippi...\n14 Mississippi...\n15 Mississippi...\nBlitz!\n" ] ], [ [ "## Indefinite loops\n\nWith **indefinite loops** we do not know how many times the program will execute. 
This is typically based on user action, and therefore our loop is subject to the whims of whoever interacts with it. Most applications like spreadsheets, photo editors, and games use indefinite loops. They'll run on your computer, seemingly forever, until you choose to quit the application. \n\nThe classic indefinite loop pattern involves getting input from the user inside the loop. We then inspect the input and based on that input we might exit the loop. Here's an example:", "_____no_output_____" ] ], [ [ "name = \"\"\nwhile name != 'mike':\n name = input(\"Say my name! : \")\n print(\"Nope, my name is not %s! \" %(name))", "Nope, my name is not yeee! \nNope, my name is not ! \nNope, my name is not ! \nNope, my name is not mike! \n" ] ], [ [ "The classic problem with indefinite loops is that it's really difficult to get the application's logic to line up with the exit condition. For example, we need to set `name = \"\"` in line 1 so that line 2 starts out as `True`. Also we have this wonky logic where when we say `'mike'` it still prints `Nope, my name is not mike!` before exiting.\n\n### Break statement\n\nThe solution to this problem is to use the break statement. **break** tells Python to exit the loop immediately. We then re-structure all of our indefinite loops to look like this:\n\n```\nwhile True:\n if exit-condition:\n break\n```\n\nHere's our program re-written with the break statement. This is the recommended way to write indefinite loops in this course.", "_____no_output_____" ] ], [ [ "while True:\n name = input(\"Say my name!: \")\n if name == 'mike':\n break\n print(\"Nope, my name is not %s!\" %(name))", "Nope, my name is not anthony!\n" ] ], [ [ "### Multiple exit conditions\n\nThis indefinite loop pattern makes it easy to add additional exit conditions. For example, here's the program again, but it now stops when you say my name or type in 3 wrong names. Make sure to run this program a couple of times. First enter mike to exit the program, next enter the wrong name 3 times.", "_____no_output_____" ] ], [ [ "times = 0\nwhile True:\n name = input(\"Say my name!: \")\n times = times + 1\n if name == 'mike':\n print(\"You got it!\")\n break\n if times == 3:\n print(\"Game over. Too many tries!\")\n break\n print(\"Nope, my name is not %s!\" %(name))", "Nope, my name is not yee!\nNope, my name is not pee!\nYou got it!\n" ] ], [ [ "# Number sums\n\nLet's conclude the lab with you writing your own program which uses an indefinite loop. We'll provide the to-do list, you write the code. This program should ask for floating point numbers as input and stop looping when **the total of the numbers entered is over 100**, or **more than 5 numbers have been entered**. Those are your two exit conditions. After the loop stops print out the total of the numbers entered and the count of numbers entered. ", "_____no_output_____" ] ], [ [ "## TO-DO List\n\n#1 count = 0\n#2 total = 0\n#3 loop Indefinitely\n#4. 
input a number\n#5 increment count\n#6 add number to total\n#7 if count equals 5 stop looping\n#8 if total greater than 100 stop looping\n#9 print total and count", "_____no_output_____" ], [ "# Write Code here:\ncount = 0\ntotal = 0\nwhile True:\n number = float(input(\"Enter floating point numbers: \"))\n count = count + 1\n total = total + number\n if count == 5:\n print(\"More than 5 numbers have been entered\")\n break\n if total > 100:\n print(\"Your total is greater than 100.\")\n break\nprint(\"The total is\", total, \"and the count is\", count)", "Your total is greater than 100.\nThe total is 113.4 and the count is 4\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
cb15e2388b24b45266f17fc271103fd31dc10bc6
20,029
ipynb
Jupyter Notebook
workflow/scripts/process_stanford_pr.py.ipynb
DamLabResources/hiv-transformers
fb44f73e542c54974489cd1fa59fdadbf60d5e72
[ "CECILL-B" ]
null
null
null
workflow/scripts/process_stanford_pr.py.ipynb
DamLabResources/hiv-transformers
fb44f73e542c54974489cd1fa59fdadbf60d5e72
[ "CECILL-B" ]
null
null
null
workflow/scripts/process_stanford_pr.py.ipynb
DamLabResources/hiv-transformers
fb44f73e542c54974489cd1fa59fdadbf60d5e72
[ "CECILL-B" ]
null
null
null
31.492138
119
0.380299
[ [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib\nimport seaborn as sns\nfrom Bio import SeqIO\nimport datasets", "_____no_output_____" ], [ "data_path = '../../data/PI_DataSet.tsv'\n\ndataset_root = '../../datasets/'\nresults_root = '../../results/'", "_____no_output_____" ], [ "shuffle_stream = np.random.RandomState(seed = 1234)", "_____no_output_____" ], [ "df = pd.read_csv(data_path, sep = '\\t')\ndf['id'] = df['SeqID'].map(str)\ndf.head()", "_____no_output_____" ] ], [ [ "## Cleaning", "_____no_output_____" ], [ "First, we need to convert the \"difference from reference\" format back into a normal sequence.\nUsing the Uniprot reference we can add back in the missing information.", "_____no_output_____" ] ], [ [ "pr_seq = 'PQITLWQRPLVTIKIGGQLKEALLDTGADDTVLEEMNLPGRWKPKMIGGIGGFIKVRQYDQILIEICGHKAIGTVLVGPTPVNIIGRNLLTQIGCTLNF'\n# REF: https://hivdb.stanford.edu/pages/documentPage/consensus_amino_acid_sequences.html\n\nseq_cols = [f'P{i}' for i in range(1,100)]\nrep_dict = {}\nfor col, pr in zip(seq_cols, pr_seq):\n rep_dict[col] = {'-': pr, '*': ''}\n\nmake_seq = lambda row: ''.join(row.reindex(seq_cols).fillna(''))\nseq_ser = df[seq_cols].replace(rep_dict).apply(make_seq, axis=1)\ndf['sequence'] = seq_ser\ndf.head()", "_____no_output_____" ] ], [ [ "We also need to account for the data sparseness.\nThe subset of drugs: FPV, IDV, NFV, and SQV have the highest mutual coverage. \nWe'll use only those for downstream predictions.", "_____no_output_____" ] ], [ [ "wanted = ['FPV', 'IDV', 'NFV', 'SQV']\ndf.dropna(subset = wanted, inplace = True)", "_____no_output_____" ], [ "cutoff = 4 # fold increase over WT\nresist = df[wanted]>4\nresist['MULTI'] = resist.sum(axis=1)>0\nresist.sum()", "_____no_output_____" ] ], [ [ "## Dataset Creation", "_____no_output_____" ] ], [ [ "# TODO: Add BibTeX citation\n# Find for instance the citation on arxiv or on the dataset repo/website\n_CITATION = \"\"\"\\\n@InProceedings{huggingface:dataset,\ntitle = {HIV Protease Drug Resistance Prediction Dataset},\nauthor={Will Dampier\n},\nyear={2021}\n}\n\"\"\"\n\n# TODO: Add a link to an official homepage for the dataset here\n_HOMEPAGE = \"\"\n\n# TODO: Add the licence for the dataset here if you can find it\n_LICENSE = \"\"\n\n# TODO: Add description of the dataset here\n# You can copy an official description\n_DESCRIPTION = \"\"\"\\\nThis dataset was constructed the Stanford HIV Drug Resistance Database. 
\nhttps://hivdb.stanford.edu/pages/genopheno.dataset.html\nThe sequences were interpolated from the protease high-quality dataset.\nSequences with >4-fold increased resistance relative to wild-type were labeled as True.\n\"\"\"\n", "_____no_output_____" ], [ "features = datasets.Features({\n 'sequence': datasets.Value('string'),\n 'id': datasets.Value('string'),\n 'FPV': datasets.Value('bool'),\n 'IDV': datasets.Value('bool'),\n 'NFV': datasets.Value('bool'),\n 'SQV': datasets.Value('bool'),\n 'fold': datasets.Value('int32')\n})\n\ntraining_folds = shuffle_stream.randint(0,5, size = df['sequence'].values.shape)\ndf['fold'] = training_folds\ninfo = datasets.DatasetInfo(description = _DESCRIPTION,\n features = features,\n homepage=_HOMEPAGE, license = _LICENSE, citation=_CITATION)\n\nprocessed_df = df[wanted] > cutoff\nprocessed_df['id'] = df['id']\nprocessed_df['fold'] = df['fold']\nprocessed_df['sequence'] = df['sequence']\n\n\ndset = datasets.Dataset.from_pandas(processed_df,\n info = info,\n features = features)\n#corecpt_dset\ndset.save_to_disk(dataset_root + 'PR_resist')", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
cb160918262d3ff355e1b261b23f28e40ef2eb63
36,017
ipynb
Jupyter Notebook
Chapter12/Activity12.1/Activity12_1.ipynb
khieunguyen/The-Data-Science-Workshop
52cab305e6e2e8bb6820cf488ddb6e16b5567ac9
[ "MIT" ]
1
2020-05-08T08:59:30.000Z
2020-05-08T08:59:30.000Z
Chapter12/Activity12.1/Activity12_1.ipynb
khieunguyen/The-Data-Science-Workshop
52cab305e6e2e8bb6820cf488ddb6e16b5567ac9
[ "MIT" ]
1
2022-03-12T01:05:09.000Z
2022-03-12T01:05:09.000Z
Chapter12/Activity12.1/Activity12_1.ipynb
khieunguyen/The-Data-Science-Workshop
52cab305e6e2e8bb6820cf488ddb6e16b5567ac9
[ "MIT" ]
null
null
null
36,017
36,017
0.564261
[ [ [ "import pandas as pd", "_____no_output_____" ], [ "disp_url = 'https://raw.githubusercontent.com/PacktWorkshops/The-Data-Science-Workshop/master/Chapter12/Dataset/disp.csv'\ntrans_url = 'https://raw.githubusercontent.com/PacktWorkshops/The-Data-Science-Workshop/master/Chapter12/Dataset/trans.csv'\naccount_url = 'https://raw.githubusercontent.com/PacktWorkshops/The-Data-Science-Workshop/master/Chapter12/Dataset/account.csv'\nclient_url = 'https://raw.githubusercontent.com/PacktWorkshops/The-Data-Science-Workshop/master/Chapter12/Dataset/client.csv'", "_____no_output_____" ], [ "df_disp = pd.read_csv(disp_url, sep=';')\ndf_trans = pd.read_csv(trans_url, sep=';')\ndf_account = pd.read_csv(account_url, sep=';')\ndf_client = pd.read_csv(client_url, sep=';')", "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py:2718: DtypeWarning: Columns (8) have mixed types. Specify dtype option on import or set low_memory=False.\n interactivity=interactivity, compiler=compiler, result=result)\n" ], [ "df_trans.head()", "_____no_output_____" ], [ "df_trans.shape", "_____no_output_____" ], [ "df_account.head()", "_____no_output_____" ], [ "df_trans_acc = pd.merge(df_trans, df_account, how='left', on='account_id')", "_____no_output_____" ], [ "df_trans_acc.shape", "_____no_output_____" ], [ "df_disp.head()", "_____no_output_____" ], [ "df_disp_owner = df_disp[df_disp['type'] == 'OWNER']", "_____no_output_____" ], [ "df_disp_owner.duplicated(subset='account_id').sum()", "_____no_output_____" ], [ "df_trans_acc_disp = pd.merge(df_trans_acc, df_disp_owner, how='left', on='account_id')\ndf_trans_acc_disp.shape", "_____no_output_____" ], [ "df_client.head()", "_____no_output_____" ], [ "df_merged = pd.merge(df_trans_acc_disp, df_client, how='left', on=['client_id', 'district_id'])\ndf_merged.shape", "_____no_output_____" ], [ "df_merged.columns", "_____no_output_____" ], [ "df_merged.rename(columns={'date_x': 'trans_date', 'type_x': 'trans_type', 'date_y':'account_creation', 'type_y':'client_type'}, inplace=True)", "_____no_output_____" ], [ "df_merged.head()", "_____no_output_____" ], [ "df_merged.dtypes", "_____no_output_____" ], [ "df_merged['trans_date'] = pd.to_datetime(df_merged['trans_date'], format=\"%y%m%d\")\ndf_merged['account_creation'] = pd.to_datetime(df_merged['account_creation'], format=\"%y%m%d\")", "_____no_output_____" ], [ "df_merged.dtypes", "_____no_output_____" ], [ "df_merged['is_female'] = (df_merged['birth_number'] % 10000) / 5000 > 1", "_____no_output_____" ], [ "df_merged['birth_number'].head()", "_____no_output_____" ], [ "df_merged.loc[df_merged['is_female'] == True, 'birth_number'] -= 5000", "_____no_output_____" ], [ "df_merged['birth_number'].head()", "_____no_output_____" ], [ "pd.to_datetime(df_merged['birth_number'], format=\"%y%m%d\", errors='coerce')", "_____no_output_____" ], [ "df_merged['birth_number'] = df_merged['birth_number'].astype(str)\ndf_merged['birth_number'].head()", "_____no_output_____" ], [ "import numpy as np\ndf_merged.loc[df_merged['birth_number'] == 'nan', 'birth_number'] = np.nan\ndf_merged['birth_number'].head()", "_____no_output_____" ], [ "df_merged.loc[~df_merged['birth_number'].isna(), 'birth_number'] = '19' + df_merged.loc[~df_merged['birth_number'].isna(), 'birth_number']\ndf_merged['birth_number'].head()", "_____no_output_____" ], [ "df_merged['birth_number'] = pd.to_datetime(df_merged['birth_number'], format=\"%Y%m%d\", errors='coerce')\ndf_merged['birth_number'].head(20)", "_____no_output_____" ], [ 
"df_merged['age_at_creation'] = df_merged['account_creation'] - df_merged['birth_number']", "_____no_output_____" ], [ "df_merged['age_at_creation'] = df_merged['age_at_creation'] / np.timedelta64(1,'Y')", "_____no_output_____" ], [ "df_merged['age_at_creation'] = df_merged['age_at_creation'].round()\ndf_merged.head()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb161d9600ad5262ea43a38dbf31de8f38298826
851,724
ipynb
Jupyter Notebook
P1.ipynb
mattlubbers/Udacity-SDC-Lane_Lines
411f18c61d84279dd44cc74e82014aa0727e1440
[ "MIT" ]
null
null
null
P1.ipynb
mattlubbers/Udacity-SDC-Lane_Lines
411f18c61d84279dd44cc74e82014aa0727e1440
[ "MIT" ]
null
null
null
P1.ipynb
mattlubbers/Udacity-SDC-Lane_Lines
411f18c61d84279dd44cc74e82014aa0727e1440
[ "MIT" ]
null
null
null
821.334619
350,528
0.939869
[ [ [ "# Self-Driving Car Engineer Nanodegree\n\n\n## Project: **Finding Lane Lines on the Road** \n***\nIn this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip \"raw-lines-example.mp4\" (also contained in this repository) to see what the output should look like after using the helper functions below. \n\nOnce you have a result that looks roughly like \"raw-lines-example.mp4\", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video \"P1_example.mp4\". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.\n\nIn addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.\n\n---\nLet's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the \"play\" button above) to display the image.\n\n**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the \"Kernel\" menu above and selecting \"Restart & Clear Output\".**\n\n---", "_____no_output_____" ], [ "**The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**\n\n---\n\n<figure>\n <img src=\"examples/line-segments-example.jpg\" width=\"380\" alt=\"Combined Image\" />\n <figcaption>\n <p></p> \n <p style=\"text-align: center;\"> Your output should look something like this (above) after detecting line segments using the helper functions below </p> \n </figcaption>\n</figure>\n <p></p> \n<figure>\n <img src=\"examples/laneLines_thirdPass.jpg\" width=\"380\" alt=\"Combined Image\" />\n <figcaption>\n <p></p> \n <p style=\"text-align: center;\"> Your goal is to connect/average/extrapolate line segments to get output like this</p> \n </figcaption>\n</figure>", "_____no_output_____" ], [ "**Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. 
Also, consult the forums for more troubleshooting tips.** ", "_____no_output_____" ], [ "## Import Packages", "_____no_output_____" ] ], [ [ "#importing some useful packages\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\nimport cv2\nimport os\n%matplotlib inline", "_____no_output_____" ] ], [ [ "## Read in an Image", "_____no_output_____" ] ], [ [ "#reading in an image\nimage = mpimg.imread('test_images/solidWhiteRight.jpg')\n\n#printing out some stats and plotting\nprint('This image is:', type(image), 'with dimensions:', image.shape)\nplt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')", "This image is: <class 'numpy.ndarray'> with dimensions: (540, 960, 3)\n" ] ], [ [ "## Ideas for Lane Detection Pipeline", "_____no_output_____" ], [ "**Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**\n\n`cv2.inRange()` for color selection \n`cv2.fillPoly()` for regions selection \n`cv2.line()` to draw lines on an image given endpoints \n`cv2.addWeighted()` to coadd / overlay two images\n`cv2.cvtColor()` to grayscale or change color\n`cv2.imwrite()` to output images to file \n`cv2.bitwise_and()` to apply a mask to an image\n\n**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**", "_____no_output_____" ], [ "## Helper Functions", "_____no_output_____" ], [ "Below are some helper functions to help get you started. They should look familiar from the lesson!", "_____no_output_____" ] ], [ [ "import math\n\ndef grayscale(img):\n \"\"\"Applies Grayscale transform\"\"\"\n return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n\ndef canny(img, low_threshold, high_threshold):\n \"\"\"Applies the Canny transform\"\"\"\n return cv2.Canny(img, low_threshold, high_threshold)\n\ndef gaussian_blur(img, kernel_size):\n \"\"\"Applies a Gaussian Noise kernel\"\"\"\n return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)\n\ndef region_of_interest(img, vertices):\n \"\"\"\n Applies an image mask, only keep the region of the image defined by the polygon\n \"\"\"\n #defining a blank mask to start with\n mask = np.zeros_like(img) \n \n #defining a 3 channel or 1 channel color to fill the mask with depending on the input image\n if len(img.shape) > 2:\n channel_count = img.shape[2] # i.e. 
3 or 4 depending on your image\n ignore_mask_color = (255,) * channel_count\n else:\n ignore_mask_color = 255\n \n #filling pixels inside the polygon defined by \"vertices\" with the fill color \n cv2.fillPoly(mask, vertices, ignore_mask_color)\n \n #returning the image only where mask pixels are nonzero\n masked_image = cv2.bitwise_and(img, mask)\n return masked_image\n\n\ndef line_fit(img, x, y, color=[255, 0, 0], thickness=20):\n fit = np.polyfit(x,y,1)\n m, b = fit\n \n #Define y Values\n y_1 = img.shape[0]\n y_2 = int(y_1 / 2) + 50\n \n # Define x Values\n # y = mx + b ----> x = (y - b) / m\n x_1 = int((y_1 - b) / m)\n x_2 = int((y_2 - b) / m)\n \n cv2.line(img, (x_1, y_1), (x_2, y_2), color, thickness)\n \ndef draw_lines(img, lines):\n \"\"\"\n Draw the lines with the line fit\n \"\"\"\n left_x = []\n left_y = []\n right_x = []\n right_y = []\n \n for line in lines:\n for x1,y1,x2,y2 in line:\n m = (y2 - y1) / (x2 - x1)\n if m < 0:\n left_x.append(x1)\n left_x.append(x2)\n left_y.append(y1)\n left_y.append(y2)\n else:\n right_x.append(x1)\n right_x.append(x2)\n right_y.append(y1)\n right_y.append(y2)\n \n line_fit(img, left_x, left_y)\n line_fit(img, right_x, right_y)\n\ndef hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):\n \"\"\"\n `img` should be the output of a Canny transform. Returns an image with hough lines drawn.\n \"\"\"\n lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)\n line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)\n draw_lines(line_img, lines)\n return line_img\n\n# Python 3 has support for cool math symbols.\n\ndef weighted_img(img, initial_img, α=0.8, β=1., γ=0.):\n \"\"\"\n `img` is the output of the hough_lines(), An image with lines drawn on it. 
\n \"\"\"\n return cv2.addWeighted(initial_img, α, img, β, γ)", "_____no_output_____" ] ], [ [ "## Test Images\n\nBuild your pipeline to work on the images in the directory \"test_images\" \n**You should make sure your pipeline works well on these images before you try the videos.**", "_____no_output_____" ] ], [ [ "# Use a single Image to test the Pipeline\nimage = mpimg.imread('test_images/' + 'solidWhiteCurve.jpg')", "_____no_output_____" ], [ "#Start with Grayscale\nip_gray = grayscale(image)\nplt.imshow(ip_gray)", "_____no_output_____" ], [ "# Apply Gaussian Blur\nip_gaussian = gaussian_blur(ip_gray, 3)\nplt.imshow(ip_gaussian)", "_____no_output_____" ], [ "# Experiment with Canny Edge Parameters\nip_canny1 = canny(ip_gaussian, 10, 30)\nip_canny2 = canny(ip_gaussian, 30, 90)\nip_canny3 = canny(ip_gaussian, 50, 150)\nip_canny4 = canny(ip_gaussian, 60, 120)\n\nplt.subplot(221)\nplt.imshow(ip_canny1)\nplt.subplot(222)\nplt.imshow(ip_canny2)\nplt.subplot(223)\nplt.imshow(ip_canny3)\nplt.subplot(224)\nplt.imshow(ip_canny4)", "_____no_output_____" ], [ "ip_canny = canny(ip_gaussian, 60, 120)\n\n#Set mask verticies to limit range of interest within image\nvertices = np.array([[(90,image.shape[0]),(450, 330), (510, 330), (image.shape[1],image.shape[0])]], dtype=np.int32)\n\nip_mask = region_of_interest(ip_canny, vertices)\nplt.imshow(ip_mask)", "_____no_output_____" ], [ "# Set Hough Threshold Parameters\nrho = 2\ntheta = np.pi/180\nthreshold = 10\nmin_line_len = 20\nmax_line_gap = 10\n\nip_hough = hough_lines(ip_mask, rho, theta, threshold, min_line_len, max_line_gap)\nplt.imshow(ip_hough)", "_____no_output_____" ], [ "# Overlay the Hough Lines with the Original Image\nip_weighted = weighted_img(ip_hough, image)\nplt.imshow(ip_weighted)", "_____no_output_____" ] ], [ [ "## Build a Lane Finding Pipeline\n\n", "_____no_output_____" ], [ "Build the pipeline and run your solution on all test_images. 
Make copies into the `test_images_output` directory, and you can use the images in your writeup report.\n\nTry tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.", "_____no_output_____" ] ], [ [ "def image_processing(image):\n # Set Hough Parameters\n rho = 2\n theta = np.pi/180\n threshold = 20 #Originally 10\n min_line_len = 40 #Originally 20\n max_line_gap = 10\n \n #Set Canny Parameters\n low_threshold = 60\n high_threshold = 120\n \n #Set Mask Vertices\n vertices = np.array([[(90,image.shape[0]),(450, 320), (510, 320), (image.shape[1],image.shape[0])]], dtype=np.int32) \n \n # Build Pipeline: Grayscale -> Gaussian -> Canny -> Mask -> Hough -> Weighted\n ip_gray = grayscale(image)\n ip_gaussian = gaussian_blur(ip_gray, 3)\n ip_canny = canny(ip_gaussian, low_threshold, high_threshold)\n ip_mask = region_of_interest(ip_canny, vertices)\n ip_hough = hough_lines(ip_mask, rho, theta, threshold, min_line_len, max_line_gap)\n ip_weighted = weighted_img(ip_hough, image)\n return ip_weighted", "_____no_output_____" ], [ "#Run through the pipeline with all images in directory\noutput_list = []\nfor img in os.listdir(\"test_images/\"):\n image = mpimg.imread('test_images/' + img)\n output = image_processing(image)\n output_list.append(output)\n mpimg.imsave('test_images_output/' + img, output)\n\n# Plot All Output Images\nplt.figure(figsize=(12,8))\nplt.subplot(231)\nplt.imshow(output_list[0])\nplt.subplot(232)\nplt.imshow(output_list[1])\nplt.subplot(233)\nplt.imshow(output_list[2])\nplt.subplot(234)\nplt.imshow(output_list[3])\nplt.subplot(235)\nplt.imshow(output_list[4])\nplt.subplot(236)\nplt.imshow(output_list[5])\n\nplt.subplots_adjust(top=0.95, bottom=0.42, left=0.10, right=0.95, hspace=0.35,\n wspace=0.15)", "_____no_output_____" ] ], [ [ "## Test on Videos\n\nYou know what's cooler than drawing lanes over images? Drawing lanes over video!\n\nWe can test our solution on two provided videos:\n\n`solidWhiteRight.mp4`\n\n`solidYellowLeft.mp4`\n\n**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**\n\n**If you get an error that looks like this:**\n```\nNeedDownloadError: Need ffmpeg exe. 
\nYou can download it by calling: \nimageio.plugins.ffmpeg.download()\n```\n**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**", "_____no_output_____" ] ], [ [ "# Import everything needed to edit/save/watch video clips\nfrom moviepy.editor import VideoFileClip\nfrom IPython.display import HTML", "_____no_output_____" ] ], [ [ "Let's try the one with the solid white lane on the right first ...", "_____no_output_____" ] ], [ [ "white_output = 'test_videos_output/solidWhiteRight.mp4'\nclip1 = VideoFileClip(\"test_videos/solidWhiteRight.mp4\")\nwhite_clip = clip1.fl_image(image_processing) #NOTE: this function expects color images!!\n%time white_clip.write_videofile(white_output, audio=False)", "[MoviePy] >>>> Building video test_videos_output/solidWhiteRight.mp4\n[MoviePy] Writing video test_videos_output/solidWhiteRight.mp4\n" ] ], [ [ "Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.", "_____no_output_____" ] ], [ [ "HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(white_output))", "_____no_output_____" ] ], [ [ "## Improve the draw_lines() function\n\n**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video \"P1_example.mp4\".**\n\n**Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**", "_____no_output_____" ], [ "Now for the one with the solid yellow lane on the left. This one's more tricky!", "_____no_output_____" ] ], [ [ "yellow_output = 'test_videos_output/solidYellowLeft_houghTest.mp4'\nclip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')\nyellow_clip = clip2.fl_image(image_processing)\n%time yellow_clip.write_videofile(yellow_output, audio=False)", "[MoviePy] >>>> Building video test_videos_output/solidYellowLeft_houghTest.mp4\n[MoviePy] Writing video test_videos_output/solidYellowLeft_houghTest.mp4\n" ], [ "HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(yellow_output))", "_____no_output_____" ], [ "# This video showed that the weighted lines jumped. 
Let's step through each stage to determine the culprit.\n# Copy from image_processing function for reference:\ndef image_test(image):\n    rho = 2\n    theta = np.pi/180\n    threshold = 20\n    min_line_len = 40\n    max_line_gap = 10\n    \n    #Set Canny Parameters\n    low_threshold = 60\n    high_threshold = 120\n    \n    #Set Mask Vertices\n    vertices = np.array([[(90,image.shape[0]),(450, 320), (510, 320), (image.shape[1],image.shape[0])]], dtype=np.int32) \n    \n    # Build Pipeline: Grayscale -> Gaussian -> Canny -> Mask -> Hough -> Weighted\n    ip_gray = grayscale(image)\n    ip_gaussian = gaussian_blur(ip_gray, 3)\n    ip_canny = canny(ip_gaussian, low_threshold, high_threshold)\n    ip_mask = region_of_interest(ip_canny, vertices)\n    ip_hough = hough_lines(ip_mask, rho, theta, threshold, min_line_len, max_line_gap)\n    ip_weighted = weighted_img(ip_hough, image)\n    return ip_weighted\n", "_____no_output_____" ], [ "yellow_output = 'test_videos_output/solidYellowLeft_houghTest.mp4'\nclip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')\nyellow_clip = clip2.fl_image(image_test)\n%time yellow_clip.write_videofile(yellow_output, audio=False)", "[MoviePy] >>>> Building video test_videos_output/solidYellowLeft_houghTest.mp4\n[MoviePy] Writing video test_videos_output/solidYellowLeft_houghTest.mp4\n" ], [ "HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n  <source src=\"{0}\">\n</video>\n\"\"\".format(yellow_output))", "_____no_output_____" ] ], [ [ "## Writeup and Submission\n\nIf you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this IPython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.\n", "_____no_output_____" ], [ "## Optional Challenge\n\nTry your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!", "_____no_output_____" ] ], [ [ "challenge_output = 'test_videos_output/challenge.mp4'\n## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video\n## To do so add .subclip(start_second,end_second) to the end of the line below\n## Where start_second and end_second are integer values representing the start and end of the subclip\n## You may also uncomment the following line for a subclip of the first 5 seconds\n##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)\nclip3 = VideoFileClip('test_videos/challenge.mp4')\nchallenge_clip = clip3.fl_image(image_processing)\n%time challenge_clip.write_videofile(challenge_output, audio=False)", "_____no_output_____" ], [ "HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n  <source src=\"{0}\">\n</video>\n\"\"\".format(challenge_output))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ] ]
cb16239c4060d7c286ac2726c0572cd0c50a8fda
4,406
ipynb
Jupyter Notebook
02NumPy/03numpy배열다루기_연습.ipynb
aonekoda/YGL_1
92ffaf9dc6c9a70b749f1fc0cc016881023a1316
[ "MIT" ]
null
null
null
02NumPy/03numpy배열다루기_연습.ipynb
aonekoda/YGL_1
92ffaf9dc6c9a70b749f1fc0cc016881023a1316
[ "MIT" ]
null
null
null
02NumPy/03numpy배열다루기_연습.ipynb
aonekoda/YGL_1
92ffaf9dc6c9a70b749f1fc0cc016881023a1316
[ "MIT" ]
null
null
null
4,406
4,406
0.593282
[ [ [ "# 배열 다루기 연습문제\n___", "_____no_output_____" ] ], [ [ "import numpy as np", "_____no_output_____" ] ], [ [ "Q1. x가 10x10x3의 다차원 배열일때, x의 두번째 차원이 150인 2차원 배열이 되도록 reshape하세요.", "_____no_output_____" ] ], [ [ "x = np.ones([10, 10, 3])\n\n", "_____no_output_____" ] ], [ [ "Q2. x가 [[1, 2, 3], [4, 5, 6]]일 때, [1 2 3 4 5 6]로 변환하세요.", "_____no_output_____" ] ], [ [ "x = np.array([[1, 2, 3], [4, 5, 6]])", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "", "_____no_output_____" ] ], [ [ "Q3. x가 [[1, 2, 3], [4, 5, 6]]일 때, flatten한 후 5번째 원소를 가져오세요.", "_____no_output_____" ] ], [ [ "x = np.array([[1, 2, 3], [4, 5, 6]])\nout = x.flatten()\n\nout[?]", "_____no_output_____" ] ], [ [ "Q4. \n\nx= <br/>\n[[ 1 2 3]<br/>\n[ 4 5 6].<br/><br/>\ny = <br/>\n[[ 7 8 9]<br/>\n[10 11 12]].<br/>\n\n\nx와 y를 연결해서\n<br/>[[1, 2, 3, 7, 8, 9], <br/>[4, 5, 6, 10, 11, 12]]\n를 만드세요\n", "_____no_output_____" ] ], [ [ "x = np.array([[1, 2, 3], [4, 5, 6]])\ny = np.array([[7, 8, 9], [10, 11, 12]])\n\n", "_____no_output_____" ] ], [ [ "Q5. \n\nx= <br/>\n[[ 1 2 3]<br/>\n[ 4 5 6].<br/><br/>\ny = <br/>\n[[ 7 8 9]<br/>\n[10 11 12]].<br/>\n\n\nx와 y를 연결해서\n<br/>[[ 1 2 3]<br/>\n [ 4 5 6]<br/>\n [ 7 8 9]<br/>\n [10 11 12]]\n를 만드세요", "_____no_output_____" ] ], [ [ "x = np.array([[1, 2, 3], [4, 5, 6]])\ny = np.array([[7, 8, 9], [10, 11, 12]])\n\n\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb162990d85e6fade34c6ef1a2610349018cf022
22,562
ipynb
Jupyter Notebook
notebooks/image_models/solutions/3_tf_hub_transfer_learning.ipynb
Jonathanpro/asl-ml-immersion
c461aa215339a6816810dfef5a92a6e375f9bc66
[ "Apache-2.0" ]
null
null
null
notebooks/image_models/solutions/3_tf_hub_transfer_learning.ipynb
Jonathanpro/asl-ml-immersion
c461aa215339a6816810dfef5a92a6e375f9bc66
[ "Apache-2.0" ]
null
null
null
notebooks/image_models/solutions/3_tf_hub_transfer_learning.ipynb
Jonathanpro/asl-ml-immersion
c461aa215339a6816810dfef5a92a6e375f9bc66
[ "Apache-2.0" ]
null
null
null
39.721831
658
0.629687
[ [ [ "# TensorFlow Transfer Learning\n\nThis notebook shows how to use pre-trained models from [TensorFlowHub](https://www.tensorflow.org/hub). Sometimes, there is not enough data, computational resources, or time to train a model from scratch to solve a particular problem. We'll use a pre-trained model to classify flowers with better accuracy than a new model for use in a mobile application.\n\n## Learning Objectives\n1. Know how to apply image augmentation\n2. Know how to download and use a TensorFlow Hub module as a layer in Keras.", "_____no_output_____" ] ], [ [ "import os\nimport pathlib\n\nimport IPython.display as display\nimport matplotlib.pylab as plt\nimport numpy as np\nimport tensorflow as tf\nimport tensorflow_hub as hub\nfrom PIL import Image\nfrom tensorflow.keras import Sequential\nfrom tensorflow.keras.layers import (\n Conv2D,\n Dense,\n Dropout,\n Flatten,\n MaxPooling2D,\n Softmax,\n)", "_____no_output_____" ] ], [ [ "## Exploring the data\n\nAs usual, let's take a look at the data before we start building our model. We'll be using a creative-commons licensed flower photo dataset of 3670 images falling into 5 categories: 'daisy', 'roses', 'dandelion', 'sunflowers', and 'tulips'.\n\nThe below [tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) command downloads a dataset to the local Keras cache. To see the files through a terminal, copy the output of the cell below.", "_____no_output_____" ] ], [ [ "data_dir = tf.keras.utils.get_file(\n \"flower_photos\",\n \"https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz\",\n untar=True,\n)\n\n# Print data path\nprint(\"cd\", data_dir)", "_____no_output_____" ] ], [ [ "We can use python's built in [pathlib](https://docs.python.org/3/library/pathlib.html) tool to get a sense of this unstructured data.", "_____no_output_____" ] ], [ [ "data_dir = pathlib.Path(data_dir)\n\nimage_count = len(list(data_dir.glob(\"*/*.jpg\")))\nprint(\"There are\", image_count, \"images.\")\n\nCLASS_NAMES = np.array(\n [item.name for item in data_dir.glob(\"*\") if item.name != \"LICENSE.txt\"]\n)\nprint(\"These are the available classes:\", CLASS_NAMES)", "_____no_output_____" ] ], [ [ "Let's display the images so we can see what our model will be trying to learn.", "_____no_output_____" ] ], [ [ "roses = list(data_dir.glob(\"roses/*\"))\n\nfor image_path in roses[:3]:\n display.display(Image.open(str(image_path)))", "_____no_output_____" ] ], [ [ "## Building the dataset\n\nKeras has some convenient methods to read in image data. For instance [tf.keras.preprocessing.image.ImageDataGenerator](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator) is great for small local datasets. A tutorial on how to use it can be found [here](https://www.tensorflow.org/tutorials/load_data/images), but what if we have so many images, it doesn't fit on a local machine? We can use [tf.data.datasets](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) to build a generator based on files in a Google Cloud Storage Bucket.\n\nWe have already prepared these images to be stored on the cloud in `gs://cloud-ml-data/img/flower_photos/`. 
The images are randomly split into a training set with 90% of the data and an evaluation set with the remaining 10%, each listed in a CSV file:\n\nTraining set: [train_set.csv](https://storage.cloud.google.com/cloud-ml-data/img/flower_photos/train_set.csv)  \nEvaluation set: [eval_set.csv](https://storage.cloud.google.com/cloud-ml-data/img/flower_photos/eval_set.csv)  \n\nExplore the format and contents of train_set.csv by running:", "_____no_output_____" ] ], [ [ "!gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv \\\n    | head -5 > /tmp/input.csv\n!cat /tmp/input.csv", "_____no_output_____" ], [ "!gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | \\\n    sed 's/,/ /g' | awk '{print $2}' | sort | uniq > /tmp/labels.txt\n!cat /tmp/labels.txt", "_____no_output_____" ] ], [ [ "Let's figure out how to read one of these images from the cloud. TensorFlow's [tf.io.read_file](https://www.tensorflow.org/api_docs/python/tf/io/read_file) can help us read the file contents, but the result will be the raw encoded JPEG bytes. Hmm... not very readable for humans or TensorFlow.\n\nThankfully, TensorFlow's [tf.image.decode_jpeg](https://www.tensorflow.org/api_docs/python/tf/io/decode_jpeg) function can decode this string into an integer array, and [tf.image.convert_image_dtype](https://www.tensorflow.org/api_docs/python/tf/image/convert_image_dtype) can cast it into a 0 - 1 range float. Finally, we'll use [tf.image.resize](https://www.tensorflow.org/api_docs/python/tf/image/resize) to force image dimensions to be consistent for our neural network.\n\nWe'll wrap these into a function as we'll be calling them repeatedly. While we're at it, let's also define the constants for our neural network.", "_____no_output_____" ] ], [ [ "IMG_HEIGHT = 224\nIMG_WIDTH = 224\nIMG_CHANNELS = 3\n\nBATCH_SIZE = 32\n# 10 is a magic number tuned for local training of this dataset.\nSHUFFLE_BUFFER = 10 * BATCH_SIZE\nAUTOTUNE = tf.data.experimental.AUTOTUNE\n\nVALIDATION_IMAGES = 370\nVALIDATION_STEPS = VALIDATION_IMAGES // BATCH_SIZE", "_____no_output_____" ], [ "def decode_img(img, reshape_dims):\n    # Convert the compressed string to a 3D uint8 tensor.\n    img = tf.image.decode_jpeg(img, channels=IMG_CHANNELS)\n    # Use `convert_image_dtype` to convert to floats in the [0,1] range.\n    img = tf.image.convert_image_dtype(img, tf.float32)\n    # Resize the image to the desired size.\n    return tf.image.resize(img, reshape_dims)", "_____no_output_____" ] ], [ [ "Is it working? Let's see!\n\n**TODO 1.a:** Run the `decode_img` function and plot it to see a happy-looking daisy.", "_____no_output_____" ] ], [ [ "img = tf.io.read_file(\n    \"gs://cloud-ml-data/img/flower_photos/daisy/754296579_30a9ae018c_n.jpg\"\n)\n\n# Uncomment to see the image string.\n# print(img)\nimg = decode_img(img, [IMG_WIDTH, IMG_HEIGHT])\nplt.imshow(img.numpy());", "_____no_output_____" ] ], [ [ "One flower down, 3669 more of them to go. Rather than load all the photos in directly, we'll use the file paths given to us in the csv and load the images when we batch. 
[tf.io.decode_csv](https://www.tensorflow.org/api_docs/python/tf/io/decode_csv) reads in csv rows (or each line in a csv file), while [tf.math.equal](https://www.tensorflow.org/api_docs/python/tf/math/equal) will help us format our label such that it's a boolean array with a truth value corresponding to the class in `CLASS_NAMES`, much like the labels for the MNIST Lab.", "_____no_output_____" ] ], [ [ "def decode_csv(csv_row):\n    record_defaults = [\"path\", \"flower\"]\n    filename, label_string = tf.io.decode_csv(csv_row, record_defaults)\n    image_bytes = tf.io.read_file(filename=filename)\n    label = tf.math.equal(CLASS_NAMES, label_string)\n    return image_bytes, label", "_____no_output_____" ] ], [ [ "Next, we'll transform the images to give our network more variety to train on. There are a number of [image manipulation functions](https://www.tensorflow.org/api_docs/python/tf/image). We'll cover just a few:\n\n* [tf.image.random_crop](https://www.tensorflow.org/api_docs/python/tf/image/random_crop) - Randomly deletes the top/bottom rows and left/right columns down to the dimensions specified.\n* [tf.image.random_flip_left_right](https://www.tensorflow.org/api_docs/python/tf/image/random_flip_left_right) - Randomly flips the image horizontally.\n* [tf.image.random_brightness](https://www.tensorflow.org/api_docs/python/tf/image/random_brightness) - Randomly adjusts how dark or light the image is.\n* [tf.image.random_contrast](https://www.tensorflow.org/api_docs/python/tf/image/random_contrast) - Randomly adjusts image contrast.\n\n**TODO 1.b:** Add the missing parameters to the random augment functions.", "_____no_output_____" ] ], [ [ "MAX_DELTA = 63.0 / 255.0  # Change brightness by at most ~24.7% (63/255).\nCONTRAST_LOWER = 0.2\nCONTRAST_UPPER = 1.8\n\n\ndef read_and_preprocess(image_bytes, label, random_augment=False):\n    if random_augment:\n        img = decode_img(image_bytes, [IMG_HEIGHT + 10, IMG_WIDTH + 10])\n        img = tf.image.random_crop(img, [IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS])\n        img = tf.image.random_flip_left_right(img)\n        img = tf.image.random_brightness(img, MAX_DELTA)\n        img = tf.image.random_contrast(img, CONTRAST_LOWER, CONTRAST_UPPER)\n    else:\n        img = decode_img(image_bytes, [IMG_WIDTH, IMG_HEIGHT])\n    return img, label\n\n\ndef read_and_preprocess_with_augment(image_bytes, label):\n    return read_and_preprocess(image_bytes, label, random_augment=True)", "_____no_output_____" ] ], [ [ "Finally, we'll make a function to craft our full dataset using [tf.data.dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset). The [tf.data.TextLineDataset](https://www.tensorflow.org/api_docs/python/tf/data/TextLineDataset) will read in each line in our train/eval csv files to our `decode_csv` function.\n\n[.cache](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#cache) is key here. It will store the dataset in memory after the first pass, so later epochs can skip re-reading and re-decoding the files.", "_____no_output_____" ] ], [ [ "def load_dataset(csv_of_filenames, batch_size, training=True):\n    dataset = (\n        tf.data.TextLineDataset(filenames=csv_of_filenames)\n        .map(decode_csv)\n        .cache()\n    )\n\n    if training:\n        dataset = (\n            dataset.map(read_and_preprocess_with_augment)\n            .shuffle(SHUFFLE_BUFFER)\n            .repeat(count=None)\n        )  # Indefinitely.\n    else:\n        dataset = dataset.map(read_and_preprocess).repeat(\n            count=1\n        )  # Each photo used once.\n\n    # Prefetch prepares the next set of batches while current batch is in use.\n    return dataset.batch(batch_size=batch_size).prefetch(buffer_size=AUTOTUNE)", "_____no_output_____" ] ], [ [ "We'll test it out with our training set. 
A batch size of one will allow us to easily look at each augmented image.", "_____no_output_____" ] ], [ [ "train_path = \"gs://cloud-ml-data/img/flower_photos/train_set.csv\"\ntrain_data = load_dataset(train_path, 1)\nitr = iter(train_data)", "_____no_output_____" ] ], [ [ "**TODO 1.c:** Run the cell below repeatedly to see the results of different batches. The images have been un-normalized for human eyes. Can you tell what type of flowers they are? Is it fair for the AI to learn on?", "_____no_output_____" ] ], [ [ "image_batch, label_batch = next(itr)\nimg = image_batch[0]\nplt.imshow(img)\nprint(label_batch[0])", "_____no_output_____" ] ], [ [ "**Note:** It may take 4-5 minutes to see the results of different batches. \n", "_____no_output_____", "## MobileNetV2\n\nThese flower photos are much larger than the handwriting-recognition images in MNIST. They have about 10 times as many pixels per axis **and** three color channels, making the information here over 200 times larger!\n\nHow do our current techniques stand up? Copy your best model architecture over from the <a href=\"2_mnist_models.ipynb\">MNIST models lab</a> and see how well it does after training for 5 epochs of 5 steps.\n\n**TODO 2.a:** Copy over the most accurate model from 2_mnist_models.ipynb or build a new CNN Keras model.", "_____no_output_____" ] ], [ [ "eval_path = \"gs://cloud-ml-data/img/flower_photos/eval_set.csv\"\nnclasses = len(CLASS_NAMES)\nhidden_layer_1_neurons = 400\nhidden_layer_2_neurons = 100\ndropout_rate = 0.25\nnum_filters_1 = 64\nkernel_size_1 = 3\npooling_size_1 = 2\nnum_filters_2 = 32\nkernel_size_2 = 3\npooling_size_2 = 2\n\nlayers = [\n    Conv2D(\n        num_filters_1,\n        kernel_size=kernel_size_1,\n        activation=\"relu\",\n        input_shape=(IMG_WIDTH, IMG_HEIGHT, IMG_CHANNELS),\n    ),\n    MaxPooling2D(pooling_size_1),\n    Conv2D(num_filters_2, kernel_size=kernel_size_2, activation=\"relu\"),\n    MaxPooling2D(pooling_size_2),\n    Flatten(),\n    Dense(hidden_layer_1_neurons, activation=\"relu\"),\n    Dense(hidden_layer_2_neurons, activation=\"relu\"),\n    Dropout(dropout_rate),\n    Dense(nclasses),\n    Softmax(),\n]\n\nold_model = Sequential(layers)\nold_model.compile(\n    optimizer=\"adam\", loss=\"categorical_crossentropy\", metrics=[\"accuracy\"]\n)\n\ntrain_ds = load_dataset(train_path, BATCH_SIZE)\neval_ds = load_dataset(eval_path, BATCH_SIZE, training=False)", "_____no_output_____" ], [ "old_model.fit(\n    train_ds,\n    epochs=5,\n    steps_per_epoch=5,\n    validation_data=eval_ds,\n    validation_steps=VALIDATION_STEPS,\n)", "_____no_output_____" ] ], [ [ "If your model is like mine, it learns a little bit, slightly better than random, but *ugh*, it's too slow! With a batch size of 32, 5 epochs of 5 steps only gets through about a quarter of our images. Not to mention, this is a much larger problem than MNIST, so wouldn't we need a larger model? But how big would we need to make it?\n\nEnter Transfer Learning. Why not take advantage of someone else's hard work? We can take the layers of a model that's been trained on a similar problem to ours and splice them into our own model.\n\n[TensorFlow Hub](https://tfhub.dev/s?module-type=image-augmentation,image-classification,image-others,image-style-transfer,image-rnn-agent) is a database of models, many of which can be used for Transfer Learning. 
We'll use a model called [MobileNet](https://tfhub.dev/google/imagenet/mobilenet_v2_035_224/feature_vector/4), an architecture optimized for image classification on mobile devices, which can be done with [TensorFlow Lite](https://github.com/tensorflow/hub/blob/master/examples/colab/tf2_image_retraining.ipynb). Let's see how a model trained on [ImageNet](http://www.image-net.org/) data compares to one built from scratch.\n\nThe `tensorflow_hub` python package has a function to include a Hub model as a [layer in Keras](https://www.tensorflow.org/hub/api_docs/python/hub/KerasLayer). We'll set the weights of this model as non-trainable. Even though this is a compressed version of full-scale image classification models, it still has over four hundred thousand parameters! Training all of these would not only add to our computation, it would also leave the model prone to over-fitting. We'll add some L2 regularization and Dropout to prevent that from happening to our trainable weights.\n\n**TODO 2.b**: Add a Hub Keras Layer at the top of the model using the handle provided.", "_____no_output_____" ] ], [ [ "module_selection = \"mobilenet_v2_100_224\"\nmodule_handle = \"https://tfhub.dev/google/imagenet/{}/feature_vector/4\".format(\n    module_selection\n)\n\ntransfer_model = tf.keras.Sequential(\n    [\n        hub.KerasLayer(module_handle, trainable=False),\n        tf.keras.layers.Dropout(rate=0.2),\n        tf.keras.layers.Dense(\n            nclasses,\n            activation=\"softmax\",\n            kernel_regularizer=tf.keras.regularizers.l2(0.0001),\n        ),\n    ]\n)\ntransfer_model.build((None,) + (IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS))\ntransfer_model.summary()", "_____no_output_____" ] ], [ [ "Even though we're only adding one more `Dense` layer in order to get the probabilities for each of the 5 flower types, we end up with over six thousand parameters to train ourselves. Wow!\n\nMoment of truth. Let's compile this new model and see how it compares to our MNIST architecture.", "_____no_output_____" ] ], [ [ "transfer_model.compile(\n    optimizer=\"adam\", loss=\"categorical_crossentropy\", metrics=[\"accuracy\"]\n)\n\ntrain_ds = load_dataset(train_path, BATCH_SIZE)\neval_ds = load_dataset(eval_path, BATCH_SIZE, training=False)", "_____no_output_____" ], [ "transfer_model.fit(\n    train_ds,\n    epochs=5,\n    steps_per_epoch=5,\n    validation_data=eval_ds,\n    validation_steps=VALIDATION_STEPS,\n)", "_____no_output_____" ] ], [ [ "Alright, looking better!\n\nStill, there's clear room to improve. Data bottlenecks are especially prevalent with image data due to the size of the image files. There's much to consider, such as the computation spent augmenting images and the bandwidth needed to transfer images between machines.\n\nThink life is too short, and there has to be a better way? In the next lab, we'll blast away these problems by developing a cloud strategy to train with TPUs!\n\n## Bonus Exercise\n\nKeras has a [local way](https://keras.io/models/sequential/) to do distributed training, but we'll be using a different technique in the next lab. Want to give the local way a try? Check out this excellent [blog post](https://stanford.edu/~shervine/blog/keras-how-to-generate-data-on-the-fly) to get started. Or want to go full-blown Keras? It also has a number of [pre-trained models](https://keras.io/applications/) ready to use.", "_____no_output_____" ], [ "Copyright 2019 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ] ]
cb1632ac65932abe953d39ed7e417306ad22ad45
27,892
ipynb
Jupyter Notebook
Drawing/OldDrawing/PathOptimization-Copy3.ipynb
jed-frey/python_cnc3018
52b128dec1c2b6f6fbafd36bbaa0531082fdec4b
[ "BSD-3-Clause" ]
8
2017-10-24T01:13:47.000Z
2019-12-30T21:42:54.000Z
Drawing/OldDrawing/PathOptimization-Copy3.ipynb
jed-frey/python_cnc3018
52b128dec1c2b6f6fbafd36bbaa0531082fdec4b
[ "BSD-3-Clause" ]
1
2018-03-12T19:35:03.000Z
2018-07-12T13:17:29.000Z
Drawing/OldDrawing/PathOptimization-Copy3.ipynb
jed-frey/python_cnc3018
52b128dec1c2b6f6fbafd36bbaa0531082fdec4b
[ "BSD-3-Clause" ]
2
2019-08-05T07:03:01.000Z
2020-01-15T22:14:21.000Z
21.050566
109
0.403915
[ [ [ "import os\nimport sys\nimport time\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nimport GCode\nimport GRBL\n", "_____no_output_____" ], [ "# Flip a 2D array. Effectively reversing the path.\nflip2 = np.array([\n [0, 1],\n [1, 0],\n])\nflip2", "_____no_output_____" ], [ "# Flip a 2x3 array. Effectively reversing the path.\nflip3 = np.array([\n [0, 0, 1],\n [0, 1, 0],\n [1, 0, 0],\n])\nflip3", "_____no_output_____" ], [ "A = np.array([\n [1, 2], \n [3, 4],\n])\nA", "_____no_output_____" ], [ "np.matmul(flip2, A)", "_____no_output_____" ], [ "B = np.array([\n [1, 2], \n [3, 4],\n [5, 6],\n])\nB", "_____no_output_____" ], [ "np.matmul(flip3, B)", "_____no_output_____" ], [ "B.shape[0]", "_____no_output_____" ], [ "np.eye(B.shape[0])", "_____no_output_____" ], [ "flip_n_reverseit = np.eye(B.shape[0])[:, ::-1]\nflip_n_reverseit", "_____no_output_____" ], [ "def reverse(self, points):\n flip_n_reverseit = np.eye(points.shape[0])[:, ::-1]\n \n return np.matmul(flip_n_reverseit, points)", "_____no_output_____" ], [ "reverse(None, B)", "_____no_output_____" ] ], [ [ "# Code:", "_____no_output_____" ], [ "Draw a 10 mm line from (0, 0) to (10, 0).", "_____no_output_____" ] ], [ [ "line_len = 10\nline_n_points = 2", "_____no_output_____" ], [ "p = np.linspace(0, line_len, line_n_points, endpoint=True)\np", "_____no_output_____" ], [ "line_n_points = 3\np = np.linspace(0, line_len, line_n_points, endpoint=True)\np", "_____no_output_____" ], [ "line_n_points = 4\np = np.linspace(0, line_len, line_n_points, endpoint=True)\np", "_____no_output_____" ], [ "p", "_____no_output_____" ], [ "Y=0\nfor X in np.linspace(0, line_len, line_n_points, endpoint=True):\n ", "_____no_output_____" ], [ "def HorzLine(X0=0, Xf=10, Y=0, n_points=2):\n p = np.linspace(X0, Xf, n_points, endpoint=True)\n line_points = np.array([\n p,\n Y*np.ones(p.shape),\n ])\n return line_points.transpose()\nHorzLine()", "_____no_output_____" ], [ "def VertLine(X=0, Y0=0, Yf=10, n_points=2):\n p = np.linspace(Y0, Yf, n_points, endpoint=True)\n line_points = np.array([\n X*np.ones(p.shape),\n p,\n ])\n return line_points.transpose()\nVertLine()", "_____no_output_____" ], [ "points = HorzLine(X0=0, Xf=10, Y=0, n_points=2)\npoints", "_____no_output_____" ], [ "line = GCode.Line(points=points)\nline", "_____no_output_____" ], [ "line.__repr__()", "_____no_output_____" ], [ "prog_cfg={\n \"points\": points\n}\nprog_cfg", "_____no_output_____" ], [ "line_cfg = {\n \"X0\": 0,\n \"Xf\": 10,\n \"Y\": 0,\n \"n_points\": 2\n}\nline_cfg", "_____no_output_____" ], [ "help(GCode.Line)", "Help on class Line in module GCode.Line:\n\nclass Line(GCode.GCode.GCode)\n | Method resolution order:\n | Line\n | GCode.GCode.GCode\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, points=array([[ 0. , 0. ],\n | [17.32050808, 0. ],\n | [17.32050808, 10. ],\n | [ 0. , 0. ]]), feed=300, power=150, dynamic_power=True, *args, **kwargs)\n | Initialize self. 
See help(type(self)) for accurate signature.\n | \n | __repr__(self)\n | Return repr(self).\n | \n | generate_gcode(self)\n | \n | ----------------------------------------------------------------------\n | Data descriptors defined here:\n | \n | X\n | \n | Y\n | \n | dist\n | Total distance traveled.\n | \n | dists\n | Distances traveled in each line segment\n | \n | time\n | Total distance traveled.\n | \n | times\n | Amount of time spent drawing each line spegment.\n | \n | Does not take into consideration acceleration curves\n | \n | x_0\n | \n | x_f\n | \n | y_0\n | \n | y_f\n | \n | ----------------------------------------------------------------------\n | Methods inherited from GCode.GCode.GCode:\n | \n | G0 = cmd_fcn(self, **kwargs)\n | \n | G1 = cmd_fcn(self, **kwargs)\n | \n | G2 = cmd_fcn(self, **kwargs)\n | \n | G20 = cmd_fcn(self, **kwargs)\n | \n | G21 = cmd_fcn(self, **kwargs)\n | \n | G28 = cmd_fcn(self, **kwargs)\n | \n | G3 = cmd_fcn(self, **kwargs)\n | \n | G4 = cmd_fcn(self, **kwargs)\n | \n | G90 = cmd_fcn(self, **kwargs)\n | \n | G91 = cmd_fcn(self, **kwargs)\n | \n | G92 = cmd_fcn(self, **kwargs)\n | \n | M0 = cmd_fcn(self, **kwargs)\n | \n | M1 = cmd_fcn(self, **kwargs)\n | \n | M2 = cmd_fcn(self, **kwargs)\n | \n | M3 = cmd_fcn(self, **kwargs)\n | \n | M4 = cmd_fcn(self, **kwargs)\n | \n | M5 = cmd_fcn(self, **kwargs)\n | \n | M6 = cmd_fcn(self, **kwargs)\n | \n | __add__(self, other)\n | \n | __iter__(self)\n | __iter__ function\n | \n | __str__(self)\n | Return str(self).\n | \n | load(self, filename)\n | \n | optimise(self)\n | Create the best GCode possible.\n | \n | run(self)\n | run the program on the given machine\n | \n | save(self, filename)\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from GCode.GCode.GCode:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | code\n | \n | ----------------------------------------------------------------------\n | Data and other attributes inherited from GCode.GCode.GCode:\n | \n | NEWLINE = '\\n'\n\n" ], [ "help(GCode.Program)", "Help on class Program in module GCode.Program:\n\nclass Program(GCode.GCode.GCode)\n | A GCode Program Object\n | \n | Method resolution order:\n | Program\n | GCode.GCode.GCode\n | builtins.object\n | \n | Methods defined here:\n | \n | __init__(self, lines=[], feed=120, *args, **kwargs)\n | Initialize self. 
See help(type(self)) for accurate signature.\n | \n | __repr__(self)\n | Return repr(self).\n | \n | generate_gcode(self)\n | \n | setup(self)\n | \n | teardown(self)\n | \n | ----------------------------------------------------------------------\n | Data descriptors defined here:\n | \n | dist\n | \n | jog_dist\n | \n | jog_time\n | Return time spent jogging\n | \n | laserin_dist\n | Distance, in mm, the line spends cutting.\n | \n | laserin_time\n | Duration, in s, the line spends cutting.\n | \n | time\n | \n | ----------------------------------------------------------------------\n | Methods inherited from GCode.GCode.GCode:\n | \n | G0 = cmd_fcn(self, **kwargs)\n | \n | G1 = cmd_fcn(self, **kwargs)\n | \n | G2 = cmd_fcn(self, **kwargs)\n | \n | G20 = cmd_fcn(self, **kwargs)\n | \n | G21 = cmd_fcn(self, **kwargs)\n | \n | G28 = cmd_fcn(self, **kwargs)\n | \n | G3 = cmd_fcn(self, **kwargs)\n | \n | G4 = cmd_fcn(self, **kwargs)\n | \n | G90 = cmd_fcn(self, **kwargs)\n | \n | G91 = cmd_fcn(self, **kwargs)\n | \n | G92 = cmd_fcn(self, **kwargs)\n | \n | M0 = cmd_fcn(self, **kwargs)\n | \n | M1 = cmd_fcn(self, **kwargs)\n | \n | M2 = cmd_fcn(self, **kwargs)\n | \n | M3 = cmd_fcn(self, **kwargs)\n | \n | M4 = cmd_fcn(self, **kwargs)\n | \n | M5 = cmd_fcn(self, **kwargs)\n | \n | M6 = cmd_fcn(self, **kwargs)\n | \n | __add__(self, other)\n | \n | __iter__(self)\n | __iter__ function\n | \n | __str__(self)\n | Return str(self).\n | \n | load(self, filename)\n | \n | optimise(self)\n | Create the best GCode possible.\n | \n | run(self)\n | run the program on the given machine\n | \n | save(self, filename)\n | \n | ----------------------------------------------------------------------\n | Data descriptors inherited from GCode.GCode.GCode:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | code\n | \n | ----------------------------------------------------------------------\n | Data and other attributes inherited from GCode.GCode.GCode:\n | \n | NEWLINE = '\\n'\n\n" ], [ "progs = list()\n\nfor n_points in range(2, 10):\n \n line_cfg = {\n \"X0\": 0,\n \"Xf\": 10,\n \"Y\": 0,\n \"n_points\": n_points\n } \n points = HorzLine(**line_cfg)\n \n line_cfg = {\n \"points\": points,\n \"feed\":120,\n \"power\":128,\n \"dynamic_power\": True,\n }\n line = GCode.Line(points=points)\n \n \n prog_cfg={\n \"lines\": [line, line],\n \"feed\": 120\n }\n prog = GCode.Program(**prog_cfg)\n progs.append(prog)\nprogs", "_____no_output_____" ], [ "for prog in progs:\n print(len(prog.buffer))", "0\n0\n0\n" ], [ "for prog in progs:\n prog.generate_gcode()\n print(len(prog.buffer))", "17\n19\n21\n" ], [ "list(map(lambda prog: prog.generate_gcode(), progs))\n", "_____no_output_____" ], [ "list(map(lambda prog: len(prog.buffer), progs))", "_____no_output_____" ], [ "import threading\n\ndef concurrent_map(func, data):\n \"\"\"\n Similar to the bultin function map(). But spawn a thread for each argument\n and apply `func` concurrently.\n\n Note: unlike map(), we cannot take an iterable argument. 
`data` should be an\n indexable sequence.\n \"\"\"\n\n N = len(data)\n result = [None] * N\n\n # wrapper to dispose the result in the right slot\n def task_wrapper(i):\n result[i] = func(data[i])\n\n threads = [threading.Thread(target=task_wrapper, args=(i,)) for i in range(N)]\n for t in threads:\n t.start()\n for t in threads:\n t.join()\n\n return result", "_____no_output_____" ], [ "concurrent_map(lambda prog: prog.generate_gcode(), progs)", "_____no_output_____" ], [ "concurrent_map(lambda prog: len(prog.buffer), progs)", "_____no_output_____" ], [ "concurrent_map(lambda prog: prog.__repr__(), progs)", "_____no_output_____" ], [ "concurrent_map(lambda prog: prog.dist, progs)", "_____no_output_____" ], [ "concurrent_map(lambda prog: prog.jog_dist, progs)", "_____no_output_____" ], [ "concurrent_map(lambda prog: prog.laserin_dist, progs)", "_____no_output_____" ], [ "m=concurrent_map(lambda prog: prog.laserin_dist, progs)", "_____no_output_____" ], [ "np.diff(m)", "_____no_output_____" ], [ "np.diff(m)==0", "_____no_output_____" ], [ "np.all(np.diff(m)==0)", "_____no_output_____" ], [ "assert(np.all(np.diff(m)==0))", "_____no_output_____" ], [ "flip2", "_____no_output_____" ], [ "reverse(None, progs[1].lines[0].points)", "_____no_output_____" ], [ "progs", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb163fe5965797aad08559d8850764a6977c992c
230,740
ipynb
Jupyter Notebook
chinese/cnn_tensorflow/CNN_classification.ipynb
ZJUEarthData/new-algorithm
68426a492c26babc16261d57c1365de28e495d97
[ "MIT" ]
5
2020-10-02T10:14:14.000Z
2020-11-04T15:30:29.000Z
cnn_tensorflow/CNN_classification.ipynb
Geeksongs/new-algorithm
f15cefd3a78cbbc2ad8ab2a0c914b95f04f83aff
[ "MIT" ]
null
null
null
cnn_tensorflow/CNN_classification.ipynb
Geeksongs/new-algorithm
f15cefd3a78cbbc2ad8ab2a0c914b95f04f83aff
[ "MIT" ]
5
2021-04-12T10:11:53.000Z
2022-01-02T13:24:25.000Z
121.186975
56,884
0.753398
[ [ [ "import pandas as pd\nimport numpy as np\nimport tensorflow as tf", "_____no_output_____" ], [ "import matplotlib.pyplot as plt", "_____no_output_____" ], [ "raw_data = pd.read_excel(\"hydrogen_test_classification.xlsx\")\nraw_data.head()", "_____no_output_____" ], [ "# 分开特征值和标签值\nX = raw_data.drop(\"TRUE VALUE\", axis=1).copy()\ny = raw_data[\"TRUE VALUE\"]\ny.unique()", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split\n\n# 分训练集、验证集和测试集\nX_train_full, X_test, y_train_full, y_test = train_test_split(X, y, test_size=0.2, random_state=42)\nX_train, Y_valid, X_label, Y_label = train_test_split(X_train_full, y_train_full, test_size=0.2, random_state=42)", "_____no_output_____" ], [ "print(X_train.shape)\nprint(Y_valid.shape)\nprint(X_label.shape)\nprint(Y_label.shape)", "(1508, 25)\n(378, 25)\n(1508,)\n(378,)\n" ], [ "#将数据进行转化成四个通道,方便使用卷积神经网络进行分类\ndef transform(X):\n X=X.values\n X=X.reshape(X.shape[0:1][0],5,5)\n X=np.expand_dims(X,-1)\n print(\"转化后的维度大小为:\")\n print(X.shape)\n return X", "_____no_output_____" ], [ "#首先对X—train进行维度转换\nX_train=transform(X_train)", "转化后的维度大小为:\n(1508, 5, 5, 1)\n" ], [ "X_train.shape", "_____no_output_____" ], [ "Y_valid=transform(Y_valid)\nY_valid.shape", "转化后的维度大小为:\n(378, 5, 5, 1)\n" ], [ "Y_valid.shape", "_____no_output_____" ], [ "#将label的标签值,转化为1和0\ndef only_one_and_zero(y):\n y=y.values#将pandas当中的dateframe对象转化为numpy-ndarray对象\n length=y.shape[0:1][0]\n i=0\n while i<length:\n if(y[i]==-1):\n y[i]=0\n i+=1\n print(\"当前的y为\",y)\n print(type(y))\n return y", "_____no_output_____" ], [ "X_label=only_one_and_zero(X_label)", "当前的y为 [1 1 0 ... 1 1 1]\n<class 'numpy.ndarray'>\n" ], [ "Y_label=only_one_and_zero(Y_label)", "当前的y为 [1 0 1 1 1 1 0 0 0 1 1 1 0 0 0 0 1 0 1 0 0 1 0 0 1 0 1 0 1 1 0 0 1 0 0 1 1\n 0 1 0 1 0 1 0 0 0 1 1 1 0 1 1 1 1 1 0 0 1 0 1 0 0 0 1 1 1 0 0 0 1 0 0 1 1\n 0 1 1 0 0 0 0 1 0 0 1 0 0 1 0 1 0 1 1 1 0 1 1 1 0 0 1 0 0 1 0 1 1 1 0 1 1\n 0 0 0 1 1 1 0 1 1 1 0 0 1 0 0 0 0 1 1 0 1 1 0 0 1 1 0 0 1 1 1 1 0 0 0 0 0\n 0 1 0 0 1 1 1 0 1 0 0 0 1 1 1 1 0 1 1 1 0 1 1 0 1 1 1 1 1 0 1 1 0 0 1 0 0\n 0 0 1 1 1 1 1 1 1 0 1 1 1 1 1 1 0 1 1 1 1 1 0 1 0 1 1 0 0 1 1 0 0 1 1 1 0\n 0 0 1 1 1 1 0 0 1 0 1 0 0 1 1 0 1 1 0 1 1 1 1 1 0 1 1 0 0 1 0 1 1 1 0 0 1\n 1 0 1 1 1 0 1 1 1 1 0 0 1 0 0 1 1 0 0 0 0 0 1 0 1 0 0 1 1 1 0 0 0 0 1 1 0\n 0 1 1 0 1 0 0 0 1 1 1 0 1 1 1 0 0 0 1 0 0 0 0 1 1 0 1 0 0 0 1 1 1 0 1 1 0\n 0 0 0 1 1 0 0 0 1 1 0 1 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 1 1 0 1 0 1 1 0 0 1\n 0 1 0 0 1 1 1 0]\n<class 'numpy.ndarray'>\n" ], [ "X_train", "_____no_output_____" ], [ "Y_label", "_____no_output_____" ], [ "X_train.shape", "_____no_output_____" ], [ "from tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Flatten, Dense\n#初步想法,将5*5的数据进行卷积操作,然后使用卷积神经网络进行图像识别\n# input_shape 填特征的维度,将特征变为一维特征的形式\nmodel = Sequential()\n# model.add(Flatten(input_shape=[25]))、\nmodel.add(tf.keras.layers.Conv2D(50,(2,2),input_shape=X_train.shape[1:],activation=\"relu\"))\nmodel.add(tf.keras.layers.BatchNormalization())\nmodel.add(tf.keras.layers.Conv2D(100,(2,2),activation=\"relu\"))\nmodel.add(tf.keras.layers.MaxPool2D())\nmodel.add(tf.keras.layers.Flatten())\n#model.add(Dense(1024, activation=\"relu\", input_shape=X_train.shape[1:]))\nmodel.add(Dense(500, activation=\"relu\"))\nmodel.add(Dense(250, activation=\"relu\"))\nmodel.add(Dense(125, activation=\"relu\"))\nmodel.add(Dense(50, activation=\"relu\"))\nmodel.add(Dense(1, activation=\"sigmoid\"))", "_____no_output_____" ], [ "model.summary()", "Model: 
\"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d (Conv2D) (None, 4, 4, 50) 250 \n_________________________________________________________________\nbatch_normalization (BatchNo (None, 4, 4, 50) 200 \n_________________________________________________________________\nconv2d_1 (Conv2D) (None, 3, 3, 100) 20100 \n_________________________________________________________________\nmax_pooling2d (MaxPooling2D) (None, 1, 1, 100) 0 \n_________________________________________________________________\nflatten (Flatten) (None, 100) 0 \n_________________________________________________________________\ndense (Dense) (None, 500) 50500 \n_________________________________________________________________\ndense_1 (Dense) (None, 250) 125250 \n_________________________________________________________________\ndense_2 (Dense) (None, 125) 31375 \n_________________________________________________________________\ndense_3 (Dense) (None, 50) 6300 \n_________________________________________________________________\ndense_4 (Dense) (None, 1) 51 \n=================================================================\nTotal params: 234,026\nTrainable params: 233,926\nNon-trainable params: 100\n_________________________________________________________________\n" ], [ "#from tensorflow.keras.utils import plot_model\n#plot_model(model, to_file='model.png',show_shapes=True)\n#如果在本机安装pydot和pyfotprint这两个库的情况下,可以直接将神经网络进行可视化", "_____no_output_____" ], [ "model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),\n loss='binary_crossentropy',\n metrics=['acc']\n)", "_____no_output_____" ], [ "#创建checkpoint,在模型训练是进行回调,这样可以让训练完之后的模型得以保存\nimport os\ncheckpoint_path = \"training_1/cnn.ckpt\"\ncheckpoint_dir = os.path.dirname(checkpoint_path)\n\n# 创建一个保存模型权重的回调\ncp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,\n save_weights_only=True,\n verbose=1)\n#verbose=1,表示在模型训练的时候返回some imformation\n", "_____no_output_____" ], [ "history = model.fit(X_train,X_label, epochs=200,\n validation_data=(Y_valid, Y_label),\n callbacks=[cp_callback])", "Train on 1508 samples, validate on 378 samples\nEpoch 1/200\n1440/1508 [===========================>..] - ETA: 0s - loss: 0.5942 - acc: 0.6736\nEpoch 00001: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 1s 797us/sample - loss: 0.5889 - acc: 0.6771 - val_loss: 0.5336 - val_acc: 0.6481\nEpoch 2/200\n1408/1508 [===========================>..] - ETA: 0s - loss: 0.5032 - acc: 0.7280\nEpoch 00002: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 204us/sample - loss: 0.4996 - acc: 0.7321 - val_loss: 0.4850 - val_acc: 0.7275\nEpoch 3/200\n1440/1508 [===========================>..] - ETA: 0s - loss: 0.4778 - acc: 0.7278\nEpoch 00003: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 204us/sample - loss: 0.4772 - acc: 0.7261 - val_loss: 0.4684 - val_acc: 0.7910\nEpoch 4/200\n1216/1508 [=======================>......] - ETA: 0s - loss: 0.4519 - acc: 0.7484\nEpoch 00004: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 194us/sample - loss: 0.4480 - acc: 0.7454 - val_loss: 0.4648 - val_acc: 0.7275\nEpoch 5/200\n1152/1508 [=====================>........] 
- ETA: 0s - loss: 0.4668 - acc: 0.7387\nEpoch 00005: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 189us/sample - loss: 0.4633 - acc: 0.7454 - val_loss: 0.4548 - val_acc: 0.8069\nEpoch 6/200\n1504/1508 [============================>.] - ETA: 0s - loss: 0.4492 - acc: 0.7507\nEpoch 00006: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 196us/sample - loss: 0.4485 - acc: 0.7507 - val_loss: 0.4610 - val_acc: 0.7937\nEpoch 7/200\n1376/1508 [==========================>...] - ETA: 0s - loss: 0.4342 - acc: 0.7631\nEpoch 00007: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 203us/sample - loss: 0.4395 - acc: 0.7580 - val_loss: 0.4464 - val_acc: 0.7963\nEpoch 8/200\n1440/1508 [===========================>..] - ETA: 0s - loss: 0.4349 - acc: 0.7681\nEpoch 00008: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 208us/sample - loss: 0.4367 - acc: 0.7666 - val_loss: 0.4462 - val_acc: 0.7884\nEpoch 9/200\n1184/1508 [======================>.......] - ETA: 0s - loss: 0.4480 - acc: 0.7466\nEpoch 00009: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 195us/sample - loss: 0.4421 - acc: 0.7546 - val_loss: 0.4423 - val_acc: 0.8042\nEpoch 10/200\n1504/1508 [============================>.] - ETA: 0s - loss: 0.4226 - acc: 0.7773\nEpoch 00010: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 202us/sample - loss: 0.4220 - acc: 0.7772 - val_loss: 0.4316 - val_acc: 0.8042\nEpoch 11/200\n1152/1508 [=====================>........] - ETA: 0s - loss: 0.4170 - acc: 0.7760\nEpoch 00011: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 185us/sample - loss: 0.4206 - acc: 0.7792 - val_loss: 0.4320 - val_acc: 0.8095\nEpoch 12/200\n1216/1508 [=======================>......] - ETA: 0s - loss: 0.4198 - acc: 0.7738\nEpoch 00012: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 183us/sample - loss: 0.4200 - acc: 0.7772 - val_loss: 0.4250 - val_acc: 0.8095\nEpoch 13/200\n1152/1508 [=====================>........] - ETA: 0s - loss: 0.4085 - acc: 0.7804\nEpoch 00013: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 185us/sample - loss: 0.4068 - acc: 0.7812 - val_loss: 0.4167 - val_acc: 0.8228\nEpoch 14/200\n1504/1508 [============================>.] - ETA: 0s - loss: 0.4088 - acc: 0.7899\nEpoch 00014: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 194us/sample - loss: 0.4107 - acc: 0.7891 - val_loss: 0.4266 - val_acc: 0.7698\nEpoch 15/200\n1248/1508 [=======================>......] - ETA: 0s - loss: 0.4223 - acc: 0.7861\nEpoch 00015: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 181us/sample - loss: 0.4124 - acc: 0.8011 - val_loss: 0.4134 - val_acc: 0.8201\nEpoch 16/200\n1216/1508 [=======================>......] - ETA: 0s - loss: 0.3907 - acc: 0.8002\nEpoch 00016: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 181us/sample - loss: 0.4063 - acc: 0.7805 - val_loss: 0.4131 - val_acc: 0.8280\nEpoch 17/200\n1152/1508 [=====================>........] 
- ETA: 0s - loss: 0.3821 - acc: 0.8073\nEpoch 00017: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 185us/sample - loss: 0.3931 - acc: 0.8017 - val_loss: 0.4032 - val_acc: 0.7884\nEpoch 18/200\n1408/1508 [===========================>..] - ETA: 0s - loss: 0.4033 - acc: 0.7770\nEpoch 00018: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 199us/sample - loss: 0.4059 - acc: 0.7759 - val_loss: 0.4130 - val_acc: 0.8254\nEpoch 19/200\n1216/1508 [=======================>......] - ETA: 0s - loss: 0.3945 - acc: 0.8084\nEpoch 00019: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 183us/sample - loss: 0.3894 - acc: 0.8117 - val_loss: 0.3995 - val_acc: 0.8201\nEpoch 20/200\n1184/1508 [======================>.......] - ETA: 0s - loss: 0.3811 - acc: 0.7948\nEpoch 00020: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 183us/sample - loss: 0.3865 - acc: 0.8077 - val_loss: 0.3940 - val_acc: 0.8360\nEpoch 21/200\n1440/1508 [===========================>..] - ETA: 0s - loss: 0.3802 - acc: 0.8042\nEpoch 00021: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 193us/sample - loss: 0.3842 - acc: 0.8064 - val_loss: 0.3940 - val_acc: 0.8519\nEpoch 22/200\n1184/1508 [======================>.......] - ETA: 0s - loss: 0.3760 - acc: 0.8041\nEpoch 00022: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 183us/sample - loss: 0.3837 - acc: 0.8044 - val_loss: 0.3937 - val_acc: 0.8228\nEpoch 23/200\n1184/1508 [======================>.......] - ETA: 0s - loss: 0.3825 - acc: 0.8074\nEpoch 00023: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 184us/sample - loss: 0.3811 - acc: 0.8110 - val_loss: 0.3861 - val_acc: 0.8333\nEpoch 24/200\n1152/1508 [=====================>........] - ETA: 0s - loss: 0.3699 - acc: 0.8438\nEpoch 00024: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 184us/sample - loss: 0.3666 - acc: 0.8435 - val_loss: 0.3803 - val_acc: 0.8333\nEpoch 25/200\n1376/1508 [==========================>...] - ETA: 0s - loss: 0.3642 - acc: 0.8343\nEpoch 00025: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 210us/sample - loss: 0.3604 - acc: 0.8342 - val_loss: 0.3885 - val_acc: 0.8175\nEpoch 26/200\n1152/1508 [=====================>........] - ETA: 0s - loss: 0.3502 - acc: 0.8203\nEpoch 00026: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 198us/sample - loss: 0.3665 - acc: 0.8143 - val_loss: 0.3691 - val_acc: 0.8413\nEpoch 27/200\n1152/1508 [=====================>........] - ETA: 0s - loss: 0.3666 - acc: 0.8351\nEpoch 00027: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 194us/sample - loss: 0.3658 - acc: 0.8322 - val_loss: 0.3732 - val_acc: 0.8413\nEpoch 28/200\n1184/1508 [======================>.......] - ETA: 0s - loss: 0.3501 - acc: 0.8378\nEpoch 00028: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 190us/sample - loss: 0.3546 - acc: 0.8415 - val_loss: 0.3665 - val_acc: 0.8360\nEpoch 29/200\n1184/1508 [======================>.......] 
- ETA: 0s - loss: 0.3465 - acc: 0.8378\nEpoch 00029: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 183us/sample - loss: 0.3470 - acc: 0.8336 - val_loss: 0.3607 - val_acc: 0.8466\nEpoch 30/200\n1216/1508 [=======================>......] - ETA: 0s - loss: 0.3459 - acc: 0.8470\nEpoch 00030: saving model to training_1/cnn.ckpt\n1508/1508 [==============================] - 0s 184us/sample - loss: 0.3447 - acc: 0.8415 - val_loss: 0.3543 - val_acc: 0.8413\nEpoch 31/200\n" ], [ "# 在一张图上画出loss和accuracy的\npd.DataFrame(history.history).plot(figsize=(8, 5))\nplt.xlabel(\"epoch\")\nplt.grid(True)\n# plt.gca().set_ylim(0, 1)\n#save_fig(\"keras_learning_curves_plot\")\nplt.show()", "_____no_output_____" ], [ "plt.plot(history.epoch,history.history.get('loss'),label=\"loss\")\nplt.plot(history.epoch,history.history.get('val_loss'),label=\"val_loss\")\nplt.legend()", "_____no_output_____" ], [ "#络的重要信息\nhistory.params", "_____no_output_____" ], [ "history.history.keys()", "_____no_output_____" ], [ "a = [\"acc\", \"val_acc\"]\nplt.figure(figsize=(8, 5))\nfor i in a:\n plt.plot(history.history[i], label=i)\n plt.legend()\nplt.grid(True)", "_____no_output_____" ], [ "model.evaluate(Y_valid, Y_label)", "378/378 [==============================] - 0s 61us/sample - loss: 0.2300 - acc: 0.9127\n" ] ], [ [ "\n#如果需要重新加载已经训练好的权重,直接使用下列代码,加载使用sheckpoint已经训练好的模型:\n\nmodel.load_weights(checkpoint_path)\n\n# 重新评估模型,更改一下测试集的图片和lebel的参数为我们想要的参数就可以了\nloss,acc = model.evaluate(test_images, test_labels, verbose=2)\nprint(\"Restored model, accuracy: {:5.2f}%\".format(100*acc))\n", "_____no_output_____" ] ] ]
[ "code", "raw" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "raw" ] ]
cb164ab3579ad72cb8b95d47bbade265d1a247d0
516,044
ipynb
Jupyter Notebook
.ipynb_checkpoints/explore_year_temps-checkpoint.ipynb
darrida/NOAA-Global-Temp-Data-Processing
3fe5f62452bb2500b73674eefd0c5c29a25eacff
[ "MIT" ]
null
null
null
.ipynb_checkpoints/explore_year_temps-checkpoint.ipynb
darrida/NOAA-Global-Temp-Data-Processing
3fe5f62452bb2500b73674eefd0c5c29a25eacff
[ "MIT" ]
null
null
null
.ipynb_checkpoints/explore_year_temps-checkpoint.ipynb
darrida/NOAA-Global-Temp-Data-Processing
3fe5f62452bb2500b73674eefd0c5c29a25eacff
[ "MIT" ]
null
null
null
37.626249
747
0.262896
[ [ [ "import os, csv\nfrom pprint import pprint\nfrom pathlib import Path\nimport pandas as pd\nfrom pandas.errors import ParserError\npd.set_option('display.max_columns', 999)\nfrom icecream import ic\nfrom tqdm.notebook import tqdm#, tqdm_notebook", "_____no_output_____" ], [ "temp_csvs = Path('/media/share/store_240a/data_downloads/noaa_daily_avg_temps')\n\nlen(os.listdir(temp_csvs))\n#print(os.listdir(temp_csvs))", "_____no_output_____" ], [ "first_file = temp_csvs / os.listdir(temp_csvs)[0]\nfirst_file = str(first_file)\nfirst_file", "_____no_output_____" ], [ "df1 = pd.read_csv(first_file)", "_____no_output_____" ], [ "len(df1)", "_____no_output_____" ], [ "df1", "_____no_output_____" ], [ "df1['TEMP'].mean()", "_____no_output_____" ], [ "def unique_values_only_one(column: str):\n value_l = column.unique()\n if len(value_l) > 1:\n return 'X'\n return value_l[0]\n\ndef year_complete(year):\n with open('complete.csv', 'a+', newline='') as f:\n f.write(f'{year}')\n f.write('\\n')\n \ndef get_complete():\n with open('complete.csv', 'r') as f:\n year_l = [x[0] for x in csv.reader(f)]\n return year_l", "_____no_output_____" ], [ "file_folders = os.listdir(temp_csvs)\n\nfor year in file_folders:\n print(year)", "2008\n1930\n1935\n1952\n1970\n1958\n1982\n1974\n1956\n1985\n1959\n1981\n1949\n1957\n1993\n1962\n1988\n2011\n1945\n1946\n1948\n1975\n1998\n2005\n2000\n1995\n1942\n2010\n1983\n1963\n1947\n1969\n1960\n2009\n1955\n2018\n1972\n1941\n2019\n1990\n1966\n2017\n2001\n1979\n2012\n1936\n1933\n1934\n1996\n1978\n1987\n1984\n1943\n1991\n1954\n1940\n1989\n1950\n1964\n1971\n1967\n2003\n1997\n1977\n2014\n1992\n1932\n1973\n1976\n1939\n1980\n1951\n2015\n1986\n1931\n1944\n1965\n2016\n1968\n2013\n2006\n2002\n1938\n2020\n1929\n1953\n2007\n1961\n1999\n1937\n1994\n2004\n" ], [ "file_folders_full = os.listdir(temp_csvs)\nfile_folders_full.sort()\n\nprint('All years:')\nprint(file_folders_full)\n\nfile_folders, done = [], []\n\ncomplete_years = get_complete()\nfor year in file_folders_full:\n if year not in complete_years:\n file_folders.append(year)\n else:\n done.append(year)\n\nprint(\"Years completed already:\")\nprint(done)\n\nfor year in tqdm(file_folders, desc='Overall Progress', position=0):\n try:\n years_complete = get_complete()\n if year not in years_complete:\n\n sites_in_folder = os.listdir(temp_csvs / year)\n columns = ['SITE_NUMBER','LATITUDE','LONGITUDE','ELEVATION','AVERAGE_TEMP']\n rows_l = []\n for site in tqdm(sites_in_folder, desc=f'{year} Progress', position=1):\n try:\n df1 = pd.read_csv(temp_csvs / year / site)\n average_temp = df1['TEMP'].mean()\n site_number = unique_values_only_one(df1['STATION'])\n latitude = unique_values_only_one(df1['LATITUDE'])\n longitude = unique_values_only_one(df1['LONGITUDE'])\n elevation = unique_values_only_one(df1['ELEVATION'])\n if site_number == 'X' \\\n or latitude == 'X' \\\n or longitude == 'X' \\\n or elevation == 'X':\n with open('non_unique', 'w', newline='') as f2:\n f2.write(f'Non-unique column: {temp_csvs}, {year}, {site}')\n rows_l.append([site_number,latitude,longitude,elevation,average_temp])\n except ParserError as e:\n print(e)\n print(year, site)\n with open(f'explor_year_temps_files/results_{year}.csv', 'w', newline='') as f: \n write = csv.writer(f) \n write.writerow(columns) \n write.writerows(rows_l)\n year_complete(year)\n else:\n print(f'{year} marked as complete already.')\n except FileNotFoundError as e:\n if not str(e).startswith(\"[Errno 2] No such file or directory: 'complete.csv'\"):\n raise FileNotFoundError(e)\n 
year_complete('0000')", "All years:\n['1929', '1930', '1931', '1932', '1933', '1934', '1935', '1936', '1937', '1938', '1939', '1940', '1941', '1942', '1943', '1944', '1945', '1946', '1947', '1948', '1949', '1950', '1951', '1952', '1953', '1954', '1955', '1956', '1957', '1958', '1959', '1960', '1961', '1962', '1963', '1964', '1965', '1966', '1967', '1968', '1969', '1970', '1971', '1972', '1973', '1974', '1975', '1976', '1977', '1978', '1979', '1980', '1981', '1982', '1983', '1984', '1985', '1986', '1987', '1988', '1989', '1990', '1991', '1992', '1993', '1994', '1995', '1996', '1997', '1998', '1999', '2000', '2001', '2002', '2003', '2004', '2005', '2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015', '2016', '2017', '2018', '2019', '2020']\nYears completed already:\n['1929', '1930', '1931', '1932', '1933', '1934', '1935', '1936', '1937', '1938', '1939', '1940', '1941', '1942', '1943', '1944', '1945', '1946', '1947', '1948', '1949', '1950', '1951', '1952', '1953', '1954', '1955', '1956', '1957', '1958', '1959', '1960', '1961', '1962', '1963', '1964', '1965', '1966', '1967', '1968', '1969', '1970', '1971', '1972', '1973', '1974', '1975', '1976', '1977', '1978', '1979', '1980', '1981', '1982', '1983', '1984', '1985', '1986', '1987', '1988', '1989', '1990', '1991', '1992', '1993', '1994', '1995', '1996', '1997', '1998', '1999', '2000', '2001', '2002', '2003', '2004', '2005', '2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015', '2016', '2017', '2018', '2019', '2020']\n" ], [ "_2012 = temp_csvs / '2012' / '44275099999.csv'\nwith open(_2012, 'r') as f:\n csv.reader(f)\n year_l = [x[0] for x in csv.reader(f)]\n \nyear_l", "_____no_output_____" ], [ "_2012 = temp_csvs / '2012' / '30635099999.csv'\nwith open(_2012, 'r') as f:\n csv.reader(f)\n year_l = [x[0] for x in csv.reader(f)]\n \nyear_l", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb164c11fb812e01cd78f99637f9be6db319b80a
372,290
ipynb
Jupyter Notebook
notebooks/06 Supervised Learning.ipynb
d9w/computational_intelligence
aafcb3aacad0640468bf7bc0b01d0d8cafed6ee3
[ "MIT" ]
2
2020-07-17T21:15:51.000Z
2020-08-15T03:29:51.000Z
notebooks/06 Supervised Learning.ipynb
d9w/computational_intelligence
aafcb3aacad0640468bf7bc0b01d0d8cafed6ee3
[ "MIT" ]
null
null
null
notebooks/06 Supervised Learning.ipynb
d9w/computational_intelligence
aafcb3aacad0640468bf7bc0b01d0d8cafed6ee3
[ "MIT" ]
4
2018-04-23T11:29:00.000Z
2020-05-16T05:34:07.000Z
998.096515
249,016
0.955913
[ [ [ "# Supervised Learning", "_____no_output_____" ], [ "Supervised learning consists in learning the link between two datasets: the observed data X and an external variable y that we are trying to predict, usually called “target” or “labels”. Most often, y is a 1D array of length n_samples.\n\nIf the prediction task is to classify the observations in a set of finite labels, in other words to “name” the objects observed, the task is said to be a **classification** task. On the other hand, if the goal is to predict a continuous target variable, it is said to be a **regression** task.\n\nClustering, which we've just done with K means, is a type of *unsupervised* learning similar to classification. Here, the difference is that we'll be using the labels in our data in our algorithm.", "_____no_output_____" ], [ "## Classification", "_____no_output_____" ], [ "\"The problem of identifying to which of a set of categories (sub-populations) a new observation belongs, on the basis of a training set of data containing observations (or instances) whose category membership is known.\" (Wikipedia)\n\nWe've seen one classification example already, the iris dataset. In this dataset, iris flowers are classified based on their petal and sepal geometries.", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom sklearn.decomposition import PCA\n\ndef pca_plot(data):\n pca = PCA(n_components=2)\n pca.fit(data.data)\n data_pca = pca.transform(data.data)\n\n for label in range(len(data.target_names)):\n plt.scatter(data_pca[data.target==label, 0],\n data_pca[data.target==label, 1],\n label=data.target_names[label])\n plt.xlabel('Principal Component 1')\n plt.ylabel('Principal Component 2')\n plt.legend(loc='best')\n plt.tight_layout()\n plt.show()", "_____no_output_____" ], [ "from sklearn.datasets import load_iris\niris = load_iris()\npca_plot(iris)", "_____no_output_____" ] ], [ [ "Another dataset with more features is the wine classification dataset, which tries to determine the original cultivar, or plant family, of three different Italian wines. A chemical analysis determined the following samples:\n\n1. Alcohol\n2. Malic acid\n3. Ash\n4. Alcalinity of ash \n5. Magnesium\n6. Total phenols\n7. Flavanoids\n8. Nonflavanoid phenols\n9. Proanthocyanins\n10. Color intensity\n11. Hue\n12. OD280/OD315 of diluted wines\n13. Proline", "_____no_output_____" ] ], [ [ "from sklearn.datasets import load_wine\nwine = load_wine()\npca_plot(wine)", "_____no_output_____" ] ], [ [ "A final and more difficult dataset is a sample from the National Institute of Standards and Technology (NIST) dataset on handwritten numbers. A modified and larger version of this, Modified NIST or MNIST, is a current standard benchmark for state of the art machine learning algorithms. 
In this problem, each datapoint is an 8x8 pixel image (64 features) and the classification task is to label each image as the correct number.", "_____no_output_____" ] ], [ [ "from sklearn.datasets import load_digits\n\ndigits = load_digits()\nimages_and_labels = list(zip(digits.images, digits.target))\nfor index, (image, label) in enumerate(images_and_labels[:8]):\n plt.subplot(2, 4, index + 1)\n plt.axis('off')\n plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')\n plt.title('Label: %i' % label)\n\nplt.show()", "_____no_output_____" ], [ "pca_plot(digits)", "_____no_output_____" ] ], [ [ "## Regression", "_____no_output_____" ], [ "\"In statistical modeling, regression analysis is a set of statistical processes for estimating the relationships among variables. It includes many techniques for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables (or 'predictors'). More specifically, regression analysis helps one understand how the typical value of the dependent variable (or 'criterion variable') changes when any one of the independent variables is varied, while the other independent variables are held fixed.\" (Wikipedia)\n\nIn regression, each set of features doesn't correspond to a label but rather to a value. The task of the regression algorithm is to correctly predict this value based on the feature data. One way to think about regression and classification is that regression is continuous while classification is discrete.\n\nScikit learn also comes with a number of sample regression datasets.", "_____no_output_____" ], [ "In our example regression dataset, health metrics of diabetes patients were measured and then the progress of their diabetes was quantitatively measured after 1 year. The features are:\n\n1. age\n2. sex\n3. body mass index\n4. average blood pressure\n+ 5-10 six blood serum measurements", "_____no_output_____" ] ], [ [ "from sklearn.datasets import load_diabetes\ndiabetes = load_diabetes()\ny = diabetes.target\nfeatures = [\"AGE\", \"SEX\", \"BMI\", \"BP\", \"BL1\", \"BL2\", \"BL3\", \"BL4\", \"BL5\", \"BL6\"]", "_____no_output_____" ], [ "plt.figure(figsize=(20,20))\nfor i in range(10):\n plt.subplot(4, 4, i + 1)\n plt.scatter(diabetes.data[:, i], y, edgecolors=(0, 0, 0));\n plt.title('Feature: %s' % features[i])", "_____no_output_____" ] ], [ [ "<div class=\"alert alert-success\">\n <b>EXERCISE: UCI datasets</b>\n <ul>\n <li>\nMany of these datasets originally come from the UCI Machine Learning Repository. Visit https://archive.ics.uci.edu/ml/index.php and select a dataset. What is the dataset describing? What are the features? Is it classification or regression? How many data samples are there?\n </li>\n </ul>\n</div>", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ] ]
cb164e289b14a87ee5f2501d37b09fb3ab10bf60
366,414
ipynb
Jupyter Notebook
DLND-your-first-network/dlnd-your-first-neural-network.ipynb
zhaojijet/UdacityDeepLearningProject
719915769dc060d85cadb894e2337c12e686924d
[ "Apache-2.0" ]
1
2019-01-27T10:57:19.000Z
2019-01-27T10:57:19.000Z
DLND-your-first-network/dlnd-your-first-neural-network.ipynb
zhaojijet/UdacityDeepLearningProject
719915769dc060d85cadb894e2337c12e686924d
[ "Apache-2.0" ]
null
null
null
DLND-your-first-network/dlnd-your-first-neural-network.ipynb
zhaojijet/UdacityDeepLearningProject
719915769dc060d85cadb894e2337c12e686924d
[ "Apache-2.0" ]
1
2019-01-15T11:16:26.000Z
2019-01-15T11:16:26.000Z
387.32981
169,406
0.909755
[ [ [ "# Your first neural network\n\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.\n\n", "_____no_output_____" ] ], [ [ "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "## Load and prepare the data\n\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!", "_____no_output_____" ] ], [ [ "data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)", "_____no_output_____" ], [ "rides.head()", "_____no_output_____" ] ], [ [ "## Checking out the data\n\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above.\n\nBelow is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.", "_____no_output_____" ] ], [ [ "rides[:24*10].plot(x='dteday', y='cnt')", "_____no_output_____" ] ], [ [ "### Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`.", "_____no_output_____" ] ], [ [ "dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor each in dummy_fields:\n dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()", "_____no_output_____" ] ], [ [ "### Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\n\nThe scaling factors are saved so we can go backwards when we use the network for predictions.", "_____no_output_____" ] ], [ [ "quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n mean, std = data[each].mean(), data[each].std()\n scaled_features[each] = [mean, std]\n data.loc[:, each] = (data[each] - mean)/std", "_____no_output_____" ] ], [ [ "### Splitting the data into training, testing, and validation sets\n\nWe'll save the last 21 days of the data to use as a test set after we've trained the network. 
We'll use this set to make predictions and compare them with the actual number of riders.", "_____no_output_____" ] ], [ [ "# Save the last 21 days \ntest_data = data[-21*24:]\ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]", "_____no_output_____" ] ], [ [ "We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).", "_____no_output_____" ] ], [ [ "# Hold out the last 60 days of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]", "_____no_output_____" ] ], [ [ "## Time to build the network\n\nBelow you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\n\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*.\n\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*.\n\n> **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function.\n2. Implement the forward pass in the `train` method.\n3. Implement the backpropagation algorithm in the `train` method, including calculating the output error.\n4. 
Implement the forward pass in the `run` method.\n ", "_____no_output_____" ] ], [ [ "class NeuralNetwork(object):\n def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5, \n (self.hidden_nodes, self.input_nodes))\n\n self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5, \n (self.output_nodes, self.hidden_nodes))\n self.lr = learning_rate\n \n #### Set this to your implemented sigmoid function ####\n # Activation function is the sigmoid function\n self.activation_function = lambda x: 1. / (1. + np.exp(-x))\n \n def train(self, inputs_list, targets_list):\n # Convert inputs list to 2d array\n inputs = np.array(inputs_list, ndmin=2).T\n targets = np.array(targets_list, ndmin=2).T\n \n #### Implement the forward pass here ####\n ### Forward pass ###\n # TODO: Hidden layer\n hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)\n hidden_outputs = self.activation_function(hidden_inputs)\n \n # TODO: Output layer\n final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)\n final_outputs = final_inputs\n \n #### Implement the backward pass here ####\n ### Backward pass ###\n \n # TODO: Output error\n output_errors = targets - final_outputs\n \n # TODO: Backpropagated error\n hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors)\n hidden_grad = hidden_outputs * (1 - hidden_outputs)\n \n # TODO: Update the weights\n self.weights_hidden_to_output += self.lr * np.dot(output_errors, hidden_outputs.T)\n self.weights_input_to_hidden += self.lr * np.dot(hidden_errors * hidden_grad, inputs.T)\n \n \n def run(self, inputs_list):\n # Run a forward pass through the network\n inputs = np.array(inputs_list, ndmin=2).T\n \n #### Implement the forward pass here ####\n # TODO: Hidden layer\n hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)\n hidden_outputs = self.activation_function(hidden_inputs)\n \n # TODO: Output layer\n final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)\n final_outputs = final_inputs\n \n return final_outputs", "_____no_output_____" ], [ "def MSE(y, Y):\n return np.mean((y-Y)**2)", "_____no_output_____" ] ], [ [ "## Training the network\n\nHere you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\n\nYou'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.\n\n### Choose the number of epochs\nThis is the number of times the dataset will pass through the network, each time updating the weights. 
As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.\n\n### Choose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\n\n### Choose the number of hidden nodes\nThe more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.", "_____no_output_____" ] ], [ [ "import sys\n\n### Set the hyperparameters here ###\nepochs = 1000\nlearning_rate = 0.1\nhidden_nodes = 10\noutput_nodes = 1\n\nN_i = train_features.shape[1]\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor e in range(epochs):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n for record, target in zip(train_features.ix[batch].values, \n train_targets.ix[batch]['cnt']):\n network.train(record, target)\n \n # Printing out the training progress\n train_loss = MSE(network.run(train_features), train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features), val_targets['cnt'].values)\n sys.stdout.write(\"\\rProgress: \" + str(100 * e/float(epochs))[:4] \\\n + \"% ... Training loss: \" + str(train_loss)[:5] \\\n + \" ... Validation loss: \" + str(val_loss)[:5])\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)", "Progress: 99.9% ... Training loss: 0.066 ... Validation loss: 0.212" ], [ "plt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()\nplt.ylim(ymax=0.5)", "_____no_output_____" ] ], [ [ "## Check out your predictions\n\nHere, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.", "_____no_output_____" ] ], [ [ "fig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features)*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.ix[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)", "_____no_output_____" ] ], [ [ "## Thinking about your results\n \nAnswer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n\n> **Note:** You can edit the text in this cell by double clicking on it. 
When you want to render the text, press control + enter\n\n#### Your answer below\nBefore Dec 21 the model fits well. From Dec 22 onward it fits poorly, because the amount of data decreases.", "_____no_output_____" ], [ "## Unit tests\n\nRun these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.", "_____no_output_____" ] ], [ [ "import unittest\n\ninputs = [0.5, -0.2, 0.1]\ntargets = [0.4]\ntest_w_i_h = np.array([[0.1, 0.4, -0.3], \n [-0.2, 0.5, 0.2]])\ntest_w_h_o = np.array([[0.3, -0.1]])\n\nclass TestMethods(unittest.TestCase):\n \n ##########\n # Unit tests for data loading\n ##########\n \n def test_data_path(self):\n # Test that file path to dataset has been unaltered\n self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n \n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n \n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n \n network.train(inputs, targets)\n self.assertTrue(np.allclose(network.weights_hidden_to_output, \n np.array([[ 0.37275328, -0.03172939]])))\n self.assertTrue(np.allclose(network.weights_input_to_hidden,\n np.array([[ 0.10562014, 0.39775194, -0.29887597],\n [-0.20185996, 0.50074398, 0.19962801]])))\n\n def test_run(self):\n # Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n\n self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)", ".....\n----------------------------------------------------------------------\nRan 5 tests in 0.006s\n\nOK\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ] ]
cb16504a08f66a371775c22225c742ea60c1d0e1
36,067
ipynb
Jupyter Notebook
notebook/.ipynb_checkpoints/Bi-LSTM-CRF-checkpoint.ipynb
stevehamwu/EmotionCauseExtraction
b5a160f35f7b03bf3730b6885096dbc5f958df8b
[ "MIT" ]
3
2022-02-07T12:08:38.000Z
2022-03-28T04:26:39.000Z
notebook/.ipynb_checkpoints/Bi-LSTM-CRF-checkpoint.ipynb
stevehamwu/EmotionCauseExtraction
b5a160f35f7b03bf3730b6885096dbc5f958df8b
[ "MIT" ]
null
null
null
notebook/.ipynb_checkpoints/Bi-LSTM-CRF-checkpoint.ipynb
stevehamwu/EmotionCauseExtraction
b5a160f35f7b03bf3730b6885096dbc5f958df8b
[ "MIT" ]
null
null
null
29.709226
164
0.516067
[ [ [ "# Author: Robert Guthrie\nfrom copy import copy\nimport torch\nimport torch.autograd as autograd\nimport torch.nn as nn\nimport torch.optim as optim\n\ntorch.manual_seed(1)", "_____no_output_____" ], [ "def argmax(vec):\n # return the argmax as a python int\n _, idx = torch.max(vec, 1)\n return idx.item()\n\n\ndef prepare_sequence(seq, to_ix):\n idxs = [to_ix[w] for w in seq]\n return torch.tensor(idxs, dtype=torch.long)\n\n\n# Compute log sum exp in a numerically stable way for the forward algorithm\ndef log_sum_exp(vec):\n max_score = vec[0, argmax(vec)]\n max_score_broadcast = max_score.view(1, -1).expand(1, vec.size()[1])\n return max_score + \\\n torch.log(torch.sum(torch.exp(vec - max_score_broadcast)))", "_____no_output_____" ], [ "class BiLSTM_CRF(nn.Module):\n\n def __init__(self, vocab_size, tag_to_ix, embedding_dim, hidden_dim):\n super(BiLSTM_CRF, self).__init__()\n self.embedding_dim = embedding_dim\n self.hidden_dim = hidden_dim\n self.vocab_size = vocab_size\n self.tag_to_ix = tag_to_ix\n self.tagset_size = len(tag_to_ix)\n\n self.word_embeds = nn.Embedding(vocab_size, embedding_dim)\n self.lstm = nn.LSTM(embedding_dim, hidden_dim // 2,\n num_layers=1, bidirectional=True)\n\n # Maps the output of the LSTM into tag space.\n self.hidden2tag = nn.Linear(hidden_dim, self.tagset_size)\n\n # Matrix of transition parameters. Entry i,j is the score of\n # transitioning *to* i *from* j.\n self.transitions = nn.Parameter(\n torch.randn(self.tagset_size, self.tagset_size))\n\n # These two statements enforce the constraint that we never transfer\n # to the start tag and we never transfer from the stop tag\n self.transitions.data[tag_to_ix[START_TAG], :] = -10000\n self.transitions.data[:, tag_to_ix[STOP_TAG]] = -10000\n\n self.hidden = self.init_hidden()\n\n def init_hidden(self):\n return (torch.randn(2, 1, self.hidden_dim // 2),\n torch.randn(2, 1, self.hidden_dim // 2))\n\n def _forward_alg(self, feats):\n # Do the forward algorithm to compute the partition function\n init_alphas = torch.full((1, self.tagset_size), -10000.)\n # START_TAG has all of the score.\n init_alphas[0][self.tag_to_ix[START_TAG]] = 0.\n\n # Wrap in a variable so that we will get automatic backprop\n forward_var = init_alphas\n\n # Iterate through the sentence\n for feat in feats:\n alphas_t = [] # The forward tensors at this timestep\n for next_tag in range(self.tagset_size):\n # broadcast the emission score: it is the same regardless of\n # the previous tag\n emit_score = feat[next_tag].view(\n 1, -1).expand(1, self.tagset_size)\n # the ith entry of trans_score is the score of transitioning to\n # next_tag from i\n trans_score = self.transitions[next_tag].view(1, -1)\n # The ith entry of next_tag_var is the value for the\n # edge (i -> next_tag) before we do log-sum-exp\n next_tag_var = forward_var + trans_score + emit_score\n # The forward variable for this tag is log-sum-exp of all the\n # scores.\n alphas_t.append(log_sum_exp(next_tag_var).view(1))\n forward_var = torch.cat(alphas_t).view(1, -1)\n terminal_var = forward_var + self.transitions[self.tag_to_ix[STOP_TAG]]\n alpha = log_sum_exp(terminal_var)\n return alpha\n\n def _get_lstm_features(self, sentence):\n self.hidden = self.init_hidden()\n embeds = self.word_embeds(sentence).view(len(sentence), 1, -1)\n lstm_out, self.hidden = self.lstm(embeds, self.hidden)\n lstm_out = lstm_out.view(len(sentence), self.hidden_dim)\n lstm_feats = self.hidden2tag(lstm_out)\n return lstm_feats\n\n def _score_sentence(self, feats, tags):\n # Gives the score of a 
provided tag sequence\n score = torch.zeros(1)\n tags = torch.cat([torch.tensor([self.tag_to_ix[START_TAG]], dtype=torch.long), tags])\n for i, feat in enumerate(feats):\n score = score + \\\n self.transitions[tags[i + 1], tags[i]] + feat[tags[i + 1]]\n score = score + self.transitions[self.tag_to_ix[STOP_TAG], tags[-1]]\n return score\n\n def _viterbi_decode(self, feats):\n backpointers = []\n\n # Initialize the viterbi variables in log space\n init_vvars = torch.full((1, self.tagset_size), -10000.)\n init_vvars[0][self.tag_to_ix[START_TAG]] = 0\n\n # forward_var at step i holds the viterbi variables for step i-1\n forward_var = init_vvars\n for feat in feats:\n bptrs_t = [] # holds the backpointers for this step\n viterbivars_t = [] # holds the viterbi variables for this step\n\n for next_tag in range(self.tagset_size):\n # next_tag_var[i] holds the viterbi variable for tag i at the\n # previous step, plus the score of transitioning\n # from tag i to next_tag.\n # We don't include the emission scores here because the max\n # does not depend on them (we add them in below)\n next_tag_var = forward_var + self.transitions[next_tag]\n best_tag_id = argmax(next_tag_var)\n bptrs_t.append(best_tag_id)\n viterbivars_t.append(next_tag_var[0][best_tag_id].view(1))\n # Now add in the emission scores, and assign forward_var to the set\n # of viterbi variables we just computed\n forward_var = (torch.cat(viterbivars_t) + feat).view(1, -1)\n backpointers.append(bptrs_t)\n\n # Transition to STOP_TAG\n terminal_var = forward_var + self.transitions[self.tag_to_ix[STOP_TAG]]\n best_tag_id = argmax(terminal_var)\n path_score = terminal_var[0][best_tag_id]\n\n # Follow the back pointers to decode the best path.\n best_path = [best_tag_id]\n for bptrs_t in reversed(backpointers):\n best_tag_id = bptrs_t[best_tag_id]\n best_path.append(best_tag_id)\n # Pop off the start tag (we dont want to return that to the caller)\n start = best_path.pop()\n assert start == self.tag_to_ix[START_TAG] # Sanity check\n best_path.reverse()\n return path_score, best_path\n\n def neg_log_likelihood(self, sentence, tags):\n feats = self._get_lstm_features(sentence)\n forward_score = self._forward_alg(feats)\n gold_score = self._score_sentence(feats, tags)\n return forward_score - gold_score\n\n def forward(self, sentence): # dont confuse this with _forward_alg above.\n # Get the emission scores from the BiLSTM\n lstm_feats = self._get_lstm_features(sentence)\n\n # Find the best path, given the features.\n score, tag_seq = self._viterbi_decode(lstm_feats)\n return score, tag_seq", "_____no_output_____" ], [ "START_TAG = \"<START>\"\nSTOP_TAG = \"<STOP>\"\nEMBEDDING_DIM = 5\nHIDDEN_DIM = 4\n\n# Make up some training data\ntraining_data = [(\n \"the wall street journal reported today that apple corporation made money\".split(),\n \"B I I I O O O B I O O\".split()\n), (\n \"georgia tech is a university in georgia\".split(),\n \"B I O O O O B\".split()\n)]\n\nword_to_ix = {}\nfor sentence, tags in training_data:\n for word in sentence:\n if word not in word_to_ix:\n word_to_ix[word] = len(word_to_ix)\n\ntag_to_ix = {\"B\": 0, \"I\": 1, \"O\": 2, START_TAG: 3, STOP_TAG: 4}\n\nmodel = BiLSTM_CRF(len(word_to_ix), tag_to_ix, EMBEDDING_DIM, HIDDEN_DIM)\noptimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)\n\n# Check predictions before training\nwith torch.no_grad():\n precheck_sent = prepare_sequence(training_data[0][0], word_to_ix)\n precheck_tags = torch.tensor([tag_to_ix[t] for t in training_data[0][1]], 
dtype=torch.long)\n print(model(precheck_sent))\n\n# Make sure prepare_sequence from earlier in the LSTM section is loaded\nfor epoch in range(\n 300): # again, normally you would NOT do 300 epochs, it is toy data\n for sentence, tags in training_data:\n # Step 1. Remember that Pytorch accumulates gradients.\n # We need to clear them out before each instance\n model.zero_grad()\n\n # Step 2. Get our inputs ready for the network, that is,\n # turn them into Tensors of word indices.\n sentence_in = prepare_sequence(sentence, word_to_ix)\n targets = torch.tensor([tag_to_ix[t] for t in tags], dtype=torch.long)\n\n # Step 3. Run our forward pass.\n loss = model.neg_log_likelihood(sentence_in, targets)\n\n # Step 4. Compute the loss, gradients, and update the parameters by\n # calling optimizer.step()\n loss.backward()\n optimizer.step()\n\n# Check predictions after training\nwith torch.no_grad():\n precheck_sent = prepare_sequence(training_data[0][0], word_to_ix)\n print(model(precheck_sent))\n# We got it!", "(tensor(2.6907), [1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1])\n(tensor(20.4906), [0, 1, 1, 1, 2, 2, 2, 0, 1, 2, 2])\n" ] ], [ [ "# model", "_____no_output_____" ] ], [ [ "import sys\nsys.path.append('..')\nfrom utils.dataset.ec import ECDataset\nfrom utils.dataloader.ec import ECDataLoader\nfrom models.han.word_model import WordAttention\nfrom models.han.sentence_model import SentenceWithPosition\ndevice = torch.device('cuda: 0')", "_____no_output_____" ], [ "batch_size = 16\nvocab_size = 23071\nnum_classes = 2\nsequence_length = 41\nembedding_dim = 300\ndropout = 0.5\nword_rnn_size = 300\nword_rnn_layer = 2\nsentence_rnn_size = 300\nsentence_rnn_layer = 2\npos_size = 103\npos_embedding_dim = 300\npos_embedding_file= '/data/wujipeng/ec/data/embedding/pos_embedding.pkl'", "_____no_output_____" ], [ "train_dataset = ECDataset(data_root='/data/wujipeng/ec/data/test/', vocab_root='/data/wujipeng/ec/data/raw_data/', train=True)\ntest_dataset = ECDataset(data_root='/data/wujipeng/ec/data/test/', vocab_root='/data/wujipeng/ec/data/raw_data/', train=False)", "_____no_output_____" ], [ "train_loader = ECDataLoader(dataset=train_dataset, clause_length=sequence_length, batch_size=16, shuffle=True, sort=True, collate_fn=train_dataset.collate_fn)", "_____no_output_____" ], [ "for batch in train_loader:\n clauses, keywords, poses = ECDataset.batch2input(batch)\n labels = ECDataset.batch2target(batch)\n clauses = torch.from_numpy(clauses).to(device)\n keywords = torch.from_numpy(keywords).to(device)\n poses = torch.from_numpy(poses).to(device)\n labels = torch.from_numpy(labels).to(device)\n targets = labels\n break", "_____no_output_____" ], [ "class HierachicalAttentionModelCRF:\n def __init__(self,\n vocab_size,\n num_classes,\n embedding_dim,\n hidden_size,\n word_model,\n sentence_model,\n dropout=0.5,\n fix_embed=True,\n name='HAN'):\n super(HierachicalAttentionModelCRF, self).__init__()\n\n self.num_classes = num_classes\n self.fix_embed = fix_embed\n self.name = name\n\n self.Embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=0)\n self.word_rnn = WordAttention(\n vocab_size=vocab_size,\n embedding_dim=embedding_dim,\n batch_size=batch_size,\n sequence_length=sequence_length,\n rnn_size=word_rnn_size,\n rnn_layers=word_rnn_layer,\n dropout=dropout)\n self.sentence_rnn = SentenceAttention(\n batch_size=batch_size,\n word_rnn_size = word_rnn_size,\n rnn_size = sentence_rnn_size,\n rnn_layers=sentence_rnn_layer,\n pos_size=pos_size,\n pos_embedding_dim=pos_embedding_dim,\n 
pos_embedding_file=pos_embedding_file\n )\n self.fc = nn.Linear(\n 2 * self.word_rnn_size + 2 * self.sentence_rnn_size, num_classes)\n self.dropout = nn.Dropout(dropout)\n # self.fc = nn.Sequential(\n # nn.Linear(2 * self.sentence_rnn_size, linear_hidden_dim),\n # nn.ReLU(inplace=True),\n # nn.Dropout(dropout),\n # nn.Linear(linear_hidden_dim, num_classes)\n # )\n\n def init_weights(self, embeddings):\n if embeddings is not None:\n self.Embedding = self.Embedding.from_pretrained(embeddings)\n\n def forward(self, clauses, keywords, poses):\n inputs = self.linear(self.Embedding(clauses))\n queries = self.linear(self.Embedding(keywords))\n documents, word_attn = self.word_rnn(inputs, queries)\n outputs, sentence_attn = self.sentence_rnn(documents, poses)\n # outputs = self.fc(outputs)\n s_c = torch.cat((documents, outputs), dim=-1)\n outputs = self.fc(self.dropout(s_c))\n return outputs, word_attn, sentence_attn", "_____no_output_____" ] ], [ [ "## init", "_____no_output_____" ] ], [ [ "def argmax(vec):\n # return the argmax as a python int\n _, idx = torch.max(vec, 1)\n return idx.item()\n\ndef prepare_sequence(seq, to_ix):\n idxs = [to_ix[w] for w in seq]\n return torch.tensor(idxs, dtype=torch.long)\n\n\n# Compute log sum exp in a numerically stable way for the forward algorithm\ndef log_sum_exp(vec):\n max_score = vec[0, argmax(vec)]\n max_score_broadcast = max_score.view(1, -1).expand(1, vec.size()[1])\n return max_score + \\\n torch.log(torch.sum(torch.exp(vec - max_score_broadcast)))", "_____no_output_____" ], [ "START_TAG = \"<START>\"\nSTOP_TAG = \"<STOP>\"\ntag_to_ix = {0: 0, 1: 1, START_TAG: 2, STOP_TAG: 3}\ntagsize = len(tag_to_ix)", "_____no_output_____" ], [ "Embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=0).to(device)\nword_rnn = WordAttention(\n vocab_size=vocab_size,\n embedding_dim=embedding_dim,\n batch_size=batch_size,\n sequence_length=sequence_length,\n rnn_size=word_rnn_size,\n rnn_layers=word_rnn_layer,\n dropout=dropout).to(device)\nsentence_rnn = SentenceWithPosition(\n batch_size=batch_size,\n word_rnn_size = word_rnn_size,\n rnn_size = sentence_rnn_size,\n rnn_layers=sentence_rnn_layer,\n pos_size=pos_size,\n pos_embedding_dim=pos_embedding_dim,\n pos_embedding_file=pos_embedding_file\n).to(device)\nfc = nn.Linear(2 * word_rnn_size + 2 * sentence_rnn_size, num_classes+2).to(device)\ndrop = nn.Dropout(dropout).to(device)", "_____no_output_____" ], [ "transitions = nn.Parameter(torch.randn(tagset_size, tagset_size)).to(device)\ntransitions.data[tag_to_ix[START_TAG], :] = -10000\ntransitions.data[:, tag_to_ix[STOP_TAG]] = -10000", "_____no_output_____" ] ], [ [ "## forward", "_____no_output_____" ] ], [ [ "inputs = Embedding(clauses)\nqueries = Embedding(keywords)\ndocuments, word_attn = word_rnn(inputs, queries)\noutputs, sentence_attn = sentence_rnn(documents, poses)", "_____no_output_____" ], [ "s_c = torch.cat((documents, outputs), dim=-1)\noutputs = fc(drop(s_c))", "_____no_output_____" ], [ "outputs.size()", "_____no_output_____" ], [ "lstm_feats = copy(outputs)\nlstm_feats.size()", "_____no_output_____" ] ], [ [ "### _forward_alg", "_____no_output_____" ] ], [ [ "init_alphas = torch.full((1, tagset_size), -10000.).to(device)\n# START_TAG has all of the score.\ninit_alphas[0][tag_to_ix[START_TAG]] = 0.\n\n# Wrap in a variable so that we will get automatic backprop\nforward_var = init_alphas\n\n# Iterate through the sentence\nfor feat in lstm_feats:\n alphas_t = [] # The forward tensors at this timestep\n for next_tag in range(tagset_size):\n # 
broadcast the emission score: it is the same regardless of\n # the previous tag\n emit_score = feat[next_tag].view(1, -1).expand(1, tagset_size)\n # the ith entry of trans_score is the score of transitioning to\n # next_tag from i\n trans_score = transitions[next_tag].view(1, -1)\n # The ith entry of next_tag_var is the value for the\n # edge (i -> next_tag) before we do log-sum-exp\n next_tag_var = forward_var + trans_score + emit_score\n # The forward variable for this tag is log-sum-exp of all the\n # scores.\n alphas_t.append(log_sum_exp(next_tag_var).view(1))\n forward_var = torch.cat(alphas_t).view(1, -1)\nterminal_var = forward_var + transitions[tag_to_ix[STOP_TAG]]\nalpha = log_sum_exp(terminal_var)", "_____no_output_____" ], [ "forward_score = alpha", "_____no_output_____" ], [ "init_alphas", "_____no_output_____" ] ], [ [ "### _score_sentence", "_____no_output_____" ] ], [ [ "lstm_feats.size()", "_____no_output_____" ], [ "tags = copy(targets)\nscore = torch.zeros(1).to(device)\ntags = torch.cat((torch.full((tags.size(0), 1), tag_to_ix[START_TAG], dtype=torch.long).to(device), tags), dim=-1)\nfor feats, tag in zip(lstm_feats, tags):\n score = torch.zeros(1).to(device)\n tag = torch.cat([torch.LongTensor([tag_to_ix[START_TAG]]).to(device), tag])\n for i, feat in enumerate(feats):\n score = score + transitions[tags[i + 1], tags[i]] + feat[tags[i + 1]]\nscore = score + transitions[tag_to_ix[STOP_TAG], tags[-1]]", "_____no_output_____" ], [ "tag", "_____no_output_____" ], [ "score = forward_score - gold_score\nforward_score, gold_score, score", "_____no_output_____" ] ], [ [ "### _viterbi_decode", "_____no_output_____" ] ], [ [ "backpointers = []\n\n# Initialize the viterbi variables in log space\ninit_vvars = torch.full((1, tagset_size), -10000.).to(device)\ninit_vvars[0][tag_to_ix[START_TAG]] = 0\n\n# forward_var at step i holds the viterbi variables for step i-1\nforward_var = init_vvars\nfor feat in lstm_feats:\n bptrs_t = [] # holds the backpointers for this step\n viterbivars_t = [] # holds the viterbi variables for this step\n\n for next_tag in range(tagset_size):\n # next_tag_var[i] holds the viterbi variable for tag i at the\n # previous step, plus the score of transitioning\n # from tag i to next_tag.\n # We don't include the emission scores here because the max\n # does not depend on them (we add them in below)\n next_tag_var = forward_var + transitions[next_tag]\n best_tag_id = argmax(next_tag_var)\n bptrs_t.append(best_tag_id)\n viterbivars_t.append(next_tag_var[0][best_tag_id].view(1))\n # Now add in the emission scores, and assign forward_var to the set\n # of viterbi variables we just computed\n forward_var = (torch.cat(viterbivars_t) + feat).view(1, -1)\n backpointers.append(bptrs_t)\n\n# Transition to STOP_TAG\nterminal_var = forward_var + transitions[tag_to_ix[STOP_TAG]]\nbest_tag_id = argmax(terminal_var)\npath_score = terminal_var[0][best_tag_id]\n\n# Follow the back pointers to decode the best path.\nbest_path = [best_tag_id]\nfor bptrs_t in reversed(backpointers):\n best_tag_id = bptrs_t[best_tag_id]\n best_path.append(best_tag_id)\n# Pop off the start tag (we dont want to return that to the caller)\nstart = best_path.pop()\nassert start == tag_to_ix[START_TAG] # Sanity check\nbest_path.reverse()", "_____no_output_____" ], [ "path_score.data, len(best_path)", "_____no_output_____" ], [ "best_path", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
cb16533f94fb6efec306792c822c6e42ef55152b
2,471
ipynb
Jupyter Notebook
03_view_imported_data.ipynb
JiaminXu1019/ECG_ML
ccf3375416cfdafae9d15b93e6b91637b18c85a5
[ "MIT" ]
15
2019-06-06T05:10:51.000Z
2021-12-20T03:45:18.000Z
03_view_imported_data.ipynb
JiaminXu1019/ECG_ML
ccf3375416cfdafae9d15b93e6b91637b18c85a5
[ "MIT" ]
4
2019-05-07T06:06:38.000Z
2020-12-12T11:17:09.000Z
03_view_imported_data.ipynb
koen-aerts/ECG_ML
ccf3375416cfdafae9d15b93e6b91637b18c85a5
[ "MIT" ]
10
2019-06-13T05:21:03.000Z
2022-03-20T07:23:11.000Z
22.87963
130
0.532578
[ [ [ "# Author\n[Koen Aerts](https://koenaerts.ca/) @ [Mobia Technology Innovations](https://mobia.io)\n\n[myOpenHealth](https://mobia.io/healthcare/)", "_____no_output_____" ], [ "# Intro\nThis notebook plots some of the individual heartbeat records that were generated from the \"02_import_mitdb_data\" notebook.", "_____no_output_____" ], [ "# Initialize\nImport dependencies.", "_____no_output_____" ] ], [ [ "import numpy as np\nfrom matplotlib import pyplot as plt", "_____no_output_____" ] ], [ [ "# Load Data\nChange file name to select different file. The file must be CSV and each row should contain exactly 188 values.", "_____no_output_____" ] ], [ [ "data = np.loadtxt('data_ecg/210_MLII.csv', delimiter=',')\nprint(data.shape)", "_____no_output_____" ] ], [ [ "# Visualize\nChange values to navigate around the data. Different colors are used for Normal vs Abnormal heartbeats.", "_____no_output_____" ] ], [ [ "for beatid in [0,1,2,3,99,200,502,2428,2443]:\n times = np.arange(187, dtype = 'float') / 187\n beat = data[beatid][:-1]\n anno = data[beatid][-1]\n plt.figure(figsize=(20,5))\n if (anno == 0.0):\n plt.plot(times, beat, 'b')\n else:\n plt.plot(times, beat, 'r')\n plt.xlabel('Time [s]')\n plt.ylabel('beat ' + str(beatid) + \" type \" + str(anno))\n plt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb1659eab473bffa1e6731829adc9e601b45818d
23,444
ipynb
Jupyter Notebook
module3-ridge-regression/LS_DS_213_assignment.ipynb
ppmorgoun/DS-Unit-2-Linear-Models
faacc5e97649f6a678e555b7f2a11f1ffd392938
[ "MIT" ]
4
2019-11-20T05:10:43.000Z
2021-04-12T04:59:00.000Z
module3-ridge-regression/LS_DS_213_assignment.ipynb
ppmorgoun/DS-Unit-2-Linear-Models
faacc5e97649f6a678e555b7f2a11f1ffd392938
[ "MIT" ]
30
2019-07-20T03:27:01.000Z
2021-09-08T03:08:25.000Z
module3-ridge-regression/LS_DS_213_assignment.ipynb
ppmorgoun/DS-Unit-2-Linear-Models
faacc5e97649f6a678e555b7f2a11f1ffd392938
[ "MIT" ]
712
2019-07-08T15:50:08.000Z
2021-11-10T15:18:57.000Z
67.174785
14,084
0.806859
[ [ [ "Lambda School Data Science\n\n*Unit 2, Sprint 1, Module 3*\n\n---", "_____no_output_____" ] ], [ [ "%%capture\nimport sys\n\n# If you're on Colab:\nif 'google.colab' in sys.modules:\n DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'\n !pip install category_encoders==2.*\n\n# If you're working locally:\nelse:\n DATA_PATH = '../data/'", "_____no_output_____" ] ], [ [ "# Module Project: Ridge Regression\n\nFor this project, you'll return to the Tribecca Condo dataset. But this time, you'll look at the _entire_ dataset and try to predict property sale prices.\n\nThe [NYC Department of Finance](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page) has a glossary of property sales terms and NYC Building Class Code Descriptions. The data comes from the [NYC OpenData](https://data.cityofnewyork.us/browse?q=NYC%20calendar%20sales) portal.\n\n## Directions\n\nThe tasks for this project are the following:\n\n- **Task 1:** Import `csv` file using `wrangle` function.\n- **Task 2:** Conduct exploratory data analysis (EDA), and modify `wrangle` function to engineer two subset your dataset to one-family dwellings whose price is between \\\\$100,000 and \\\\$2,000,000.\n- **Task 3:** Split data into feature matrix `X` and target vector `y`.\n- **Task 4:** Split feature matrix `X` and target vector `y` into training and test sets.\n- **Task 5:** Establish the baseline mean absolute error for your dataset.\n- **Task 6:** Build and train a `OneHotEncoder`, and transform `X_train` and `X_test`.\n- **Task 7:** Build and train a `LinearRegression` model.\n- **Task 8:** Build and train a `Ridge` model.\n- **Task 9:** Calculate the training and test mean absolute error for your `LinearRegression` model.\n- **Task 10:** Calculate the training and test mean absolute error for your `Ridge` model.\n- **Task 11:** Create a horizontal bar chart showing the 10 most influencial features for your `Ridge` model. \n\n**Note**\n\nYou should limit yourself to the following libraries for this project:\n\n- `category_encoders`\n- `matplotlib`\n- `pandas`\n- `sklearn`", "_____no_output_____" ], [ "# I. Wrangle Data", "_____no_output_____" ] ], [ [ "def wrangle(filepath):\n # Import csv file\n cols = ['BOROUGH', 'NEIGHBORHOOD',\n 'BUILDING CLASS CATEGORY', 'GROSS SQUARE FEET', \n 'YEAR BUILT', 'SALE PRICE', 'SALE DATE']\n df = pd.read_csv(filepath, usecols=cols)\n return df\n\nfilepath = DATA_PATH+'condos/NYC_Citywide_Rolling_Calendar_Sales.csv'", "_____no_output_____" ] ], [ [ "**Task 1:** Use the above `wrangle` function to import the `NYC_Citywide_Rolling_Calendar_Sales.csv` file into a DataFrame named `df`.", "_____no_output_____" ] ], [ [ "df = ...", "_____no_output_____" ] ], [ [ "**Task 2:** Modify the above `wrangle` function so that:\n\n- The column `'SALE DATE'` becomes the `DatetimeIndex`.\n- The dtype for the `'BOROUGH'` column is `object`, not `int`.\n- The dtype for the `'SALE PRICE'` column is `int`, not `object`.\n- The dataset includes only one-family dwellings (`BUILDING CLASS CATEGORY == '01 ONE FAMILY DWELLINGS'`).\n- The dataset includes only properties whose sale price is between \\\\$100,000 and \\\\$2,000,000.", "_____no_output_____" ] ], [ [ "# Perform your exploratory data analysis here and\n# modify the wrangle function above", "_____no_output_____" ] ], [ [ "# II. Split Data\n\n**Task 3:** Split your dataset into the feature matrix `X` and the target vector `y`. 
You want to predict `'SALE_PRICE'`.", "_____no_output_____" ] ], [ [ "X = ...\ny = ...", "_____no_output_____" ] ], [ [ "**Task 4:** Split `X` and `y` into a training set (`X_train`, `y_train`) and a test set (`X_test`, `y_test`).\n\n- Your training set should include data from January to March 2019. \n- Your test set should include data from April 2019.", "_____no_output_____" ] ], [ [ "X_train, y_train = ..., ...\nX_test, y_test = ..., ...", "_____no_output_____" ] ], [ [ "# III. Establish Baseline\n\n**Task 5:** Since this is a **regression** problem, you need to calculate the baseline mean absolute error for your model.", "_____no_output_____" ] ], [ [ "baseline_mae = ...\nprint('Baseline MAE:', baseline_mae)", "_____no_output_____" ] ], [ [ "# IV. Build Model \n\n**Task 6:** Build and train a `OneHotEncoder` and then use it to transform `X_train` and `X_test`.", "_____no_output_____" ] ], [ [ "ohe = ...\n\nXT_train = ...\nXT_test = ...", "_____no_output_____" ] ], [ [ "**Task 7:** Build and train a `LinearRegression` model named `model_lr`. Remember to train your model using your _transformed_ feature matrix.", "_____no_output_____" ] ], [ [ "model_lr = ...", "_____no_output_____" ] ], [ [ "**Task 8:** Build and train a `Ridge` model named `model_r`. Remember to train your model using your _transformed_ feature matrix.", "_____no_output_____" ] ], [ [ "model_r = ...", "_____no_output_____" ] ], [ [ "# V. Check Metrics\n\n**Task 9:** Check the training and test metrics for `model_lr`.", "_____no_output_____" ] ], [ [ "training_mae_lr = ...\ntest_mae_lr = ...\n\nprint('Linear Training MAE:', training_mae_lr)\nprint('Linear Test MAE:', test_mae_lr)", "_____no_output_____" ] ], [ [ "**Task 10:** Check the training and test metrics for `model_r`.", "_____no_output_____" ] ], [ [ "training_mae_r = ...\ntest_mae_r = ...\n\nprint('Ridge Training MAE:', training_mae_r)\nprint('Ridge Test MAE:', test_mae_r)", "_____no_output_____" ] ], [ [ "**Stretch Goal:** Calculate the training and test $R^2$ scores for `model_r`.", "_____no_output_____" ] ], [ [ "# Calculate R^2 score", "_____no_output_____" ] ], [ [ "# VI. Communicate Results\n\n**Task 11:** Create a horizontal bar chart that plots the 10 most important coefficients for `model_r`, sorted by absolute value. 
Your figure should look like our example from class:\n\n*(embedded PNG omitted: a matplotlib 3.2.2 horizontal bar chart of the 10 most important coefficients)*
avgU4oeT4EWnzZuCElDyR9LFUPot0M42kHYGdOxJnO68CQ9P2UqBJ0ujUxxqSdqizvZuB49qWsCV9NC0tl/ZjZmY9pNPfCBQR/5T0DeAmSa8BD3SwqZ3Irj+uAt4iuwbZZoO0nLoCODKVfQu4IJUPJkuKXwd+THbdc4Gk9wFPAJ8FfgVcLGkJsIRsWbSzLgEulPRPYDQwDjhP0noppp8Bi+to79dkS7VzU9JvBcYCC4CVkuYDl5S7rmlmZl1PEVG7Vq1GpCERsTy90V8APNqoN3ZJy4BCRLzQiPb6u0KhEMVisafDMDPrUyS1RETFz+63adQ3An1N0jyyWdV6ZHfTmpmZ9SsN+cL2NKvMNbOUtCFwW5ld+0XEi2Xabu5cdDXjuR9Yq13xlyNiYVf2a2ZmfU+3/8pJSowjalbsJhHx8Z6OwczM+gZ/YbuZmVlOTppmZmY5OWmamZnl5KRpZmaWk5OmmZlZTk6aZmZmOXX7R07MzKzva554Q9nyZZMO6uZIupdnmmZmZjk5aZqZmeXkpC1QeNkAAAh5SURBVGlmZpZTv7umKekMYDnwAWBWRNxape7BwPYRMambwjMzsz6s3yXNNhHxwxx1rgOu66oYJA2KiJVd1b6ZmXWvfrE8K+lUSY9IuhvYNpVdImlc2l4m6UeS5kpaKGl4Kp8g6fyS+udJulfS4yXHvk/SLyU9LGmGpBvb9lWIZZmkyZLmAodLOjL1uUjS5JJ6lcqXS5oiabGkWyXtJmlmiungCn0eK6koqdja2tr5ATUzs7L6fNKUNBI4guyXUw4ERlWo+kJE7Ar8Cji5Qp2NgT2BzwJtS7afB5qB7YEvA6NzhPVi6msWMBnYN8U3StJYSZuUK0/HrgvcHhE7AK8CZwGfBA4FzizXWURMjYhCRBSamppyhGdmZh3R55MmsBdwbUS8HhGvUHm59Zr03EKWBMv5Y0SsioiHgI1S2Z7Alan8b8AdOWKanp5HATMjojUi3gamAXtXKQd4E7gpbS8E7oyIt9J2pbjNzKwb9IekmdeK9LySytdyV5RsqxN9vdaJY9+KiEjbq0gxRcQq+vE1aDOzvqA/JM1ZwFhJ60gaCnyuwe3fAxyWrm1uBIyp49g5wL9KGiZpEHAkcGeVcjMz68X6/MwlIuZKmg7MB54HHmhwF1cD+wEPAU8Bc4GXc8b2rKSJZEu6Am6IiD8BVCo3M+sL+vvX5VWi1SuBVomkIRGxXNKGZLPEPdL1zV6nUChEsVjs6TDMzPoUSS0RUahVr8/PNLvJ9ZLWB9YEftxbE6aZmXUtJ80cImJM+zJJ1wJbtiv+XkTc3C1BmZlZt3PS7KCIOLSnYzAzs+7VH+6eNTMz6xa+EaifkdQKPNlN3Q0DXuimvjqrr8TaV+IEx9oV+kqc0P9i3SIian6lmpOmdZikYp67zXqDvhJrX4kTHGtX6CtxwsCN1cuzZmZmOTlpmpmZ5eSkaZ0xtacDqENfibWvxAmOtSv0lThhgMbqa5pmZmY5eaZpZmaWk5OmmZlZTk6a9g5Jh0taLGmVpEJJ+ScltUhamJ73Ldk3MpU/Juk8SUrlH5Q0Q9Kj6XmDVK5U7zFJCyTt2shY077vp/aXSvpUSfmnU9lj6Vdm2sq3lHR/Kp8uac1UvlZ6/Vja39yRWNvFNkLSbEnzJBUl7ZbKK46LpKPTOD4q6eiS8rJj3yiSTpD0cBrns0vKGzK+jSbpu5JC0rD0ujeO6ZQ0pgskXZu+07ptX68c1wrnUTambux/M0l3SHoo/X1+O5XX/b5T6W+hoojwww8iAmA7YFtgJlAoKf8YsEna3hH4a8m+OcDuZD9x9mfgM6n8bGBi2p4ITE7bB6Z6Ssfd3+BYtyf7mbi1yL4b+C/AoPT4C7AV2Rfvzwe2T8f8ATgibV8IHJe2vwFcmLaPAKY3YIxvKRmjA4GZ1cYF+CDweHreIG1vUG3sG/S3sA9wK7BWev0vjR7fBv/tbgbcTPbFHsN645im9g8ABqftySX/X/TKca1wDhVj6q4HsDGwa9oeCjySxrCu951qfwuVHp5p2jsiYklELC1T/mBEPJNeLgbWSbOwjYEPRMTsyP4CLwXGpnqHAL9N279tV35pZGYD66d2GhJrav+KiFgREU8AjwG7pcdjEfF4RLwJXAEckmYS+wJXVYi17RyuAvZrwMwjgA+k7fWAtnGtNC6fAmZExN8j4h/ADODTNca+EY4DJkXECoCIeL4kzkaNbyP9FPgPsvFt09vGlIi4JSLeTi9nAx8uibU3jms5ZWPqpr6B7LeKI2Ju2n4VWAJsSv3vO2X/Fqr17aRp9ToMmJveTDcFni7Z93QqA9goIp5N238DNkrbm5L9mHe5YxqhUvuVyjcEXip5IyuN551j0v6XU/3OOBGYIukp4Bzg+x2Mu9rYN8JHgb3S8t+dkkZ1MM5q49sQkg4hW/2Y325XbxvT9r5KNvuhRkw9Mq5VdPX/w3VRdtnkY8D91P++U/e5+FdOBhhJtwIfKrPr1Ij4U41jdyBbUjqgnj4jIiTV/dmmzsTak6rFDewHnBQRV0v6AvAbYP/ujK9NjTgHky1Z7Q6MAv4gaatuDO9dasT6n9T5N9mV8vzdSjoVeBuY1p2x9TeShgBXAydGxCulC0Edfd+pxUlzgImIDr1BS/owcC3wlYj4Syr+K6uXl0jbf03bz0naOCKeTcsgz5ccs1mFYxoRa7X2y5W/SLZUMzj9q720fltbT0saTLac+mKtAKrFLelS4Nvp5ZXAr2vE/VdgTLvymVQf+1xqxHkccE1appwjaRXZl143cnw7HauknciuAc5Pb5gfBuYqu8Gq28e0WqwlMU8APgvsl8aXKrFSobwh49pBuf8f7kqS1iBLmNMi4ppUXO/7TqW/hcq64iKtH337wXtvrlmf7GL/58vUbX/jxIGpfArvviB/dto+iHdfkJ/T4Fh34N03VDxOduPC4LS9JatvXtghHXMl776h4htp+3jefSPQHxowtkuAMWl7P6Cl2riQzfaeILtJYYO0/cFqY9+gv4GvA2em7Y+SLWGpkePbRX+7y1h9I1CvGtPU/qeBh4CmduW9elzbxVoxpu56pP8+lwI/a1de1/tOtb+Fin1354n60bsfwKFka/orgOeAm1P5acBrwLySR9vdlAVgEdnddOez+lumNgRuAx4luwuz7U1JwAWp/kJKEl4jYk37Tk3tL6Xk7keyO+geSftOLSnfKr1ZPpbeiNruGF07vX4s7d+qAWO8J9CS3mjuB0bWGheya1+PpccxJeVlx75BfwtrApel9ucC+zZ6fLvob3gZq5NmrxrT1P5jZP8Aafv/6MK+MK5lzqNsTN3Y/55kN30tKBnLA+nA+06lv4VKD3+NnpmZWU6+e9bMzCwnJ00zM7OcnDTNzMxyctI0MzPLyUnTzMwsJydNMzOznJw0zczMcvr/X0kYnzz8ofAAAAAASUVORK5CYII=)\n\n**Note:** Your figure shouldn't be identical to the one above. 
Your model will have different coefficients since it's been trained on different data. Only the formatting should be the same.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cb166130b79b0dbdd2cf833771477156c290d0d9
236,541
ipynb
Jupyter Notebook
Time Series Analysis/Time Series Forecasting/ARIMAX.ipynb
shreepad-nade/ds-seed
93ddd3b73541f436b6832b94ca09f50872dfaf10
[ "Apache-2.0" ]
53
2021-08-28T07:41:49.000Z
2022-03-09T02:20:17.000Z
Time Series Analysis/Time Series Forecasting/ARIMAX.ipynb
shreepad-nade/ds-seed
93ddd3b73541f436b6832b94ca09f50872dfaf10
[ "Apache-2.0" ]
142
2021-07-27T07:23:10.000Z
2021-08-25T14:57:24.000Z
Time Series Analysis/Time Series Forecasting/ARIMAX.ipynb
shreepad-nade/ds-seed
93ddd3b73541f436b6832b94ca09f50872dfaf10
[ "Apache-2.0" ]
38
2021-07-27T04:54:08.000Z
2021-08-23T02:27:20.000Z
261.950166
100,502
0.909339
[ [ [ "# Simple ARIMAX", "_____no_output_____" ], [ "This code template is for Time Series Analysis and Forecasting to make scientific predictions based on historical time stamped data with the help of ARIMAX algorithm", "_____no_output_____" ], [ "### Required Packages", "_____no_output_____" ] ], [ [ "import warnings \r\nimport numpy as np\r\nimport pandas as pd \r\nimport seaborn as se \r\nimport matplotlib.pyplot as plt \r\nfrom statsmodels.tsa.stattools import adfuller\r\nfrom statsmodels.graphics.tsaplots import plot_acf, plot_pacf\r\nfrom statsmodels.tsa.statespace.sarimax import SARIMAX\r\nfrom sklearn.metrics import mean_absolute_error, mean_squared_error \r\n\r\nwarnings.filterwarnings(\"ignore\")\r\n", "_____no_output_____" ] ], [ [ "### Initialization\n\nFilepath of CSV file", "_____no_output_____" ] ], [ [ "file_path = \"\"", "_____no_output_____" ] ], [ [ "Variable containing the date time column name of the Time Series data", "_____no_output_____" ] ], [ [ "date = \"\"", "_____no_output_____" ] ], [ [ "Target feature for prediction.", "_____no_output_____" ] ], [ [ "target = \"\"", "_____no_output_____" ] ], [ [ "### Data Fetching\n\nPandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.\n\nWe will use panda's library to read the CSV file using its storage path.And we use the head function to display the initial row or entry.", "_____no_output_____" ] ], [ [ "df = pd.read_csv(file_path)\ndf.head()", "_____no_output_____" ] ], [ [ "### Data Preprocessing\n\nSince the majority of the machine learning models for Time Series Forecasting doesn't handle string category data and Null value, we have to explicitly remove or replace null values. The below snippets have functions, which removes the rows containing null value if any exists. And convert the string classes date column in the datasets to proper Date-time classes.\n\nAfter the proper date conversions are done and null values are dropped, we set the Date column as the index value.\n", "_____no_output_____" ] ], [ [ "def data_preprocess(df, target, date):\n df = df.dropna(axis=0, how = 'any')\n df[date] = pd.to_datetime(df[date])\n df = df.set_index(date)\n return df", "_____no_output_____" ], [ "df = data_preprocess(df,target,date)\ndf.head()", "_____no_output_____" ], [ "df.plot(figsize = (15,8))\nplt.show()", "_____no_output_____" ] ], [ [ "### Seasonality decomposition", "_____no_output_____" ], [ "Since Simple ARIMAX for non-seasonal data, we need to check for any seasonality in our time series and decompose it.\n\nWe use the Dickey Fuller Test for testing the seasonality and if the ADF Statistic value is positive, it means that the data has seasonality.\n\n#### Dickey Fuller Test\nThe Dickey Fuller test is a common statistical test used to test whether a given Time series is stationary or not. The Augmented Dickey Fuller (ADF) test expands the Dickey-Fuller test equation to include high order regressive process in the model. We can implement the ADF test via the **adfuller()** function. It returns the following outputs:\n\n 1. adf : float\n> The test statistic.\n\n 2. pvalue : float\n> MacKinnon's approximate p-value based on MacKinnon(1994, 2010). It is used alongwith the test statistic to reject or accept the null hypothesis.\n\n 3. usedlag : int\n> Number of lags considered for the test\n\n 4. critical values : dict\n> Critical values for the test statistic at the 1 %, 5 %, and 10 % levels. 
Based on MacKinnon (2010).\n\nFor more information on the adfuller() function [click here](https://www.statsmodels.org/stable/generated/statsmodels.tsa.stattools.adfuller.html)", "_____no_output_____" ] ], [ [ "def dickeyFuller(df,target):\n \n # Applying Dickey Fuller Test\n X = df.values\n result = adfuller(X)\n print('ADF Statistic: %f' % result[0])\n print('p-value: %f' % result[1])\n print('Number of lags used: %d' % result[2])\n print('Critical Values:')\n for key, value in result[4].items():\n print('\\t%s: %.3f' % (key, value))\n \n \n # Decomposing Seasonality if it exists\n if result[0]>0:\n df[target] = df[target].rolling(12).mean()\n \n return df", "_____no_output_____" ] ], [ [ "To remove the seasonality we use the rolling mean technique for smoothing our data and decomposing any seasonality.\nThis method provides rolling windows over the data. On the resulting windows, we can perform calculations using a statistical function (in this case the mean) in order to decompose the seasonality.\n\nFor more information about rolling function [click here](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.rolling.html)", "_____no_output_____" ] ], [ [ "df = dickeyFuller(df,target)", "ADF Statistic: -2.256990\np-value: 0.186215\nNumber of lags used: 15\nCritical Values:\n\t1%: -3.448\n\t5%: -2.869\n\t10%: -2.571\n" ] ], [ [ "### Autocorrelation Plot\n\n\nWe can calculate the correlation for time series observations with observations with previous time steps, called lags. Because the correlation of the time series observations is calculated with values of the same series at previous times, this is called a serial correlation, or an autocorrelation.\nA plot of the autocorrelation of a time series by lag is called the AutoCorrelation Function, or the acronym ACF. \n\nAn autocorrelation plot shows whether the elements of a time series are positively correlated, negatively correlated, or independent of each other. \nThe plot shows the value of the autocorrelation function (acf) on the vertical axis ranging from –1 to 1.\nThere are vertical lines (a “spike”) corresponding to each lag and the height of each spike shows the value of the autocorrelation function for the lag.\n\n[API](https://www.statsmodels.org/stable/generated/statsmodels.graphics.tsaplots.plot_acf.html)", "_____no_output_____" ] ], [ [ "x = plot_acf(df, lags=40)\nx.set_size_inches(15, 10, forward=True)\nplt.show()", "_____no_output_____" ] ], [ [ "### Partial Autocorrelation Plot\n\nA partial autocorrelation is a summary of the relationship between an observation in a time series with observations at prior time steps with the relationships of intervening observations removed.\n\nThe partial autocorrelation at lag k is the correlation that results after removing the effect of any correlations due to the terms at shorter lags. By examining the spikes at each lag we can determine whether they are significant or not. A significant spike will extend beyond the significant limits, which indicates that the correlation for that lag doesn't equal zero.\n\n[API](https://www.statsmodels.org/stable/generated/statsmodels.graphics.tsaplots.plot_pacf.html)\n", "_____no_output_____" ] ], [ [ "y = plot_pacf(df, lags=40)\ny.set_size_inches(15, 10, forward=True)\nplt.show()", "_____no_output_____" ] ], [ [ "### Data Splitting\n\nSince we are using a univariate dataset, we can directly split our data into training and testing subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. 
The main motive is to estimate the performance of the model on new data.", "_____no_output_____" ] ], [ [ "size = int(len(df)*0.9)\ndf_train, df_test = df.iloc[:size], df.iloc[size:]", "_____no_output_____" ] ], [ [ "### Model\n\nThe ARIMAX model is an extended version of the ARIMA model. It also includes other independent (predictor) variables. The model is also referred to as the vector ARIMA or the dynamic regression model.\nThe ARIMAX model is similar to a multivariate regression model, but allows us to take advantage of autocorrelation that may be present in residuals of the regression to improve the accuracy of a forecast.\n\nThe API used here is from the statsmodels library. Statsmodels does not have a dedicated API for ARIMAX, but the model can be created via the <Code>SARIMAX</Code> API by setting the parameter <Code>seasonal_order</Code> = (0,0,0,0), i.e., no seasonality.\n\n#### Model Tuning Parameters\n\n 1. endog: array_like\n>The observed time-series process \n\n 2. exog: array_like, optional\n>Array of exogenous regressors, shaped nobs x k.\n\n 3. order: iterable or iterable of iterables, optional\n>The (p,d,q) order of the model for the number of AR parameters, differences, and MA parameters. d must be an integer indicating the integration order of the process, while p and q may either be integers indicating the AR and MA orders (so that all lags up to those orders are included) or else iterables giving specific AR and / or MA lags to include. Default is an AR(1) model: (1,0,0).\n\n 4. seasonal_order: iterable, optional\n>The (P,D,Q,s) order of the seasonal component of the model for the AR parameters, differences, MA parameters, and periodicity. D must be an integer indicating the integration order of the process, while P and Q may either be integers indicating the AR and MA orders (so that all lags up to those orders are included) or else iterables giving specific AR and / or MA lags to include. s is an integer giving the periodicity (number of periods in season), often it is 4 for quarterly data or 12 for monthly data. Default is no seasonal effect.\n\n 5. trend: str{‘n’,’c’,’t’,’ct’} or iterable, optional\n>Parameter controlling the deterministic trend polynomial $A(t)$. Can be specified as a string where ‘c’ indicates a constant (i.e. a degree zero component of the trend polynomial), ‘t’ indicates a linear trend with time, and ‘ct’ is both. Can also be specified as an iterable defining the non-zero polynomial exponents to include, in increasing order. For example, [1,1,0,1] denotes $a + bt + ct^3$. Default is to not include a trend component.\n\n 6. measurement_error: bool, optional\n>Whether or not to assume the endogenous observations endog were measured with error. Default is False.\n\n 7. time_varying_regression: bool, optional\n>Used when explanatory variables, exog, are provided, to select whether or not coefficients on the exogenous regressors are allowed to vary over time. Default is False.\n\n 8. mle_regression: bool, optional\n>Whether or not to estimate the regression coefficients for the exogenous variables as part of maximum likelihood estimation or through the Kalman filter (i.e. recursive least squares). If time_varying_regression is True, this must be set to False. 
Default is True.\n\nRefer to the official documentation at [statsmodels](https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html) for more parameters and information.", "_____no_output_____" ] ], [ [ "model=SARIMAX(df[target],order=(1, 0, 0),seasonal_order=(0,0,0,0))\nresult=model.fit()", "_____no_output_____" ] ], [ [ "### Model Summary\n\nAfter fitting the model, we can take a look at a brief summary of it by using the **summary()** function. The following aspects are included in our model summary:\n\n1. Basic Model Details: The first column of our summary table contains the basic details regarding our model such as: \n \n a. Name of dependent variable \n b. Model used along with parameters \n c. Date and time of model deployment \n d. Time Series sample used to train the model\n \n \n2. Probabilistic Statistical Measures: The second column gives the values of the probabilistic measures obtained by our model:\n \n a. Number of observations\n \n b. Log-likelihood, which comes from Maximum Likelihood Estimation, a technique for finding or optimizing the\n parameters of a model in response to a training dataset.\n \n c. Standard Deviation of the innovations\n \n d. Akaike Information Criterion (AIC), which is derived from frequentist probability.\n \n e. Bayesian Information Criterion (BIC), which is derived from Bayesian probability.\n \n f. Hannan-Quinn Information Criterion (HQIC), which is an alternative to AIC and is derived using the log-likelihood and \n the number of observations.\n \n \n \n3. Statistical Measures and Roots: The summary table also contains certain other statistical measures such as the z-value and standard error, as well as information on the characteristic roots of the model.", "_____no_output_____" ] ], [ [ "result.summary()", "_____no_output_____" ] ], [ [ "#### Simple Forecasting", "_____no_output_____" ] ], [ [ "df_train.tail()", "_____no_output_____" ] ], [ [ "### Predictions\n\nBy specifying the start and end time for our predictions, we can easily predict the future points in our time series with the help of our model.", "_____no_output_____" ] ], [ [ "d = df.drop([target], axis = 1)\n\nstart_date = d.iloc[size].name\nend_date = d.iloc[len(df)-1].name\n\ndf_pred = result.predict(start = start_date, end = end_date)\ndf_pred.head()", "_____no_output_____" ] ], [ [ "## Model Accuracy\n\nWe will use the three most popular metrics for model evaluation: Mean absolute error (MAE), Mean squared error (MSE), and Root mean squared error (RMSE).", "_____no_output_____" ] ], [ [ "test = df_test[target]\nprint(\"Mean Absolute Error {:.2f}\".format(mean_absolute_error(test,df_pred)))\nprint(\"Mean Squared Error {:.2f}\".format(mean_squared_error(test,df_pred)))\nprint(\"Root Mean Squared Error {:.2f}\".format(np.sqrt(mean_squared_error(test,df_pred))))", "Mean Absolute Error 8.61\nMean Squared Error 98.90\nRoot Mean Squared Error 9.94\n" ] ], [ [ "## Predictions Plot\n\nFirst we plot the predicted values returned by our model for the test period.\nThen we plot the actual test data on the same axes to compare against our predictions.", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(18,5))\nplt.plot(df_pred[start_date:end_date], color = \"red\")\nplt.plot(df_test, color = \"blue\")\nplt.title(\"Predictions vs Actual\", size = 24)\nplt.legend([\"Predicted\", \"Actual\"], fontsize=\"x-large\")\n\nplt.show()", "_____no_output_____" ] ], [ [ "#### Creator: Viraj Jayant, Github: [Profile](https://github.com/Viraj-Jayant)", 
"_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cb1663ca6e8df543538c0a64dc79a414a3e0c4e0
10,803
ipynb
Jupyter Notebook
notebooks/STARmap.ipynb
kne42/starfish
78b348c9756f367221dcca725cfa5107e5520b33
[ "MIT" ]
null
null
null
notebooks/STARmap.ipynb
kne42/starfish
78b348c9756f367221dcca725cfa5107e5520b33
[ "MIT" ]
null
null
null
notebooks/STARmap.ipynb
kne42/starfish
78b348c9756f367221dcca725cfa5107e5520b33
[ "MIT" ]
null
null
null
29.842541
110
0.590021
[ [ [ "%matplotlib inline", "_____no_output_____" ] ], [ [ "\n# STARmap processing example\n\nThis notebook demonstrates the processing of STARmap data using starfish. The\ndata we present here is a subset of the data used in this\n[publication](https://doi.org/10.1126/science.aat5691) and was generously provided to us by the authors.", "_____no_output_____" ] ], [ [ "from pprint import pprint \n\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nimport starfish\nimport starfish.data\nfrom starfish.types import Axes\nfrom starfish.util.plot import (\n diagnose_registration, imshow_plane, intensity_histogram\n)\n\nmatplotlib.rcParams[\"figure.dpi\"] = 150", "_____no_output_____" ] ], [ [ "## Visualize raw data\n\nIn this starmap experiment, starfish exposes a test dataset containing a\nsingle field of view. This dataset contains 672 images spanning 6 rounds\n`(r)`, 4 channels `(ch)`, and 28 z-planes `(z)`. Each image\nis `1024x1024 (y, x)`\n\nTo examine this data, the vignette displays the max projection of channels and\nrounds. Ideally, these should form fairly coherent spots, indicating that the\ndata are well registered. By contrast, if there are patterns whereby pairs of\nspots are consistently present at small shifts, that can indicate systematic\nregistration offsets which should be corrected prior to analysis.", "_____no_output_____" ] ], [ [ "experiment = starfish.data.STARmap(use_test_data=True)\nstack = experiment['fov_000'].get_image('primary')", "_____no_output_____" ], [ "ch_r_projection = stack.max_proj(Axes.CH, Axes.ROUND)\n\nf = plt.figure(dpi=150)\nimshow_plane(ch_r_projection, sel={Axes.ZPLANE: 15})", "_____no_output_____" ] ], [ [ "Visualize the codebook\n----------------------\nThe STARmap codebook maps pixel intensities across the rounds and channels to\nthe corresponding barcodes and genes that those pixels code for. For this\ndataset, the codebook specifies 160 gene targets.", "_____no_output_____" ] ], [ [ "print(experiment.codebook)", "_____no_output_____" ] ], [ [ "## Registration\n\nStarfish exposes some simple tooling to identify registration shifts.\n`starfish.util.plot.diagnose_registration` takes an ImageStack and a\nset of selectors, each of which maps `Axes` objects\nto indices that specify a particular 2d image.\n\nBelow the vignette projects the channels and z-planes and examines the\nregistration of those max projections across channels 0 and 1. To make the\ndifference more obvious, we zoom in by selecting a subset of the image, and\ndisplay the data before and after registration.\n\nIt looks like there is a small shift approximately the size of a spot\nin the `x = -y` direction for at least the plotted rounds\n\nThe starfish package can attempt a translation registration to fix this\nregistration error.", "_____no_output_____" ] ], [ [ "projection = stack.max_proj(Axes.CH, Axes.ZPLANE)\nreference_image = projection.sel({Axes.ROUND: 1})\n\nltt = starfish.image.LearnTransform.Translation(\n reference_stack=reference_image,\n axes=Axes.ROUND,\n upsampling=1000,\n)\ntransforms = ltt.run(projection)", "_____no_output_____" ] ], [ [ "How big are the identified translations? 
", "_____no_output_____" ] ], [ [ "pprint([t[2].translation for t in transforms.transforms])", "_____no_output_____" ] ], [ [ "Apply the translations", "_____no_output_____" ] ], [ [ "warp = starfish.image.ApplyTransform.Warp()\nstack = warp.run(\n stack=stack,\n transforms_list=transforms,\n)", "_____no_output_____" ] ], [ [ "Show the effect of registration.", "_____no_output_____" ] ], [ [ "post_projection = stack.max_proj(Axes.CH, Axes.ZPLANE)", "_____no_output_____" ], [ "f, (ax1, ax2) = plt.subplots(ncols=2)\nsel_0 = {Axes.ROUND: 0, Axes.X: (500, 600), Axes.Y: (500, 600)}\nsel_1 = {Axes.ROUND: 1, Axes.X: (500, 600), Axes.Y: (500, 600)}\ndiagnose_registration(\n projection, sel_0, sel_1, ax=ax1, title='pre-registered'\n)\ndiagnose_registration(\n post_projection, sel_0, sel_1, ax=ax2, title='registered'\n)\nf.tight_layout()", "_____no_output_____" ] ], [ [ "The plot shows that the slight offset has been corrected.\n\nEqualize channel intensities\n----------------------------\nThe second stage of the STARmap pipeline is to align the intensity\ndistributions across channels and rounds. Here we calculate a reference\ndistribution by sorting each image's intensities in increasing order and\naveraging the ordered intensities across rounds and channels. All `(z, y, x)`\nvolumes from each round and channel are quantile normalized against this\nreference.\n\nNote that this type of histogram matching has an implied assumption that each\nchannel has relatively similar numbers of spots. In the case of this data\nthis assumption is reasonably accurate, but for other datasets it can be\nproblematic to apply filters that match this stringently.", "_____no_output_____" ] ], [ [ "mh = starfish.image.Filter.MatchHistograms({Axes.CH, Axes.ROUND})\nscaled = mh.run(stack, in_place=False, verbose=True, n_processes=8)", "_____no_output_____" ], [ "def plot_scaling_result(\n template: starfish.ImageStack, scaled: starfish.ImageStack\n):\n f, (before, after) = plt.subplots(ncols=4, nrows=2)\n for channel, ax in enumerate(before):\n title = f'Before scaling\\nChannel {channel}'\n intensity_histogram(\n template, sel={Axes.CH: channel, Axes.ROUND: 0}, ax=ax, title=title,\n log=True, bins=50,\n )\n ax.set_xlim((0, 1))\n for channel, ax in enumerate(after):\n title = f'After scaling\\nChannel {channel}'\n intensity_histogram(\n scaled, sel={Axes.CH: channel, Axes.ROUND: 0}, ax=ax, title=title,\n log=True, bins=50,\n )\n ax.set_xlim((0, 1))\n f.tight_layout()\n return f\n\nf = plot_scaling_result(stack, scaled)", "_____no_output_____" ] ], [ [ "Find spots\n----------\nFinally, a local blob detector that finds spots in each (z, y, x) volume\nseparately is applied. The user selects an \"anchor round\" and spots found in\nall channels of that round are used to seed a local search across other rounds\nand channels. The closest spot is selected, and any spots outside the search\nradius (here 10 pixels) is discarded.\n\nThe Spot finder returns an IntensityTable containing all spots from round\nzero. Note that many of the spots do _not_ identify spots in other rounds and\nchannels and will therefore fail decoding. 
Because of the stringency built\ninto the STARmap codebook, it is OK to be relatively permissive with the spot\nfinding parameters for this assay.", "_____no_output_____" ] ], [ [ "lsbd = starfish.spots.DetectSpots.LocalSearchBlobDetector(\n min_sigma=1,\n max_sigma=8,\n num_sigma=10,\n threshold=np.percentile(np.ravel(stack.xarray.values), 95),\n exclude_border=2,\n anchor_round=0,\n search_radius=10,\n)\nintensities = lsbd.run(scaled, n_processes=8)", "_____no_output_____" ] ], [ [ "Decode spots\n------------\nNext, spots are decoded. There is really no good way to display 3-d spot\ndetection in 2-d planes, so we encourage you to grab this notebook and\nuncomment the below lines.", "_____no_output_____" ] ], [ [ "decoded = experiment.codebook.decode_per_round_max(intensities.fillna(0))\ndecode_mask = decoded['target'] != 'nan'\n\n# %gui qt\n# viewer = starfish.display(\n# stack, decoded[decode_mask], radius_multiplier=2, mask_intensities=0.1\n# )", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb1673a24fe8bde4e68342e7377ffc979c6038c3
1,756
ipynb
Jupyter Notebook
.ipynb_checkpoints/00.kickoff-checkpoint.ipynb
haltaro/ml-tutorial
0547804469dcce28f35ed49ca5ccefc6e13f1fcc
[ "Unlicense" ]
null
null
null
.ipynb_checkpoints/00.kickoff-checkpoint.ipynb
haltaro/ml-tutorial
0547804469dcce28f35ed49ca5ccefc6e13f1fcc
[ "Unlicense" ]
null
null
null
.ipynb_checkpoints/00.kickoff-checkpoint.ipynb
haltaro/ml-tutorial
0547804469dcce28f35ed49ca5ccefc6e13f1fcc
[ "Unlicense" ]
null
null
null
22.512821
146
0.520501
[ [ [ "# 0. キックオフ\n\n## はじめに\n\n機械学習の基礎を,二ヶ月で一通り復習します.\n\n## 環境構築\n\n言語は[python3](https://www.python.org/)と[R](https://www.r-project.org/)を想定します.環境設定ファイルは,`env.yml`です.`conda`コマンドが使える場合は,以下で仮想環境を構築できます.\n\n```bash\n$ conda create --file env.yml\n$ activate ml #MacやLinuxの場合は,source activate mlかも.\n```\n\n## 参考\n\n本勉強会では,以下を参考にします.\n\n* [Scikit-learn tutorials](http://scikit-learn.org/stable/tutorial/index.html):Pythonの機械学習ライブラリであるscikit-learnのチュートリアルです.今回はテキスト処理は割愛します.\n* [久保拓哉,データ解析のための統計モデリング入門](http://amzn.asia/g3XaAKg):データ解析の代表的な教科書です.Rのソースコードつき.\n\n## 全体計画\n\n毎週木曜の夕方に開催します.\n\n|#|日付|テーマ|\n|:--|:--|:--|\n|1|10/26|Scikit-learn入門(1/2)|\n|2|11/2|Scikit-learn入門(2/2)|\n|3|11/9|Scikit-learnで統計的学習(1/2)|\n|4|11/16|Scikit-learnで統計的学習(2/2)|\n|5|11/23|未定|\n|6|11/30|未定|\n|7|12/7|未定|\n|8|12/14|未定|\n|9|12/21|未定|", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown" ] ]
cb1680625b079be94f4f66d6b5e1e1c44c1f2cc6
670,822
ipynb
Jupyter Notebook
docs/notebooks/interface_example.ipynb
jlustigy/coronagraph
b321693512422343b08ada7e246413e1f4bae4cc
[ "MIT" ]
4
2020-05-25T07:48:31.000Z
2022-01-04T00:40:57.000Z
docs/notebooks/interface_example.ipynb
jlustigy/coronagraph
b321693512422343b08ada7e246413e1f4bae4cc
[ "MIT" ]
8
2019-04-12T22:17:07.000Z
2020-05-07T00:01:11.000Z
docs/notebooks/interface_example.ipynb
jlustigy/coronagraph
b321693512422343b08ada7e246413e1f4bae4cc
[ "MIT" ]
6
2016-11-14T06:46:57.000Z
2021-12-31T06:50:55.000Z
3,320.90099
348,620
0.968014
[ [ [ "%matplotlib inline\n%config InlineBackend.figure_format = \"retina\"\n\nfrom __future__ import print_function\n \nimport warnings\nwarnings.filterwarnings('ignore')\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib import rcParams", "_____no_output_____" ], [ "rcParams[\"savefig.dpi\"] = 200\nrcParams[\"figure.dpi\"] = 200\nrcParams[\"font.size\"] = 20\nrcParams[\"figure.figsize\"] = [8, 5]\nrcParams[\"font.family\"] = \"sans-serif\"\nrcParams[\"font.sans-serif\"] = [\"Computer Modern Sans Serif\"]\nrcParams[\"text.usetex\"] = True", "_____no_output_____" ] ], [ [ "# Using the Object-Oriented Interface", "_____no_output_____" ] ], [ [ "import coronagraph as cg\nprint(cg.__version__)", "0.0.3\n" ], [ "noise = cg.CoronagraphNoise(THERMAL=True)", "_____no_output_____" ], [ "lamhr, Ahr, fstar = cg.get_earth_reflect_spectrum()", "_____no_output_____" ], [ "noise.run_count_rates(Ahr, lamhr, fstar)", "_____no_output_____" ], [ "noise.make_fake_data(texp = 10)", "_____no_output_____" ], [ "fig, ax = noise.plot_spectrum()", "_____no_output_____" ], [ "fig, ax = noise.plot_SNR()", "_____no_output_____" ], [ "fig, ax = noise.plot_time_to_wantsnr()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb168586810dc12b787577c6390b1a3c6af90fdf
169,963
ipynb
Jupyter Notebook
pandas/Demos/plotting.ipynb
WebucatorTraining/classfiles-actionable-python
930c154a6dbfa6c54768557a998b4dbafb43df38
[ "MIT" ]
2
2022-01-04T22:25:01.000Z
2022-01-16T16:50:23.000Z
pandas/Demos/plotting.ipynb
WebucatorTraining/classfiles-actionable-python
930c154a6dbfa6c54768557a998b4dbafb43df38
[ "MIT" ]
null
null
null
pandas/Demos/plotting.ipynb
WebucatorTraining/classfiles-actionable-python
930c154a6dbfa6c54768557a998b4dbafb43df38
[ "MIT" ]
null
null
null
316.504655
50,896
0.91474
[ [ [ "# Plotting with matplotlib", "_____no_output_____" ], [ "### Setup", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport numpy as np\nimport pandas as pd\n\npd.set_option('display.max_columns', 10)\npd.set_option('display.max_rows', 10)", "_____no_output_____" ] ], [ [ "### Getting the pop2019 DataFrame", "_____no_output_____" ] ], [ [ "csv ='../csvs/nc-est2019-agesex-res.csv'\npops = pd.read_csv(csv, usecols=['SEX', 'AGE', 'POPESTIMATE2019'])\n\ndef fix_sex(sex):\n if sex == 0:\n return 'T'\n elif sex == 1:\n return 'M'\n else: # 2\n return 'F'\n \npops.SEX = pops.SEX.apply(fix_sex)\n\npops = pops.pivot(index='AGE', columns='SEX', values='POPESTIMATE2019')\n\npops", "_____no_output_____" ], [ "pops.plot();", "_____no_output_____" ] ], [ [ "### Create a Line Plot", "_____no_output_____" ] ], [ [ "# Create the plot.\nplt_pop = pops.plot(\n title = \"Population by Age: 2019\",\n style=['b--', 'm^', 'k-'],\n figsize=(12, 6),\n lw=2\n)\n\n# Include gridlines.\nplt_pop.grid(True)\n\n# Set the x and y labels.\nplt_pop.set_xlabel('Age')\nplt_pop.set_ylabel('Population')\n\n# Create the legend.\nplt_pop.legend(['M', 'F', 'A'], loc=\"lower left\")\n\n# Set x and y ticks.\nplt_pop.set_xticks(np.arange(0, 101, 10))\n\nyticks = np.arange(500000, 5000001, 500000)\nytick_labels = pd.Series(yticks).apply(lambda y: \"{:,}\".format(y))\n\nplt_pop.set_yticks(yticks)\nplt_pop.set_yticklabels(ytick_labels);", "_____no_output_____" ] ], [ [ "### Create a Bar Plot", "_____no_output_____" ] ], [ [ "csv ='../csvs/mantle.csv'\nmantle = pd.read_csv(csv, index_col='Year',\n usecols=['Year', '2B', '3B', 'HR'])\nmantle", "_____no_output_____" ], [ "# Create the plot.\nplt_mantle = mantle.plot(\n kind='bar',\n title = 'Mickey Mantle: Doubles, Triples, and Home Runs',\n figsize=(12, 6),\n width=.8,\n fontsize=16\n)\n\n# Include gridlines.\nplt_mantle.grid(True)\n\n# Set the x and y labels.\nplt_mantle.set_ylabel('Number', fontsize=20)\nplt_mantle.set_xlabel('Year', fontsize=20)\n\n# Hatch the bars.\nbars = plt_mantle.patches\n\nfor i in np.arange(0, 18):\n bars[i].set_hatch('+')\nfor i in np.arange(18, 36):\n bars[i].set_hatch('o')\nfor i in np.arange(36, 54):\n bars[i].set_hatch('/')\n\n# Create the legend.\nplt_mantle.legend(['Doubles', 'Triples', 'Home Runs'],\n loc=\"upper right\", fontsize='xx-large');", "_____no_output_____" ], [ "plt_mantle = mantle.plot(kind='bar',\n title = 'Mickey Mantle: Doubles, Triples, and Home Runs',\n figsize=(12, 6),\n width=.8,\n fontsize=16,\n stacked=True)\n\nplt_mantle.set_ylabel('Number', fontsize=20)\nplt_mantle.set_xlabel('Year', fontsize=20)\nplt_mantle.grid(True)\n\nbars = plt_mantle.patches\n\nfor i in np.arange(0, 18):\n bars[i].set_hatch('-')\nfor i in np.arange(18, 36):\n bars[i].set_hatch('o')\nfor i in np.arange(36, 54):\n bars[i].set_hatch('/')\n \nplt_mantle.legend(['Doubles','Triples','Home Runs'],\n loc=\"upper right\", fontsize='xx-large');", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
cb168d3028b3f2a2f72dc2464fa79dde38325717
7,665
ipynb
Jupyter Notebook
ae/experience.ipynb
epsilon-deltta/Protein-AR
07ddefdc1d3bbb445deb35dfad2b55eb99126ac8
[ "MIT" ]
null
null
null
ae/experience.ipynb
epsilon-deltta/Protein-AR
07ddefdc1d3bbb445deb35dfad2b55eb99126ac8
[ "MIT" ]
null
null
null
ae/experience.ipynb
epsilon-deltta/Protein-AR
07ddefdc1d3bbb445deb35dfad2b55eb99126ac8
[ "MIT" ]
null
null
null
30.783133
135
0.529811
[ [ [ "a =[]", "_____no_output_____" ], [ "import torch\nfrom torch import nn\na = torch.rand(4,10,20)\nb = torch.rand(4,10,20)\nloss = nn.MSELoss()\n[loss(x,y).item() for x,y in zip(a,b)]", "_____no_output_____" ], [ "import numpy as np\nnp.mean(list(range(10)))\nnp.std(list(range(10)))\nnp.quantile(list(range(10)),0.5)", "_____no_output_____" ], [ "import sys,os\nsys.path.append(os.path.abspath('../'))\nfrom models import get_model", "_____no_output_____" ], [ "model,config = get_model('ae0')", "_____no_output_____" ], [ "import argparse\nexam_code = '''\ne.g) \npython evaluate.py -p ./ae/models/ae0_80.pt\n'''\nparser = argparse.ArgumentParser(\"Evaluate AE models\",epilog=exam_code) \n\n\nparser.add_argument('-d' ,'--directory' ,default='models' ,metavar='{...}' ,help='directory path containing the models')\nparser.add_argument('-p' ,'--path' ,default=None , help='Specify the model path')\nparser.add_argument('-th',default=None,help='Value of threshold to classify')\nparser.add_argument('--dataset_path' ,default='./data/split/test.csv' , help='test dataset path')\nparser.add_argument('-s','--save' ,default=True, type=bool , help='whether to save')\nargs = parser.parse_args()", "_____no_output_____" ], [ "import torch\nfrom dataset import ProteinDataset\n\nfrom sklearn.metrics import accuracy_score \nfrom sklearn.metrics import recall_score \nfrom sklearn.metrics import precision_score \nfrom sklearn.metrics import f1_score \n# from sklearn.metrics import make_scorer\n# from sklearn.metrics import classification_report\nfrom sklearn.metrics import confusion_matrix \n\ndef evaluate(dl,model,lossf,epoch=None):\n model.eval()\n size, _ , losses = len(dl.dataset) ,0,0\n pre_l,gt_l = [],[]\n with torch.no_grad():\n for x,y in dl:\n x,y = x.to(device),y.to(device)\n pre = model(x)\n loss = lossf(pre,y)\n \n losses += loss.item()\n pre_l.extend(pre.argmax(1).cpu().numpy().tolist())\n gt_l .extend(y.cpu().numpy().tolist())\n \n loss = losses/size\n acc = accuracy_score(gt_l,pre_l)\n recall = recall_score(gt_l,pre_l)\n precision= precision_score(gt_l,pre_l)\n f1 = f1_score(gt_l,pre_l)\n confusion= confusion_matrix(gt_l,pre_l)\n\n metrics = {'acc':acc,'recall':recall,'precision':precision,'f1':f1,'confusion':confusion,'loss':loss}\n return metrics\n\ndevice = 'cuda' if torch.cuda.is_available() else 'cpu'\n\nfrom models import *\nimport os\nresults = {}\nmodel_paths = []\nif args.path is not None:\n m_path = args.path\n model_paths.append(m_path)\n model_name = os.path.basename(m_path).split('_')[0].lower()\n print(model_name)\n # model = 'lstm0'\n model,transform = get_model(model_name)\n model = model.to(device)\n\n # config\n transform = config['transform'] \n batch_size = config['batch_size']\n\n tedt = ProteinDataset(args.dataset_path,transform=transform)\n tedl = torch.utils.data.DataLoader(tedt, batch_size=batch_size, num_workers=4)\n \n loss = nn.CrossEntropyLoss()\n params = [p for p in model.parameters() if p.requires_grad]\n opt = torch.optim.Adam(params)\n\n \n model.load_state_dict(torch.load(m_path))\n \n result = evaluate(tedl,model,loss)\n \n print(f'{model_name}: {result}')\n results[model_name] = result\n\nelse:\n files = os.listdir(args.directory)\n model_paths = [os.path.join('./models',file) for file in files if file.endswith('.pt')]\n \n for m_path in model_paths:\n model_name = os.path.basename(m_path).split('_')[0].lower()\n \n print(model_name)\n model,config = get_model(model_name)\n model = model.to(device)\n \n # config\n transform = config['transform'] \n batch_size = 
config['batch_size']\n \n tedt = ProteinDataset(args.dataset_path,transform=transform)\n tedl = torch.utils.data.DataLoader(tedt, batch_size=batch_size, num_workers=4)\n \n loss = nn.CrossEntropyLoss()\n params = [p for p in model.parameters() if p.requires_grad]\n opt = torch.optim.Adam(params)\n\n model.load_state_dict(torch.load(m_path))\n\n result = evaluate(tedl,model,loss)\n\n print(f'{model_name}: {result}')\n results[model_name] = result\n# save the results\nprint(type(args.save ))\nif args.save:\n import pandas as pd\n df = pd.DataFrame(results).T\n models = [os.path.splitext( os.path.basename(path) )[0] for path in model_paths]\n df.to_csv(f\"assets/{'&'.join(models)}.csv\")\n print(f\"result was saved in assets/{'&'.join(models)}.csv\")\n ", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
cb16a8fb5709525f778f19407ca88479a6442ec8
5,636
ipynb
Jupyter Notebook
Activation_Functions.ipynb
hemanthsunny/machine_learning
9e29daad50db22398da266a8f74f42d741503d4b
[ "MIT" ]
3
2020-04-12T10:23:36.000Z
2020-04-19T20:31:46.000Z
Activation_Functions.ipynb
hemanthsunny/machine_learning
9e29daad50db22398da266a8f74f42d741503d4b
[ "MIT" ]
null
null
null
Activation_Functions.ipynb
hemanthsunny/machine_learning
9e29daad50db22398da266a8f74f42d741503d4b
[ "MIT" ]
null
null
null
24.398268
248
0.432931
[ [ [ "<a href=\"https://colab.research.google.com/github/hemanthsunny/machine_learning/blob/master/Activation_Functions.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "Activation function is an identity function. It is trivial. Many activation functions. Out of them, these are famous -\n1. Sigmoid - range: [0, 1]\n2. Tanh - range: [-1, 1] - zero centered, prefered in hidden layers over (1)\n3. ReLU\n\nDue to vanishing gradient problem, Sigmoid and Tanh functions are being avoided. But to understand activation functions, it is good to type all activation functions at least once.\n\n*Reference:*\n1. https://towardsdatascience.com/activation-functions-and-its-types-which-is-better-a9a5310cc8f\n2. https://towardsdatascience.com/activation-functions-b63185778794", "_____no_output_____" ] ], [ [ "import numpy as np\n\ndef sigmoid(z):\n return 1 / (1 + np.exp(-z))\n\nsigmoid(-2)", "_____no_output_____" ] ], [ [ "Tanh", "_____no_output_____" ] ], [ [ "import numpy as np\n\ndef tanh(z):\n return np.tanh(z)\n\ndef tanh2(z):\n return (np.exp(z) - np.exp(-z)) / (np.exp(z) + np.exp(-z))\n\nprint(\"Tanh :\", tanh(-2))\nprint(\"Tanh2 :\", tanh2(-2))", "Tanh : -0.9640275800758169\nTanh2 : -0.964027580075817\n" ] ], [ [ "ReLU", "_____no_output_____" ] ], [ [ "import numpy as np\n\ndef relu(z):\n return z*(z>0)\n\nrelu(10)", "_____no_output_____" ] ], [ [ "Leaky ReLU", "_____no_output_____" ] ], [ [ "import numpy as np\n\ndef leaky_relu(z):\n return np.maximum(0.01 * z, z)\n\nleaky_relu(-2)", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
cb16c6a045110beb0a9dd2ebdfcb7a7219c5133e
14,752
ipynb
Jupyter Notebook
CNNBreatCancer.ipynb
RihaChri/ImageClassificationBreastCancer
e4ec7050bbeae0294a3f6055a7a3647f72e511e7
[ "MIT" ]
null
null
null
CNNBreatCancer.ipynb
RihaChri/ImageClassificationBreastCancer
e4ec7050bbeae0294a3f6055a7a3647f72e511e7
[ "MIT" ]
null
null
null
CNNBreatCancer.ipynb
RihaChri/ImageClassificationBreastCancer
e4ec7050bbeae0294a3f6055a7a3647f72e511e7
[ "MIT" ]
null
null
null
53.064748
251
0.451464
[ [ [ "<a href=\"https://colab.research.google.com/github/RihaChri/ImageClassificationBreastCancer/blob/main/CNNBreatCancer.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "import os\nimport glob\nimport cv2\nimport numpy as np\nfrom sklearn.model_selection import train_test_split\nfrom re import search\nfrom tensorflow import keras\n#-----------------------load images---------------------------------------------\n#imageDir=os.path.abspath('/content/drive/MyDrive/Colab Notebooks/ImageClassificationBreastCancerCNN/images')\nimageDir=os.path.abspath('/content/drive/MyDrive/Colab Notebooks/ImageClassificationBreastCancerCNN/images/validation')\nsubDirList=[x[0] for x in os.walk(imageDir)]\n\nfileList=[]\nfor folder in subDirList: \n for file in glob.glob(folder+'/*.png'):\n fileList.append(file)\n\nimages=[]\ny=np.zeros(len(fileList))\nfor i in range(len(fileList)):\n image = cv2.imread(fileList[i])\n image = cv2.resize(image, (700, 460), cv2.INTER_LINEAR)\n images.append(image)\n if search('SOB_M', fileList[i]): y[i]=1\n elif search('SOB_B', fileList[i]): y[i]=0\n else: raise Exception('the file ',fileList[i],'is not a valid file')\n if i%1000==0: print(i,' images out of', len(fileList), ' are loaded')\nassert len(images)==y.size\nX=np.asarray(images)\nprint('Loading images succesful')\nprint('Data shape: ',X.shape)\nprint('Labels shape: ',y.shape)\n#------------------------split data---------------------------------------------\nX_train, X_test, y_train, y_test = train_test_split(\n X, y, test_size=0.2, random_state=1)\nX_train, X_val, y_train, y_val = train_test_split(\n X_train, y_train, test_size=0.25, random_state=1)\nprint('X_train: ',X_train.shape)\nprint('y_train: ',y_train.shape)\nprint('X_test: ',X_test.shape)\nprint('y_test: ',y_test.shape)\nprint('X_val: ',X_val.shape)\nprint('y_val: ',y_val.shape)\n\n\n#-------------------Define Model------------------------------------------------\nmodel = keras.Sequential()\n\n# Convolutional layer and maxpool layer 1\nmodel.add(keras.layers.Conv2D(32,(3,3),activation='relu',input_shape=(460,700,3)))\nmodel.add(keras.layers.MaxPool2D(2,2))\n\n# Convolutional layer and maxpool layer 2\nmodel.add(keras.layers.Conv2D(64,(3,3),activation='relu'))\nmodel.add(keras.layers.MaxPool2D(2,2))\n\n# Convolutional layer and maxpool layer 3\nmodel.add(keras.layers.Conv2D(128,(3,3),activation='relu'))\nmodel.add(keras.layers.MaxPool2D(2,2))\n\n# Convolutional layer and maxpool layer 4\nmodel.add(keras.layers.Conv2D(128,(3,3),activation='relu'))\nmodel.add(keras.layers.MaxPool2D(2,2))\n\n# This layer flattens the resulting image array to 1D array\nmodel.add(keras.layers.Flatten())\n\n# Hidden layer with 512 neurons and Rectified Linear Unit activation function \nmodel.add(keras.layers.Dense(512,activation='relu'))\n\n# Output layer with single neuron which gives 0 for benign or 1 for malign\n#Here we use sigmoid activation function which makes our model output to lie between 0 and 1\nmodel.add(keras.layers.Dense(1,activation='sigmoid'))\n\nmodel.compile(loss='binary_crossentropy',\n optimizer='rmsprop',\n metrics=['accuracy'])\n\nbatch_size = 32\n\nmodel.summary()\n\n#-----------------Define callback-----------------------------------------------\ncheckpoint_path = \"/content/drive/MyDrive/Colab Notebooks/ImageClassificationBreastCancerCNN/checkpoints/cp.ckpt\"\ncheckpoint_dir = os.path.dirname(checkpoint_path)\n\n# Create a callback that saves the 
model's weights\ncp_callback = keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,\n save_weights_only=True,\n verbose=1)\n\n#model.load_weights(checkpoint_path)\n#----------------Train the model------------------------------------------------\nmodel.fit(X_train, \n y_train, \n epochs=10,\n validation_data=(X_val, y_val),\n callbacks=[cp_callback]) # Pass callback to training\n\n\n\n \n \n", "0 images out of 622 are loaded\nLoading images succesful\nData shape: (622, 460, 700, 3)\nLabels shape: (622,)\nX_train: (372, 460, 700, 3)\ny_train: (372,)\nX_test: (125, 460, 700, 3)\ny_test: (125,)\nX_val: (125, 460, 700, 3)\ny_val: (125,)\nModel: \"sequential_5\"\n_________________________________________________________________\n Layer (type) Output Shape Param # \n=================================================================\n conv2d_16 (Conv2D) (None, 458, 698, 32) 896 \n \n max_pooling2d_16 (MaxPoolin (None, 229, 349, 32) 0 \n g2D) \n \n conv2d_17 (Conv2D) (None, 227, 347, 64) 18496 \n \n max_pooling2d_17 (MaxPoolin (None, 113, 173, 64) 0 \n g2D) \n \n conv2d_18 (Conv2D) (None, 111, 171, 128) 73856 \n \n max_pooling2d_18 (MaxPoolin (None, 55, 85, 128) 0 \n g2D) \n \n conv2d_19 (Conv2D) (None, 53, 83, 128) 147584 \n \n max_pooling2d_19 (MaxPoolin (None, 26, 41, 128) 0 \n g2D) \n \n flatten_4 (Flatten) (None, 136448) 0 \n \n dense_8 (Dense) (None, 512) 69861888 \n \n dense_9 (Dense) (None, 1) 513 \n \n=================================================================\nTotal params: 70,103,233\nTrainable params: 70,103,233\nNon-trainable params: 0\n_________________________________________________________________\nEpoch 1/10\n12/12 [==============================] - ETA: 0s - loss: 3068.7305 - accuracy: 0.5027 \nEpoch 00001: saving model to /content/drive/MyDrive/Colab Notebooks/ImageClassificationBreastCancerCNN/checkpoints/cp.ckpt\n12/12 [==============================] - 273s 23s/step - loss: 3068.7305 - accuracy: 0.5027 - val_loss: 150.5281 - val_accuracy: 0.6640\nEpoch 2/10\n12/12 [==============================] - ETA: 0s - loss: 17.0045 - accuracy: 0.6290 \nEpoch 00002: saving model to /content/drive/MyDrive/Colab Notebooks/ImageClassificationBreastCancerCNN/checkpoints/cp.ckpt\n12/12 [==============================] - 278s 23s/step - loss: 17.0045 - accuracy: 0.6290 - val_loss: 0.6259 - val_accuracy: 0.8080\nEpoch 3/10\n12/12 [==============================] - ETA: 0s - loss: 0.6940 - accuracy: 0.7634 \nEpoch 00003: saving model to /content/drive/MyDrive/Colab Notebooks/ImageClassificationBreastCancerCNN/checkpoints/cp.ckpt\n12/12 [==============================] - 279s 23s/step - loss: 0.6940 - accuracy: 0.7634 - val_loss: 0.5607 - val_accuracy: 0.8320\nEpoch 4/10\n12/12 [==============================] - ETA: 0s - loss: 0.5772 - accuracy: 0.8011 \nEpoch 00004: saving model to /content/drive/MyDrive/Colab Notebooks/ImageClassificationBreastCancerCNN/checkpoints/cp.ckpt\n12/12 [==============================] - 281s 23s/step - loss: 0.5772 - accuracy: 0.8011 - val_loss: 0.6066 - val_accuracy: 0.8160\nEpoch 5/10\n12/12 [==============================] - ETA: 0s - loss: 0.4842 - accuracy: 0.7608 \nEpoch 00005: saving model to /content/drive/MyDrive/Colab Notebooks/ImageClassificationBreastCancerCNN/checkpoints/cp.ckpt\n12/12 [==============================] - 281s 23s/step - loss: 0.4842 - accuracy: 0.7608 - val_loss: 0.8290 - val_accuracy: 0.6640\nEpoch 6/10\n12/12 [==============================] - ETA: 0s - loss: 63.2787 - accuracy: 0.5806 \nEpoch 00006: saving model to 
/content/drive/MyDrive/Colab Notebooks/ImageClassificationBreastCancerCNN/checkpoints/cp.ckpt\n12/12 [==============================] - 281s 23s/step - loss: 63.2787 - accuracy: 0.5806 - val_loss: 4.3161 - val_accuracy: 0.3360\nEpoch 7/10\n12/12 [==============================] - ETA: 0s - loss: 0.9721 - accuracy: 0.6801 \nEpoch 00007: saving model to /content/drive/MyDrive/Colab Notebooks/ImageClassificationBreastCancerCNN/checkpoints/cp.ckpt\n12/12 [==============================] - 278s 23s/step - loss: 0.9721 - accuracy: 0.6801 - val_loss: 4.3634 - val_accuracy: 0.6240\nEpoch 8/10\n12/12 [==============================] - ETA: 0s - loss: 0.8932 - accuracy: 0.6828 \nEpoch 00008: saving model to /content/drive/MyDrive/Colab Notebooks/ImageClassificationBreastCancerCNN/checkpoints/cp.ckpt\n12/12 [==============================] - 279s 23s/step - loss: 0.8932 - accuracy: 0.6828 - val_loss: 0.5856 - val_accuracy: 0.7040\nEpoch 9/10\n12/12 [==============================] - ETA: 0s - loss: 0.5527 - accuracy: 0.7419 \nEpoch 00009: saving model to /content/drive/MyDrive/Colab Notebooks/ImageClassificationBreastCancerCNN/checkpoints/cp.ckpt\n12/12 [==============================] - 280s 23s/step - loss: 0.5527 - accuracy: 0.7419 - val_loss: 0.8957 - val_accuracy: 0.6640\nEpoch 10/10\n12/12 [==============================] - ETA: 0s - loss: 0.6664 - accuracy: 0.7581 \nEpoch 00010: saving model to /content/drive/MyDrive/Colab Notebooks/ImageClassificationBreastCancerCNN/checkpoints/cp.ckpt\n12/12 [==============================] - 279s 23s/step - loss: 0.6664 - accuracy: 0.7581 - val_loss: 165.2894 - val_accuracy: 0.3360\n" ], [ "#evaluate model\nloss, acc = model.evaluate(X_test, y_test, verbose=2)\nprint(\"Restored model, accuracy: {:5.2f}%\".format(100 * acc))", "4/4 - 21s - loss: 166.1185 - accuracy: 0.3280 - 21s/epoch - 5s/step\nRestored model, accuracy: 32.80%\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ] ]
cb16ccad9a43995302525e6675738037cf1ff88d
455,464
ipynb
Jupyter Notebook
BI_Capstone_Project1/Solution.ipynb
amazerahul/BI_Capstone_Project
e04e10c2aadc34e05a78a87564365b7fc9453d1c
[ "MIT" ]
null
null
null
BI_Capstone_Project1/Solution.ipynb
amazerahul/BI_Capstone_Project
e04e10c2aadc34e05a78a87564365b7fc9453d1c
[ "MIT" ]
null
null
null
BI_Capstone_Project1/Solution.ipynb
amazerahul/BI_Capstone_Project
e04e10c2aadc34e05a78a87564365b7fc9453d1c
[ "MIT" ]
null
null
null
164.84401
40,680
0.878159
[ [ [ "# Import all libraries\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport warnings\nwarnings.filterwarnings(\"ignore\")", "_____no_output_____" ], [ "data = pd.read_csv('bank-marketing.csv')\ndata_copy = data.copy()", "_____no_output_____" ], [ "data.head()", "_____no_output_____" ], [ "Itemlist = []\nfor col in data.columns:\n Itemlist.append([col, data[col].dtype, data[col].isnull().sum(),\n round(data[col].isnull().sum()/len(data[col])*100,2), \n data[col].nunique(), \n list(data[col].sample(5).drop_duplicates().values)])\n\ndfDesc = pd.DataFrame(columns=['dataFeatures', 'dataType', 'null', 'nullPct', 'unique', 'uniqueSample'], data=Itemlist)\ndfDesc", "_____no_output_____" ], [ "data.describe()", "_____no_output_____" ] ], [ [ "#### We notice that pdays is having [25, 50, 75] percentiles as -1 which indicates that missing values are denoted by -1.", "_____no_output_____" ] ], [ [ "data[data.duplicated()]", "_____no_output_____" ] ], [ [ "### Which age was targeted most and which one of them responded positively?", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(13,5))\nplt.subplot(1, 2, 1)\nsns.distplot(data['age']).set_title('Age distribution')\n\nplt.subplot(1, 2, 2)\nsns.violinplot(x = 'response', y = 'age', data=data).set_title('Relationship between age and response')", "_____no_output_____" ] ], [ [ "* We see from the above graph that most people targeted in the campaign lie in the age group of 25 to 50.\n* It is clear that people lying in the age group of 25 to 40 respond positively than any other age group.", "_____no_output_____" ], [ "### Does job description affect the chances of success?", "_____no_output_____" ] ], [ [ "grouped_job = pd.DataFrame(data.groupby(['job'])['response'].value_counts(normalize=True))\ngrouped_job.rename(columns={\"response\" : \"pct\"}, inplace=True)\ngrouped_job.reset_index(inplace=True)\nplt.figure(figsize=(15,5))\nsns.barplot(x='job', y='pct', hue='response', data=grouped_job).set_title('Response by job description')", "_____no_output_____" ] ], [ [ "* From the above plot it can be said that customers responding positively to the camapign are very less.\n* Customer with job role as management seems to interested more than any other job role.", "_____no_output_____" ], [ "### Does people with high profile job and high salary have greater chances of positive response?", "_____no_output_____" ] ], [ [ "job_salary = pd.DataFrame(data.groupby('job')['salary'].unique().apply(lambda x : x[0])).reset_index()\njob_response = pd.crosstab(data['job'], data['response']).reset_index()\ndata1 = pd.merge(job_salary, job_response)", "_____no_output_____" ], [ "data1.sort_values('yes', ascending=False)", "_____no_output_____" ], [ "plt.figure(figsize=(15,5))\nplt.subplot(1, 2, 1)\ng = sns.barplot(x='job', y='salary', data=data)\ng.set_title('Salary by job description')\ng.set_xticklabels(g.get_xticklabels(), rotation=45)\n\nplt.subplot(1, 2, 2)\ng1 = sns.barplot(x='job', y='yes', data=data1)\ng1.set_title('Positive response based on job description')\ng1.set_xticklabels(g1.get_xticklabels(), rotation=45)\n\nplt.show()", "_____no_output_____" ] ], [ [ "Yes, it is clearly visible that people with high job profile and salary tend to be more interested.", "_____no_output_____" ], [ "### Does marital status decide the response type?", "_____no_output_____" ] ], [ [ "np.log(pd.crosstab(data['marital'], data['response'])).plot.bar(title = 'Response based on marital status')", "_____no_output_____" ] ], [ 
[ "Positive response is almost neutral in case of marital status.", "_____no_output_____" ], [ "### Does married people of younger age responds positively to the campaign?", "_____no_output_____" ] ], [ [ "data2 = data[(data['marital']=='married') & (data['age']<25)]", "_____no_output_____" ], [ "sns.barplot(x='response', y='age', data=data2)", "_____no_output_____" ] ], [ [ "Ratio of yes/no among married young customer seems to be equal.", "_____no_output_____" ], [ "### Age distribution based on marital status", "_____no_output_____" ] ], [ [ "m = sns.FacetGrid(data, col='marital', height = 4)\nm.map(plt.hist, 'age')", "_____no_output_____" ] ], [ [ "* From the 1st plot it can be seen that mostly married people lie around 25 to 56.\n* Single people lie in the age group of 25 to 37.\n* Most probably divorced people lie in the age group of 30 to 60.", "_____no_output_____" ], [ "### What is the relationship between education level and salary?", "_____no_output_____" ] ], [ [ "sns.boxplot(x='education', y='salary', data=data).set_title('Boxplot distribution of salary based on education level')\nplt.show()", "_____no_output_____" ] ], [ [ "It is clear from the above graph that people with high level of education tend to have high salary.", "_____no_output_____" ], [ "### Are customer of low job profile and married are most responsive one?", "_____no_output_____" ] ], [ [ "grouped_job_married = pd.DataFrame(data[data['marital']=='married'].groupby(['job'])['response'].value_counts(normalize=True))\ngrouped_job_married.rename(columns={\"response\" : \"pct\"}, inplace=True)\ngrouped_job_married.reset_index(inplace=True)\nplt.figure(figsize=(15,5))\nsns.barplot(x='job', y='pct', hue='response', data=grouped_job_married).set_title('Response of married and low job profile customers')", "_____no_output_____" ] ], [ [ "Yes, people who are married and have low job profile tend to response more positvely than any other job profile.", "_____no_output_____" ], [ "### Are the targeted customers interested in such campaigns?", "_____no_output_____" ] ], [ [ "data.head()", "_____no_output_____" ] ], [ [ "In order to answer this questions checking of previous campaign records would be appropriate.", "_____no_output_____" ] ], [ [ "grouped_poutcome = pd.DataFrame(data['poutcome'].value_counts(normalize=True))\ngrouped_poutcome = grouped_poutcome.reset_index().rename(columns={\"index\" : \"poutcome\", \"poutcome\" : \"pct\"})\nsns.barplot(x='poutcome', y='pct', data=grouped_poutcome).set_title('Results of previous campaign')", "_____no_output_____" ], [ "plt.figure(figsize=(12, 4))\nplt.subplot(1, 2, 1)\nsns.distplot(data['pdays'], bins=10).set_title('Days passed after last contact')\n\nplt.subplot(1, 2, 2)\nsns.boxplot(y=data['pdays']).set_title('Boxplot of Days passed after last contact')", "_____no_output_____" ], [ "plt.figure(figsize=(12, 4))\nplt.subplot(1, 2, 1)\nsns.distplot(data['previous'], bins=10).set_title('Contacts performed before this campaign')\n\nplt.subplot(1, 2, 2)\nsns.boxplot(y=data['previous']).set_title('Boxplot of Contacts performed before this campaign')", "_____no_output_____" ] ], [ [ "It seems that mostly people were contacted within few days still people dosen't seems to be interested in such campaigns.", "_____no_output_____" ], [ "### Does targeted customer with high job profile and high salary interested in such campaigns anymore?", "_____no_output_____" ] ], [ [ "data.groupby('job')['salary'].unique().apply(lambda x : x[0]).sort_values(ascending=False)", "_____no_output_____" 
], [ "data3 = data[(data['salary']>50000) & (data['targeted']=='yes')]['response'].value_counts(normalize=True)\ndata3 = pd.DataFrame(data3)\ndata3 = data3.reset_index().rename(columns={'index' : 'response', 'response' : 'pct'})", "_____no_output_____" ], [ "grouped_high_job = pd.DataFrame(data[data['salary']>50000]['response'].value_counts(normalize=True)).reset_index().rename(columns={'index' : 'response', 'response' : 'pct'})\nsns.barplot(x='response', y = 'pct', data=grouped_high_job).set_title('Response of high job profile customer on previous campaigns')", "_____no_output_____" ], [ "sns.barplot(x='response', y = 'pct', data=data3).set_title('Response of targeted customer with high job profile')", "_____no_output_____" ], [ "(data[data['salary']>50000]['response'].value_counts(normalize=True) - data3['response'].value_counts(normalize=True)).plot.bar(title='Diff in response of previous v current campaign')", "_____no_output_____" ] ], [ [ "There is not much significant difference in the response of high job profile customers from previous to current campaign.", "_____no_output_____" ], [ "### Distribution of diff in salary and balance.", "_____no_output_____" ] ], [ [ "data['expenditure'] = data['salary'] - data['balance']", "_____no_output_____" ], [ "plt.figure(figsize=(15,5))\nsns.distplot(data['expenditure'], bins=10).set_title('Distribution of expenditure')", "_____no_output_____" ] ], [ [ "It seems there are people with varying limit in terms of expenditure.", "_____no_output_____" ], [ "### What is the response of customer with high expenditure?", "_____no_output_____" ] ], [ [ "data['expenditure'].plot.box()", "_____no_output_____" ], [ "threshold = data['expenditure'].quantile(0.90)", "_____no_output_____" ], [ "less_exp = pd.DataFrame(data[data['expenditure']<threshold]['response'].value_counts(normalize=True))\nless_exp = less_exp.reset_index().rename(columns={'index' : 'response', 'response' : 'pct'})\n\nhigh_exp = pd.DataFrame(data[data['expenditure']>=threshold]['response'].value_counts(normalize=True))\nhigh_exp = high_exp.reset_index().rename(columns={'index' : 'response', 'response' : 'pct'})", "_____no_output_____" ], [ "plt.figure(figsize=(14,5))\nplt.subplot(1, 2, 1)\nsns.barplot(x='response', y='pct', data = less_exp).set_title('Response by expenditure less than cutoff')\n\nplt.subplot(1, 2, 2)\nsns.barplot(x='response', y='pct', data = high_exp).set_title('Response by expenditure with greater or equal to cutoff')", "_____no_output_____" ] ], [ [ "Although, the response from customers are very few. But, those who spend less have greater chances of postive response as compared to those who spend more.", "_____no_output_____" ], [ "### Is there any relationship between education level and bank balance?", "_____no_output_____" ] ], [ [ "sns.boxplot(x='education', y='balance', data=data).set_title('Boxplot distribution of balance by education level')", "_____no_output_____" ] ], [ [ "There is not much difference in the statistical measure of balance by education. 
Rather, it varies from person to person.", "_____no_output_____" ], [ "### What are the chances of a positive response from customers who already have both home and personal loans?", "_____no_output_____" ] ], [ [ "data[(data['housing']=='yes') & (data['loan']=='yes')]['response'].value_counts(normalize=True).plot.bar(title='Customer response having home and personal loan')", "_____no_output_____" ] ], [ [ "Not many people who already have loans are interested in such campaigns.", "_____no_output_____" ], [ "### What are the ways of contacting a customer, and which contact types are most effective?", "_____no_output_____" ] ], [ [ "grouped_contact = pd.DataFrame(data['contact'].value_counts(normalize=True)).reset_index().rename(columns={'index' : 'contact_type', 'contact' : 'pct'})", "_____no_output_____" ], [ "sns.barplot(x='contact_type', y ='pct', data=grouped_contact).set_title('Different contact types for contacting customers')", "_____no_output_____" ], [ "grouped_contact_response = pd.DataFrame(data.groupby(['contact'])['response'].value_counts(normalize=True)).rename(columns={\"response\" : \"pct\"}).reset_index()", "_____no_output_____" ], [ "sns.barplot(x='contact', y='pct' , data=grouped_contact_response, hue='response').set_title('Response by contact type')", "_____no_output_____" ] ], [ [ "Most people use cell phones as a communication device, so it would be better to target customers who use cell phones.", "_____no_output_____" ], [ "### In which month and on which day were most customers contacted?", "_____no_output_____" ] ], [ [ "grouped_month = pd.DataFrame(data['month'].value_counts(normalize=True)).reset_index().rename(columns={\"index\" : \"month\", \"month\" : \"pct\"})\ngrouped_day = pd.DataFrame(data['day'].value_counts(normalize=True)).reset_index().rename(columns={\"index\" : \"day\", \"day\" : \"pct\"})", "_____no_output_____" ], [ "plt.figure(figsize=(15, 5))\nplt.subplot(1, 2, 1)\nsns.barplot(x='month', y='pct', data=grouped_month).set_title('Customer addressed by month')\n\nplt.subplot(1, 2, 2)\nsns.barplot(x='day', y='pct', data=grouped_day).set_title('Customer addressed by day')", "_____no_output_____" ] ], [ [ "It seems that most customers are contacted in the month of May, which may be because vacations are going on at that time.", "_____no_output_____" ], [ "### What is the effect of regular campaigning on a customer?", "_____no_output_____" ] ], [ [ "grouped_camp = pd.DataFrame(data['campaign'].value_counts(normalize=True)).reset_index().rename(columns={\"index\" : \"campaign_freq\", \"campaign\" : \"pct\"})", "_____no_output_____" ], [ "plt.figure(figsize=(15,5))\nplt.subplot(1, 2, 1)\nsns.barplot(x='campaign_freq', y='pct', data=grouped_camp).set_title('No of times a customer was contacted during a campaign')\n\nplt.subplot(1, 2, 2)\nsns.boxplot(y = data['campaign'])", "_____no_output_____" ], [ "data['campaign'].describe()", "_____no_output_____" ], [ "bins = [0, 10, 25, 40, 55, 70]\nlabels = ['0-10', '10-25', '25-40', '40-55', '55-70']\ndata['campaign_size'] = pd.cut(data['campaign'], bins =bins, labels = labels)", "_____no_output_____" ], [ "plt.figure(figsize=(15,5))\nsns.countplot(data['campaign_size'], hue=data['response']).set_title('Response based on the campaign size')\nplt.yscale('log')", "_____no_output_____" ] ], [ [ "* It is clear from the above graph that as people are contacted more and more during a campaign, they seem to lose interest.\n* The campaign team also needs to keep track of these records, as it will save their 
time and help them focus on things which will add value to them.", "_____no_output_____" ], [ "### Modelling", "_____no_output_____" ] ], [ [ "def get_dummies(x,df):\n    temp = pd.get_dummies(df[x], prefix = x, prefix_sep = '_', drop_first = True)\n    df = pd.concat([df, temp], axis = 1)\n    df.drop([x], axis = 1, inplace = True)\n    return df", "_____no_output_____" ], [ "X = data.drop(['response', 'campaign_size'], axis=1)\ny = data.response", "_____no_output_____" ], [ "X.loc[X['pdays']==-1, 'pdays'] = 10000\n\n# Create a new column: recent_pdays \nX['recent_pdays'] = 1 / X['pdays']\n\n# Drop 'pdays'\nX.drop('pdays', axis=1, inplace = True)", "_____no_output_____" ], [ "cols = X.select_dtypes('object').columns.tolist()", "_____no_output_____" ], [ "cols = cols + X.select_dtypes('number').columns.tolist()", "_____no_output_____" ], [ "X[cols].shape", "_____no_output_____" ], [ "categoric_cols = X.select_dtypes('object').columns", "_____no_output_____" ], [ "numeric_cols = data.select_dtypes('number').columns.tolist()\nnumeric_cols.remove('pdays')\nnumeric_cols.append('recent_pdays')", "_____no_output_____" ], [ "for col in categoric_cols:\n    X = get_dummies(col, X)", "_____no_output_____" ], [ "from sklearn.preprocessing import MinMaxScaler\n\nscaler = MinMaxScaler()\nX[numeric_cols] = scaler.fit_transform(X[numeric_cols])", "_____no_output_____" ], [ "y.replace({'yes' : 1, 'no' : 0}, inplace=True)", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split, cross_val_score\nfrom sklearn.feature_selection import RFE\nfrom statsmodels.stats.outliers_influence import variance_inflation_factor\nfrom sklearn.metrics import classification_report, f1_score\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import RandomizedSearchCV", "_____no_output_____" ], [ "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state = 121)", "_____no_output_____" ], [ "rf = RandomForestClassifier()\nrf.fit(X_train, y_train)\nrfe = RFE(rf, 15)\nrfe = rfe.fit(X_train, y_train)", "_____no_output_____" ], [ "list(zip(X_train.columns,rfe.support_,rfe.ranking_))", "_____no_output_____" ], [ "X_train.columns[rfe.support_]", "_____no_output_____" ], [ "X_train_rfe = X_train[X_train.columns[rfe.support_]]", "_____no_output_____" ], [ "params_grid = {'n_estimators' : [10,20,35,50],\n               'criterion' : ['gini', 'entropy'],\n               'max_depth' : [10,20,30,50]\n              }", "_____no_output_____" ] ], [ [ "### Random Forest Classifier", "_____no_output_____" ] ], [ [ "rfc = RandomForestClassifier()", "_____no_output_____" ], [ "random_search = RandomizedSearchCV(rfc, param_distributions=params_grid, n_iter=10, cv=10)", "_____no_output_____" ], [ "random_search.fit(X_train, y_train)", "_____no_output_____" ], [ "random_search.best_estimator_.n_estimators", "_____no_output_____" ], [ "random_search.best_estimator_.criterion", "_____no_output_____" ], [ "random_search.best_estimator_.max_depth", "_____no_output_____" ], [ "rfc2 = RandomForestClassifier(n_estimators=50, criterion='entropy', max_depth=20)", "_____no_output_____" ], [ "rfc2.fit(X_train_rfe, y_train)", "_____no_output_____" ], [ "X_test_rfe = X_test[X_test.columns[rfe.support_]]", "_____no_output_____" ], [ "y_pred = rfc2.predict(X_test_rfe)", "_____no_output_____" ], [ "print(classification_report(y_test, y_pred))", "              precision    recall  f1-score   support\n\n           0       0.92      0.97      0.95      7987\n           1       0.64      0.35      0.45      1056\n\n    accuracy                           0.90      9043\n   macro avg       0.78      0.66      0.70      9043\nweighted avg       0.89      0.90      0.89      
9043\n\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb16d95088fd269cbb56752c468185563d5f0ae6
10,241
ipynb
Jupyter Notebook
Chapter05/GuessTheWord.ipynb
PacktPublishing/Hands-On-Artificial-Intelligence-for-IoT
774500bdc1b453c3e3dc3d2a3e67d3f941d801c8
[ "MIT" ]
95
2018-05-13T14:58:58.000Z
2022-03-15T11:00:38.000Z
Chapter05/GuessTheWord.ipynb
Jimmy-INL/Hands-On-Artificial-Intelligence-for-IoT
774500bdc1b453c3e3dc3d2a3e67d3f941d801c8
[ "MIT" ]
null
null
null
Chapter05/GuessTheWord.ipynb
Jimmy-INL/Hands-On-Artificial-Intelligence-for-IoT
774500bdc1b453c3e3dc3d2a3e67d3f941d801c8
[ "MIT" ]
72
2018-09-16T05:51:03.000Z
2022-02-24T07:39:03.000Z
31.510769
305
0.534128
[ [ [ "import string\nimport random\n\nfrom deap import base, creator, tools", "_____no_output_____" ], [ "## Create a Finess base class which is to be minimized\n# weights is a tuple -sign tells to minimize, +1 to maximize\n\ncreator.create(\"FitnessMax\", base.Fitness, weights=(1.0,)) \n\n", "_____no_output_____" ] ], [ [ "This will define a class ```FitnessMax``` which inherits the Fitness class of deep.base module. The attribute weight which is a tuple is used to specify whether fitness function is to be maximized (weights=1.0) or minimized weights=-1.0. The DEAP library allows multi-objective Fitness function. \n\n### Individual\n\nNext we create a ```Individual``` class, which inherits the class ```list``` and has the ```FitnessMax``` class in its Fitness attribute. ", "_____no_output_____" ] ], [ [ "# Now we create a individual class\n\ncreator.create(\"Individual\", list, fitness=creator.FitnessMax)", "_____no_output_____" ] ], [ [ "# Population\n\nOnce the individuals are created we need to create population and define gene pool, to do this we use DEAP toolbox. All the objects that we will need now onwards- an individual, the population, the functions, the operators and the arguments are stored in the container called ```Toolbox```\n\nWe can add or remove content in the container ```Toolbox``` using ```register()``` and ```unregister()``` methods", "_____no_output_____" ] ], [ [ "toolbox = base.Toolbox()\n\n# Gene Pool\ntoolbox.register(\"attr_string\", random.choice, string.ascii_letters + string.digits )", "_____no_output_____" ], [ "#Number of characters in word\nword = list('hello')\nN = len(word)\n\n# Initialize population\ntoolbox.register(\"individual\", tools.initRepeat, creator.Individual, toolbox.attr_string, N )\ntoolbox.register(\"population\",tools.initRepeat, list, toolbox.individual)", "_____no_output_____" ], [ "def evalWord(individual, word):\n #word = list('hello')\n return sum(individual[i] == word[i] for i in range(len(individual))),\n", "_____no_output_____" ], [ "toolbox.register(\"evaluate\", evalWord, word)\ntoolbox.register(\"mate\", tools.cxTwoPoint)\ntoolbox.register(\"mutate\", tools.mutShuffleIndexes, indpb=0.05)\ntoolbox.register(\"select\", tools.selTournament, tournsize=3)", "_____no_output_____" ] ], [ [ "We define the other operators/functions we will need by registering them in the toolbox. This allows us to easily switch between the operators if desired.\n\n## Evolving the Population\nOnce the representation and the genetic operators are chosen, we will define an algorithm combining all the individual parts and performing the evolution of our population until the One Max problem is solved. It is good style in programming to do so within a function, generally named main().\n\nCreating the Population\nFirst of all, we need to actually instantiate our population. 
But this step is effortlessly done using the population() method we registered in our toolbox earlier on.", "_____no_output_____" ] ], [ [ "def main():\n    random.seed(64)\n\n    # create an initial population of 300 individuals (where\n    # each individual is a list of characters)\n    pop = toolbox.population(n=300)\n\n    # CXPB  is the probability with which two individuals\n    #       are crossed\n    #\n    # MUTPB is the probability for mutating an individual\n    CXPB, MUTPB = 0.5, 0.2\n    \n    print(\"Start of evolution\")\n    \n    # Evaluate the entire population\n    fitnesses = list(map(toolbox.evaluate, pop))\n    for ind, fit in zip(pop, fitnesses):\n        #print(ind, fit)\n        ind.fitness.values = fit\n    \n    print(\"  Evaluated %i individuals\" % len(pop))\n\n    # Extracting all the fitnesses of the individuals\n    fits = [ind.fitness.values[0] for ind in pop]\n\n    # Variable keeping track of the number of generations\n    g = 0\n    \n    # Begin the evolution\n    while max(fits) < 5 and g < 1000:\n        # A new generation\n        g = g + 1\n        print(\"-- Generation %i --\" % g)\n        \n        # Select the next generation individuals\n        offspring = toolbox.select(pop, len(pop))\n        # Clone the selected individuals\n        offspring = list(map(toolbox.clone, offspring))\n        \n        # Apply crossover and mutation on the offspring\n        for child1, child2 in zip(offspring[::2], offspring[1::2]):\n\n            # cross two individuals with probability CXPB\n            if random.random() < CXPB:\n                toolbox.mate(child1, child2)\n\n                # fitness values of the children\n                # must be recalculated later\n                del child1.fitness.values\n                del child2.fitness.values\n\n        for mutant in offspring:\n\n            # mutate an individual with probability MUTPB\n            if random.random() < MUTPB:\n                toolbox.mutate(mutant)\n                del mutant.fitness.values\n        \n        # Evaluate the individuals with an invalid fitness\n        invalid_ind = [ind for ind in offspring if not ind.fitness.valid]\n        fitnesses = map(toolbox.evaluate, invalid_ind)\n        for ind, fit in zip(invalid_ind, fitnesses):\n            ind.fitness.values = fit\n        \n        print(\"  Evaluated %i individuals\" % len(invalid_ind))\n        \n        # The population is entirely replaced by the offspring\n        pop[:] = offspring\n        \n        # Gather all the fitnesses in one list and print the stats\n        fits = [ind.fitness.values[0] for ind in pop]\n        \n        length = len(pop)\n        mean = sum(fits) / length\n        sum2 = sum(x*x for x in fits)\n        std = abs(sum2 / length - mean**2)**0.5\n        \n        print(\"  Min %s\" % min(fits))\n        print(\"  Max %s\" % max(fits))\n        print(\"  Avg %s\" % mean)\n        print(\"  Std %s\" % std)\n    \n    print(\"-- End of (successful) evolution --\")\n    \n    best_ind = tools.selBest(pop, 1)[0]\n    print(\"Best individual is %s, %s\" % (''.join(best_ind), best_ind.fitness.values))", "_____no_output_____" ], [ "main()", "Start of evolution\n  Evaluated 300 individuals\n-- Generation 1 --\n  Evaluated 178 individuals\n  Min 0.0\n  Max 2.0\n  Avg 0.22\n  Std 0.4526956299030656\n-- Generation 2 --\n  Evaluated 174 individuals\n  Min 0.0\n  Max 2.0\n  Avg 0.51\n  Std 0.613650280425803\n-- Generation 3 --\n  Evaluated 191 individuals\n  Min 0.0\n  Max 3.0\n  Avg 0.9766666666666667\n  Std 0.6502221842484989\n-- Generation 4 --\n  Evaluated 167 individuals\n  Min 0.0\n  Max 4.0\n  Avg 1.45\n  Std 0.6934214687571574\n-- Generation 5 --\n  Evaluated 191 individuals\n  Min 0.0\n  Max 4.0\n  Avg 1.9833333333333334\n  Std 0.7765665171481163\n-- Generation 6 --\n  Evaluated 168 individuals\n  Min 0.0\n  Max 4.0\n  Avg 2.48\n  Std 0.7678541528180985\n-- Generation 7 --\n  Evaluated 192 individuals\n  Min 1.0\n  Max 5.0\n  Avg 3.013333333333333\n  Std 0.6829999186595044\n-- End of (successful) evolution --\nBest individual is hello, (5.0,)\n" 
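] ], [ [ "As a closing illustration (an added sketch, not from the original author): the ```register()```/```unregister()``` mechanism described earlier makes it easy to swap operators after the fact. The ```indpb``` value below is illustrative only.", "_____no_output_____" ] ], [ [ "# Sketch: swap out the mutation operator registered earlier for a different setting.\ntoolbox.unregister(\"mutate\")\ntoolbox.register(\"mutate\", tools.mutShuffleIndexes, indpb=0.1)", "_____no_output_____" 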
] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
cb16df11dfbea1f17f5d8a37e9caa02ee4b6d02e
12,900
ipynb
Jupyter Notebook
Colab Notebooks/learning/20191012_important_maths_formulas_in_pytorch.ipynb
ankschoubey/notes
e8f86e90ceb93282073c1760bedcfbb8ad35a1df
[ "MIT" ]
3
2018-04-17T08:47:07.000Z
2020-02-13T18:39:16.000Z
Colab Notebooks/learning/20191012_important_maths_formulas_in_pytorch.ipynb
ankschoubey/notes
e8f86e90ceb93282073c1760bedcfbb8ad35a1df
[ "MIT" ]
null
null
null
Colab Notebooks/learning/20191012_important_maths_formulas_in_pytorch.ipynb
ankschoubey/notes
e8f86e90ceb93282073c1760bedcfbb8ad35a1df
[ "MIT" ]
null
null
null
12,900
12,900
0.82876
[ [ [ "import torch\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "# Variance", "_____no_output_____" ], [ "", "_____no_output_____" ] ], [ [ "x = torch.randn(10)\nx", "_____no_output_____" ], [ "plt.scatter(range(x.numel()),x)", "_____no_output_____" ], [ "mean = x.mean()\nmean", "_____no_output_____" ], [ "x - mean", "_____no_output_____" ], [ "wrong_variance = (x-mean).mean()\nwrong_variance, wrong_variance.round()", "_____no_output_____" ] ], [ [ "The reason that this is 0 is because positive and negatives cancel out. Therefore we need to make sure everything is positive", "_____no_output_____" ] ], [ [ "# mean absolute deviation\n\nmad = (x-mean).abs().mean()\nmad", "_____no_output_____" ], [ "(x-mean).abs()", "_____no_output_____" ], [ "# standard deviation\n\nsd = (x-mean).pow(2).mean().sqrt()\nsd", "_____no_output_____" ], [ "sd", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
cb16fdc914069583420cf15861ab376806eedd7e
236,239
ipynb
Jupyter Notebook
Sentiment_Analysis_Series_part_2(4000samplesonsent140).ipynb
kumarikumari/Keras-Deep-Learning-Cookbook
65bc3e69637f0d39ec53c1b0317370d18946c0e9
[ "MIT" ]
null
null
null
Sentiment_Analysis_Series_part_2(4000samplesonsent140).ipynb
kumarikumari/Keras-Deep-Learning-Cookbook
65bc3e69637f0d39ec53c1b0317370d18946c0e9
[ "MIT" ]
null
null
null
Sentiment_Analysis_Series_part_2(4000samplesonsent140).ipynb
kumarikumari/Keras-Deep-Learning-Cookbook
65bc3e69637f0d39ec53c1b0317370d18946c0e9
[ "MIT" ]
null
null
null
69.237691
45,958
0.620384
[ [ [ "<a href=\"https://colab.research.google.com/github/kumarikumari/Keras-Deep-Learning-Cookbook/blob/master/Sentiment_Analysis_Series_part_2(4000samplesonsent140).ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "from google.colab import drive\ndrive.mount('/content/drive')", "Mounted at /content/drive\n" ], [ "!nvidia-smi", "Sun Nov 29 12:52:55 2020 \n+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 455.38 Driver Version: 418.67 CUDA Version: 10.1 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n| | | MIG M. |\n|===============================+======================+======================|\n| 0 Tesla P100-PCIE... Off | 00000000:00:04.0 Off | 0 |\n| N/A 35C P0 26W / 250W | 0MiB / 16280MiB | 0% Default |\n| | | ERR! |\n+-------------------------------+----------------------+----------------------+\n \n+-----------------------------------------------------------------------------+\n| Processes: |\n| GPU GI CI PID Type Process name GPU Memory |\n| ID ID Usage |\n|=============================================================================|\n| No running processes found |\n+-----------------------------------------------------------------------------+\n" ], [ "!pip install transformers", "Collecting transformers\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/3a/83/e74092e7f24a08d751aa59b37a9fc572b2e4af3918cb66f7766c3affb1b4/transformers-3.5.1-py3-none-any.whl (1.3MB)\n\u001b[K |████████████████████████████████| 1.3MB 5.7MB/s \n\u001b[?25hCollecting sacremoses\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/7d/34/09d19aff26edcc8eb2a01bed8e98f13a1537005d31e95233fd48216eed10/sacremoses-0.0.43.tar.gz (883kB)\n\u001b[K |████████████████████████████████| 890kB 31.9MB/s \n\u001b[?25hRequirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.6/dist-packages (from transformers) (4.41.1)\nRequirement already satisfied: filelock in /usr/local/lib/python3.6/dist-packages (from transformers) (3.0.12)\nCollecting sentencepiece==0.1.91\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/d4/a4/d0a884c4300004a78cca907a6ff9a5e9fe4f090f5d95ab341c53d28cbc58/sentencepiece-0.1.91-cp36-cp36m-manylinux1_x86_64.whl (1.1MB)\n\u001b[K |████████████████████████████████| 1.1MB 41.5MB/s \n\u001b[?25hRequirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.6/dist-packages (from transformers) (2019.12.20)\nRequirement already satisfied: protobuf in /usr/local/lib/python3.6/dist-packages (from transformers) (3.12.4)\nRequirement already satisfied: dataclasses; python_version < \"3.7\" in /usr/local/lib/python3.6/dist-packages (from transformers) (0.8)\nRequirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from transformers) (2.23.0)\nRequirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from transformers) (20.4)\nCollecting tokenizers==0.9.3\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/4c/34/b39eb9994bc3c999270b69c9eea40ecc6f0e97991dba28282b9fd32d44ee/tokenizers-0.9.3-cp36-cp36m-manylinux1_x86_64.whl (2.9MB)\n\u001b[K |████████████████████████████████| 2.9MB 41.9MB/s \n\u001b[?25hRequirement already satisfied: numpy in 
/usr/local/lib/python3.6/dist-packages (from transformers) (1.18.5)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (1.15.0)\nRequirement already satisfied: click in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (7.1.2)\nRequirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (0.17.0)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf->transformers) (50.3.2)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (1.24.3)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (3.0.4)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (2.10)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (2020.11.8)\nRequirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from packaging->transformers) (2.4.7)\nBuilding wheels for collected packages: sacremoses\n Building wheel for sacremoses (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for sacremoses: filename=sacremoses-0.0.43-cp36-none-any.whl size=893257 sha256=a84d6bd49d1aae35ee0f3ac00f763e4bf86a1002ae5cca91695fd548d037a4b3\n Stored in directory: /root/.cache/pip/wheels/29/3c/fd/7ce5c3f0666dab31a50123635e6fb5e19ceb42ce38d4e58f45\nSuccessfully built sacremoses\nInstalling collected packages: sacremoses, sentencepiece, tokenizers, transformers\nSuccessfully installed sacremoses-0.0.43 sentencepiece-0.1.91 tokenizers-0.9.3 transformers-3.5.1\n" ], [ "!pip install -q -U watermark", "_____no_output_____" ], [ "%reload_ext watermark\n%watermark -v -p numpy,pandas,torch,transformers", "CPython 3.6.9\nIPython 5.5.0\n\nnumpy 1.18.5\npandas 1.1.4\ntorch 1.7.0+cu101\ntransformers 3.5.1\n" ], [ "!pip install sklearn\nimport sklearn", "Requirement already satisfied: sklearn in /usr/local/lib/python3.6/dist-packages (0.0)\nRequirement already satisfied: scikit-learn in /usr/local/lib/python3.6/dist-packages (from sklearn) (0.22.2.post1)\nRequirement already satisfied: scipy>=0.17.0 in /usr/local/lib/python3.6/dist-packages (from scikit-learn->sklearn) (1.4.1)\nRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn->sklearn) (0.17.0)\nRequirement already satisfied: numpy>=1.11.0 in /usr/local/lib/python3.6/dist-packages (from scikit-learn->sklearn) (1.18.5)\n" ] ], [ [ "### Making the necessary imports", "_____no_output_____" ] ], [ [ "import transformers\nfrom transformers import XLNetTokenizer, XLNetModel, AdamW, get_linear_schedule_with_warmup\nimport torch\n\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom matplotlib import rc\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.metrics import classification_report\nfrom sklearn.metrics import accuracy_score\nfrom collections import defaultdict\nfrom textwrap import wrap\nfrom pylab import rcParams\n\nfrom torch import nn, optim\nfrom keras.preprocessing.sequence import pad_sequences\nfrom torch.utils.data import TensorDataset,RandomSampler,SequentialSampler\nfrom torch.utils.data import Dataset, DataLoader\nimport 
torch.nn.functional as F", "_____no_output_____" ], [ "%matplotlib inline\n%config InlineBackend.figure_format='retina'\n\nsns.set(style='whitegrid', palette='muted', font_scale=1.2)\n\nHAPPY_COLORS_PALETTE = [\"#01BEFE\", \"#FFDD00\", \"#FF7D00\", \"#FF006D\", \"#ADFF02\", \"#8F00FF\"]\n\nsns.set_palette(sns.color_palette(HAPPY_COLORS_PALETTE))\n\nrcParams['figure.figsize'] = 12, 8\n\nRANDOM_SEED = 42\nnp.random.seed(RANDOM_SEED)\ntorch.manual_seed(RANDOM_SEED)\n\ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\ndevice", "_____no_output_____" ] ], [ [ "### Data Preprocessing", "_____no_output_____" ] ], [ [ "from google.colab import files\nfiles.upload()", "_____no_output_____" ], [ "df = pd.read_csv('sent140_preprocessed.csv')\ndf.head()", "_____no_output_____" ], [ "from sklearn.utils import shuffle\ndf = shuffle(df)\ndf.head(20)", "_____no_output_____" ], [ "df = df[:4000]\nlen(df)", "_____no_output_____" ], [ "import re\ndef clean_text(text):\n    text = re.sub(r\"@[A-Za-z0-9]+\", ' ', text)\n    text = re.sub(r\"https?://[A-Za-z0-9./]+\", ' ', text)\n    text = re.sub(r\"[^a-zA-Z.!?'0-9]\", ' ', text)\n    text = re.sub('\\t', ' ', text)\n    text = re.sub(r\" +\", ' ', text)\n    return text", "_____no_output_____" ], [ "#df_new = df.rename(columns={'text': 'review'})\n#df_new", "_____no_output_____" ], [ "#data = df_new[['review', 'polarity']]\n#data.polarity.replace(4, 1, inplace=True)\n#data", "_____no_output_____" ], [ "#df['text'] = df_new['text'].apply(clean_text)", "_____no_output_____" ], [ "rcParams['figure.figsize'] = 10, 8\nsns.countplot(df.polarity)\nplt.xlabel('review score');", "/usr/local/lib/python3.6/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.\n  FutureWarning\n" ], [ "def sentiment2label(polarity):\n    if polarity == \"positive\":\n        return 1\n    else:\n        return 0\n\ndf['polarity'] = df['polarity'].apply(sentiment2label)", "_____no_output_____" ], [ "df['polarity'].value_counts()", "_____no_output_____" ], [ "class_names = ['negative', 'positive']", "_____no_output_____" ] ], [ [ "### Playing with XLNetTokenizer", "_____no_output_____" ] ], [ [ "from transformers import XLNetTokenizer, XLNetModel\nPRE_TRAINED_MODEL_NAME = 'xlnet-base-cased'\ntokenizer = XLNetTokenizer.from_pretrained(PRE_TRAINED_MODEL_NAME)", "_____no_output_____" ], [ "input_txt = \"India is my country. All Indians are my brothers and sisters\"\nencodings = tokenizer.encode_plus(input_txt, add_special_tokens=True, max_length=16, return_tensors='pt', return_token_type_ids=False, return_attention_mask=True, pad_to_max_length=False)", "Truncation was not explicitly activated but `max_length` is provided a specific value, please use `truncation=True` to explicitly truncate examples to max length. Defaulting to 'longest_first' truncation strategy. 
If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.\n" ], [ "print('input_ids : ',encodings['input_ids'])", "input_ids : tensor([[ 837, 27, 94, 234, 9, 394, 7056, 41, 94, 4194, 21, 8301,\n 4, 3]])\n" ], [ "tokenizer.convert_ids_to_tokens(encodings['input_ids'][0])", "_____no_output_____" ], [ "type(encodings['attention_mask'])", "_____no_output_____" ], [ "attention_mask = pad_sequences(encodings['attention_mask'], maxlen=512, dtype=torch.Tensor ,truncating=\"post\",padding=\"post\")", "_____no_output_____" ], [ "attention_mask = attention_mask.astype(dtype = 'int64')\nattention_mask = torch.tensor(attention_mask) \nattention_mask.flatten()", "_____no_output_____" ], [ "encodings['input_ids']", "_____no_output_____" ] ], [ [ "### Checking the distribution of token lengths", "_____no_output_____" ] ], [ [ "token_lens = []\n\nfor txt in df['text']:\n tokens = tokenizer.encode(txt, max_length=512)\n token_lens.append(len(tokens))", "_____no_output_____" ], [ "sns.distplot(token_lens)\nplt.xlim([0, 1024]);\nplt.xlabel('Token count');", "/usr/local/lib/python3.6/dist-packages/seaborn/distributions.py:2551: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n warnings.warn(msg, FutureWarning)\n" ], [ "MAX_LEN = 512", "_____no_output_____" ] ], [ [ "### Custom Dataset class", "_____no_output_____" ] ], [ [ "class sent140(Dataset):\n\n def __init__(self, reviews, targets, tokenizer, max_len):\n self.reviews = reviews\n self.targets = targets\n self.tokenizer = tokenizer\n self.max_len = max_len\n \n def __len__(self):\n return len(self.reviews)\n \n def __getitem__(self, item):\n review = str(self.reviews[item])\n target = self.targets[item]\n\n encoding = self.tokenizer.encode_plus(\n review,\n add_special_tokens=True,\n max_length=self.max_len,\n return_token_type_ids=False,\n pad_to_max_length=False,\n return_attention_mask=True,\n return_tensors='pt',\n )\n\n input_ids = pad_sequences(encoding['input_ids'], maxlen=MAX_LEN, dtype=torch.Tensor ,truncating=\"post\",padding=\"post\")\n input_ids = input_ids.astype(dtype = 'int64')\n input_ids = torch.tensor(input_ids) \n\n attention_mask = pad_sequences(encoding['attention_mask'], maxlen=MAX_LEN, dtype=torch.Tensor ,truncating=\"post\",padding=\"post\")\n attention_mask = attention_mask.astype(dtype = 'int64')\n attention_mask = torch.tensor(attention_mask) \n\n return {\n 'review_text': review,\n 'input_ids': input_ids,\n 'attention_mask': attention_mask.flatten(),\n 'targets': torch.tensor(target, dtype=torch.long)\n }", "_____no_output_____" ], [ "df_train, df_test = train_test_split(df, test_size=0.5, random_state=101)\ndf_val, df_test = train_test_split(df_test, test_size=0.5, random_state=101)", "_____no_output_____" ], [ "df_train.shape, df_val.shape, df_test.shape", "_____no_output_____" ] ], [ [ "### Custom Dataloader", "_____no_output_____" ] ], [ [ "def create_data_loader(df, tokenizer, max_len, batch_size):\n ds = sent140(\n reviews=df.text.to_numpy(),\n targets=df.polarity.to_numpy(),\n tokenizer=tokenizer,\n max_len=max_len\n )\n\n return DataLoader(\n ds,\n batch_size=batch_size,\n num_workers=4\n )", "_____no_output_____" ], [ "BATCH_SIZE = 4\n\ntrain_data_loader = create_data_loader(df_train, tokenizer, MAX_LEN, 
BATCH_SIZE)\nval_data_loader = create_data_loader(df_val, tokenizer, MAX_LEN, BATCH_SIZE)\ntest_data_loader = create_data_loader(df_test, tokenizer, MAX_LEN, BATCH_SIZE)", "_____no_output_____" ] ], [ [ "### Loading the Pre-trained XLNet model for sequence classification from huggingface transformers", "_____no_output_____" ] ], [ [ "from transformers import XLNetForSequenceClassification\nmodel = XLNetForSequenceClassification.from_pretrained('xlnet-base-cased', num_labels = 2)\nmodel = model.to(device)", "_____no_output_____" ], [ "model", "_____no_output_____" ] ], [ [ "### Setting Hyperparameters", "_____no_output_____" ] ], [ [ "EPOCHS = 3\n\nparam_optimizer = list(model.named_parameters())\nno_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']\noptimizer_grouped_parameters = [\n {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},\n {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay':0.0}\n]\noptimizer = AdamW(optimizer_grouped_parameters, lr=3e-5)\n\ntotal_steps = len(train_data_loader) * EPOCHS\n\nscheduler = get_linear_schedule_with_warmup(\n optimizer,\n num_warmup_steps=0,\n num_training_steps=total_steps\n)", "_____no_output_____" ] ], [ [ "### Sanity check with one batch", "_____no_output_____" ] ], [ [ "data = next(iter(val_data_loader))\ndata.keys()", "_____no_output_____" ], [ "input_ids = data['input_ids'].to(device)\nattention_mask = data['attention_mask'].to(device)\ntargets = data['targets'].to(device)\nprint(input_ids.reshape(4,512).shape) # batch size x seq length\nprint(attention_mask.shape) # batch size x seq length", "torch.Size([4, 512])\ntorch.Size([4, 512])\n" ], [ "input_ids[0]", "_____no_output_____" ], [ "outputs = model(input_ids.reshape(4,512), token_type_ids=None, attention_mask=attention_mask, labels=targets)\noutputs", "_____no_output_____" ], [ "type(outputs[0])", "_____no_output_____" ] ], [ [ "### Defining the training step function", "_____no_output_____" ] ], [ [ "from sklearn import metrics\ndef train_epoch(model, data_loader, optimizer, device, scheduler, n_examples):\n model = model.train()\n losses = []\n acc = 0\n counter = 0\n \n for d in data_loader:\n input_ids = d[\"input_ids\"].reshape(4,512).to(device)\n attention_mask = d[\"attention_mask\"].to(device)\n targets = d[\"targets\"].to(device)\n \n outputs = model(input_ids=input_ids, token_type_ids=None, attention_mask=attention_mask, labels = targets)\n loss = outputs[0]\n logits = outputs[1]\n\n # preds = preds.cpu().detach().numpy()\n _, prediction = torch.max(outputs[1], dim=1)\n targets = targets.cpu().detach().numpy()\n prediction = prediction.cpu().detach().numpy()\n accuracy = metrics.accuracy_score(targets, prediction)\n\n acc += accuracy\n losses.append(loss.item())\n \n loss.backward()\n\n nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)\n optimizer.step()\n scheduler.step()\n optimizer.zero_grad()\n counter = counter + 1\n\n return acc / counter, np.mean(losses)", "_____no_output_____" ] ], [ [ "### Defining the evaluation function", "_____no_output_____" ] ], [ [ "def eval_model(model, data_loader, device, n_examples):\n model = model.eval()\n losses = []\n acc = 0\n counter = 0\n \n with torch.no_grad():\n for d in data_loader:\n input_ids = d[\"input_ids\"].reshape(4,512).to(device)\n attention_mask = d[\"attention_mask\"].to(device)\n targets = d[\"targets\"].to(device)\n \n outputs = model(input_ids=input_ids, token_type_ids=None, attention_mask=attention_mask, 
labels = targets)\n loss = outputs[0]\n logits = outputs[1]\n\n _, prediction = torch.max(outputs[1], dim=1)\n targets = targets.cpu().detach().numpy()\n prediction = prediction.cpu().detach().numpy()\n accuracy = metrics.accuracy_score(targets, prediction)\n\n acc += accuracy\n losses.append(loss.item())\n counter += 1\n\n return acc / counter, np.mean(losses)", "_____no_output_____" ] ], [ [ "### Fine-tuning the pre-trained model", "_____no_output_____" ] ], [ [ "%%time\nhistory = defaultdict(list)\nbest_accuracy = 0\n\nfor epoch in range(EPOCHS):\n print(f'Epoch {epoch + 1}/{EPOCHS}')\n print('-' * 10)\n\n train_acc, train_loss = train_epoch(\n model,\n train_data_loader, \n optimizer, \n device, \n scheduler, \n len(df_train)\n )\n\n print(f'Train loss {train_loss} Train accuracy {train_acc}')\n\n val_acc, val_loss = eval_model(\n model,\n val_data_loader, \n device, \n len(df_val)\n )\n\n print(f'Val loss {val_loss} Val accuracy {val_acc}')\n print()\n\n history['train_acc'].append(train_acc)\n history['train_loss'].append(train_loss)\n history['val_acc'].append(val_acc)\n history['val_loss'].append(val_loss)\n\n if val_acc > best_accuracy:\n torch.save(model.state_dict(), 'C:\\\\Users\\\\BUJJI\\\\Desktop\\\\NLP\\\\models\\\\xlnet_model1.bin')\n best_accuracy = val_acc", "Epoch 1/3\n----------\nTrain loss 0.5852309298217296 Train accuracy 0.7585\nVal loss 0.7540176632255315 Val accuracy 0.766\n\nEpoch 2/3\n----------\nTrain loss 0.5752474646037444 Train accuracy 0.764\nVal loss 0.6447369921803474 Val accuracy 0.765\n\nEpoch 3/3\n----------\nTrain loss 0.5502564660999923 Train accuracy 0.812\nVal loss 0.8428783402740956 Val accuracy 0.771\n\nCPU times: user 8min 6s, sys: 3min 50s, total: 11min 57s\nWall time: 12min\n" ] ], [ [ "### Evaluation of the fine-tuned model", "_____no_output_____" ] ], [ [ "model.load_state_dict(torch.load('C:\\\\Users\\\\BUJJI\\\\Desktop\\\\NLP\\\\models\\\\xlnet_model1.bin'))", "_____no_output_____" ], [ "model = model.to(device)", "_____no_output_____" ], [ "test_acc, test_loss = eval_model(\n model,\n test_data_loader,\n device,\n len(df_test)\n)\n\nprint('Test Accuracy :', test_acc)\nprint('Test Loss :', test_loss)", "Test Accuracy : 0.755\nTest Loss : 0.8722399174310267\n" ], [ "def get_predictions(model, data_loader):\n model = model.eval()\n \n review_texts = []\n predictions = []\n prediction_probs = []\n real_values = []\n\n with torch.no_grad():\n for d in data_loader:\n\n texts = d[\"review_text\"]\n input_ids = d[\"input_ids\"].reshape(4,512).to(device)\n attention_mask = d[\"attention_mask\"].to(device)\n targets = d[\"targets\"].to(device)\n \n outputs = model(input_ids=input_ids, token_type_ids=None, attention_mask=attention_mask, labels = targets)\n\n loss = outputs[0]\n logits = outputs[1]\n \n _, preds = torch.max(outputs[1], dim=1)\n\n probs = F.softmax(outputs[1], dim=1)\n\n review_texts.extend(texts)\n predictions.extend(preds)\n prediction_probs.extend(probs)\n real_values.extend(targets)\n\n predictions = torch.stack(predictions).cpu()\n prediction_probs = torch.stack(prediction_probs).cpu()\n real_values = torch.stack(real_values).cpu()\n return review_texts, predictions, prediction_probs, real_values", "_____no_output_____" ], [ "y_review_texts, y_pred, y_pred_probs, y_test = get_predictions(\n model,\n test_data_loader\n)", "_____no_output_____" ], [ "print(classification_report(y_test, y_pred, target_names=class_names))", " precision recall f1-score support\n\n negative 0.83 0.86 0.84 768\n positive 0.47 0.42 0.44 232\n\n accuracy 
0.76 1000\n macro avg 0.65 0.64 0.64 1000\nweighted avg 0.75 0.76 0.75 1000\n\n" ] ], [ [ "### Custom prediction function on raw text", "_____no_output_____" ] ], [ [ "def predict_sentiment(text):\n review_text = text\n\n encoded_review = tokenizer.encode_plus(\n review_text,\n max_length=MAX_LEN,\n add_special_tokens=True,\n return_token_type_ids=False,\n pad_to_max_length=False,\n return_attention_mask=True,\n return_tensors='pt',\n )\n\n input_ids = pad_sequences(encoded_review['input_ids'], maxlen=MAX_LEN, dtype=torch.Tensor ,truncating=\"post\",padding=\"post\")\n input_ids = input_ids.astype(dtype = 'int64')\n input_ids = torch.tensor(input_ids) \n\n attention_mask = pad_sequences(encoded_review['attention_mask'], maxlen=MAX_LEN, dtype=torch.Tensor ,truncating=\"post\",padding=\"post\")\n attention_mask = attention_mask.astype(dtype = 'int64')\n attention_mask = torch.tensor(attention_mask) \n\n input_ids = input_ids.reshape(1,512).to(device)\n attention_mask = attention_mask.to(device)\n\n outputs = model(input_ids=input_ids, attention_mask=attention_mask)\n\n outputs = outputs[0][0].cpu().detach()\n\n probs = F.softmax(outputs, dim=-1).cpu().detach().numpy().tolist()\n _, prediction = torch.max(outputs, dim =-1)\n\n print(\"Positive score:\", probs[1])\n print(\"Negative score:\", probs[0])\n print(f'Review text: {review_text}')\n print(f'Sentiment : {class_names[prediction]}')", "_____no_output_____" ], [ "text = \"Movie is the worst one I have ever seen!! The story has no meaning at all\"\npredict_sentiment(text)", "Positive score: 0.04329725727438927\nNegative score: 0.9567027688026428\nReview text: Movie is the worst one I have ever seen!! The story has no meaning at all\nSentiment : negative\n" ], [ "text = \"This is the best movie I have ever seen!! The story is such a motivation\"\npredict_sentiment(text)", "Positive score: 0.9775038957595825\nNegative score: 0.022496072575449944\nReview text: This is the best movie I have ever seen!! The story is such a motivation\nSentiment : positive\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
cb172dd795d09ce33c79931d27e3515cd8fb86f6
673,102
ipynb
Jupyter Notebook
project_2/06_Production_Model_Insights.ipynb
bearylogical/ga-test
17e1b62e56c685b76ab55a66d61a9da5ec59e47f
[ "MIT" ]
1
2019-06-21T03:15:48.000Z
2019-06-21T03:15:48.000Z
project_2/06_Production_Model_Insights.ipynb
bearylogical/ga-test
17e1b62e56c685b76ab55a66d61a9da5ec59e47f
[ "MIT" ]
null
null
null
project_2/06_Production_Model_Insights.ipynb
bearylogical/ga-test
17e1b62e56c685b76ab55a66d61a9da5ec59e47f
[ "MIT" ]
null
null
null
1,598.817102
197,528
0.957155
[ [ [ "# 06_Business_Insights", "_____no_output_____" ], [ "In this section, we will expend upon the features used by the model and attempt to explain its significance as well as contributions to the pricing model.\n\nAccordingly, in Section Four, we identified the following key features that that are strong predictors of housing price based upon a combination of feature engineering coupled with recursive feature elimination.\n \n $$\n \\hat{y} = \\beta_0 + \\begin{align} \\beta_1\\textit{(age_since_built)} + \\beta_2\\textit{(Gr_Liv_Area)} + \\beta_3\\textit{(Total_Bsmt_SF)} + \\beta_4\\textit{(house_exter_score)} + \\beta_j\\textit{(Land_Contour)}_j \n \\end{align}$$ \n \n Where:\n \n $\\textit{house_exter_score}$ = ['Overall Qual'] + ['Overall Cond'] +['Exter Qual'] + ['Exter Cond']\n \n| Score |\n| --- |\n|age_since_built |\n|Total Bsmt SF |\n|Land Contour_Lvl |\n|house_exter_score |\n|Gr Liv Area |\n|Land Contour_Low |\n|Land Contour_HLS |\n\n", "_____no_output_____" ] ], [ [ "# model coefficients\nprod_model_rfe", "_____no_output_____" ] ], [ [ "## Import Libraries", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns \nimport matplotlib.ticker as ticker\n\n%matplotlib inline", "_____no_output_____" ], [ "rfe_columns = ['Total Bsmt SF', 'Gr Liv Area', 'age_since_built', 'house_exter_score',\n 'Land Contour_HLS', 'Land Contour_Low', 'Land Contour_Lvl']", "_____no_output_____" ] ], [ [ "## Load in Training Set for Exploration", "_____no_output_____" ] ], [ [ "train = pd.read_csv('./datasets/imputed_train.csv')", "_____no_output_____" ], [ "new_train = train[(train['Total Bsmt SF']<4000) & (train['Total Bsmt SF']>0)]", "_____no_output_____" ], [ "plt.figure(figsize=(15,10))\nax = sns.scatterplot(x='Total Bsmt SF',y='SalePrice',data=new_train,hue='Total Bsmt SF')\nax.set_title(\"Total Basement Area against Sale Prices\", fontname='Helvetica', fontsize=18,loc='left')\nax.set_xlabel('Total Basement Area / SqFt',fontname='Helvetica',fontsize=12)\nax.set_ylabel('Sale Price / $',fontname='Helvetica',fontsize=12)\nplt.savefig(\"./img/bsmt_area.png\",dpi=300)", "_____no_output_____" ], [ "plt.figure(figsize=(15,10))\nax = sns.scatterplot(x='age_since_built',y='SalePrice',data=new_train,hue='age_since_built')\nax.set_title(\"Building Age against Sale Prices\", fontname='Helvetica', fontsize=18,loc='left')\nax.set_xlabel('Building Age / yr',fontname='Helvetica',fontsize=12)\nax.set_ylabel('Sale Price / $',fontname='Helvetica',fontsize=12)\nplt.savefig(\"./img/building_age.png\",dpi=300)", "_____no_output_____" ], [ "train.columns", "_____no_output_____" ], [ "plt.figure(figsize=(15,10))\nax = sns.pointplot(x='house_exter_score',y='SalePrice',data=new_train)\nax.set_title(\"Housing Scores against Sale Prices\", fontname='Helvetica', fontsize=18,loc='left')\nax.set_xlabel('Score',fontname='Helvetica',fontsize=12)\nax.set_ylabel('Sale Price / $',fontname='Helvetica',fontsize=12)\nfor ind, label in enumerate(ax.get_xticklabels()):\n if ind % 5 == 0: # every 10th label is kept\n label.set_visible(True)\n else:\n label.set_visible(False)\n\nplt.savefig(\"./img/housing_score.png\",dpi=300)", "_____no_output_____" ], [ "plt.figure(figsize=(15,10))\nax = sns.scatterplot(x='Gr Liv Area',y='SalePrice',data=new_train,hue='Gr Liv Area')\nax.set_title(\"Living Area against Sale Prices\", fontname='Helvetica', fontsize=18,loc='left')\nax.set_xlabel('Living Area / Sqft',fontname='Helvetica',fontsize=12)\nax.set_ylabel('Sale Price / 
$',fontname='Helvetica',fontsize=12)\nplt.savefig(\"./img/living_area.png\",dpi=300)", "_____no_output_____" ], [ "new_train_melt = new_train[['Land Contour_Low', 'Land Contour_HLS','Land Contour_Lvl','SalePrice']].melt(id_vars='SalePrice' ,value_vars=['Land Contour_Low', 'Land Contour_HLS','Land Contour_Lvl'])\n", "_____no_output_____" ], [ "plt.figure(figsize=(15,10))\nax = sns.boxplot(x='variable',y='SalePrice',order=['Land Contour_Lvl','Land Contour_Low','Land Contour_HLS'],data=new_train_melt[new_train_melt['value']!=0])\nax.set_title(\"Land Contours and relationship to Sale Prices\", fontname='Helvetica', fontsize=18,loc='left')\nax.set_xlabel('Land Contour',fontname='Helvetica',fontsize=12)\nax.set_ylabel('Sale Price / $',fontname='Helvetica',fontsize=12)\nplt.savefig(\"./img/contour_plot.png\",dpi=300)\n", "_____no_output_____" ] ], [ [ "## Key Takeaways", "_____no_output_____" ], [ "## Conclusion And Recommendations", "_____no_output_____" ], [ "House age, land contour, housing scores, and gross floor area are strong predictors of housing prices. Using these few variables, a prospective home seller can look into improving home quality and condition, as well as expanding gross floor area via careful remodelling.\n\n\nTo make this model location-agnostic, we may incorporate features such as accessibility to the city (via distances) and crime rates, which can affect buyers' judgement.", "_____no_output_____" 
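] ], [ [ "An appendix-style sketch (an addition, not the original author's analysis): the composite ```house_exter_score``` defined at the top of this notebook can be recomputed directly from its stated components. This assumes the four component columns exist as numeric scores in ```imputed_train.csv```; the names ```score_cols``` and ```house_exter_score_check``` are hypothetical.", "_____no_output_____" ] ], [ [ "# Sketch: recompute the composite exterior score from its stated components\n# and compare it against the precomputed column.\nscore_cols = ['Overall Qual', 'Overall Cond', 'Exter Qual', 'Exter Cond']\ntrain['house_exter_score_check'] = train[score_cols].sum(axis=1)\ntrain[['house_exter_score', 'house_exter_score_check']].head()", "_____no_output_____" ] ] ]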
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ] ]