After training, the test-set accuracy improves considerably, to roughly 90%.
model = get_model()
callbacks = [
    keras.callbacks.ModelCheckpoint("binary_2gram.keras",
                                    save_best_only=True)
]
model.fit(binary_2gram_train_ds.cache(),
          validation_data=binary_2gram_val_ds.cache(),
          epochs=10,
          callbacks=callbacks)
model = keras.models.load_model("binary_2gram.keras")
print(f"Test acc: {model.evaluate(binary_2gram_test_ds)[1]:.3f}")
Epoch 1/10
625/625 [==============================] - 12s 18ms/step - loss: 0.3857 - accuracy: 0.8347 - val_loss: 0.2791 - val_accuracy: 0.9000
Epoch 2/10
625/625 [==============================] - 4s 6ms/step - loss: 0.2592 - accuracy: 0.9082 - val_loss: 0.2947 - val_accuracy: 0.8988
Epoch 3/10
625/625 [==============================] - 4s 6ms/step - loss: 0.2277 - accuracy: 0.9241 - val_loss: 0.3060 - val_accuracy: 0.8978
Epoch 4/10
625/625 [==============================] - 4s 6ms/step - loss: 0.2074 - accuracy: 0.9333 - val_loss: 0.3417 - val_accuracy: 0.8994
Epoch 5/10
625/625 [==============================] - 4s 6ms/step - loss: 0.2070 - accuracy: 0.9365 - val_loss: 0.3538 - val_accuracy: 0.8968
Epoch 6/10
625/625 [==============================] - 4s 6ms/step - loss: 0.1997 - accuracy: 0.9395 - val_loss: 0.3908 - val_accuracy: 0.8946
Epoch 7/10
625/625 [==============================] - 4s 6ms/step - loss: 0.1940 - accuracy: 0.9421 - val_loss: 0.3715 - val_accuracy: 0.8940
Epoch 8/10
625/625 [==============================] - 4s 6ms/step - loss: 0.1902 - accuracy: 0.9427 - val_loss: 0.4054 - val_accuracy: 0.8930
Epoch 9/10
625/625 [==============================] - 4s 6ms/step - loss: 0.1952 - accuracy: 0.9432 - val_loss: 0.3848 - val_accuracy: 0.8880
Epoch 10/10
625/625 [==============================] - 4s 6ms/step - loss: 0.1949 - accuracy: 0.9441 - val_loss: 0.4011 - val_accuracy: 0.8912
782/782 [==============================] - 9s 11ms/step - loss: 0.2788 - accuracy: 0.8953
Test acc: 0.895
**๋ฐฉ์‹ 3: ๋ฐ”์ด๊ทธ๋žจ TF-IDF ์ธ์ฝ”๋”ฉ** N-๊ทธ๋žจ์„ ๋ฒกํ„ฐํ™”ํ•  ๋•Œ ์‚ฌ์šฉ ๋นˆ๋„๋ฅผ ํ•จ๊ป˜ ์ €์žฅํ•˜๋Š” ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค.๋‹จ์–ด์˜ ์‚ฌ์šฉ ๋นˆ๋„๊ฐ€ ์•„๋ฌด๋ž˜๋„ ๋ฌธ์žฅ ํ‰๊ฐ€์— ์ค‘์š”ํ•œ ์—ญํ• ์„ ์ˆ˜ํ–‰ํ•  ๊ฒƒ์ด๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค.์•„๋ž˜ ์ฝ”๋“œ์—์„œ์ฒ˜๋Ÿผ `output_mode="count"` ์˜ต์…˜์„ ์‚ฌ์šฉํ•˜๋ฉด ๋œ๋‹ค.
text_vectorization = TextVectorization(
    ngrams=2,
    max_tokens=20000,
    output_mode="count"
)
๊ทธ๋Ÿฐ๋ฐ ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด "the", "a", "is", "are" ๋“ฑ์˜ ์‚ฌ์šฉ ๋นˆ๋„๋Š” ๋งค์šฐ ๋†’์€ ๋ฐ˜๋ฉด์—"Chollet" ๋“ฑ์˜ ๋‹จ์–ด๋Š” ๋นˆ๋„๊ฐ€ ๊ฑฐ์˜ 0์— ๊ฐ€๊น๊ฒŒ ๋‚˜์˜จ๋‹ค.๋˜ํ•œ ์ƒ์„ฑ๋œ ๋ฒกํ„ฐ์˜ ๋Œ€๋ถ€๋ถ„์€ 0์œผ๋กœ ์ฑ„์›Œ์งˆ ๊ฒƒ์ด๋‹ค. `max_tokens=20000`์„ ์‚ฌ์šฉํ•œ ๋ฐ˜๋ฉด์— ํ•˜๋‚˜์˜ ๋ฌธ์žฅ์—” ๋งŽ์•„์•ผ ๋ช‡ ์‹ญ๊ฐœ ์ •๋„์˜ ๋‹จ์–ด๋งŒ ์‚ฌ์šฉ๋˜์—ˆ๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค. ```pythoninputs[0]: tf.Tensor([1. 1. 1. ... 0. 0. 0.], shape=(20000,), dtype=float32)``` ์ด ์ ์„ ๊ณ ๋ คํ•ด์„œ ์‚ฌ์šฉ ๋นˆ๋„๋ฅผ ์ •๊ทœํ™”ํ•œ๋‹ค. ํ‰๊ท ์„ ์›์ ์œผ๋กœ ๋งŒ๋“ค์ง€๋Š” ์•Š๊ณ  TF-IDF ๊ฐ’์œผ๋กœ ๋‚˜๋ˆ„๊ธฐ๋งŒ ์‹คํ–‰ํ•œ๋‹ค.์ด์œ ๋Š” ํ‰๊ท ์„ ์˜ฎ๊ธฐ๋ฉด ๋ฒกํ„ฐ์˜ ๋Œ€๋ถ€๋ถ„์˜ ๊ฐ’์ด 0์ด ์•„๋‹ˆ๊ฒŒ ๋˜์–ดํ›ˆ๋ จ์— ๋ณด๋‹ค ๋งŽ์€ ๊ณ„์‚ฐ์ด ์š”๊ตฌ๋˜๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค. **TF-IDF**์˜ ์˜๋ฏธ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™๋‹ค.- `TF`(Term Frequency) - ํ•˜๋‚˜์˜ ๋ฌธ์žฅ์—์„œ ์‚ฌ์šฉ๋˜๋Š” ๋‹จ์–ด์˜ ๋นˆ๋„ - ๋†’์„ ์ˆ˜๋ก ์ค‘์š” - ์˜ˆ๋ฅผ ๋“ค์–ด, ํ•˜๋‚˜์˜ ๋ฆฌ๋ทฐ์— "terrible" ์ด ๋งŽ์ด ์‚ฌ์šฉ๋˜์—ˆ๋‹ค๋ฉด ํ•ด๋‹น ๋ฆฌ๋ทฐ๋Š” ๋ถ€์ •์ผ ๊ฐ€๋Šฅ์„ฑ ๋†’์Œ.- `IDF`(Inverse Document Frequency) - ๋ฐ์ดํ„ฐ์…‹ ์ „์ฒด ๋ฌธ์žฅ์—์„œ ์‚ฌ์šฉ๋œ ๋‹จ์–ด์˜ ๋นˆ๋„ - ๋‚ฎ์„ ์ˆ˜๋ก ์ค‘์š”. - "the", "a", "is" ๋“ฑ์˜ `IDF` ๊ฐ’์€ ๋†’์ง€๋งŒ ๋ณ„๋กœ ์ค‘์š”ํ•˜์ง€ ์•Š์Œ.- `TF-IDF = TF / IDF` `output_mode="tf_idf"` ์˜ต์…˜์„ ์‚ฌ์šฉํ•˜๋ฉด TF-IDF ์ธ์ฝ”๋”ฉ์„ ์ง€์›ํ•œ๋‹ค.
text_vectorization = TextVectorization(
    ngrams=2,
    max_tokens=20000,
    output_mode="tf_idf",
)
After training, the test-set accuracy drops back slightly, to about 89%. TF-IDF did not help here, but on many text-classification problems it brings roughly a one-percentage-point improvement over plain counts.

**Caveat**: With the current TensorFlow releases (2.6 and 2.7), the code below works only when no GPU is used. The reason is not yet known ([see here](https://github.com/fchollet/deep-learning-with-python-notebooks/issues/190)).
text_vectorization.adapt(text_only_train_ds)

tfidf_2gram_train_ds = train_ds.map(lambda x, y: (text_vectorization(x), y))
tfidf_2gram_val_ds = val_ds.map(lambda x, y: (text_vectorization(x), y))
tfidf_2gram_test_ds = test_ds.map(lambda x, y: (text_vectorization(x), y))

model = get_model()
model.summary()
callbacks = [
    keras.callbacks.ModelCheckpoint("tfidf_2gram.keras",
                                    save_best_only=True)
]
model.fit(tfidf_2gram_train_ds.cache(),
          validation_data=tfidf_2gram_val_ds.cache(),
          epochs=10,
          callbacks=callbacks)
model = keras.models.load_model("tfidf_2gram.keras")
print(f"Test acc: {model.evaluate(tfidf_2gram_test_ds)[1]:.3f}")
Model: "model_2" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_3 (InputLayer) [(None, 20000)] 0 dense_4 (Dense) (None, 16) 320016 dropout_2 (Dropout) (None, 16) 0 dense_5 (Dense) (None, 1) 17 ================================================================= Total params: 320,033 Trainable params: 320,033 Non-trainable params: 0 _________________________________________________________________ Epoch 1/10 625/625 [==============================] - 11s 17ms/step - loss: 0.5232 - accuracy: 0.7588 - val_loss: 0.3197 - val_accuracy: 0.8806 Epoch 2/10 625/625 [==============================] - 4s 6ms/step - loss: 0.3534 - accuracy: 0.8442 - val_loss: 0.2946 - val_accuracy: 0.8954 Epoch 3/10 625/625 [==============================] - 4s 6ms/step - loss: 0.3231 - accuracy: 0.8609 - val_loss: 0.3086 - val_accuracy: 0.8864 Epoch 4/10 625/625 [==============================] - 4s 6ms/step - loss: 0.3053 - accuracy: 0.8734 - val_loss: 0.3087 - val_accuracy: 0.8814 Epoch 5/10 625/625 [==============================] - 4s 6ms/step - loss: 0.2781 - accuracy: 0.8845 - val_loss: 0.3225 - val_accuracy: 0.8878 Epoch 6/10 625/625 [==============================] - 4s 6ms/step - loss: 0.2703 - accuracy: 0.8870 - val_loss: 0.3472 - val_accuracy: 0.8702 Epoch 7/10 625/625 [==============================] - 4s 6ms/step - loss: 0.2695 - accuracy: 0.8883 - val_loss: 0.3357 - val_accuracy: 0.8682 Epoch 8/10 625/625 [==============================] - 4s 6ms/step - loss: 0.2650 - accuracy: 0.8931 - val_loss: 0.3343 - val_accuracy: 0.8664 Epoch 9/10 625/625 [==============================] - 4s 6ms/step - loss: 0.2606 - accuracy: 0.8901 - val_loss: 0.3546 - val_accuracy: 0.8580 Epoch 10/10 625/625 [==============================] - 4s 6ms/step - loss: 0.2575 - accuracy: 0.8924 - val_loss: 0.3318 - val_accuracy: 0.8760 782/782 [==============================] - 8s 10ms/step - loss: 0.2998 - accuracy: 0.8927 Test acc: 0.893
**๋ถ€๋ก: ๋ฌธ์ž์—ด ๋ฒกํ„ฐํ™” ์ „์ฒ˜๋ฆฌ๋ฅผ ํ•จ๊ป˜ ์ฒ˜๋ฆฌํ•˜๋Š” ๋ชจ๋ธ ๋‚ด๋ณด๋‚ด๊ธฐ** ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ์‹ค์ „์— ๋ฐฐ์น˜ํ•˜๋ ค๋ฉด ํ…์ŠคํŠธ ๋ฒกํ„ฐํ™”๋„ ๋ชจ๋ธ๊ณผ ํ•จ๊ป˜ ๋‚ด๋ณด๋‚ด์•ผ ํ•œ๋‹ค.์ด๋ฅผ ์œ„ํ•ด `TextVectorization` ์ธต์˜ ๊ฒฐ๊ณผ๋ฅผ ์žฌํ™œ์šฉ๋งŒ ํ•˜๋ฉด ๋œ๋‹ค.
inputs = keras.Input(shape=(1,), dtype="string")
# apply text vectorization to the raw string input
processed_inputs = text_vectorization(inputs)
# feed the vectorized input to the trained model
outputs = model(processed_inputs)
# final end-to-end model
inference_model = keras.Model(inputs, outputs)
`inference_model` can take a plain text sentence directly as input. For example, the review "That was an excellent movie, I loved it." is predicted to be positive with very high probability.
import tensorflow as tf

raw_text_data = tf.convert_to_tensor([
    ["That was an excellent movie, I loved it."],
])
predictions = inference_model(raw_text_data)
print(f"{float(predictions[0] * 100):.2f} percent positive")
92.10 percent positive
**Loading neurons from S3**
import numpy as np
from skimage import io
from pathlib import Path
from brainlit.utils.session import NeuroglancerSession
from brainlit.utils.Neuron_trace import NeuronTrace
import napari
from napari.utils import nbscreenshot
%gui qt
**Loading entire neuron from AWS**

- `s3_trace = NeuronTrace(s3_path, seg_id, mip)` creates a NeuronTrace object from an S3 file path
- `swc_trace = NeuronTrace(swc_path)` creates a NeuronTrace object from an swc file path

1. `s3_trace.get_df()` outputs the s3 NeuronTrace object as a pd.DataFrame
2. `swc_trace.get_df()` outputs the swc NeuronTrace object as a pd.DataFrame
3. `swc_trace.generate_df_subset(list_of_voxels)` creates a smaller subset of the original dataframe with coordinates in img space
4. `swc_trace.get_df_voxel()` outputs a DataFrame with the coordinates converted from spatial to voxel coordinates
5. `swc_trace.get_graph()` outputs the NeuronTrace object as a networkx.DiGraph
6. `swc_trace.get_paths()` outputs the NeuronTrace object as a list of paths
7. `ViewerModel.add_shapes` adds the paths as a shape layer into the napari viewer
8. `swc_trace.get_sub_neuron(bounding_box)` outputs the NeuronTrace object as a graph cropped by a bounding box
9. `swc_trace.get_sub_neuron_paths(bounding_box)` outputs the NeuronTrace object as paths cropped by a bounding box

**1. `s3_trace.get_df()`**

This function outputs the s3 NeuronTrace object as a pd.DataFrame. Each row is a vertex in the swc file with the following information: `sample number`, `structure identifier`, `x coordinate`, `y coordinate`, `z coordinate`, `radius of dendrite`, `sample number of parent`. The coordinates are given in spatial units of micrometers ([swc specification](http://www.neuronland.org/NLMorphologyConverter/MorphologyFormats/SWC/Spec.html)).
""" s3_path = "s3://open-neurodata/brainlit/brain1_segments" seg_id = 2 mip = 1 s3_trace = NeuronTrace(s3_path, seg_id, mip) df = s3_trace.get_df() df.head() """
Downloading: 100%|██████████| 1/1 [00:00<00:00,  5.13it/s]
Downloading: 100%|██████████| 1/1 [00:00<00:00,  5.82it/s]
**2. `swc_trace.get_df()`**

This function outputs the swc NeuronTrace object as a pd.DataFrame. Each row is a vertex in the swc file with the following information: `sample number`, `structure identifier`, `x coordinate`, `y coordinate`, `z coordinate`, `radius of dendrite`, `sample number of parent`. The coordinates are given in spatial units of micrometers ([swc specification](http://www.neuronland.org/NLMorphologyConverter/MorphologyFormats/SWC/Spec.html)).
""" swc_path = str(Path().resolve().parents[2] / "data" / "data_octree" / "consensus-swcs" / '2018-08-01_G-002_consensus.swc') swc_trace = NeuronTrace(path=swc_path) df = swc_trace.get_df() df.head() """
**3. `swc_trace.generate_df_subset(list_of_voxels)`**

This function creates a smaller subset of the original dataframe with coordinates in img space. Each row is a vertex in the swc file with the following information: `sample number`, `structure identifier`, `x coordinate`, `y coordinate`, `z coordinate`, `radius of dendrite`, `sample number of parent`. The coordinates are given in the same spatial units as the image file when using `ngl.pull_vertex_list`.
"""# Choose vertices to use for the subneuron subneuron_df = df[0:3] vertex_list = subneuron_df['sample'].array # Define a neuroglancer session url = "s3://open-neurodata/brainlit/brain1" mip = 1 ngl = NeuroglancerSession(url, mip=mip) # Get vertices seg_id = 2 buffer = 10 img, bounds, vox_in_img_list = ngl.pull_vertex_list(seg_id=seg_id, v_id_list=vertex_list, buffer = buffer, expand = True) df_subneuron = swc_trace.generate_df_subset(vox_in_img_list.tolist(),subneuron_start=0,subneuron_end=3 ) print(df_subneuron) """
Downloading: 100%|██████████| 1/1 [00:00<00:00,  6.08it/s]
Downloading: 100%|██████████| 1/1 [00:00<00:00,  6.95it/s]
Downloading: 100%|██████████| 1/1 [00:00<00:00,  5.02it/s]
Downloading:   0%|          | 0/4 [00:01<?, ?it/s]

   sample  structure    x    y    z    r  parent
0       1          0  106  106  112  1.0      -1
1       2          0  121   80   61  1.0       1
2       3          0   61   55   49  1.0       2
**4. `swc_trace.get_df_voxel()`**

If we want to overlay the swc file with a corresponding image, we need to make sure that they are in the same coordinate space. Because an image is an array of voxels, it makes sense to convert the vertices from spatial units into voxel units. Given the `spacing` (spatial units/voxel) and `origin` (spatial units) of the image, `swc_to_voxel` does the conversion by using the following equation:

$voxel = \frac{spatial - origin}{spacing}$
spacing = np.array([0.29875923, 0.3044159, 0.98840415])
origin = np.array([70093.276, 15071.596, 29306.737])

df_voxel = swc_trace.get_df_voxel(spacing=spacing, origin=origin)
df_voxel.head()
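Under the hood, the conversion is just elementwise arithmetic. A minimal NumPy sketch with a hypothetical vertex position:

```python
import numpy as np

spacing = np.array([0.29875923, 0.3044159, 0.98840415])  # spatial units per voxel
origin = np.array([70093.276, 15071.596, 29306.737])     # spatial units

spatial_coord = np.array([70100.0, 15080.0, 29310.0])    # hypothetical vertex position
voxel_coord = (spatial_coord - origin) / spacing         # voxel = (spatial - origin) / spacing
print(voxel_coord)
```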
**5. `swc_trace.get_graph()`**

A neuron is a graph with no cycles (a tree). While napari does not support displaying graph objects, it can display multiple paths. The DataFrame already contains all the possible edges in the neuron: each row (other than the root) defines an edge. For example, from the above we can see that `sample 2` has `parent 1`, which represents edge `(1,2)`. `sample 1` having `parent -1` means that `sample 1` is the root of the tree. `swc_trace.get_graph()` converts the NeuronTrace object into a networkx directed graph.
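As a minimal sketch of this row-to-edge convention (assuming a DataFrame `df` with `sample`, `parent`, and coordinate columns; brainlit's internal conversion may differ in details):

```python
import networkx as nx

def df_to_digraph(df):
    # build a directed graph where each non-root row adds the edge (parent, sample)
    G = nx.DiGraph()
    for _, row in df.iterrows():
        G.add_node(row['sample'], x=row['x'], y=row['y'], z=row['z'])
        if row['parent'] != -1:   # parent -1 marks the root, which has no incoming edge
            G.add_edge(row['parent'], row['sample'])
    return G
```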
G = swc_trace.get_graph()
print('Number of nodes:', len(G.nodes))
print('Number of edges:', len(G.edges))
print('\n')
print('Sample 1 coordinates (x,y,z)')
print(G.nodes[1]['x'], G.nodes[1]['y'], G.nodes[1]['z'])
Number of nodes: 1650
Number of edges: 1649


Sample 1 coordinates (x,y,z)
-387 1928 -1846
**6. `swc_trace.get_paths()`**

This function returns the NeuronTrace object as a list of non-overlapping paths. The union of the paths forms the graph. The algorithm works by:

1. Find the longest path in the graph ([networkx.algorithms.dag.dag_longest_path](https://networkx.github.io/documentation/stable/reference/algorithms/generated/networkx.algorithms.dag.dag_longest_path.html))
2. Remove the longest path from the graph
3. Repeat steps 1 and 2 until there are no more edges left in the graph
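A minimal sketch of that decomposition loop, assuming a `networkx.DiGraph` (the actual brainlit implementation may differ in details):

```python
import networkx as nx

def decompose_into_paths(G):
    G = G.copy()
    paths = []
    while G.number_of_edges() > 0:
        longest = nx.dag_longest_path(G)                # step 1: find the longest path
        paths.append(longest)
        G.remove_edges_from(zip(longest, longest[1:]))  # step 2: remove its edges
    return paths                                        # step 3: repeat until no edges remain
```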
paths = swc_trace.get_paths()
print(f"The graph was decomposed into {len(paths)} paths")
The graph was decomposed into 179 paths
**7. `ViewerModel.add_shapes`**

napari displays "layers". The most common layer is the image layer. In order to display the neuron, we use the `path` shape type from the [shapes](https://napari.org/tutorials/shapes) layer.
viewer = napari.Viewer(ndisplay=3)
viewer.add_shapes(data=paths, shape_type='path', edge_color='white', name='Skeleton 2')
nbscreenshot(viewer)
**Loading sub-neuron**

The image of the entire brain has dimensions of (33792, 25600, 13312) voxels. G-002 spans a sub-image of (7386, 9932, 5383) voxels. Both are too big to load in napari and overlay the neuron. To circumvent this, we can crop out a smaller region of the neuron, load the sub-neuron, and load the corresponding sub-image.

In order to get a sub-neuron, we need to specify the `bounding_box` that will be used to crop the neuron. `bounding_box` is a length-2 tuple. The first element is one corner of the bounding box (inclusive) and the second element is the opposite corner (exclusive). Both corners are in voxel units.

`add_swc` can do all of this automatically when given `bounding_box` by following these steps:

1. `read_s3` to read the swc file into a pd.DataFrame
2. `swc_to_voxel` to convert the coordinates from spatial to voxel coordinates
3. `df_to_graph` to convert the DataFrame into a networkx.DiGraph
    - 3.1 `swc.get_sub_neuron` to crop the graph by `bounding_box`
4. `graph_to_paths` to convert from a graph into a list of paths
5. `ViewerModel.add_shapes` to add the paths as a shape layer into the napari viewer

**8. `swc_trace.get_sub_neuron(bounding_box)` / 9. `swc_trace.get_sub_neuron_paths(bounding_box)`**

These functions crop a graph by removing edges: edges that do not intersect the bounding box are removed. An edge that intersects the bounding box has at least one of its vertices contained by the bounding box, and the algorithm follows this principle by checking the neighborhood of each vertex. For each vertex *v* in the graph:

1. Find the vertices belonging to the local neighborhood of *v*
2. If vertex *v* or any of its local-neighborhood vertices are in the bounding box, do nothing. Otherwise, remove vertex *v* and its edges from the graph.

We check the neighborhood of *v* along with *v* itself because we want the sub-neuron to show all edges that pass through the bounding box, including edges that are only partially contained; a sketch of this rule follows below.

`swc_trace.get_sub_neuron(bounding_box)` returns a sub-neuron in graph format; `swc_trace.get_sub_neuron_paths(bounding_box)` returns a sub-neuron in paths format.
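A minimal sketch of the vertex-neighborhood rule described above (assuming a `networkx` graph whose nodes carry voxel coordinates `x`, `y`, `z`; brainlit's actual implementation may differ in details):

```python
def in_box(attrs, bounding_box):
    lo, hi = bounding_box  # lower corner inclusive, upper corner exclusive
    coord = (attrs['x'], attrs['y'], attrs['z'])
    return all(l <= c < h for c, l, h in zip(coord, lo, hi))

def crop_graph(G, bounding_box):
    G = G.copy()
    to_remove = []
    for v in G.nodes:
        neighborhood = [v] + list(G.predecessors(v)) + list(G.successors(v))
        # keep v if it or any local neighbor lies inside the bounding box
        if not any(in_box(G.nodes[u], bounding_box) for u in neighborhood):
            to_remove.append(v)
    G.remove_nodes_from(to_remove)  # removing a node also removes its incident edges
    return G
```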
# Create an NGL session to get the bounding box
url = "s3://open-neurodata/brainlit/brain1"
mip = 1
ngl = NeuroglancerSession(url, mip=mip)
img, bbbox, vox = ngl.pull_chunk(2, 300, 1)
bbox = bbbox.to_list()
box = (bbox[:3], bbox[3:])
print(box)

G_sub = s3_trace.get_sub_neuron(box)
paths_sub = s3_trace.get_sub_neuron_paths(box)
print(len(G_sub))

viewer = napari.Viewer(ndisplay=3)
viewer.add_shapes(data=paths_sub, shape_type='path', edge_color='blue', name='sub-neuron')

# overlay corresponding image (random image but correct should be G-002_15312-4400-6448_15840-4800-6656.tif)
image_path = str(Path().resolve().parents[2] / "data" / "data_octree" / 'default.0.tif')
img_comp = io.imread(image_path)
img_comp = np.swapaxes(img_comp, 0, 2)
viewer.add_image(img_comp)

nbscreenshot(viewer)
459
**Deep Convolutional GANs**

In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored in 2016 and has seen impressive results in generating new images; you can read the [original paper, here](https://arxiv.org/pdf/1511.06434.pdf).

You'll be training DCGAN on the [Street View House Numbers](http://ufldl.stanford.edu/housenumbers/) (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.

So, our goal is to create a DCGAN that can generate new, realistic-looking images of house numbers. We'll go through the following steps to do this:
* Load in and pre-process the house numbers dataset
* Define discriminator and generator networks
* Train these adversarial networks
* Visualize the loss over time and some sample, generated images

**Deeper Convolutional Networks**

Since this dataset is more complex than our MNIST data, we'll need a deeper network to accurately identify patterns in these images and be able to generate new ones. Specifically, we'll use a series of convolutional or transpose convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get these convolutional networks to train.

Besides these changes in network structure, training the discriminator and generator networks should be the same as before. That is, the discriminator will alternate training on real and fake (generated) images, and the generator will aim to trick the discriminator into thinking that its generated images are real!
# import libraries
import matplotlib.pyplot as plt
import numpy as np
import pickle as pkl

%matplotlib inline
**Getting the data**

Here you can download the SVHN dataset. It's a dataset built into the PyTorch datasets library. We can load in training data, transform it into Tensor datatypes, then create dataloaders to batch our data into a desired size.
import torch
from torchvision import datasets
from torchvision import transforms

# Tensor transform
transform = transforms.ToTensor()

# SVHN training datasets
svhn_train = datasets.SVHN(root='data/', split='train', download=True, transform=transform)

batch_size = 128
num_workers = 0

# build DataLoaders for SVHN dataset
train_loader = torch.utils.data.DataLoader(dataset=svhn_train,
                                           batch_size=batch_size,
                                           shuffle=True,
                                           num_workers=num_workers)
Using downloaded and verified file: data/train_32x32.mat
**Visualize the Data**

Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real, training images that we'll pass to the discriminator. Notice that each image has _one_ associated, numerical label.
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)  # dataiter.next() was removed in newer PyTorch versions

# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
plot_size = 20
for idx in np.arange(plot_size):
    # use integer division so the subplot position is an integer
    ax = fig.add_subplot(2, plot_size//2, idx+1, xticks=[], yticks=[])
    ax.imshow(np.transpose(images[idx], (1, 2, 0)))
    # print out the correct label for each image
    # .item() gets the value contained in a Tensor
    ax.set_title(str(labels[idx].item()))
**Pre-processing: scaling from -1 to 1**

We need to do a bit of pre-processing; we know that the output of our `tanh` activated generator will contain pixel values in a range from -1 to 1, and so, we need to rescale our training images to a range of -1 to 1. (Right now, they are in a range from 0-1.)
# current range
img = images[0]

print('Min: ', img.min())
print('Max: ', img.max())

# helper scale function
def scale(x, feature_range=(-1, 1)):
    '''Scale takes in an image x and returns that image, scaled
       with a feature_range of pixel values from -1 to 1.
       This function assumes that the input x is already scaled from 0-1.'''
    # assume x is scaled to (0, 1)
    # scale to feature_range and return scaled x
    range_min, range_max = feature_range
    x = x * (range_max - range_min) + range_min
    return x

# scaled range
scaled_img = scale(img)

print('Scaled min: ', scaled_img.min())
print('Scaled max: ', scaled_img.max())
Scaled min:  tensor(-0.4196)
Scaled max:  tensor(0.2627)
---

**Define the Model**

A GAN is comprised of two adversarial networks, a discriminator and a generator.

**Discriminator**

Here you'll build the discriminator. This is a convolutional classifier like you've built before, only without any maxpooling layers.
* The inputs to the discriminator are 32x32x3 tensor images
* You'll want a few convolutional, hidden layers
* Then a fully connected layer for the output; as before, we want a sigmoid output, but we'll add that in the loss function, [BCEWithLogitsLoss](https://pytorch.org/docs/stable/nn.html#bcewithlogitsloss), later

For the depths of the convolutional layers I suggest starting with 32 filters in the first layer, then double that depth as you add layers (to 64, 128, etc.). Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpooling layers.

You'll also want to use batch normalization with [nn.BatchNorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) on each layer **except** the first convolutional layer and final, linear output layer.

**Helper `conv` function**

In general, each layer should look something like convolution > batch norm > leaky ReLU, and so we'll define a function to put these layers together. This function will create a sequential series of a convolutional + an optional batch norm layer. We'll create these using PyTorch's [Sequential container](https://pytorch.org/docs/stable/nn.html#sequential), which takes in a list of layers and creates layers according to the order that they are passed in to the Sequential constructor.

Note: It is also suggested that you use a **kernel_size of 4** and a **stride of 2** for strided convolutions.
import torch.nn as nn
import torch.nn.functional as F

# helper conv function
def conv(in_channels, out_channels, kernel_size, stride=2, padding=1, batch_norm=True):
    """Creates a convolutional layer, with optional batch normalization."""
    layers = []
    conv_layer = nn.Conv2d(in_channels, out_channels,
                           kernel_size, stride, padding, bias=False)
    # append conv layer
    layers.append(conv_layer)

    if batch_norm:
        # append batchnorm layer
        layers.append(nn.BatchNorm2d(out_channels))

    # using Sequential container
    return nn.Sequential(*layers)


class Discriminator(nn.Module):

    def __init__(self, conv_dim=32):
        super(Discriminator, self).__init__()
        # complete init function
        self.conv1 = conv(in_channels=3, out_channels=conv_dim, kernel_size=4, stride=2, batch_norm=False)
        self.conv2 = conv(in_channels=conv_dim, out_channels=conv_dim*2, kernel_size=4, stride=2)
        self.conv3 = conv(in_channels=conv_dim*2, out_channels=conv_dim*4, kernel_size=4, stride=2)  # 128*4*4
        self.fc = nn.Linear(in_features=128*4*4, out_features=1)

    def forward(self, x):
        # complete forward function
        x = self.conv1(x)
        x = F.leaky_relu(x, negative_slope=0.2)
        x = self.conv2(x)
        x = F.leaky_relu(x, negative_slope=0.2)
        x = self.conv3(x)
        x = F.leaky_relu(x, negative_slope=0.2)
        x = x.view(-1, 128*4*4)
        x = self.fc(x)
        return x
**Generator**

Next, you'll build the generator network. The input will be our noise vector `z`, as before. And, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.

What's new here is we'll use transpose convolutional layers to create our new images.
* The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x512.
* Then, we use batch normalization and a leaky ReLU activation.
* Next is a series of [transpose convolutional layers](https://pytorch.org/docs/stable/nn.html#convtranspose2d), where you typically halve the depth and double the width and height of the previous layer.
* And, we'll apply batch normalization and ReLU to all but the last of these hidden layers, where we will just apply a `tanh` activation.

**Helper `deconv` function**

For each of these layers, the general scheme is transpose convolution > batch norm > ReLU, and so we'll define a function to put these layers together. This function will create a sequential series of a transpose convolutional + an optional batch norm layer. We'll create these using PyTorch's Sequential container, which takes in a list of layers and creates layers according to the order that they are passed in to the Sequential constructor.

Note: It is also suggested that you use a **kernel_size of 4** and a **stride of 2** for transpose convolutions.
# helper deconv function
def deconv(in_channels, out_channels, kernel_size, stride=2, padding=1, batch_norm=True):
    """Creates a transposed-convolutional layer, with optional batch normalization."""
    # create a sequence of transpose + optional batch norm layers
    layers = []
    deconv_layer = nn.ConvTranspose2d(in_channels, out_channels,
                                      kernel_size, stride, padding, bias=False)
    layers.append(deconv_layer)

    if batch_norm:
        layers.append(nn.BatchNorm2d(out_channels))

    return nn.Sequential(*layers)


class Generator(nn.Module):

    def __init__(self, z_size, conv_dim=32):
        super(Generator, self).__init__()
        # complete init function
        self.fc = nn.Linear(z_size, 4*4*512)
        self.deconv1 = deconv(conv_dim*16, conv_dim*8, kernel_size=4)
        self.deconv2 = deconv(conv_dim*8, conv_dim*4, kernel_size=4)
        self.deconv3 = deconv(conv_dim*4, 3, kernel_size=4, batch_norm=False)

    def forward(self, x):
        # complete forward function
        x = self.fc(x)
        x = x.view(-1, 512, 4, 4)
        x = self.deconv1(x)
        x = F.relu(x)
        x = self.deconv2(x)
        x = F.relu(x)
        x = self.deconv3(x)
        x = torch.tanh(x)
        return x
**Build complete network**

Define your models' hyperparameters and instantiate the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments.
# define hyperparams
conv_dim = 32
z_size = 100

# define discriminator and generator
D = Discriminator(conv_dim)
G = Generator(z_size=z_size, conv_dim=conv_dim)

print(D)
print()
print(G)
Discriminator(
  (conv1): Sequential(
    (0): Conv2d(3, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
  )
  (conv2): Sequential(
    (0): Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (conv3): Sequential(
    (0): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (fc): Linear(in_features=2048, out_features=1, bias=True)
)

Generator(
  (fc): Linear(in_features=100, out_features=8192, bias=True)
  (deconv1): Sequential(
    (0): ConvTranspose2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (deconv2): Sequential(
    (0): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
    (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (deconv3): Sequential(
    (0): ConvTranspose2d(128, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
  )
)
**Training on GPU**

Check if you can train on GPU. If you can, set this as a variable and move your models to GPU.

> Later, we'll also move any inputs our models and loss functions see (real_images, z, and ground truth labels) to GPU as well.
train_on_gpu = torch.cuda.is_available()

if train_on_gpu:
    # move models to GPU
    G.cuda()
    D.cuda()
    print('GPU available for training. Models moved to GPU')
else:
    print('Training on CPU.')
GPU available for training. Models moved to GPU
---

**Discriminator and Generator Losses**

Now we need to calculate the losses. And this will be exactly the same as before.

**Discriminator Losses**

> * For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_real_loss + d_fake_loss`.
> * Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.

The losses will be binary cross entropy loss with logits, which we can get with [BCEWithLogitsLoss](https://pytorch.org/docs/stable/nn.html#bcewithlogitsloss). This combines a `sigmoid` activation function **and** binary cross entropy loss in one function.

For the real images, we want `D(real_images) = 1`. That is, we want the discriminator to classify the real images with a label = 1, indicating that these are real. The discriminator loss for the fake data is similar. We want `D(fake_images) = 0`, where the fake images are the _generator output_, `fake_images = G(z)`.

**Generator Loss**

The generator loss will look similar only with flipped labels. The generator's goal is to get `D(fake_images) = 1`. In this case, the labels are **flipped** to represent that the generator is trying to fool the discriminator into thinking that the images it generates (fakes) are real!
def real_loss(D_out, smooth=False):
    batch_size = D_out.size(0)
    # label smoothing
    if smooth:
        # smooth, real labels = 0.9
        labels = torch.ones(batch_size)*0.9
    else:
        labels = torch.ones(batch_size)  # real labels = 1
    # move labels to GPU if available
    if train_on_gpu:
        labels = labels.cuda()
    # binary cross entropy with logits loss
    criterion = nn.BCEWithLogitsLoss()
    # calculate loss
    loss = criterion(D_out.squeeze(), labels)
    return loss

def fake_loss(D_out):
    batch_size = D_out.size(0)
    labels = torch.zeros(batch_size)  # fake labels = 0
    if train_on_gpu:
        labels = labels.cuda()
    criterion = nn.BCEWithLogitsLoss()
    # calculate loss
    loss = criterion(D_out.squeeze(), labels)
    return loss
**Optimizers**

Not much new here, but notice how I am using a small learning rate and custom parameters for the Adam optimizers. This is based on some research into DCGAN model convergence.

**Hyperparameters**

GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read [the DCGAN paper](https://arxiv.org/pdf/1511.06434.pdf) to see what worked for them.
import torch.optim as optim

# params
lr = 0.0002
beta1 = 0.5
beta2 = 0.999

# Create optimizers for the discriminator and generator
d_optimizer = optim.Adam(D.parameters(), lr, [beta1, beta2])
g_optimizer = optim.Adam(G.parameters(), lr, [beta1, beta2])
---

**Training**

Training will involve alternating between training the discriminator and the generator. We'll use our functions `real_loss` and `fake_loss` to help us calculate the discriminator losses in all of the following cases.

**Discriminator training**
1. Compute the discriminator loss on real, training images
2. Generate fake images
3. Compute the discriminator loss on fake, generated images
4. Add up real and fake loss
5. Perform backpropagation + an optimization step to update the discriminator's weights

**Generator training**
1. Generate fake images
2. Compute the discriminator loss on fake images, using **flipped** labels!
3. Perform backpropagation + an optimization step to update the generator's weights

**Saving Samples**

As we train, we'll also print out some loss statistics and save some generated "fake" samples.

**Evaluation mode**

Notice that, when we call our generator to create the samples to display, we set our model to evaluation mode: `G.eval()`. That's so the batch normalization layers will use the population statistics rather than the batch statistics (as they do during training), *and* so dropout layers will operate in eval() mode; not turning off any nodes for generating samples.
import pickle as pkl

# training hyperparams
num_epochs = 30

# keep track of loss and generated, "fake" samples
samples = []
losses = []

print_every = 300

# Get some fixed data for sampling. These are images that are held
# constant throughout training, and allow us to inspect the model's performance
sample_size = 16
fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
fixed_z = torch.from_numpy(fixed_z).float()

# train the network
for epoch in range(num_epochs):

    for batch_i, (real_images, _) in enumerate(train_loader):

        batch_size = real_images.size(0)

        # important rescaling step
        real_images = scale(real_images)

        # ============================================
        #            TRAIN THE DISCRIMINATOR
        # ============================================

        d_optimizer.zero_grad()

        # 1. Train with real images

        # Compute the discriminator losses on real images
        if train_on_gpu:
            real_images = real_images.cuda()

        D_real = D(real_images)
        d_real_loss = real_loss(D_real)

        # 2. Train with fake images

        # Generate fake images
        z = np.random.uniform(-1, 1, size=(batch_size, z_size))
        z = torch.from_numpy(z).float()
        # move z to GPU, if available
        if train_on_gpu:
            z = z.cuda()
        fake_images = G(z)

        # Compute the discriminator losses on fake images
        D_fake = D(fake_images)
        d_fake_loss = fake_loss(D_fake)

        # add up loss and perform backprop
        d_loss = d_real_loss + d_fake_loss
        d_loss.backward()
        d_optimizer.step()

        # =========================================
        #            TRAIN THE GENERATOR
        # =========================================
        g_optimizer.zero_grad()

        # 1. Train with fake images and flipped labels

        # Generate fake images
        z = np.random.uniform(-1, 1, size=(batch_size, z_size))
        z = torch.from_numpy(z).float()
        if train_on_gpu:
            z = z.cuda()
        fake_images = G(z)

        # Compute the discriminator losses on fake images
        # using flipped labels!
        D_fake = D(fake_images)
        g_loss = real_loss(D_fake)  # use real loss to flip labels

        # perform backprop
        g_loss.backward()
        g_optimizer.step()

        # Print some loss stats
        if batch_i % print_every == 0:
            # append discriminator loss and generator loss
            losses.append((d_loss.item(), g_loss.item()))
            # print discriminator and generator loss
            print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(
                    epoch+1, num_epochs, d_loss.item(), g_loss.item()))

    ## AFTER EACH EPOCH ##
    # generate and save sample, fake images
    G.eval()  # for generating samples
    if train_on_gpu:
        fixed_z = fixed_z.cuda()
    samples_z = G(fixed_z)
    samples.append(samples_z)
    G.train()  # back to training mode

# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
    pkl.dump(samples, f)
Epoch [    1/   30] | d_loss: 1.4085 | g_loss: 0.9993
Epoch [    1/   30] | d_loss: 0.6737 | g_loss: 1.9478
Epoch [    2/   30] | d_loss: 0.7026 | g_loss: 2.6182
Epoch [    2/   30] | d_loss: 0.4292 | g_loss: 2.3596
Epoch [    3/   30] | d_loss: 0.2889 | g_loss: 2.7350
Epoch [    3/   30] | d_loss: 0.1361 | g_loss: 4.5357
Epoch [    4/   30] | d_loss: 0.2069 | g_loss: 4.2325
Epoch [    4/   30] | d_loss: 0.2646 | g_loss: 10.0169
Epoch [    5/   30] | d_loss: 0.1014 | g_loss: 5.4149
Epoch [    5/   30] | d_loss: 0.0929 | g_loss: 4.8199
Epoch [    6/   30] | d_loss: 0.0590 | g_loss: 6.2550
Epoch [    6/   30] | d_loss: 0.3863 | g_loss: 2.4094
Epoch [    7/   30] | d_loss: 0.1023 | g_loss: 4.1469
Epoch [    7/   30] | d_loss: 0.0450 | g_loss: 5.3767
Epoch [    8/   30] | d_loss: 0.1133 | g_loss: 3.1710
Epoch [    8/   30] | d_loss: 0.3909 | g_loss: 2.8371
Epoch [    9/   30] | d_loss: 0.0228 | g_loss: 7.7792
Epoch [    9/   30] | d_loss: 0.5372 | g_loss: 3.8941
Epoch [   10/   30] | d_loss: 0.0888 | g_loss: 4.3109
Epoch [   10/   30] | d_loss: 0.4739 | g_loss: 5.8511
Epoch [   11/   30] | d_loss: 0.1066 | g_loss: 4.5965
Epoch [   11/   30] | d_loss: 0.0896 | g_loss: 8.8515
Epoch [   12/   30] | d_loss: 0.0152 | g_loss: 6.1287
Epoch [   12/   30] | d_loss: 0.0917 | g_loss: 3.1805
Epoch [   13/   30] | d_loss: 0.5349 | g_loss: 6.7379
Epoch [   13/   30] | d_loss: 0.0511 | g_loss: 7.0306
Epoch [   14/   30] | d_loss: 0.0228 | g_loss: 4.7947
Epoch [   14/   30] | d_loss: 0.0280 | g_loss: 6.1609
Epoch [   15/   30] | d_loss: 0.0406 | g_loss: 7.4366
Epoch [   15/   30] | d_loss: 0.0334 | g_loss: 5.7624
Epoch [   16/   30] | d_loss: 0.0413 | g_loss: 6.2405
Epoch [   16/   30] | d_loss: 0.2505 | g_loss: 3.0691
Epoch [   17/   30] | d_loss: 0.1006 | g_loss: 6.3208
Epoch [   17/   30] | d_loss: 0.1634 | g_loss: 3.8810
Epoch [   18/   30] | d_loss: 0.0337 | g_loss: 5.8343
Epoch [   18/   30] | d_loss: 0.2797 | g_loss: 6.6325
Epoch [   19/   30] | d_loss: 0.3332 | g_loss: 4.8526
Epoch [   19/   30] | d_loss: 0.0802 | g_loss: 3.8929
Epoch [   20/   30] | d_loss: 0.3042 | g_loss: 3.1036
Epoch [   20/   30] | d_loss: 0.1205 | g_loss: 1.6828
Epoch [   21/   30] | d_loss: 0.0568 | g_loss: 3.0710
Epoch [   21/   30] | d_loss: 0.1022 | g_loss: 5.4802
Epoch [   22/   30] | d_loss: 0.2595 | g_loss: 8.0963
Epoch [   22/   30] | d_loss: 0.1082 | g_loss: 2.2444
Epoch [   23/   30] | d_loss: 0.0162 | g_loss: 8.6265
Epoch [   23/   30] | d_loss: 0.1178 | g_loss: 5.9417
Epoch [   24/   30] | d_loss: 0.1680 | g_loss: 4.4331
Epoch [   24/   30] | d_loss: 0.5456 | g_loss: 3.4422
Epoch [   25/   30] | d_loss: 0.2071 | g_loss: 2.1712
Epoch [   25/   30] | d_loss: 0.0729 | g_loss: 5.1144
Epoch [   26/   30] | d_loss: 0.0537 | g_loss: 3.7469
Epoch [   26/   30] | d_loss: 0.3997 | g_loss: 6.2408
Epoch [   27/   30] | d_loss: 0.0555 | g_loss: 2.4301
Epoch [   27/   30] | d_loss: 0.1863 | g_loss: 3.7632
Epoch [   28/   30] | d_loss: 0.2211 | g_loss: 3.6232
Epoch [   28/   30] | d_loss: 0.1328 | g_loss: 4.7159
Epoch [   29/   30] | d_loss: 0.1348 | g_loss: 3.1974
Epoch [   29/   30] | d_loss: 0.2392 | g_loss: 2.6332
Epoch [   30/   30] | d_loss: 0.0215 | g_loss: 6.6233
Epoch [   30/   30] | d_loss: 0.2591 | g_loss: 3.3791
**Training loss**

Here we'll plot the training losses for the generator and discriminator, recorded periodically (every `print_every` batches) during training.
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
**Generator samples from training**

Here we can view samples of images from the generator. We'll look at the images we saved during training.
# helper function for viewing a list of passed in sample images
def view_samples(epoch, samples):
    fig, axes = plt.subplots(figsize=(16,4), nrows=2, ncols=8, sharey=True, sharex=True)
    for ax, img in zip(axes.flatten(), samples[epoch]):
        img = img.detach().cpu().numpy()
        img = np.transpose(img, (1, 2, 0))
        img = ((img + 1)*255 / (2)).astype(np.uint8)  # rescale to pixel range (0-255)
        ax.xaxis.set_visible(False)
        ax.yaxis.set_visible(False)
        im = ax.imshow(img.reshape((32,32,3)))

_ = view_samples(-1, samples)
Get the names of each condition for later use.
pd.Categorical(luminescence_raw_df.condition)
names = luminescence_raw_df.condition.unique()
for name in names:
    print(name)

# get list of promoters
pd.Categorical(luminescence_raw_df.Promoter)
prom_names = luminescence_raw_df.Promoter.unique()
for name in prom_names:
    print(name)
UBQ10
NIR1
NOS
STAP4
NRP
Test for normality.
# returns test statistic, p-value
# note: the outer loop variable is unused, so the per-condition results
# are printed once per promoter
for name1 in prom_names:
    for name in names:
        print('{}: {}'.format(name, stats.shapiro(
            luminescence_raw_df['nluc/fluc'][luminescence_raw_df.condition == name])))
nitrate_free: (0.7033216953277588, 0.0002697518502827734)
100mM nitrate_2hrs_morning: (0.7973607182502747, 0.00463036959990859)
100mM nitrate_overnight: (0.8101227879524231, 0.004972793627530336)
nitrate_free: (0.7033216953277588, 0.0002697518502827734)
100mM nitrate_2hrs_morning: (0.7973607182502747, 0.00463036959990859)
100mM nitrate_overnight: (0.8101227879524231, 0.004972793627530336)
nitrate_free: (0.7033216953277588, 0.0002697518502827734)
100mM nitrate_2hrs_morning: (0.7973607182502747, 0.00463036959990859)
100mM nitrate_overnight: (0.8101227879524231, 0.004972793627530336)
nitrate_free: (0.7033216953277588, 0.0002697518502827734)
100mM nitrate_2hrs_morning: (0.7973607182502747, 0.00463036959990859)
100mM nitrate_overnight: (0.8101227879524231, 0.004972793627530336)
nitrate_free: (0.7033216953277588, 0.0002697518502827734)
100mM nitrate_2hrs_morning: (0.7973607182502747, 0.00463036959990859)
100mM nitrate_overnight: (0.8101227879524231, 0.004972793627530336)
Not normal: the Shapiro-Wilk p-values are all well below 0.05, so the data are not normally distributed.
# test variance
stats.levene(luminescence_raw_df['nluc/fluc'][luminescence_raw_df.condition == names[0]],
             luminescence_raw_df['nluc/fluc'][luminescence_raw_df.condition == names[1]],
             luminescence_raw_df['nluc/fluc'][luminescence_raw_df.condition == names[2]])

test = luminescence_raw_df.groupby('Promoter')['nluc/fluc'].apply
test
Loading libraries and data
!pip install simpletransformers==0.61.13
!pip uninstall transformers -y
!pip install transformers==4.10.0
!git clone https://github.com/GoldenRMT/WikiSearch.git
!pip install googledrivedownloader

import nltk
nltk.download('stopwords')
nltk.download('punkt')
from nltk.corpus import stopwords
nltk.download('wordnet')
stopwords = stopwords.words("english")
lemmatizer = nltk.stem.WordNetLemmatizer()

from nltk.tokenize import RegexpTokenizer, word_tokenize, sent_tokenize
main_tokenizer = RegexpTokenizer(r'\w+',)
sec_tokenizer = RegexpTokenizer(r'\S+')

from sklearn.externals import joblib
import numpy as np
from google.colab import output
import urllib
import difflib
import WikiSearch.wikipedia.wikipedia as wikipedia
import pandas as pd
from bs4 import BeautifulSoup

from google_drive_downloader import GoogleDriveDownloader as gdd
gdd.download_file_from_google_drive(file_id='13Nuwm7BV-4RXI9JqjPTDE9rcdupkKqlF',
                                    dest_path='/Data/AIIJC/aiijc_1578_goodFromTrain_pretrained.model')
Downloading 13Nuwm7BV-4RXI9JqjPTDE9rcdupkKqlF into /Data/AIIJC/aiijc_1578_goodFromTrain_pretrained.model... Done.
Text preprocessing functions
def normal_form(word):
    # Get the normal form of a word
    word = word.lower()
    return word

def clean_html(html):
    # Strip HTML markup
    soup = BeautifulSoup(BeautifulSoup(html, "lxml").text)
    return str(soup.body)

def get_good_tokens(text):
    # Extract keyword tokens (non-stopwords)
    good_tokens = []
    for tokens in tokenizer(text)[1]:
        for token in tokens:
            token = normal_form(token)
            if token not in stopwords:
                good_tokens.append(token)
    return good_tokens

def tokenizer(text):
    # Tokenize the text into raw and cleaned tokens
    raw_tokens = sec_tokenizer.tokenize(text)
    clean_tokens = main_tokenizer.tokenize_sents(raw_tokens)
    nClean_tokens = []
    for i in range(len(clean_tokens)):
        nClean_tokens.append([])
        for m in range(len(clean_tokens[i])):
            if normal_form(clean_tokens[i][m]) != 's':
                nClean_tokens[i].append(normal_form(clean_tokens[i][m]))
    return (raw_tokens, nClean_tokens)

def similarity(s1, s2):
    # Compute the similarity ratio between two strings
    normalized1 = s1.lower()
    normalized2 = s2.lower()
    matcher = difflib.SequenceMatcher(None, normalized1, normalized2)
    return matcher.ratio()

def part_extractor(data, question, step, part_length):
    # Extract the most relevant fragment (text, question, step, fragment length)
    good_tokens = get_good_tokens(question)
    tokens = tokenizer(data)

    for i in range(step - (len(tokens[0]) % step)):  # pad the token list up to a multiple of the step
        tokens[0].append('')
        tokens[1].append('')

    match_counter = 0   # counter of exact token matches
    best_part = ''      # best part found so far
    max_match_qty = 0   # maximum number of matched tokens

    main_clrTokens = tokens[1]
    main_tokens = tokens[0]
    for i in range(0, len(tokens[0])-1, part_length):  # find the most relevant part of the text
        tokens = main_tokens[i:i+part_length-1]
        clrTokens = main_clrTokens[i:i+part_length-1]

        for good_token in good_tokens:
            if in_tokens(good_token, clrTokens):
                match_counter += 1

        if match_counter > max_match_qty:
            max_match_qty = match_counter
            best_part = tokens

        match_counter = 0

    fin = ''  # reassemble the text
    for i in best_part:
        fin += (i + ' ')

    return fin

def in_tokens(token, text):
    for i in text:
        for m in i:
            if token == m:
                return True
    return False
Loading the model and the question-answering function
model = joblib.load('/Data/AIIJC/aiijc_1578_goodFromTrain_pretrained.model')
model.args.max_seq_length = 512
model.args.silent = True

def answering(question):
    text = question
    good_tokens = get_good_tokens(text)

    try:
        urls = wikipedia.search(text, results=2)
    except:
        link_1 = '-'
        link_2 = '-'
    try:
        link_1 = urls[0]
    except:
        link_1 = '-'
    try:
        link_2 = urls[1]
    except:
        link_2 = '-'

    # Download the Wikipedia articles
    try:
        link_1 = link_1.replace('https://en.wikipedia.org/wiki/', '')  # strip the URL prefix
        link_1 = urllib.parse.unquote(link_1)  # decode percent-encoded characters
        data_1 = wikipedia.page(link_1, auto_suggest=False).content  # fetch the wiki page
        data_1 = data_1.replace('\n', ' ')
    except:
        pass
    try:
        link_2 = link_2.replace('https://en.wikipedia.org/wiki/', '')  # strip the URL prefix
        link_2 = urllib.parse.unquote(link_2)  # decode percent-encoded characters
        data_2 = wikipedia.page(link_2, auto_suggest=False).content  # fetch the wiki page
        data_2 = data_2.replace('\n', ' ')
    except:
        pass

    try:
        # Extract a relevant fragment (step 16, length 64) from the most relevant article
        context = part_extractor(data_1, question, 16, 64)
    except:
        pass
    try:
        # Extract a relevant fragment (step 16, length 32) from the second most relevant article
        context += ' ' + part_extractor(data_2, question, 16, 32)
    except:
        pass

    try:
        # Predict the answer
        predict = model.predict([{'context': context, 'qas': [{'id': 0, 'question': question}]}])[0]
    except:
        predict = [{'answer': ['']}]
        predict[0]['answer'][0] = 'empty'

    # If no answer was found, retry with progressively larger fragments
    if predict[0]['answer'][0] == 'empty':
        try:
            context = part_extractor(data_1, question, 16, 64)
            predict = model.predict([{'context': context, 'qas': [{'id': 0, 'question': question}]}])[0]
        except:
            pass
    if predict[0]['answer'][0] == 'empty':
        try:
            context = part_extractor(data_2, question, 16, 64)
            predict = model.predict([{'context': context, 'qas': [{'id': 0, 'question': question}]}])[0]
        except:
            pass
    if predict[0]['answer'][0] == 'empty':
        try:
            context = part_extractor(data_1, question, 16, 128)
            predict = model.predict([{'context': context, 'qas': [{'id': 0, 'question': question}]}])[0]
        except:
            pass
    if predict[0]['answer'][0] == 'empty':
        try:
            context = part_extractor(data_2, question, 16, 128)
            predict = model.predict([{'context': context, 'qas': [{'id': 0, 'question': question}]}])[0]
        except:
            pass
    if predict[0]['answer'][0] == 'empty':
        try:
            context = part_extractor(data_1, question, 16, 256)
            predict = model.predict([{'context': context, 'qas': [{'id': 0, 'question': question}]}])[0]
        except:
            pass
    if predict[0]['answer'][0] == 'empty':
        try:
            context = part_extractor(data_2, question, 16, 256)
            predict = model.predict([{'context': context, 'qas': [{'id': 0, 'question': question}]}])[0]
        except:
            pass

    return predict[0]['answer'][0]
ะŸั€ะพะฒะตั€ะบะฐ ั€ะฐะฑะพั‚ะพัะฟะพัะพะฑะฝะพัั‚ะธ ะธ ะฒั€ะตะผะตะฝะธ ั€ะฐะฑะพั‚ั‹ ั„ัƒะฝะบั†ะธะธ
import time

time_1 = time.time()
print(answering("What is the name of Trump first daughter?"))
print('Query processing time: ' + str(time.time() - time_1))
Ivana Marie "Ivanka" Trump
Query processing time: 1.6687853336334229
**CNTK 101: Logistic Regression and ML Primer**

This tutorial is targeted to individuals who are new to CNTK and to machine learning. In this tutorial, you will train a simple yet powerful machine learning model that is widely used in industry for a variety of applications. The model trained below scales to massive data sets in the most expeditious manner by harnessing computational scalability leveraging the computational resources you may have (one or more CPU cores, one or more GPUs, a cluster of CPUs or a cluster of GPUs), transparently via the CNTK library.

The following notebook uses Python APIs. If you are looking for this example in BrainScript, please look [here](https://github.com/Microsoft/CNTK/tree/release/2.6/Tutorials/HelloWorld-LogisticRegression).

**Introduction**

**Problem**: A cancer hospital has provided data and wants us to determine if a patient has a fatal [malignant](https://en.wikipedia.org/wiki/Malignancy) cancer vs. a benign growth. This is known as a classification problem. To help classify each patient, we are given their age and the size of the tumor. Intuitively, one can imagine that younger patients and/or patients with small tumors are less likely to have a malignant cancer. The data set simulates this application: each observation is a patient represented as a dot (in the plot below), where red indicates malignant and blue indicates benign.

Note: This is a toy example for learning; in real life many features from different tests/examination sources and the expertise of doctors would play into the diagnosis/treatment decision for a patient.
# Figure 1
from IPython.display import Image  # import needed to display images in the notebook

Image(url="https://www.cntk.ai/jup/cancer_data_plot.jpg", width=400, height=400)
**Goal**: Our goal is to learn a classifier that can automatically label any patient into either the benign or malignant categories given two features (age and tumor size). In this tutorial, we will create a linear classifier, a fundamental building-block in deep networks.
# Figure 2
Image(url="https://www.cntk.ai/jup/cancer_classify_plot.jpg", width=400, height=400)
In the figure above, the green line represents the model learned from the data and separates the blue dots from the red dots. In this tutorial, we will walk you through the steps to learn the green line. Note: this classifier does make mistakes, where a couple of blue dots are on the wrong side of the green line. However, there are ways to fix this and we will look into some of the techniques in later tutorials.

**Approach**: Any learning algorithm typically has five stages: data reading, data preprocessing, creating a model, learning the model parameters, and evaluating the model (a.k.a. testing/prediction).

>1. Data reading: We generate simulated data sets with each sample having two features (plotted below) indicative of the age and tumor size.
>2. Data preprocessing: Often, the individual features such as size or age need to be scaled. Typically, one would scale the data between 0 and 1. To keep things simple, we are not doing any scaling in this tutorial (for details look here: [feature scaling](https://en.wikipedia.org/wiki/Feature_scaling)).
>3. Model creation: We introduce a basic linear model in this tutorial.
>4. Learning the model: This is also known as training. While fitting a linear model can be done in a variety of ways ([linear regression](https://en.wikipedia.org/wiki/Linear_regression)), in CNTK we use Stochastic Gradient Descent, a.k.a. [SGD](https://en.wikipedia.org/wiki/Stochastic_gradient_descent).
>5. Evaluation: This is also known as testing, where one evaluates the model on data sets with known labels (a.k.a. ground-truth) that were never used for training. This allows us to assess how a model would perform in real-world (previously unseen) observations.

**Logistic Regression**

[Logistic regression](https://en.wikipedia.org/wiki/Logistic_regression) is a fundamental machine learning technique that uses a linear weighted combination of features and generates the probability of predicting different classes. In our case, the classifier will generate a probability in [0,1] which can then be compared to a threshold (such as 0.5) to produce a binary label (0 or 1). However, the method shown can easily be extended to multiple classes.
# Figure 3
Image(url="https://www.cntk.ai/jup/logistic_neuron.jpg", width=300, height=200)
In the above figure, contributions from different input features are linearly weighted and aggregated. The resulting sum is mapped to a (0, 1) range via a [sigmoid](https://en.wikipedia.org/wiki/Sigmoid_function) function. For classifiers with more than two output labels, one can use a [softmax](https://en.wikipedia.org/wiki/Softmax_function) function.
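As a minimal NumPy sketch of this computation (the weights, bias, and feature values below are made up for illustration, not learned):

```python
import numpy as np

def sigmoid(z):
    # squashes real-valued evidence into the (0, 1) range
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.8, 2.0])   # hypothetical features: age (scaled), tumor size
w = np.array([0.5, 1.2])   # hypothetical weights, one per feature
b = -2.0                   # hypothetical bias
z = np.dot(w, x) + b       # linear weighted sum of features (the evidence)
print(sigmoid(z))          # probability of one class, here about 0.69
```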
# Import the relevant components
from __future__ import print_function
import numpy as np
import sys
import os

import cntk as C
import cntk.tests.test_utils
cntk.tests.test_utils.set_device_from_pytest_env()  # (only needed for our build system)
C.cntk_py.set_fixed_random_seed(1)  # fix the random seed so that LR examples are repeatable
_____no_output_____
MIT
Tutorials/CNTK_101_LogisticRegression.ipynb
shyamalschandra/CNTK
Data Generation

Let us generate some synthetic data emulating the cancer example using the `numpy` library. We have two input features (represented in two dimensions) and two output classes (benign/blue or malignant/red). In our example, each observation (a single 2-tuple of features: age and size) in the training data has a label (blue or red). Because we have two output labels, we call this a binary classification task.
# Define the network
input_dim = 2
num_output_classes = 2
_____no_output_____
MIT
Tutorials/CNTK_101_LogisticRegression.ipynb
shyamalschandra/CNTK
Input and Labels

In this tutorial we are generating synthetic data using the `numpy` library. In real-world problems, one would use a [reader](https://docs.microsoft.com/en-us/cognitive-toolkit/brainscript-and-python---understanding-and-extending-readers) that reads feature values (`features`: *age* and *tumor size*) corresponding to each observation (patient). The simulated *age* variable is scaled down to have a range similar to that of the other variable. This is a key aspect of data preprocessing that we will learn more about in later tutorials.

Note: in general, observations and labels can reside in higher-dimensional spaces (when more features or classifications are available) and are then represented as [tensors](https://en.wikipedia.org/wiki/Tensor) in CNTK. More advanced tutorials introduce the handling of high-dimensional data.
# Ensure that we always get the same results
np.random.seed(0)

# Helper function to generate a random data sample
def generate_random_data_sample(sample_size, feature_dim, num_classes):
    # Create synthetic data using NumPy.
    Y = np.random.randint(size=(sample_size, 1), low=0, high=num_classes)

    # Make sure that the data is separable
    X = (np.random.randn(sample_size, feature_dim) + 3) * (Y + 1)

    # Specify the data type to match the input variable used later in the tutorial
    # (default type is double)
    X = X.astype(np.float32)

    # convert class 0 into the vector "1 0 0",
    # class 1 into the vector "0 1 0", ...
    class_ind = [Y == class_number for class_number in range(num_classes)]
    Y = np.asarray(np.hstack(class_ind), dtype=np.float32)
    return X, Y

# Create the input variables denoting the features and the label data. Note: the input
# does not need additional info on the number of observations (samples) since CNTK creates only
# the network topology first
mysamplesize = 32
features, labels = generate_random_data_sample(mysamplesize, input_dim, num_output_classes)
_____no_output_____
MIT
Tutorials/CNTK_101_LogisticRegression.ipynb
shyamalschandra/CNTK
Let us visualize the input data.

**Note**: If the import of `matplotlib.pyplot` fails, please run `conda install matplotlib`, which will fix the `pyplot` version dependencies. If you are on a Python environment other than Anaconda, use `pip install matplotlib`.
# Plot the data
import matplotlib.pyplot as plt
%matplotlib inline

# let 0 represent malignant/red and 1 represent benign/blue
colors = ['r' if label == 0 else 'b' for label in labels[:,0]]

plt.scatter(features[:,0], features[:,1], c=colors)
plt.xlabel("Age (scaled)")
plt.ylabel("Tumor size (in cm)")
plt.show()
_____no_output_____
MIT
Tutorials/CNTK_101_LogisticRegression.ipynb
shyamalschandra/CNTK
Model Creation

A logistic regression (a.k.a. LR) network is a simple building block that has powered many ML applications in the past decade. LR is a simple linear model that takes as input a vector of numbers describing the properties of what we are classifying (also known as a feature vector, $\bf{x}$, the blue nodes in the figure below) and emits the *evidence* ($z$) (output of the green node, also known as the "activation"). Each feature in the input layer is connected to an output node by a corresponding weight $w$ (indicated by the black lines of varying thickness).
# Figure 4
Image(url="https://www.cntk.ai/jup/logistic_neuron2.jpg", width=300, height=200)
_____no_output_____
MIT
Tutorials/CNTK_101_LogisticRegression.ipynb
shyamalschandra/CNTK
The first step is to compute the evidence for an observation.

$$z = \sum_{i=1}^n w_i \times x_i + b = \textbf{w} \cdot \textbf{x} + b$$

where $\bf{w}$ is the weight vector of length $n$ and $b$ is known as the [bias](https://www.quora.com/What-does-the-bias-term-represent-in-logistic-regression) term. Note: we use **bold** notation to denote vectors.

The computed evidence is mapped to the (0, 1) range using a `sigmoid` (when the outcome can be one of two possible classes) or a `softmax` function (when the outcome can be one of more than two possible classes).

Network input and output:
- **input** variable (a key CNTK concept):
>An **input** variable is a user-code-facing container where user-provided code fills in different observations (a data point or sample of data points, equivalent to the (age, size) tuples in our example) as inputs to the model function during model learning (a.k.a. training) and model evaluation (a.k.a. testing). Thus, the shape of the `input` must match the shape of the data that will be provided. For example, if each data point were a grayscale image of height 10 pixels and width 5 pixels, the input feature would be a vector of 50 floating-point values representing the intensity of each of the 50 pixels, and could be written as `C.input_variable(10*5, np.float32)`. Similarly, in our example the dimensions are age and tumor size, thus `input_dim` = 2. More on data and their dimensions will appear in separate tutorials.
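As a quick numeric check of the evidence formula above, a NumPy sketch with made-up weights (the values are hypothetical, not learned):

```python
import numpy as np

w = np.array([0.5, -0.2])   # one weight per feature (hypothetical values)
x = np.array([65.0, 4.0])   # one observation: (age, tumor size)
b = 1.0                     # bias term

z = np.dot(w, x) + b        # evidence: w . x + b
print(z)                    # 0.5*65 - 0.2*4 + 1 = 32.7
```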
feature = C.input_variable(input_dim, np.float32)
_____no_output_____
MIT
Tutorials/CNTK_101_LogisticRegression.ipynb
shyamalschandra/CNTK
Network setup

The `linear_layer` function is a straightforward implementation of the equation above. We perform two operations:

0. multiply the weights ($\bf{w}$) with the features ($\bf{x}$) using the CNTK `times` operator,
1. add the bias term ($b$).

These CNTK operations are optimized for execution on the available hardware, and the implementation hides the complexity away from the user.
# Define a dictionary to store the model parameters
mydict = {}

def linear_layer(input_var, output_dim):
    input_dim = input_var.shape[0]

    weight_param = C.parameter(shape=(input_dim, output_dim))
    bias_param = C.parameter(shape=(output_dim))

    mydict['w'], mydict['b'] = weight_param, bias_param

    return C.times(input_var, weight_param) + bias_param
_____no_output_____
MIT
Tutorials/CNTK_101_LogisticRegression.ipynb
shyamalschandra/CNTK
`z` will be used to represent the output of the network.
output_dim = num_output_classes
z = linear_layer(feature, output_dim)
_____no_output_____
MIT
Tutorials/CNTK_101_LogisticRegression.ipynb
shyamalschandra/CNTK
Learning model parameters

Now that the network is set up, we would like to learn the parameters $\bf w$ and $b$ for our simple linear layer. To do so, we convert the computed evidence ($z$) into a set of predicted probabilities ($\textbf p$) using a `softmax` function.

$$ \textbf{p} = \mathrm{softmax}(z)$$

The `softmax` is an activation function that normalizes the accumulated evidence into a probability distribution over the classes (details of [softmax](https://www.cntk.ai/pythondocs/cntk.ops.html#cntk.ops.softmax)). Other choices of activation function can be found [here](https://cntk.ai/pythondocs/cntk.layers.layers.html#cntk.layers.layers.Activation).

Training

The output of the `softmax` is the probability of an observation belonging to each of the respective classes. For training the classifier, we need to determine what behavior the model needs to mimic. In other words, we want the generated probabilities to be as close as possible to the observed labels. We can accomplish this by minimizing the difference between our output and the ground-truth labels. This difference is calculated by the *cost* or *loss* function.

[Cross entropy](http://cntk.ai/pythondocs/cntk.ops.html#cntk.ops.cross_entropy_with_softmax) is a popular loss function. It is defined as:

$$ H(p) = - \sum_{j=1}^{| \textbf y |} y_j \log (p_j) $$

where $p$ is our predicted probability from the `softmax` function and $y$ is the ground-truth label, provided with the training data. In the two-class example, the `label` variable has two dimensions (equal to `num_output_classes`, or $| \textbf y |$). Generally speaking, the label variable will have $| \textbf y |$ elements with 0 everywhere except at the index of the true class of the data point, where it will be 1. Understanding the [details](http://colah.github.io/posts/2015-09-Visual-Information/) of the cross-entropy function is highly recommended.
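As a sanity check on the cross-entropy formula, a NumPy sketch with made-up numbers (separate from the CNTK op used below):

```python
import numpy as np

y = np.array([0.0, 1.0])   # one-hot ground truth: the true class is class 1
p = np.array([0.2, 0.8])   # predicted probabilities from a softmax

# H(p) = -sum_j y_j * log(p_j); only the true class contributes
H = -np.sum(y * np.log(p))
print(H)                   # -log(0.8) ~= 0.223
```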
label = C.input_variable(num_output_classes, np.float32)
loss = C.cross_entropy_with_softmax(z, label)
_____no_output_____
MIT
Tutorials/CNTK_101_LogisticRegression.ipynb
shyamalschandra/CNTK
Evaluation

In order to evaluate the classification, we can compute the [classification_error](https://www.cntk.ai/pythondocs/cntk.metrics.html#cntk.metrics.classification_error), which is 0 if our model was correct (it assigned the highest probability to the true label) and 1 otherwise.
eval_error = C.classification_error(z, label)
_____no_output_____
MIT
Tutorials/CNTK_101_LogisticRegression.ipynb
shyamalschandra/CNTK
Configure training

The trainer strives to minimize the `loss` function using an optimization technique. In this tutorial, we will use [Stochastic Gradient Descent](https://en.wikipedia.org/wiki/Stochastic_gradient_descent) (`sgd`), one of the most popular techniques. Typically, one starts with a random initialization of the model parameters (the weights and biases, in our case). For each observation, the `sgd` optimizer can calculate the `loss` or error between the predicted label and the corresponding ground-truth label, and apply [gradient descent](http://www.statisticsviews.com/details/feature/5722691/Getting-to-the-Bottom-of-Regression-with-Gradient-Descent.html) to generate a new set of model parameters after each observation.

The aforementioned process of updating all parameters after each observation is attractive because it does not require the entire data set (all observations) to be loaded in memory and also computes the gradient over fewer data points, thus allowing for training on large data sets. However, the updates generated using a single observation at a time can vary wildly between iterations. A middle ground is to load a small set of observations into the model and use the average of the `loss` or error from that set to update the model parameters. This subset is called a *minibatch*.

With minibatches, we often sample observations from the larger training dataset. We repeat the process of updating the model parameters using different combinations of training samples and, over a period of time, minimize the `loss` (and the error). When the incremental error rates no longer change significantly, or after a preset maximum number of minibatches has been processed, we claim that our model is trained.

One of the key parameters of [optimization](https://en.wikipedia.org/wiki/Category:Convex_optimization) is the `learning_rate`. For now, we can think of it as a scaling factor that modulates how much we change the parameters in any iteration. We will cover more details in later tutorials. With this information, we are ready to create our trainer.
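In equation form (the generic minibatch SGD update, stated here for reference and not specific to CNTK's implementation), each update scales the gradient of the average minibatch loss by the learning rate $\eta$:

$$ \textbf{w} \leftarrow \textbf{w} - \eta \, \nabla_{\textbf{w}} \frac{1}{m} \sum_{i=1}^{m} L_i(\textbf{w}) $$

where $m$ is the minibatch size and $L_i$ is the loss on the $i$-th observation in the minibatch.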
# Instantiate the trainer object to drive the model training
learning_rate = 0.5
lr_schedule = C.learning_rate_schedule(learning_rate, C.UnitType.minibatch)
learner = C.sgd(z.parameters, lr_schedule)
trainer = C.Trainer(z, (loss, eval_error), [learner])
_____no_output_____
MIT
Tutorials/CNTK_101_LogisticRegression.ipynb
shyamalschandra/CNTK
First, let us create some helper functions that will be needed to visualize different quantities associated with training. Note: these convenience functions are for understanding what goes on under the hood.
# Define a utility function to compute the moving average.
# A more efficient implementation is possible with the np.cumsum() function
def moving_average(a, w=10):
    if len(a) < w:
        return a[:]
    return [val if idx < w else sum(a[(idx-w):idx])/w for idx, val in enumerate(a)]

# Define a utility that prints the training progress
def print_training_progress(trainer, mb, frequency, verbose=1):
    training_loss, eval_error = "NA", "NA"

    if mb % frequency == 0:
        training_loss = trainer.previous_minibatch_loss_average
        eval_error = trainer.previous_minibatch_evaluation_average
        if verbose:
            print("Minibatch: {0}, Loss: {1:.4f}, Error: {2:.2f}".format(mb, training_loss, eval_error))

    return mb, training_loss, eval_error
_____no_output_____
MIT
Tutorials/CNTK_101_LogisticRegression.ipynb
shyamalschandra/CNTK
Run the trainer

We are now ready to train our logistic regression model. We need to decide what data to feed into the training engine.

In this example, each iteration of the optimizer will work on 25 samples (25 dots in the plot above), a.k.a. the `minibatch_size`. We would like to train on 20000 observations. If the number of samples in the data is only 10000, the trainer will make 2 passes through the data. This is represented by `num_minibatches_to_train`.

Note: in a real-world scenario, we would be given a certain amount of labeled data (in the context of this example, (age, size) observations and their labels (benign/malignant)). We would use a large number of observations for training, say 70%, and set aside the remainder for the evaluation of the trained model.

With these parameters we can proceed with training our simple feedforward network.
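Concretely, with the settings below the trainer processes

$$ \text{num\_minibatches\_to\_train} = \frac{20000}{25} = 800 $$

minibatches.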
# Initialize the parameters for the trainer
minibatch_size = 25
num_samples_to_train = 20000
num_minibatches_to_train = int(num_samples_to_train / minibatch_size)

from collections import defaultdict

# Run the trainer and perform model training
training_progress_output_freq = 50
plotdata = defaultdict(list)

for i in range(0, num_minibatches_to_train):
    features, labels = generate_random_data_sample(minibatch_size, input_dim, num_output_classes)

    # Assign the minibatch data to the input variables and train the model on the minibatch
    trainer.train_minibatch({feature : features, label : labels})
    batchsize, loss, error = print_training_progress(trainer, i, training_progress_output_freq, verbose=1)

    if not (loss == "NA" or error == "NA"):
        plotdata["batchsize"].append(batchsize)
        plotdata["loss"].append(loss)
        plotdata["error"].append(error)

# Compute the moving average loss to smooth out the noise in SGD
plotdata["avgloss"] = moving_average(plotdata["loss"])
plotdata["avgerror"] = moving_average(plotdata["error"])

# Plot the training loss and the training error
import matplotlib.pyplot as plt

plt.figure(1)
plt.subplot(211)
plt.plot(plotdata["batchsize"], plotdata["avgloss"], 'b--')
plt.xlabel('Minibatch number')
plt.ylabel('Loss')
plt.title('Minibatch run vs. Training loss')
plt.show()

plt.subplot(212)
plt.plot(plotdata["batchsize"], plotdata["avgerror"], 'r--')
plt.xlabel('Minibatch number')
plt.ylabel('Label Prediction Error')
plt.title('Minibatch run vs. Label Prediction Error')
plt.show()
_____no_output_____
MIT
Tutorials/CNTK_101_LogisticRegression.ipynb
shyamalschandra/CNTK
Run evaluation / Testing

Now that we have trained the network, let us evaluate it on data that hasn't been used for training. This is called **testing**. Let us create some new data and evaluate the average error and loss on this set. This is done using `trainer.test_minibatch`.

Note the error on this previously unseen data is comparable to the training error. This is a **key** check. Should the error be larger than the training error by a large margin, it would indicate that the trained model will not perform well on data it has not seen during training. This is known as [overfitting](https://en.wikipedia.org/wiki/Overfitting). There are several ways to address overfitting that are beyond the scope of this tutorial, but the Cognitive Toolkit provides the necessary components to address it.

Note: we are testing on a single minibatch for illustrative purposes. In practice, one runs several minibatches of test data and reports the average, as sketched below.

**Question** Why is this suggested? Try plotting the test error over several sets of generated data samples using the plotting functions used for training. Do you see a pattern?
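For reference, a minimal sketch of averaging the error over several generated test minibatches. It reuses the helpers and trainer defined earlier; `num_test_batches` is an arbitrary choice:

```python
import numpy as np

# Average the test error over several independently generated minibatches
test_minibatch_size = 25
num_test_batches = 10  # arbitrary number of test minibatches
errors = []
for _ in range(num_test_batches):
    f, l = generate_random_data_sample(test_minibatch_size, input_dim, num_output_classes)
    errors.append(trainer.test_minibatch({feature: f, label: l}))
print("Average test error: {:.3f}".format(np.mean(errors)))
```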
# Run the trained model on a newly generated dataset
test_minibatch_size = 25
features, labels = generate_random_data_sample(test_minibatch_size, input_dim, num_output_classes)

trainer.test_minibatch({feature : features, label : labels})
_____no_output_____
MIT
Tutorials/CNTK_101_LogisticRegression.ipynb
shyamalschandra/CNTK
Checking prediction / evaluation

For evaluation, we apply a softmax to the output of the network to obtain a probability distribution over the two classes: the probability of each observation being malignant or benign.
out = C.softmax(z)
result = out.eval({feature : features})
_____no_output_____
MIT
Tutorials/CNTK_101_LogisticRegression.ipynb
shyamalschandra/CNTK
Let us compare the ground-truth labels with the predictions. They should be in agreement.

**Question:**
- How many predictions were mislabeled? Can you change the code below to identify which observations were misclassified? (One possible sketch follows.)
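A small sketch of one possible answer, assuming the `labels` and `result` arrays from the surrounding cells; it flags the mismatching indices:

```python
import numpy as np

# Indices where the predicted class differs from the ground truth
true_classes = [np.argmax(lab) for lab in labels]
pred_classes = [np.argmax(x) for x in result]
misclassified = [i for i, (t, p) in enumerate(zip(true_classes, pred_classes)) if t != p]
print("Misclassified observations:", misclassified)
```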
print("Label :", [np.argmax(label) for label in labels]) print("Predicted:", [np.argmax(x) for x in result])
Label : [1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1] Predicted: [1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1]
MIT
Tutorials/CNTK_101_LogisticRegression.ipynb
shyamalschandra/CNTK
Visualization

It is desirable to visualize the results. In this example, the data can be conveniently plotted using two spatial dimensions for the input (patient age on the x-axis and tumor size on the y-axis) and a color dimension for the output (red for malignant and blue for benign). For data with higher dimensions, visualization can be challenging. There are advanced dimensionality-reduction techniques, such as [t-SNE](https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding), that allow for such visualizations.
# Model parameters
print(mydict['b'].value)

bias_vector = mydict['b'].value
weight_matrix = mydict['w'].value

# Plot the data
import matplotlib.pyplot as plt

# let 0 represent malignant/red, and 1 represent benign/blue
colors = ['r' if label == 0 else 'b' for label in labels[:,0]]
plt.scatter(features[:,0], features[:,1], c=colors)
plt.plot([0, bias_vector[0]/weight_matrix[0][1]],
         [bias_vector[1]/weight_matrix[0][0], 0], c='g', lw=3)
plt.xlabel("Patient age (scaled)")
plt.ylabel("Tumor size (in cm)")
plt.show()
[ 8.00007153 -8.00006485]
MIT
Tutorials/CNTK_101_LogisticRegression.ipynb
shyamalschandra/CNTK
Chapter 3: A First Machine Learning Program
# ๅฟ…่ฆใƒฉใ‚คใƒ–ใƒฉใƒชใฎๅฐŽๅ…ฅ !pip install japanize_matplotlib | tail -n 1 !pip install torchviz | tail -n 1 # ๅฟ…่ฆใƒฉใ‚คใƒ–ใƒฉใƒชใฎใ‚คใƒณใƒใƒผใƒˆ %matplotlib inline import numpy as np import matplotlib.pyplot as plt #import japanize_matplotlib from IPython.display import display # PyTorch้–ข้€ฃใƒฉใ‚คใƒ–ใƒฉใƒช import torch from torchviz import make_dot # ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใƒ•ใ‚ฉใƒณใƒˆใ‚ตใ‚คใ‚บๅค‰ๆ›ด plt.rcParams['font.size'] = 14 # ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใ‚ฐใƒฉใƒ•ใ‚ตใ‚คใ‚บๅค‰ๆ›ด plt.rcParams['figure.figsize'] = (6,6) # ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงๆ–น็œผ่กจ็คบON plt.rcParams['axes.grid'] = True # numpyใฎๆตฎๅ‹•ๅฐๆ•ฐ็‚นใฎ่กจ็คบ็ฒพๅบฆ np.set_printoptions(suppress=True, precision=4) # warning่กจ็คบoff import warnings warnings.simplefilter('ignore')
_____no_output_____
Apache-2.0
notebooks/ch03_first_ml.ipynb
ychoi-kr/pytorch_book_info
3.4 Implementing Gradient Descent
def L(u, v):
    return 3 * u**2 + 3 * v**2 - u*v + 7*u - 7*v + 10

def Lu(u, v):
    return 6*u - v + 7

def Lv(u, v):
    return 6*v - u - 7

u = np.linspace(-5, 5, 501)
v = np.linspace(-5, 5, 501)
U, V = np.meshgrid(u, v)
Z = L(U, V)

# Gradient descent simulation
W = np.array([4.0, 4.0])
W1 = [W[0]]
W2 = [W[1]]
N = 21
alpha = 0.05
for i in range(N):
    W = W - alpha * np.array([Lu(W[0], W[1]), Lv(W[0], W[1])])
    W1.append(W[0])
    W2.append(W[1])

n_loop = 11
WW1 = np.array(W1[:n_loop])
WW2 = np.array(W2[:n_loop])
ZZ = L(WW1, WW2)

fig = plt.figure(figsize=(8,8))
ax = plt.axes(projection='3d')
ax.set_zlim(0, 250)
ax.set_xlabel('W')
ax.set_ylabel('B')
ax.set_zlabel('loss')
ax.view_init(50, 240)
ax.xaxis._axinfo["grid"]['linewidth'] = 2.
ax.yaxis._axinfo["grid"]['linewidth'] = 2.
ax.zaxis._axinfo["grid"]['linewidth'] = 2.
ax.contour3D(U, V, Z, 100, cmap='Blues', alpha=0.7)
ax.plot3D(WW1, WW2, ZZ, 'o-', c='k', alpha=1, markersize=7)
plt.show()
fig.savefig('fig03-06.tif', format='tif', dpi=300)
_____no_output_____
Apache-2.0
notebooks/ch03_first_ml.ipynb
ychoi-kr/pytorch_book_info
3.5 ใƒ‡ใƒผใ‚ฟๅ‰ๅ‡ฆ็†5ไบบใฎไบบใฎ่บซ้•ทใจไฝ“้‡ใฎใƒ‡ใƒผใ‚ฟใ‚’ไฝฟใ†ใ€‚ 1ๆฌก้–ขๆ•ฐใง่บซ้•ทใ‹ใ‚‰ไฝ“้‡ใ‚’ไบˆๆธฌใ™ใ‚‹ๅ ดๅˆใ€ๆœ€้ฉใช็›ด็ทšใ‚’ๆฑ‚ใ‚ใ‚‹ใ“ใจใŒ็›ฎ็š„ใ€‚
# Declare the sample data
sampleData1 = np.array([
    [166, 58.7],
    [176.0, 75.7],
    [171.0, 62.1],
    [173.0, 70.4],
    [169.0, 60.1]
])
print(sampleData1)

# To feed the data into a machine learning model, extract the heights
# into a variable x and the weights into a variable y
x = sampleData1[:,0]
y = sampleData1[:,1]

import matplotlib
# Set the font to 'Malgun Gothic'
matplotlib.rcParams['font.family'] = 'Malgun Gothic'
# Prevent the minus sign from rendering incorrectly with this font
matplotlib.rcParams['axes.unicode_minus'] = False

# Check the data with a scatter plot
fig1 = plt.gcf()
plt.scatter(x, y, c='k', s=50)
plt.xlabel('$x$: height (cm)')
plt.ylabel('$y$: weight (kg)')
plt.title('Relationship between height and weight')
plt.show()
plt.draw()
fig1.savefig('ex03-03.tif', format='tif', dpi=300)
_____no_output_____
Apache-2.0
notebooks/ch03_first_ml.ipynb
ychoi-kr/pytorch_book_info
ๅบงๆจ™็ณปใฎๅค‰ๆ›ๆฉŸๆขฐๅญฆ็ฟ’ใƒขใƒ‡ใƒซใงใฏใ€ใƒ‡ใƒผใ‚ฟใฏ0ใซ่ฟ‘ใ„ๅ€คใ‚’ๆŒใคใ“ใจใŒๆœ›ใพใ—ใ„ใ€‚ ใใ“ใงใ€x, y ใจใ‚‚ใซๅนณๅ‡ๅ€คใŒ0ใซใชใ‚‹ใ‚ˆใ†ใซๅนณ่กŒ็งปๅ‹•ใ—ใ€ๆ–ฐใ—ใ„ๅบงๆจ™็ณปใ‚’X, Yใจใ™ใ‚‹ใ€‚
X = x - x.mean()
Y = y - y.mean()

# Check the result with a scatter plot
fig1 = plt.gcf()
plt.scatter(X, Y, c='k', s=50)
plt.xlabel('$X$')
plt.ylabel('$Y$')
plt.title('Height vs. weight after preprocessing')
plt.show()
plt.draw()
fig1.savefig('ex03-04.tif', format='tif', dpi=300)
_____no_output_____
Apache-2.0
notebooks/ch03_first_ml.ipynb
ychoi-kr/pytorch_book_info
3.6 Computing Predictions
# XใจYใ‚’ใƒ†ใƒณใ‚ฝใƒซๅค‰ๆ•ฐๅŒ–ใ™ใ‚‹ X = torch.tensor(X).float() Y = torch.tensor(Y).float() # ็ตๆžœ็ขบ่ช print(X) print(Y) # ้‡ใฟๅค‰ๆ•ฐใฎๅฎš็พฉ # WใจBใฏๅ‹พ้…่จˆ็ฎ—ใ‚’ใ™ใ‚‹ใฎใงใ€requires_grad=Trueใจใ™ใ‚‹ W = torch.tensor(1.0, requires_grad=True).float() B = torch.tensor(1.0, requires_grad=True).float() # ไบˆๆธฌ้–ขๆ•ฐใฏไธ€ๆฌก้–ขๆ•ฐ def pred(X): return W * X + B # ไบˆๆธฌๅ€คใฎ่จˆ็ฎ— Yp = pred(X) # ็ตๆžœๆจ™็คบ print(Yp) # ไบˆๆธฌๅ€คใฎ่จˆ็ฎ—ใ‚ฐใƒฉใƒ•ๅฏ่ฆ–ๅŒ– params = {'W': W, 'B': B} g = make_dot(Yp, params=params) display(g) g.render('ex03-08', format='tif') !dot -Ttif -Gdpi=300 ex03-08 -o ex03-08_large.tif
_____no_output_____
Apache-2.0
notebooks/ch03_first_ml.ipynb
ychoi-kr/pytorch_book_info
3.7 ๆๅคฑ่จˆ็ฎ—
# ๆๅคฑ้–ขๆ•ฐใฏ่ชคๅทฎไบŒไน—ๅนณๅ‡ def mse(Yp, Y): loss = ((Yp - Y) ** 2).mean() return loss # ๆๅคฑ่จˆ็ฎ— loss = mse(Yp, Y) # ็ตๆžœๆจ™็คบ print(loss) # ๆๅคฑใฎ่จˆ็ฎ—ใ‚ฐใƒฉใƒ•ๅฏ่ฆ–ๅŒ– params = {'W': W, 'B': B} g = make_dot(loss, params=params) display(g) g.render('ex03-11', format='tif') !dot -Ttif -Gdpi=300 ex03-11 -o ex03-11_large.tif
_____no_output_____
Apache-2.0
notebooks/ch03_first_ml.ipynb
ychoi-kr/pytorch_book_info
3.8 ๅ‹พ้…่จˆ็ฎ—
# ๅ‹พ้…่จˆ็ฎ— loss.backward() # ๅ‹พ้…ๅ€ค็ขบ่ช print(W.grad) print(B.grad)
tensor(-19.0400) tensor(2.0000)
Apache-2.0
notebooks/ch03_first_ml.ipynb
ychoi-kr/pytorch_book_info
3.9 Updating the Parameters
# ๅญฆ็ฟ’็އใฎๅฎš็พฉ lr = 0.001 # ๊ฒฝ์‚ฌ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํŒŒ๋ผ๋ฏธํ„ฐ ์ˆ˜์ • W -= lr * W.grad B -= lr * B.grad
_____no_output_____
Apache-2.0
notebooks/ch03_first_ml.ipynb
ychoi-kr/pytorch_book_info
WใจBใฏไธ€ๅบฆ่จˆ็ฎ—ๆธˆใฟใชใฎใงใ€ใ“ใฎ็Šถๆ…‹ใงๅ€คใฎๆ›ดๆ–ฐใŒใงใใชใ„ ๆฌกใฎๆ›ธใๆ–นใซใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹
# ๅ‹พ้…ใ‚’ๅ…ƒใซใƒ‘ใƒฉใƒกใƒผใ‚ฟไฟฎๆญฃ # with torch.no_grad() ใ‚’ไป˜ใ‘ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚‹ with torch.no_grad(): W -= lr * W.grad B -= lr * B.grad # ่จˆ็ฎ—ๆธˆใฟใฎๅ‹พ้…ๅ€คใ‚’ใƒชใ‚ปใƒƒใƒˆใ™ใ‚‹ W.grad.zero_() B.grad.zero_() # ใƒ‘ใƒฉใƒกใƒผใ‚ฟใจๅ‹พ้…ๅ€คใฎ็ขบ่ช print(W) print(B) print(W.grad) print(B.grad)
tensor(1.0190, requires_grad=True) tensor(0.9980, requires_grad=True) tensor(0.) tensor(0.)
Apache-2.0
notebooks/ch03_first_ml.ipynb
ychoi-kr/pytorch_book_info
ๅ…ƒใฎๅ€คใฏใฉใกใ‚‰ใ‚‚1.0ใ ใฃใŸใฎใงใ€Wใฏๅพฎๅฐ‘้‡ๅข—ๅŠ ใ€Bใฏๅพฎๅฐ‘้‡ๆธ›ๅฐ‘ใ—ใŸใ“ใจใŒใ‚ใ‹ใ‚‹ใ€‚ ใ“ใฎ่จˆ็ฎ—ใ‚’็นฐใ‚Š่ฟ”ใ™ใ“ใจใงใ€ๆœ€้ฉใชWใจBใ‚’ๆฑ‚ใ‚ใ‚‹ใฎใŒๅ‹พ้…้™ไธ‹ๆณ•ใจใชใ‚‹ใ€‚ 3.10 ็นฐใ‚Š่ฟ”ใ—่จˆ็ฎ—
# ๅˆๆœŸๅŒ– # WใจBใ‚’ๅค‰ๆ•ฐใจใ—ใฆๆ‰ฑใ† W = torch.tensor(1.0, requires_grad=True).float() B = torch.tensor(1.0, requires_grad=True).float() # ็นฐใ‚Š่ฟ”ใ—ๅ›žๆ•ฐ num_epochs = 500 # ๅญฆ็ฟ’็އ lr = 0.001 # ่จ˜้Œฒ็”จ้…ๅˆ—ๅˆๆœŸๅŒ– history = np.zeros((0, 2)) # ใƒซใƒผใƒ—ๅ‡ฆ็† for epoch in range(num_epochs): # ไบˆๆธฌ่จˆ็ฎ— Yp = pred(X) # ๆๅคฑ่จˆ็ฎ— loss = mse(Yp, Y) # ๅ‹พ้…่จˆ็ฎ— loss.backward() with torch.no_grad(): # ใƒ‘ใƒฉใƒกใƒผใ‚ฟไฟฎๆญฃ W -= lr * W.grad B -= lr * B.grad # ๅ‹พ้…ๅ€คใฎๅˆๆœŸๅŒ– W.grad.zero_() B.grad.zero_() # ๆๅคฑใฎ่จ˜้Œฒ if (epoch %10 == 0): item = np.array([epoch, loss.item()]) history = np.vstack((history, item)) print(f'epoch = {epoch} loss = {loss:.4f}')
epoch = 0 loss = 13.3520 epoch = 10 loss = 10.3855 epoch = 20 loss = 8.5173 epoch = 30 loss = 7.3364 epoch = 40 loss = 6.5858 epoch = 50 loss = 6.1047 epoch = 60 loss = 5.7927 epoch = 70 loss = 5.5868 epoch = 80 loss = 5.4476 epoch = 90 loss = 5.3507 epoch = 100 loss = 5.2805 epoch = 110 loss = 5.2275 epoch = 120 loss = 5.1855 epoch = 130 loss = 5.1507 epoch = 140 loss = 5.1208 epoch = 150 loss = 5.0943 epoch = 160 loss = 5.0703 epoch = 170 loss = 5.0480 epoch = 180 loss = 5.0271 epoch = 190 loss = 5.0074 epoch = 200 loss = 4.9887 epoch = 210 loss = 4.9708 epoch = 220 loss = 4.9537 epoch = 230 loss = 4.9373 epoch = 240 loss = 4.9217 epoch = 250 loss = 4.9066 epoch = 260 loss = 4.8922 epoch = 270 loss = 4.8783 epoch = 280 loss = 4.8650 epoch = 290 loss = 4.8522 epoch = 300 loss = 4.8399 epoch = 310 loss = 4.8281 epoch = 320 loss = 4.8167 epoch = 330 loss = 4.8058 epoch = 340 loss = 4.7953 epoch = 350 loss = 4.7853 epoch = 360 loss = 4.7756 epoch = 370 loss = 4.7663 epoch = 380 loss = 4.7574 epoch = 390 loss = 4.7488 epoch = 400 loss = 4.7406 epoch = 410 loss = 4.7327 epoch = 420 loss = 4.7251 epoch = 430 loss = 4.7178 epoch = 440 loss = 4.7108 epoch = 450 loss = 4.7040 epoch = 460 loss = 4.6976 epoch = 470 loss = 4.6913 epoch = 480 loss = 4.6854 epoch = 490 loss = 4.6796
Apache-2.0
notebooks/ch03_first_ml.ipynb
ychoi-kr/pytorch_book_info
3.11 ็ตๆžœ็ขบ่ช
# ใƒ‘ใƒฉใƒกใƒผใ‚ฟใฎๆœ€็ต‚ๅ€ค print('W = ', W.data.numpy()) print('B = ', B.data.numpy()) #ๆๅคฑใฎ็ขบ่ช print(f'์ดˆ๊ธฐ์ƒํƒœ: ์†์‹ค:{history[0,1]:.4f}') print(f'์ตœ์ข…์ƒํƒœ: ์†์‹ค:{history[-1,1]:.4f}') # ๅญฆ็ฟ’ๆ›ฒ็ทšใฎ่กจ็คบ (ๆๅคฑ) fig1 = plt.gcf() plt.plot(history[:,0], history[:,1], 'b') plt.xlabel('๋ฐ˜๋ณต ํšŸ์ˆ˜') plt.ylabel('์†์‹ค') plt.title('ํ•™์Šต ๊ณก์„ (์†์‹ค)') plt.show() plt.draw() fig1.savefig('ex03-19.tif', format='tif', dpi=300)
_____no_output_____
Apache-2.0
notebooks/ch03_first_ml.ipynb
ychoi-kr/pytorch_book_info
ๆ•ฃๅธƒๅ›ณใซๅ›žๅธฐ็›ด็ทšใ‚’้‡ใญๆ›ธใใ™ใ‚‹
# xใฎ็ฏ„ๅ›ฒใ‚’ๆฑ‚ใ‚ใ‚‹(Xrange) X_max = X.max() X_min = X.min() X_range = np.array((X_min, X_max)) X_range = torch.from_numpy(X_range).float() print(X_range) # ๅฏพๅฟœใ™ใ‚‹yใฎไบˆๆธฌๅ€คใ‚’ๆฑ‚ใ‚ใ‚‹ Y_range = pred(X_range) print(Y_range.data) # ใ‚ฐใƒฉใƒ•ๆ็”ป fig1 = plt.gcf() plt.scatter(X, Y, c='k', s=50) plt.xlabel('$X$') plt.ylabel('$Y$') plt.plot(X_range.data, Y_range.data, lw=2, c='b') plt.title('์‹ ์žฅ๊ณผ ์ฒด์ค‘์˜ ์ƒ๊ด€ ์ง์„ (๊ฐ€๊ณต ํ›„)') plt.show() plt.draw() fig1.savefig('ex03-20.tif', format='tif', dpi=300)
_____no_output_____
Apache-2.0
notebooks/ch03_first_ml.ipynb
ychoi-kr/pytorch_book_info
ๅŠ ๅทฅๅ‰ใƒ‡ใƒผใ‚ฟใธใฎๅ›žๅธฐ็›ด็ทšๆ็”ป
# Compute the y-coordinate and x-coordinate values
x_range = X_range + x.mean()
yp_range = Y_range + y.mean()

# Draw the graph
fig1 = plt.gcf()
plt.scatter(x, y, c='k', s=50)
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.plot(x_range, yp_range.data, lw=2, c='b')
plt.title('Regression line for height and weight (before preprocessing)')
plt.show()
plt.draw()
fig1.savefig('ex03-21.tif', format='tif', dpi=300)
_____no_output_____
Apache-2.0
notebooks/ch03_first_ml.ipynb
ychoi-kr/pytorch_book_info
3.12 ๆœ€้ฉๅŒ–้–ขๆ•ฐใจstep้–ขๆ•ฐใฎๅˆฉ็”จ
# ๅˆๆœŸๅŒ– # WใจBใ‚’ๅค‰ๆ•ฐใจใ—ใฆๆ‰ฑใ† W = torch.tensor(1.0, requires_grad=True).float() B = torch.tensor(1.0, requires_grad=True).float() # ็นฐใ‚Š่ฟ”ใ—ๅ›žๆ•ฐ num_epochs = 500 # ๅญฆ็ฟ’็އ lr = 0.001 # optimizerใจใ—ใฆSGD(็ขบ็އ็š„ๅ‹พ้…้™ไธ‹ๆณ•)ใ‚’ๆŒ‡ๅฎšใ™ใ‚‹ import torch.optim as optim optimizer = optim.SGD([W, B], lr=lr) # ่จ˜้Œฒ็”จ้…ๅˆ—ๅˆๆœŸๅŒ– history = np.zeros((0, 2)) # ใƒซใƒผใƒ—ๅ‡ฆ็† for epoch in range(num_epochs): # ไบˆๆธฌ่จˆ็ฎ— Yp = pred(X) # ๆๅคฑ่จˆ็ฎ— loss = mse(Yp, Y) # ๅ‹พ้…่จˆ็ฎ— loss.backward() # ใƒ‘ใƒฉใƒกใƒผใ‚ฟไฟฎๆญฃ optimizer.step() #ๅ‹พ้…ๅ€คๅˆๆœŸๅŒ– optimizer.zero_grad() # ๆๅคฑๅ€คใฎ่จ˜้Œฒ if (epoch %10 == 0): item = np.array([epoch, loss.item()]) history = np.vstack((history, item)) print(f'epoch = {epoch} loss = {loss:.4f}') # ใƒ‘ใƒฉใƒกใƒผใ‚ฟใฎๆœ€็ต‚ๅ€ค print('W = ', W.data.numpy()) print('B = ', B.data.numpy()) #ๆๅคฑใฎ็ขบ่ช print(f'ๅˆๆœŸ็Šถๆ…‹: ๆๅคฑ:{history[0,1]:.4f}') print(f'ๆœ€็ต‚็Šถๆ…‹: ๆๅคฑ:{history[-1,1]:.4f}') # ๅญฆ็ฟ’ๆ›ฒ็ทšใฎ่กจ็คบ (ๆๅคฑ) plt.plot(history[:,0], history[:,1], 'b') plt.xlabel('็นฐใ‚Š่ฟ”ใ—ๅ›žๆ•ฐ') plt.ylabel('ๆๅคฑ') plt.title('ๅญฆ็ฟ’ๆ›ฒ็ทš(ๆๅคฑ)') plt.show()
_____no_output_____
Apache-2.0
notebooks/ch03_first_ml.ipynb
ychoi-kr/pytorch_book_info
3.7ใฎ็ตๆžœใจ่ฆ‹ๆฏ”ในใ‚‹ใจใพใฃใŸใๅŒใ˜ใงใ‚ใ‚‹ใ“ใจใŒใ‚ใ‹ใ‚‹ใ€‚ ใคใพใ‚Šใ€step้–ขๆ•ฐใงใ‚„ใฃใฆใ„ใ‚‹ใ“ใจใฏใ€ๆฌกใฎใ‚ณใƒผใƒ‰ใจๅŒใ˜ใ€‚```py3 with torch.no_grad(): ใƒ‘ใƒฉใƒกใƒผใ‚ฟไฟฎๆญฃ (ใƒ•ใƒฌใƒผใƒ ใƒฏใƒผใ‚ฏใ‚’ไฝฟใ†ๅ ดๅˆใฏstep้–ขๆ•ฐ) W -= lr * W.grad B -= lr * B.grad``` ๆœ€้ฉๅŒ–้–ขๆ•ฐใฎใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐ
# ๅˆๆœŸๅŒ– # WใจBใ‚’ๅค‰ๆ•ฐใจใ—ใฆๆ‰ฑใ† W = torch.tensor(1.0, requires_grad=True).float() B = torch.tensor(1.0, requires_grad=True).float() # ็นฐใ‚Š่ฟ”ใ—ๅ›žๆ•ฐ num_epochs = 500 # ๅญฆ็ฟ’็އ lr = 0.001 # optimizerใจใ—ใฆSGD(็ขบ็އ็š„ๅ‹พ้…้™ไธ‹ๆณ•)ใ‚’ๆŒ‡ๅฎšใ™ใ‚‹ import torch.optim as optim optimizer = optim.SGD([W, B], lr=lr, momentum=0.9) # ่จ˜้Œฒ็”จ้…ๅˆ—ๅˆๆœŸๅŒ– history2 = np.zeros((0, 2)) # ใƒซใƒผใƒ—ๅ‡ฆ็† for epoch in range(num_epochs): # ไบˆๆธฌ่จˆ็ฎ— Yp = pred(X) # ๆๅคฑ่จˆ็ฎ— loss = mse(Yp, Y) # ๅ‹พ้…่จˆ็ฎ— loss.backward() # ใƒ‘ใƒฉใƒกใƒผใ‚ฟไฟฎๆญฃ optimizer.step() #ๅ‹พ้…ๅ€คๅˆๆœŸๅŒ– optimizer.zero_grad() # ๆๅคฑๅ€คใฎ่จ˜้Œฒ if (epoch %10 == 0): item = np.array([epoch, loss.item()]) history2 = np.vstack((history2, item)) print(f'epoch = {epoch} loss = {loss:.4f}') # ๅญฆ็ฟ’ๆ›ฒ็ทšใฎ่กจ็คบ (ๆๅคฑ) fig1 = plt.gcf() plt.plot(history[:,0], history[:,1], 'b', label='๊ธฐ๋ณธ๊ฐ’ ์„ค์ •') plt.plot(history2[:,0], history2[:,1], 'k', label='momentum=0.9') plt.xlabel('๋ฐ˜๋ณต ํšŸ์ˆ˜') plt.ylabel('์†์‹ค') plt.legend() plt.title('ํ•™์Šต ๊ณก์„ (์†์‹ค)') plt.show() plt.draw() fig1.savefig('ex03-27.tif', format='tif', dpi=300)
_____no_output_____
Apache-2.0
notebooks/ch03_first_ml.ipynb
ychoi-kr/pytorch_book_info
Column: Local Optima
def f(x):
    return x * (x+1) * (x+2) * (x-2)

x = np.arange(-3, 2.7, 0.05)
y = f(x)
plt.plot(x, y)
plt.axis('off')
plt.show()
_____no_output_____
Apache-2.0
notebooks/ch03_first_ml.ipynb
ychoi-kr/pytorch_book_info
Assignment 2: **Machine learning with tree-based models**

In this assignment, you will work on the **Titanic** dataset and use machine learning to create a model that predicts which passengers survived the **Titanic** shipwreck.

---
About the dataset:
---
* The column named `Survived` is the label and the remaining columns are features.
* The features can be described as given below:

| Variable | Definition |
| --- | --- |
| pclass | Ticket class |
| SibSp | Number of siblings / spouses aboard the Titanic |
| Parch | Number of parents / children aboard the Titanic |
| Ticket | Ticket number |
| Embarked | Port of embarkation: C = Cherbourg, Q = Queenstown, S = Southampton |

---
Instructions
---
* Apply suitable data pre-processing techniques, if needed.
* Implement a few classifiers to create your model and compare their performance metrics by plotting curves such as the ROC AUC and the confusion matrix.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.impute import SimpleImputer
import seaborn as sns
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.neighbors import KNeighborsClassifier as KNN
from sklearn.ensemble import RandomForestClassifier, VotingClassifier, BaggingClassifier, AdaBoostClassifier, GradientBoostingClassifier
from sklearn.metrics import accuracy_score, mean_squared_error as MSE, roc_auc_score, confusion_matrix, classification_report, roc_curve
from xgboost import XGBClassifier
import xgboost as xgb

SEED = 1

titanic_data = pd.read_csv('https://raw.githubusercontent.com/shala2020/shala2020.github.io/master/Lecture_Materials/Assignments/MachineLearning/L2/titanic.csv')
titanic_data.head()
titanic_data.shape
print(titanic_data.isna().sum())
titanic_data.dtypes
titanic_data.describe()
titanic_data.info()

# Drop columns that are not useful as features
titanic_data = titanic_data.drop(['PassengerId', 'Name', 'Cabin', 'Ticket'], axis=1)
titanic_data

# Impute missing ages with the mean
imp = SimpleImputer(missing_values=np.nan, strategy='mean')
imp.fit(titanic_data[['Age']])
titanic_data['Age'] = imp.transform(titanic_data[['Age']])

titanic_data['Embarked'].describe()
common_value = 'S'
titanic_data['Embarked'] = titanic_data['Embarked'].fillna(common_value)
titanic_data['Sex'] = titanic_data['Sex'].apply(lambda x: 0 if x == "male" else 1)
titanic_data
print(titanic_data.isna().sum())

ports = {"C": 0, "Q": 1, "S": 2}
titanic_data['Embarked'] = titanic_data['Embarked'].map(ports)
titanic_data

titanic_data['Age'] = titanic_data['Age'].astype(int)
titanic_data['Fare'] = titanic_data['Fare'].astype(int)
titanic_data
pd.qcut(titanic_data['Fare'], 4)

# Bin ages into ordinal categories
titanic_data.loc[titanic_data['Age'] <= 19, 'Age'] = 0
titanic_data.loc[(titanic_data['Age'] > 19) & (titanic_data['Age'] <= 25), 'Age'] = 1
titanic_data.loc[(titanic_data['Age'] > 25) & (titanic_data['Age'] <= 29), 'Age'] = 2
titanic_data.loc[(titanic_data['Age'] > 29) & (titanic_data['Age'] <= 31), 'Age'] = 3
titanic_data.loc[(titanic_data['Age'] > 31) & (titanic_data['Age'] <= 40), 'Age'] = 4
titanic_data.loc[(titanic_data['Age'] > 40) & (titanic_data['Age'] <= 80), 'Age'] = 5
titanic_data['Age'].value_counts()

# Bin fares into ordinal categories
titanic_data.loc[titanic_data['Fare'] <= 7, 'Fare'] = 0
titanic_data.loc[(titanic_data['Fare'] > 7) & (titanic_data['Fare'] <= 14), 'Fare'] = 1
titanic_data.loc[(titanic_data['Fare'] > 14) & (titanic_data['Fare'] <= 31), 'Fare'] = 2
titanic_data.loc[(titanic_data['Fare'] > 31) & (titanic_data['Fare'] <= 512), 'Fare'] = 3
titanic_data.loc[(titanic_data['Fare'] > 512), 'Fare'] = 4
titanic_data['Fare'].value_counts()

# Engineered features
titanic_data['Relatives'] = titanic_data['SibSp'] + titanic_data['Parch']
titanic_data
titanic_data['Fare_Per_Person'] = titanic_data['Fare'] / (titanic_data['Relatives'] + 1)
titanic_data['Fare_Per_Person'] = titanic_data['Fare_Per_Person'].astype(int)
titanic_data
titanic_data['Age_Class'] = titanic_data['Age'] * titanic_data['Pclass']
titanic_data

y = titanic_data['Survived']
X = titanic_data.drop(['Survived', 'Parch', 'Fare_Per_Person'], axis=1)

# Split data into 70% train and 30% test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=SEED)

# Instantiate individual classifiers
lr = LogisticRegression(random_state=SEED)
knn = KNN()
dt = DecisionTreeClassifier(random_state=SEED)
rf = RandomForestClassifier(n_estimators=300, random_state=SEED)
bc = BaggingClassifier(base_estimator=dt, n_estimators=300,
                       n_jobs=-1, random_state=SEED, oob_score=True)
adb = AdaBoostClassifier(base_estimator=dt, n_estimators=100, random_state=SEED)
gb = GradientBoostingClassifier(n_estimators=300, max_depth=1, random_state=SEED,
                                subsample=0.8, max_features=0.2)
xgb = xgb.XGBClassifier(learning_rate=0.01)  # note: this rebinds the name `xgb` from the module to the classifier instance

# Define a list called classifiers that contains the tuples (classifier_name, classifier)
classifiers = [('Logistic Regression', lr), ('K Nearest Neighbours', knn),
               ('Classification Tree', dt), ('Random Forest', rf),
               ('Bagging Classifier', bc), ('Adaboost', adb),
               ('Gradient Boosting', gb), ('Xtreme GB', xgb)]

import warnings
warnings.filterwarnings("ignore")

# Iterate over the defined list of tuples containing the classifiers
for clf_name, clf in classifiers:
    # Fit clf to the training set
    clf.fit(X_train, y_train)

    # Predict the labels of the test set
    y_pred = clf.predict(X_test)

    # Evaluate the accuracy of clf on the test set
    print('{:s} : {:.3f}'.format(clf_name, accuracy_score(y_test, y_pred)))
    print(confusion_matrix(y_test, y_pred))

    y_pred_proba = clf.predict_proba(X_test)[:, 1]
    clf_roc_auc_score = roc_auc_score(y_test, y_pred_proba)
    print('ROC AUC score: {:.2f}'.format(clf_roc_auc_score))

    fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba)
    plt.plot([0, 1], [0, 1], 'k--')
    plt.plot(fpr, tpr, label=clf_name)  # label each curve with its classifier's name
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.title('ROC Curve')
    plt.show()

    print(classification_report(y_test, y_pred))
    print("=" * 60)

oob_accuracy = bc.oob_score_
print('OOB accuracy of bagging classifier: {:.3f}'.format(oob_accuracy))

# Instantiate a VotingClassifier 'vc'
vc = VotingClassifier(estimators=classifiers)

# Fit 'vc' to the training set and predict test set labels
vc.fit(X_train, y_train)
y_pred = vc.predict(X_test)

# Evaluate the test-set accuracy of 'vc'
print('Voting Classifier: {:.3f}'.format(accuracy_score(y_test, y_pred)))

classifiers = [('Logistic Regression', lr), ('K Nearest Neighbours', knn),
               ('Classification Tree', dt), ('Random Forest', rf),
               ('Bagging Classifier', bc), ('Adaboost', adb), ('Gradient Boosting', gb)]

lr = LogisticRegression(random_state=SEED)
knn = KNN()
dt = DecisionTreeClassifier(random_state=SEED)
rf = RandomForestClassifier(n_estimators=300, random_state=SEED)
bc = BaggingClassifier(base_estimator=dt, n_estimators=300,
                       n_jobs=-1, random_state=SEED, oob_score=True)
adb = AdaBoostClassifier(base_estimator=dt, n_estimators=100, random_state=SEED)
gb = GradientBoostingClassifier(n_estimators=300, max_depth=1, random_state=SEED,
                                subsample=0.8, max_features=0.2)

# Cross-validate each classifier
for clf_name, clf in classifiers:
    scores = cross_val_score(clf, X_train, y_train, cv=10, scoring="accuracy")
    print('{:s} '.format(clf_name))
    print("Scores:", scores)
    print("Mean:", scores.mean())
    print("Standard Deviation:", scores.std())

# Feature importances from the random forest
feature_imp = pd.Series(rf.feature_importances_, index=list(X.columns.values.tolist())).sort_values(ascending=False)
feature_imp

plt.figure(figsize=(10, 10))
sns.barplot(x=feature_imp, y=feature_imp.index)
# Add labels to your graph
plt.xlabel('Feature Importance Score')
plt.ylabel('Features')
plt.title("Visualizing Important Features")
plt.legend()
plt.show()

# Hyperparameter search for the random forest
param_grid = {"criterion": ["gini", "entropy"],
              "min_samples_leaf": [1, 5, 10, 25, 50, 70],
              "min_samples_split": [2, 4, 10, 12, 16, 18, 25, 35],
              "n_estimators": [100, 400, 700, 1000, 1500]}

raf = RandomForestClassifier(random_state=SEED)
clfa = GridSearchCV(estimator=raf, param_grid=param_grid, n_jobs=-1)
clfa.fit(X_train, y_train)
clfa.best_params_

So the bagging classifier, with the highest accuracy (78%) and an OOB score of 80.4% among all the classifiers, will be used to train our model.
_____no_output_____
MIT
Assignment_06/Assignment_ML_L2_Sankalp_Jain_ipynb_txt.ipynb
Sankalp679/SHALA
Read the CSV and Perform Basic Data Cleaning
import pandas as pd

df = pd.read_csv("exoplanet_data.csv")
# Drop the null columns where all values are null
df = df.dropna(axis='columns', how='all')
# Drop the null rows
df = df.dropna()
df.head()
df.describe()
_____no_output_____
MIT
exoplanet1.ipynb
bshub6/machine-learning-challenge
Select your features (columns)
# Set features. This will also be used as your x values.
target = df["koi_disposition"]
data = df.drop("koi_disposition", axis=1)
feature_names = data.columns
data.head()
_____no_output_____
MIT
exoplanet1.ipynb
bshub6/machine-learning-challenge
Create a Train Test Split

Use `koi_disposition` for the y values.
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(data, target, random_state=42)
X_train.head()
_____no_output_____
MIT
exoplanet1.ipynb
bshub6/machine-learning-challenge
Pre-processing

Scale the data using the `MinMaxScaler` and perform some feature selection.
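For reference, `MinMaxScaler` with its default feature range rescales each feature to $[0, 1]$ using statistics computed per feature on the training set:

$$ x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}} $$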
# Scale your data
from sklearn.preprocessing import MinMaxScaler

X_minmax = MinMaxScaler().fit(X_train)
X_train_minmax = X_minmax.transform(X_train)
X_test_minmax = X_minmax.transform(X_test)

from sklearn.svm import SVC
model = SVC(kernel='linear')
model.fit(X_train_minmax, y_train)
_____no_output_____
MIT
exoplanet1.ipynb
bshub6/machine-learning-challenge
Train the Model
print(f"Training Data Score: {model.score(X_train_minmax, y_train)}") print(f"Testing Data Score: {model.score(X_test_minmax, y_test)}")
Training Data Score: 0.8455082967766546 Testing Data Score: 0.8415331807780321
MIT
exoplanet1.ipynb
bshub6/machine-learning-challenge
Hyperparameter Tuning

Use `GridSearchCV` to tune the model's parameters.
# Create the GridSearchCV model
from sklearn.model_selection import GridSearchCV

param_grid = {'C': [1, 5, 10, 50],
              'gamma': [0.0001, 0.0005, 0.001, 0.005]}
grid = GridSearchCV(model, param_grid, verbose=3)

# Train the model with GridSearch
grid.fit(X_train_minmax, y_train)

print(grid.best_params_)
print(grid.best_score_)

# Model accuracy (score on the scaled test features, matching how the model was trained)
print('Test Acc: %.3f' % model.score(X_test_minmax, y_test))

# Make predictions and save them to a variable for the report.
predictions = grid.predict(X_test_minmax)

# Print the classification report.
from sklearn.metrics import classification_report
print(classification_report(y_test, predictions))
precision recall f1-score support CANDIDATE 0.81 0.67 0.73 411 CONFIRMED 0.76 0.85 0.80 484 FALSE POSITIVE 0.98 1.00 0.99 853 accuracy 0.88 1748 macro avg 0.85 0.84 0.84 1748 weighted avg 0.88 0.88 0.88 1748
MIT
exoplanet1.ipynb
bshub6/machine-learning-challenge
Save the Model
# save your model by updating "your_name" with your name
# and "your_model" with your model variable
# be sure to turn this in to BCS
# if joblib fails to import, try running the command to install in terminal/git-bash
import joblib

filename = 'models/bridgette_svm.sav'
joblib.dump(model, filename)
_____no_output_____
MIT
exoplanet1.ipynb
bshub6/machine-learning-challenge
Watershed Distance Transform for 3D Data

---

Implementation of papers:

[Deep Watershed Transform for Instance Segmentation](http://openaccess.thecvf.com/content_cvpr_2017/papers/Bai_Deep_Watershed_Transform_CVPR_2017_paper.pdf)

[Learn to segment single cells with deep distance estimator and deep cell detector](https://arxiv.org/abs/1803.10829)
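For intuition about the `watershed` transform used later in this notebook, a distance-class target can be built from a binary mask by binning a Euclidean distance transform. The sketch below uses SciPy and is only illustrative; DeepCell's own transform utilities may bin and normalize differently:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def watershed_distance_bins(mask, bins=4):
    """Bin each foreground pixel's distance-to-background into `bins` classes."""
    distance = distance_transform_edt(mask > 0)  # distance to the nearest background pixel
    if distance.max() > 0:
        distance = distance / distance.max()     # normalize to [0, 1]
    edges = np.linspace(0, 1, bins + 1)[1:-1]    # interior bin edges
    return np.digitize(distance, edges)          # integer class per pixel, in 0..bins-1

toy_mask = np.zeros((8, 8), dtype=int)
toy_mask[2:6, 2:6] = 1  # one square "cell"
print(watershed_distance_bins(toy_mask, bins=4))
```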
import os
import errno
import datetime

import numpy as np

import deepcell
Using TensorFlow backend.
Apache-2.0
scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb
esgomezm/deepcell-tf
Load the Training Data
# Download the data (saves to ~/.keras/datasets)
filename = 'mousebrain.npz'
test_size = 0.1  # % of data saved as test
seed = 0  # seed for random train-test split

(X_train, y_train), (X_test, y_test) = deepcell.datasets.mousebrain.load_data(filename, test_size=test_size, seed=seed)

print('X.shape: {}\ny.shape: {}'.format(X_train.shape, y_train.shape))
Downloading data from https://deepcell-data.s3.amazonaws.com/nuclei/mousebrain.npz 1730158592/1730150850 [==============================] - 106s 0us/step X.shape: (176, 15, 256, 256, 1) y.shape: (176, 15, 256, 256, 1)
Apache-2.0
scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb
esgomezm/deepcell-tf
Set up filepath constants
# the path to the data file is currently required for `train_model_()` functions
# change DATA_DIR if you are not using `deepcell.datasets`
DATA_DIR = os.path.expanduser(os.path.join('~', '.keras', 'datasets'))

# DATA_FILE should be a npz file, preferably from `make_training_data`
DATA_FILE = os.path.join(DATA_DIR, filename)

# confirm the data file is available
assert os.path.isfile(DATA_FILE)

# Set up other required filepaths
# If the data file is in a subdirectory, mirror it in MODEL_DIR and LOG_DIR
PREFIX = os.path.relpath(os.path.dirname(DATA_FILE), DATA_DIR)

ROOT_DIR = '/data'  # TODO: Change this! Usually a mounted volume
MODEL_DIR = os.path.abspath(os.path.join(ROOT_DIR, 'models', PREFIX))
LOG_DIR = os.path.abspath(os.path.join(ROOT_DIR, 'logs', PREFIX))

# create directories if they do not exist
for d in (MODEL_DIR, LOG_DIR):
    try:
        os.makedirs(d)
    except OSError as exc:  # Guard against race condition
        if exc.errno != errno.EEXIST:
            raise
_____no_output_____
Apache-2.0
scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb
esgomezm/deepcell-tf
Set up training parameters
from tensorflow.keras.optimizers import SGD
from deepcell.utils.train_utils import rate_scheduler

fgbg_model_name = 'conv_fgbg_3d_model'
conv_model_name = 'conv_watershed_3d_model'

n_epoch = 10  # Number of training epochs
norm_method = 'whole_image'  # data normalization - `whole_image` for 3d conv

receptive_field = 61  # should be adjusted for the scale of the data

optimizer = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
lr_sched = rate_scheduler(lr=0.01, decay=0.99)

# FC training settings
n_skips = 3  # number of skip-connections (only for FC training)
batch_size = 1  # FC training uses 1 image per batch

# Transformation settings
transform = 'watershed'
distance_bins = 4  # number of distance classes
erosion_width = 1  # erode edges, improves segmentation when cells are close

# 3D Settings
frames_per_batch = 3
_____no_output_____
Apache-2.0
scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb
esgomezm/deepcell-tf
First, create a foreground/background separation model

Instantiate the fgbg model
from deepcell import model_zoo

fgbg_model = model_zoo.bn_feature_net_skip_3D(
    receptive_field=receptive_field,
    n_features=2,  # segmentation mask (is_cell, is_not_cell)
    n_frames=frames_per_batch,
    n_skips=n_skips,
    n_conv_filters=32,
    n_dense_filters=128,
    input_shape=tuple([frames_per_batch] + list(X_train.shape[2:])),
    multires=False,
    last_only=False,
    norm_method='whole_image')
_____no_output_____
Apache-2.0
scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb
esgomezm/deepcell-tf
Train the fgbg model
from deepcell.training import train_model_conv

fgbg_model = train_model_conv(
    model=fgbg_model,
    dataset=DATA_FILE,  # full path to npz file
    model_name=fgbg_model_name,
    test_size=test_size,
    seed=seed,
    transform='fgbg',
    optimizer=optimizer,
    batch_size=batch_size,
    frames_per_batch=frames_per_batch,
    n_epoch=n_epoch,
    model_dir=MODEL_DIR,
    lr_sched=rate_scheduler(lr=0.01, decay=0.95),
    rotation_range=180,
    flip=True,
    shear=False,
    zoom_range=(0.8, 1.2))
X_train shape: (198, 15, 256, 256, 1) y_train shape: (198, 15, 256, 256, 1) X_test shape: (22, 15, 256, 256, 1) y_test shape: (22, 15, 256, 256, 1) Output Shape: (None, 3, 256, 256, 2) Number of Classes: 2 Training on 1 GPUs Epoch 1/10 197/198 [============================>.] - ETA: 0s - loss: 0.8965 - model_loss: 0.2152 - model_1_loss: 0.2171 - model_2_loss: 0.2121 - model_3_loss: 0.2163 - model_acc: 0.9120 - model_1_acc: 0.9044 - model_2_acc: 0.9113 - model_3_acc: 0.9067 Epoch 00001: val_loss improved from inf to 1.07529, saving model to /data/models/conv_fgbg_3d_model.h5 198/198 [==============================] - 133s 673ms/step - loss: 0.8945 - model_loss: 0.2147 - model_1_loss: 0.2166 - model_2_loss: 0.2116 - model_3_loss: 0.2158 - model_acc: 0.9122 - model_1_acc: 0.9046 - model_2_acc: 0.9115 - model_3_acc: 0.9069 - val_loss: 1.0753 - val_model_loss: 0.3084 - val_model_1_loss: 0.2489 - val_model_2_loss: 0.2391 - val_model_3_loss: 0.2430 - val_model_acc: 0.9378 - val_model_1_acc: 0.9172 - val_model_2_acc: 0.9246 - val_model_3_acc: 0.9233 Epoch 2/10 197/198 [============================>.] - ETA: 0s - loss: 0.7316 - model_loss: 0.1725 - model_1_loss: 0.1739 - model_2_loss: 0.1755 - model_3_loss: 0.1739 - model_acc: 0.9246 - model_1_acc: 0.9222 - model_2_acc: 0.9219 - model_3_acc: 0.9232 Epoch 00002: val_loss improved from 1.07529 to 1.02578, saving model to /data/models/conv_fgbg_3d_model.h5 198/198 [==============================] - 108s 547ms/step - loss: 0.7372 - model_loss: 0.1742 - model_1_loss: 0.1755 - model_2_loss: 0.1767 - model_3_loss: 0.1750 - model_acc: 0.9246 - model_1_acc: 0.9222 - model_2_acc: 0.9219 - model_3_acc: 0.9232 - val_loss: 1.0258 - val_model_loss: 0.2438 - val_model_1_loss: 0.2449 - val_model_2_loss: 0.2461 - val_model_3_loss: 0.2551 - val_model_acc: 0.9296 - val_model_1_acc: 0.9310 - val_model_2_acc: 0.9389 - val_model_3_acc: 0.9429 Epoch 3/10 197/198 [============================>.] - ETA: 0s - loss: 0.6888 - model_loss: 0.1632 - model_1_loss: 0.1636 - model_2_loss: 0.1634 - model_3_loss: 0.1626 - model_acc: 0.9282 - model_1_acc: 0.9265 - model_2_acc: 0.9277 - model_3_acc: 0.9277 Epoch 00003: val_loss improved from 1.02578 to 0.99577, saving model to /data/models/conv_fgbg_3d_model.h5 198/198 [==============================] - 108s 548ms/step - loss: 0.6880 - model_loss: 0.1630 - model_1_loss: 0.1634 - model_2_loss: 0.1633 - model_3_loss: 0.1624 - model_acc: 0.9282 - model_1_acc: 0.9264 - model_2_acc: 0.9276 - model_3_acc: 0.9277 - val_loss: 0.9958 - val_model_loss: 0.2410 - val_model_1_loss: 0.2488 - val_model_2_loss: 0.2339 - val_model_3_loss: 0.2361 - val_model_acc: 0.9151 - val_model_1_acc: 0.9145 - val_model_2_acc: 0.9202 - val_model_3_acc: 0.9169 Epoch 4/10 197/198 [============================>.] 
- ETA: 0s - loss: 0.6923 - model_loss: 0.1636 - model_1_loss: 0.1646 - model_2_loss: 0.1648 - model_3_loss: 0.1634 - model_acc: 0.9286 - model_1_acc: 0.9271 - model_2_acc: 0.9284 - model_3_acc: 0.9292 Epoch 00004: val_loss improved from 0.99577 to 0.96332, saving model to /data/models/conv_fgbg_3d_model.h5 198/198 [==============================] - 108s 547ms/step - loss: 0.6920 - model_loss: 0.1635 - model_1_loss: 0.1646 - model_2_loss: 0.1648 - model_3_loss: 0.1633 - model_acc: 0.9287 - model_1_acc: 0.9271 - model_2_acc: 0.9285 - model_3_acc: 0.9293 - val_loss: 0.9633 - val_model_loss: 0.2329 - val_model_1_loss: 0.2375 - val_model_2_loss: 0.2301 - val_model_3_loss: 0.2270 - val_model_acc: 0.8982 - val_model_1_acc: 0.8952 - val_model_2_acc: 0.8998 - val_model_3_acc: 0.9054 Epoch 5/10 197/198 [============================>.] - ETA: 0s - loss: 0.6872 - model_loss: 0.1633 - model_1_loss: 0.1625 - model_2_loss: 0.1638 - model_3_loss: 0.1618 - model_acc: 0.9274 - model_1_acc: 0.9267 - model_2_acc: 0.9262 - model_3_acc: 0.9282 Epoch 00005: val_loss improved from 0.96332 to 0.96122, saving model to /data/models/conv_fgbg_3d_model.h5 198/198 [==============================] - 108s 546ms/step - loss: 0.6896 - model_loss: 0.1638 - model_1_loss: 0.1631 - model_2_loss: 0.1645 - model_3_loss: 0.1624 - model_acc: 0.9273 - model_1_acc: 0.9265 - model_2_acc: 0.9260 - model_3_acc: 0.9281 - val_loss: 0.9612 - val_model_loss: 0.2280 - val_model_1_loss: 0.2326 - val_model_2_loss: 0.2325 - val_model_3_loss: 0.2323 - val_model_acc: 0.9260 - val_model_1_acc: 0.9179 - val_model_2_acc: 0.9242 - val_model_3_acc: 0.9140 Epoch 6/10 197/198 [============================>.] - ETA: 0s - loss: 0.6726 - model_loss: 0.1590 - model_1_loss: 0.1591 - model_2_loss: 0.1603 - model_3_loss: 0.1583 - model_acc: 0.9290 - model_1_acc: 0.9277 - model_2_acc: 0.9273 - model_3_acc: 0.9286 Epoch 00006: val_loss did not improve from 0.96122 198/198 [==============================] - 108s 546ms/step - loss: 0.6717 - model_loss: 0.1588 - model_1_loss: 0.1589 - model_2_loss: 0.1601 - model_3_loss: 0.1581 - model_acc: 0.9290 - model_1_acc: 0.9277 - model_2_acc: 0.9274 - model_3_acc: 0.9286 - val_loss: 1.0302 - val_model_loss: 0.2523 - val_model_1_loss: 0.2546 - val_model_2_loss: 0.2410 - val_model_3_loss: 0.2465 - val_model_acc: 0.8991 - val_model_1_acc: 0.8924 - val_model_2_acc: 0.9154 - val_model_3_acc: 0.9130 Epoch 7/10 197/198 [============================>.] - ETA: 0s - loss: 0.6620 - model_loss: 0.1565 - model_1_loss: 0.1566 - model_2_loss: 0.1574 - model_3_loss: 0.1557 - model_acc: 0.9301 - model_1_acc: 0.9281 - model_2_acc: 0.9290 - model_3_acc: 0.9297 Epoch 00007: val_loss improved from 0.96122 to 0.92732, saving model to /data/models/conv_fgbg_3d_model.h5 198/198 [==============================] - 108s 547ms/step - loss: 0.6616 - model_loss: 0.1564 - model_1_loss: 0.1565 - model_2_loss: 0.1573 - model_3_loss: 0.1556 - model_acc: 0.9300 - model_1_acc: 0.9281 - model_2_acc: 0.9290 - model_3_acc: 0.9296 - val_loss: 0.9273 - val_model_loss: 0.2280 - val_model_1_loss: 0.2261 - val_model_2_loss: 0.2177 - val_model_3_loss: 0.2197 - val_model_acc: 0.9086 - val_model_1_acc: 0.9049 - val_model_2_acc: 0.9144 - val_model_3_acc: 0.9117 Epoch 8/10 197/198 [============================>.] 
- ETA: 0s - loss: 0.6602 - model_loss: 0.1563 - model_1_loss: 0.1562 - model_2_loss: 0.1564 - model_3_loss: 0.1555 - model_acc: 0.9312 - model_1_acc: 0.9294 - model_2_acc: 0.9296 - model_3_acc: 0.9298 Epoch 00008: val_loss did not improve from 0.92732 198/198 [==============================] - 108s 545ms/step - loss: 0.6601 - model_loss: 0.1563 - model_1_loss: 0.1562 - model_2_loss: 0.1564 - model_3_loss: 0.1554 - model_acc: 0.9313 - model_1_acc: 0.9295 - model_2_acc: 0.9297 - model_3_acc: 0.9299 - val_loss: 0.9669 - val_model_loss: 0.2298 - val_model_1_loss: 0.2335 - val_model_2_loss: 0.2339 - val_model_3_loss: 0.2338 - val_model_acc: 0.9224 - val_model_1_acc: 0.9255 - val_model_2_acc: 0.9318 - val_model_3_acc: 0.9229 Epoch 9/10 197/198 [============================>.] - ETA: 0s - loss: 0.6534 - model_loss: 0.1554 - model_1_loss: 0.1542 - model_2_loss: 0.1548 - model_3_loss: 0.1532 - model_acc: 0.9312 - model_1_acc: 0.9312 - model_2_acc: 0.9315 - model_3_acc: 0.9314 Epoch 00009: val_loss improved from 0.92732 to 0.88550, saving model to /data/models/conv_fgbg_3d_model.h5 198/198 [==============================] - 108s 547ms/step - loss: 0.6536 - model_loss: 0.1554 - model_1_loss: 0.1542 - model_2_loss: 0.1549 - model_3_loss: 0.1533 - model_acc: 0.9310 - model_1_acc: 0.9310 - model_2_acc: 0.9313 - model_3_acc: 0.9312 - val_loss: 0.8855 - val_model_loss: 0.2115 - val_model_1_loss: 0.2154 - val_model_2_loss: 0.2107 - val_model_3_loss: 0.2121 - val_model_acc: 0.9330 - val_model_1_acc: 0.9328 - val_model_2_acc: 0.9316 - val_model_3_acc: 0.9308 Epoch 10/10 197/198 [============================>.] - ETA: 0s - loss: 0.6626 - model_loss: 0.1569 - model_1_loss: 0.1567 - model_2_loss: 0.1572 - model_3_loss: 0.1560 - model_acc: 0.9306 - model_1_acc: 0.9295 - model_2_acc: 0.9292 - model_3_acc: 0.9297 Epoch 00010: val_loss did not improve from 0.88550 198/198 [==============================] - 108s 545ms/step - loss: 0.6622 - model_loss: 0.1568 - model_1_loss: 0.1566 - model_2_loss: 0.1571 - model_3_loss: 0.1559 - model_acc: 0.9304 - model_1_acc: 0.9293 - model_2_acc: 0.9290 - model_3_acc: 0.9296 - val_loss: 0.9433 - val_model_loss: 0.2337 - val_model_1_loss: 0.2267 - val_model_2_loss: 0.2240 - val_model_3_loss: 0.2230 - val_model_acc: 0.9096 - val_model_1_acc: 0.9157 - val_model_2_acc: 0.9234 - val_model_3_acc: 0.9264
Apache-2.0
scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb
esgomezm/deepcell-tf
Next, create a model for the watershed energy transform

Instantiate the distance transform model
from deepcell import model_zoo

watershed_model = model_zoo.bn_feature_net_skip_3D(
    fgbg_model=fgbg_model,
    receptive_field=receptive_field,
    n_skips=n_skips,
    n_features=distance_bins,
    n_frames=frames_per_batch,
    n_conv_filters=32,
    n_dense_filters=128,
    multires=False,
    last_only=False,
    input_shape=tuple([frames_per_batch] + list(X_train.shape[2:])),
    norm_method='whole_image')
_____no_output_____
Apache-2.0
scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb
esgomezm/deepcell-tf
Train the model
from deepcell.training import train_model_conv

watershed_model = train_model_conv(
    model=watershed_model,
    dataset=DATA_FILE,  # full path to npz file
    model_name=conv_model_name,
    test_size=test_size,
    seed=seed,
    transform=transform,
    distance_bins=distance_bins,
    erosion_width=erosion_width,
    optimizer=optimizer,
    batch_size=batch_size,
    n_epoch=n_epoch,
    frames_per_batch=frames_per_batch,
    model_dir=MODEL_DIR,
    lr_sched=lr_sched,
    rotation_range=180,
    flip=True,
    shear=False,
    zoom_range=(0.8, 1.2))
X_train shape: (198, 15, 256, 256, 1) y_train shape: (198, 15, 256, 256, 1) X_test shape: (22, 15, 256, 256, 1) y_test shape: (22, 15, 256, 256, 1) Output Shape: (None, 3, 256, 256, 4) Number of Classes: 4 Training on 1 GPUs Epoch 1/10 197/198 [============================>.] - ETA: 0s - loss: 3.8927 - model_5_loss: 0.9546 - model_6_loss: 0.9520 - model_7_loss: 0.9633 - model_8_loss: 0.9501 - model_5_acc: 0.8515 - model_6_acc: 0.8609 - model_7_acc: 0.8556 - model_8_acc: 0.8664 Epoch 00001: val_loss improved from inf to 3.58243, saving model to /data/models/conv_watershed_3d_model.h5 198/198 [==============================] - 171s 862ms/step - loss: 3.8903 - model_5_loss: 0.9541 - model_6_loss: 0.9513 - model_7_loss: 0.9626 - model_8_loss: 0.9497 - model_5_acc: 0.8516 - model_6_acc: 0.8611 - model_7_acc: 0.8557 - model_8_acc: 0.8664 - val_loss: 3.5824 - val_model_5_loss: 0.9999 - val_model_6_loss: 0.8246 - val_model_7_loss: 0.8505 - val_model_8_loss: 0.8347 - val_model_5_acc: 0.8552 - val_model_6_acc: 0.8806 - val_model_7_acc: 0.8899 - val_model_8_acc: 0.8820 Epoch 2/10 197/198 [============================>.] - ETA: 0s - loss: 3.2688 - model_5_loss: 0.7954 - model_6_loss: 0.8017 - model_7_loss: 0.8018 - model_8_loss: 0.7971 - model_5_acc: 0.8935 - model_6_acc: 0.8904 - model_7_acc: 0.8868 - model_8_acc: 0.8929 Epoch 00002: val_loss improved from 3.58243 to 3.26194, saving model to /data/models/conv_watershed_3d_model.h5 198/198 [==============================] - 144s 727ms/step - loss: 3.2674 - model_5_loss: 0.7951 - model_6_loss: 0.8014 - model_7_loss: 0.8014 - model_8_loss: 0.7967 - model_5_acc: 0.8936 - model_6_acc: 0.8904 - model_7_acc: 0.8869 - model_8_acc: 0.8930 - val_loss: 3.2619 - val_model_5_loss: 0.8245 - val_model_6_loss: 0.8164 - val_model_7_loss: 0.7766 - val_model_8_loss: 0.7716 - val_model_5_acc: 0.9008 - val_model_6_acc: 0.9094 - val_model_7_acc: 0.9056 - val_model_8_acc: 0.9073 Epoch 3/10 197/198 [============================>.] - ETA: 0s - loss: 3.2074 - model_5_loss: 0.7824 - model_6_loss: 0.7851 - model_7_loss: 0.7882 - model_8_loss: 0.7788 - model_5_acc: 0.8969 - model_6_acc: 0.8971 - model_7_acc: 0.8925 - model_8_acc: 0.8956 Epoch 00003: val_loss improved from 3.26194 to 3.08470, saving model to /data/models/conv_watershed_3d_model.h5 198/198 [==============================] - 143s 725ms/step - loss: 3.2081 - model_5_loss: 0.7827 - model_6_loss: 0.7852 - model_7_loss: 0.7883 - model_8_loss: 0.7790 - model_5_acc: 0.8967 - model_6_acc: 0.8970 - model_7_acc: 0.8924 - model_8_acc: 0.8955 - val_loss: 3.0847 - val_model_5_loss: 0.7503 - val_model_6_loss: 0.7541 - val_model_7_loss: 0.7527 - val_model_8_loss: 0.7546 - val_model_5_acc: 0.9105 - val_model_6_acc: 0.9094 - val_model_7_acc: 0.8988 - val_model_8_acc: 0.9046 Epoch 4/10 197/198 [============================>.] 
- ETA: 0s - loss: 3.2189 - model_5_loss: 0.7860 - model_6_loss: 0.8026 - model_7_loss: 0.7803 - model_8_loss: 0.7770 - model_5_acc: 0.8933 - model_6_acc: 0.8874 - model_7_acc: 0.8916 - model_8_acc: 0.8949 Epoch 00004: val_loss did not improve from 3.08470 198/198 [==============================] - 143s 723ms/step - loss: 3.2174 - model_5_loss: 0.7857 - model_6_loss: 0.8021 - model_7_loss: 0.7800 - model_8_loss: 0.7767 - model_5_acc: 0.8933 - model_6_acc: 0.8874 - model_7_acc: 0.8916 - model_8_acc: 0.8950 - val_loss: 3.0984 - val_model_5_loss: 0.7471 - val_model_6_loss: 0.7796 - val_model_7_loss: 0.7491 - val_model_8_loss: 0.7495 - val_model_5_acc: 0.9032 - val_model_6_acc: 0.9134 - val_model_7_acc: 0.8882 - val_model_8_acc: 0.9122 Epoch 5/10 197/198 [============================>.] - ETA: 0s - loss: 3.1495 - model_5_loss: 0.7716 - model_6_loss: 0.7765 - model_7_loss: 0.7652 - model_8_loss: 0.7632 - model_5_acc: 0.9012 - model_6_acc: 0.8977 - model_7_acc: 0.8981 - model_8_acc: 0.9031 Epoch 00005: val_loss improved from 3.08470 to 3.01958, saving model to /data/models/conv_watershed_3d_model.h5 198/198 [==============================] - 144s 726ms/step - loss: 3.1469 - model_5_loss: 0.7710 - model_6_loss: 0.7758 - model_7_loss: 0.7644 - model_8_loss: 0.7626 - model_5_acc: 0.9012 - model_6_acc: 0.8977 - model_7_acc: 0.8981 - model_8_acc: 0.9031 - val_loss: 3.0196 - val_model_5_loss: 0.7375 - val_model_6_loss: 0.7557 - val_model_7_loss: 0.7238 - val_model_8_loss: 0.7295 - val_model_5_acc: 0.8905 - val_model_6_acc: 0.9011 - val_model_7_acc: 0.8767 - val_model_8_acc: 0.8719 Epoch 6/10 197/198 [============================>.] - ETA: 0s - loss: 3.0814 - model_5_loss: 0.7565 - model_6_loss: 0.7578 - model_7_loss: 0.7482 - model_8_loss: 0.7457 - model_5_acc: 0.8979 - model_6_acc: 0.8955 - model_7_acc: 0.8965 - model_8_acc: 0.9008 Epoch 00006: val_loss improved from 3.01958 to 2.92890, saving model to /data/models/conv_watershed_3d_model.h5 198/198 [==============================] - 144s 725ms/step - loss: 3.0772 - model_5_loss: 0.7555 - model_6_loss: 0.7567 - model_7_loss: 0.7472 - model_8_loss: 0.7447 - model_5_acc: 0.8981 - model_6_acc: 0.8957 - model_7_acc: 0.8967 - model_8_acc: 0.9009 - val_loss: 2.9289 - val_model_5_loss: 0.7209 - val_model_6_loss: 0.7166 - val_model_7_loss: 0.7114 - val_model_8_loss: 0.7069 - val_model_5_acc: 0.9053 - val_model_6_acc: 0.9081 - val_model_7_acc: 0.8920 - val_model_8_acc: 0.9041 Epoch 7/10 197/198 [============================>.] - ETA: 0s - loss: 3.0843 - model_5_loss: 0.7582 - model_6_loss: 0.7587 - model_7_loss: 0.7492 - model_8_loss: 0.7450 - model_5_acc: 0.9002 - model_6_acc: 0.8977 - model_7_acc: 0.8987 - model_8_acc: 0.9020 Epoch 00007: val_loss did not improve from 2.92890 198/198 [==============================] - 143s 722ms/step - loss: 3.0841 - model_5_loss: 0.7582 - model_6_loss: 0.7586 - model_7_loss: 0.7492 - model_8_loss: 0.7449 - model_5_acc: 0.9003 - model_6_acc: 0.8979 - model_7_acc: 0.8989 - model_8_acc: 0.9022 - val_loss: 3.0507 - val_model_5_loss: 0.7429 - val_model_6_loss: 0.7490 - val_model_7_loss: 0.7422 - val_model_8_loss: 0.7434 - val_model_5_acc: 0.8986 - val_model_6_acc: 0.8998 - val_model_7_acc: 0.8944 - val_model_8_acc: 0.9109 Epoch 8/10 197/198 [============================>.] 
- ETA: 0s - loss: 3.0380 - model_5_loss: 0.7474 - model_6_loss: 0.7455 - model_7_loss: 0.7375 - model_8_loss: 0.7344 - model_5_acc: 0.8997 - model_6_acc: 0.8984 - model_7_acc: 0.8992 - model_8_acc: 0.9030 Epoch 00008: val_loss did not improve from 2.92890 198/198 [==============================] - 143s 724ms/step - loss: 3.0375 - model_5_loss: 0.7472 - model_6_loss: 0.7455 - model_7_loss: 0.7374 - model_8_loss: 0.7342 - model_5_acc: 0.8996 - model_6_acc: 0.8983 - model_7_acc: 0.8991 - model_8_acc: 0.9030 - val_loss: 3.0694 - val_model_5_loss: 0.7353 - val_model_6_loss: 0.7731 - val_model_7_loss: 0.7532 - val_model_8_loss: 0.7347 - val_model_5_acc: 0.8916 - val_model_6_acc: 0.8562 - val_model_7_acc: 0.8855 - val_model_8_acc: 0.8808 Epoch 9/10 197/198 [============================>.] - ETA: 0s - loss: 3.0477 - model_5_loss: 0.7486 - model_6_loss: 0.7477 - model_7_loss: 0.7391 - model_8_loss: 0.7390 - model_5_acc: 0.9000 - model_6_acc: 0.8975 - model_7_acc: 0.8999 - model_8_acc: 0.9026 Epoch 00009: val_loss improved from 2.92890 to 2.91570, saving model to /data/models/conv_watershed_3d_model.h5 198/198 [==============================] - 144s 726ms/step - loss: 3.0471 - model_5_loss: 0.7485 - model_6_loss: 0.7474 - model_7_loss: 0.7390 - model_8_loss: 0.7390 - model_5_acc: 0.9001 - model_6_acc: 0.8975 - model_7_acc: 0.9000 - model_8_acc: 0.9027 - val_loss: 2.9157 - val_model_5_loss: 0.7177 - val_model_6_loss: 0.7192 - val_model_7_loss: 0.7003 - val_model_8_loss: 0.7052 - val_model_5_acc: 0.8953 - val_model_6_acc: 0.9110 - val_model_7_acc: 0.9087 - val_model_8_acc: 0.8886 Epoch 10/10 197/198 [============================>.] - ETA: 0s - loss: 3.0652 - model_5_loss: 0.7553 - model_6_loss: 0.7531 - model_7_loss: 0.7425 - model_8_loss: 0.7411 - model_5_acc: 0.9027 - model_6_acc: 0.9017 - model_7_acc: 0.9013 - model_8_acc: 0.9044 Epoch 00010: val_loss improved from 2.91570 to 2.90629, saving model to /data/models/conv_watershed_3d_model.h5 198/198 [==============================] - 144s 726ms/step - loss: 3.0634 - model_5_loss: 0.7548 - model_6_loss: 0.7526 - model_7_loss: 0.7420 - model_8_loss: 0.7407 - model_5_acc: 0.9026 - model_6_acc: 0.9016 - model_7_acc: 0.9012 - model_8_acc: 0.9043 - val_loss: 2.9063 - val_model_5_loss: 0.7080 - val_model_6_loss: 0.7071 - val_model_7_loss: 0.7149 - val_model_8_loss: 0.7030 - val_model_5_acc: 0.9048 - val_model_6_acc: 0.9090 - val_model_7_acc: 0.9118 - val_model_8_acc: 0.9021
Apache-2.0
scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb
esgomezm/deepcell-tf
Run the modelThe model was trained on only a `frames_per_batch` frames at a time. In order to run this data on a full set of frames, a new model must be instantiated, which will load the trained weights. Save weights of trained models
fgbg_weights_file = os.path.join(MODEL_DIR, '{}.h5'.format(fgbg_model_name)) fgbg_model.save_weights(fgbg_weights_file) watershed_weights_file = os.path.join(MODEL_DIR, '{}.h5'.format(conv_model_name)) watershed_model.save_weights(watershed_weights_file)
_____no_output_____
Apache-2.0
scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb
esgomezm/deepcell-tf
Initialize the new models
from deepcell import model_zoo # All training parameters should match except for the `input_shape` run_fgbg_model = model_zoo.bn_feature_net_skip_3D( receptive_field=receptive_field, n_features=2, n_frames=frames_per_batch, n_skips=n_skips, n_conv_filters=32, n_dense_filters=128, input_shape=tuple(X_test.shape[1:]), multires=False, last_only=False, norm_method=norm_method) run_fgbg_model.load_weights(fgbg_weights_file) run_watershed_model = model_zoo.bn_feature_net_skip_3D( fgbg_model=run_fgbg_model, receptive_field=receptive_field, n_skips=n_skips, n_features=distance_bins, n_frames=frames_per_batch, n_conv_filters=32, n_dense_filters=128, multires=False, last_only=False, input_shape=tuple(X_test.shape[1:]), norm_method=norm_method) run_watershed_model.load_weights(watershed_weights_file) # too many batches at once causes OOM X_test, y_test = X_test[:4], y_test[:4] print(X_test.shape)
(4, 15, 256, 256, 1)
Apache-2.0
scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb
esgomezm/deepcell-tf
Make predictions on test data
test_images = run_watershed_model.predict(X_test)[-1] test_images_fgbg = run_fgbg_model.predict(X_test)[-1] print('watershed transform shape:', test_images.shape) print('segmentation mask shape:', test_images_fgbg.shape)
watershed transform shape: (4, 15, 256, 256, 4) segmentation mask shape: (4, 15, 256, 256, 2)
Apache-2.0
scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb
esgomezm/deepcell-tf
Watershed post-processing
argmax_images = [] for i in range(test_images.shape[0]): max_image = np.argmax(test_images[i], axis=-1) argmax_images.append(max_image) argmax_images = np.array(argmax_images) argmax_images = np.expand_dims(argmax_images, axis=-1) print('watershed argmax shape:', argmax_images.shape) # threshold the foreground/background # and remove back ground from watershed transform threshold = 0.5 fg_thresh = test_images_fgbg[..., 1] > threshold fg_thresh = np.expand_dims(fg_thresh, axis=-1) argmax_images_post_fgbg = argmax_images * fg_thresh # Apply watershed method with the distance transform as seed from skimage.measure import label from skimage.morphology import watershed from skimage.feature import peak_local_max watershed_images = [] for i in range(argmax_images_post_fgbg.shape[0]): image = fg_thresh[i, ..., 0] distance = argmax_images_post_fgbg[i, ..., 0] local_maxi = peak_local_max( test_images[i, ..., -1], min_distance=10, threshold_abs=0.05, indices=False, labels=image, exclude_border=False) markers = label(local_maxi) segments = watershed(-distance, markers, mask=image) watershed_images.append(segments) watershed_images = np.array(watershed_images) watershed_images = np.expand_dims(watershed_images, axis=-1)
_____no_output_____
Apache-2.0
scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb
esgomezm/deepcell-tf
Plot the results
import matplotlib.pyplot as plt import matplotlib.animation as animation index = np.random.randint(low=0, high=watershed_images.shape[0]) frame = np.random.randint(low=0, high=watershed_images.shape[1]) print('Image:', index) print('Frame:', frame) fig, axes = plt.subplots(ncols=3, nrows=2, figsize=(15, 15), sharex=True, sharey=True) ax = axes.ravel() ax[0].imshow(X_test[index, frame, ..., 0]) ax[0].set_title('Source Image') ax[1].imshow(test_images_fgbg[index, frame, ..., 1]) ax[1].set_title('FGBG Prediction') ax[2].imshow(fg_thresh[index, frame, ..., 0], cmap='jet') ax[2].set_title('FGBG {}% Threshold'.format(int(threshold * 100))) ax[3].imshow(argmax_images[index, frame, ..., 0], cmap='jet') ax[3].set_title('Distance Transform') ax[4].imshow(argmax_images_post_fgbg[index, frame, ..., 0], cmap='jet') ax[4].set_title('Distance Transform w/o Background') ax[5].imshow(watershed_images[index, frame, ..., 0], cmap='jet') ax[5].set_title('Watershed Segmentation') fig.tight_layout() plt.show() # Can also export as a video # But this does not render well on GitHub from IPython.display import HTML from deepcell.utils.plot_utils import get_js_video HTML(get_js_video(watershed_images[..., [-1]], batch=index))
_____no_output_____
Apache-2.0
scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb
esgomezm/deepcell-tf
Tutorial 6 - Handle Missing Data replace function
import pandas as pd import numpy as np df = pd.read_csv('sample_data_tutorial_06.csv') df newdf = df.replace(-99999,np.NaN) newdf newdf = df.replace([-99999, -88888],np.NaN) newdf newdf = df.replace({ 'temperature': -99999, 'windspeed': [-99999, -88888], 'event': 'No event' }, np.NaN) newdf # Podemos gerar um mapa das alteraรงรตes que queremos fazer: newdf = df.replace({ -99999: np.NaN, -88888: np.NaN, 'No event': 'Sunny' }) newdf # Importando outro csv com algumas unidades que precisam ser limpas! df = pd.read_csv('sample_data_tutorial_06a.csv') df # ร‰ necessรกrio usar o 'regex' (regular expression) # No caso abaixo estamos substituindo todas as letras (de A a Z - maiรบscula e minรบscula) por vazio (='') newdf = df.replace('[A-Za-z]', '', regex=True) newdf # Observe, no caso anterior, que ele removeu o que pedimos mas tambรฉm removeu toda coluna 'event' # Para fazer as substituiรงรตes somente em determinadas colunas รฉ preciso utilizar o dicionรกrio: newdf = df.replace({ 'temperature': '[A-Za-z]', 'windspeed': '[A-Za-z]' }, '', regex=True) newdf
_____no_output_____
MIT
Python Pandas Tutorials 06.ipynb
HenriqueArgentieri/Tutoriais
PARSE SINGLE ABSTRACT WITH NON-INDEXED AUTHORS LIST
abstract = text_dict['P123'] #abstract abstract_info = re.findall(r"\w+[A-Z\w+]\w+.*(?=TNF\stherapy.*)", abstract) abstract_head = str(abstract_info[0]) abstract_head authors_info = re.findall(r"\w+[^A-Z\d)\W]\s\w.*(?=TNF\stherapy*)", abstract) authors = str(authors_info[0]) authors author_name = re.findall(r"\w.+(?=Spherix)", authors) author_name author_list = [x for x in author_name[0].split(',')] author_list author_location = re.findall(r"Spherix[^*]+", authors) author_location pattern = re.compile(r"\w+[^A-Z\d\W]\s\w.*") abstract_title = [re.sub(pattern, "", i) for i in abstract_info] abstract_title abstract_text = re.findall(r"(TNF\stherapy.*)", abstract) #abstract_text import pandas as pd df = pd.DataFrame({"About the person": 'Name (incl. titles if any mentioned)', "Unnamed: 1": 'Affiliation(s) Name(s)', "Unnamed: 2": "Person's Location", "About the session/topic": "Session Name", "Unnamed: 4": 'Topic Title', "Unnamed: 5": 'Presentation Abstract'}, index=[0]) df1 = pd.DataFrame({"About the person": author_list[2], "Unnamed: 1": author_location, "Unnamed: 2": "", "About the session/topic": "P123", "Unnamed: 4": abstract_title, "Unnamed: 5": abstract_text}) df1
_____no_output_____
MIT
file_parse.ipynb
ivanlohvyn/beetroot_parse_pdf
PARSE SINGLE ABSTRACT WITH INDEXED AUTHORS LIST
abstract = text_dict['P120'] #abstract abstract_info = re.findall(r"\w+[A-Z\w+]\w+.*(?=Introduction.*)", abstract) abstract_head = str(abstract_info[0]) abstract_head authors_info = re.findall(r"\w+[^A-Z\d)\W]\s\w.*(?=Introduction.*)", abstract) authors = str(authors_info[0]) authors author_name = re.findall(r"(\w+.\s[a-zA-z\s-]+\d)", authors) author_name author_location = re.findall(r"(\d\w+\W[a-zA-z-'\s&,\s.,(A-Z)]+)", authors) author_location import string from collections import namedtuple DigitGroup = namedtuple('DigitGroup', ['keys', 'values']) def combine(all_keys, all_values): by_digit = {} for word in all_keys: for char in word: if char in string.digits: group = by_digit.get(char) if not group: group = DigitGroup(word, []) by_digit[char] = group break for word in all_values: for char in word: if char in string.digits: group = by_digit[char] group.values.append(word) break return dict(by_digit.values()) combined_dict = combine(author_location, author_name) combined_dict list_of_dict = [{k: v} for k, v in combined_dict.items()] list_of_dict import itertools i = list_of_dict[6] get_key = i.keys() names = [] for key, value in ( itertools.chain.from_iterable( [itertools.product((k, ), v) for k, v in i.items()])): names.append(value) names name = [re.sub(r'[0-9]', '', i) for i in names] print(name) location = [re.sub(r'[0-9]', '', i) for i in get_key] print(location) pattern = re.compile(r"\w+[^A-Z\d\W]\s\w.*") abstract_title = [re.sub(pattern, "", i) for i in abstract_info] abstract_title abstract_text = re.findall(r"(Introduction.*)", abstract) #abstract_text import pandas as pd df2 = pd.DataFrame({"About the person": name[0], "Unnamed: 1": location, "Unnamed: 2": "", "About the session/topic": "P120", "Unnamed: 4": abstract_title, "Unnamed: 5": abstract_text}) df2 df = pd.read_excel('/home/azashiro/Desktop/Datas.xlsx') df73 = df72.append(df2, ignore_index=True) df73 '''Import pandas DataFrame into Excel file''' excel_file = df73.to_excel("/home/azashiro/Desktop/beetroot_task/Datas.xlsx", index=False) excel_file
_____no_output_____
MIT
file_parse.ipynb
ivanlohvyn/beetroot_parse_pdf
![Self Check Exercises check mark image](files/art/check.png) 6.3.2 Self Check **2. _(IPython Session)_** Given the sets `{10, 20, 30}` and `{5, 10, 15, 20}` use the mathematical set operators to produce the following results:**a.** `{30}` **b.** `{5, 15, 30}` **c.** `{5, 10, 15, 20, 30}` **d.** `{10, 20}`**Answer:**
{10, 20, 30} - {5, 10, 15, 20} {10, 20, 30} ^ {5, 10, 15, 20} {10, 20, 30} | {5, 10, 15, 20} {10, 20, 30} & {5, 10, 15, 20} ########################################################################## # (C) Copyright 2019 by Deitel & Associates, Inc. and # # Pearson Education, Inc. All Rights Reserved. # # # # DISCLAIMER: The authors and publisher of this book have used their # # best efforts in preparing the book. These efforts include the # # development, research, and testing of the theories and programs # # to determine their effectiveness. The authors and publisher make # # no warranty of any kind, expressed or implied, with regard to these # # programs or to the documentation contained in these books. The authors # # and publisher shall not be liable in any event for incidental or # # consequential damages in connection with, or arising out of, the # # furnishing, performance, or use of these programs. # ##########################################################################
_____no_output_____
Apache-2.0
examples/ch06/snippets_ipynb/06.03.02selfcheck.ipynb
germanngc/PythonFundamentals
All the IPython Notebooks in this lecture series by Dr. Milan Parmar are available @ **[GitHub](https://github.com/milaan9/03_Python_Flow_Control)** Python Nested `if` statementWe can have a nested-**[if-else](https://github.com/milaan9/03_Python_Flow_Control/blob/main/002_Python_if_else_statement.ipynb)** or nested-**[if-elif-else](https://github.com/milaan9/03_Python_Flow_Control/blob/main/003_Python_if_elif_else_statement%20.ipynb)** statement inside another **`if-else`** statement. This is called **nesting** in computer programming. The nested if statements is useful when we want to make a series of decisions.Any number of these statements can be nested inside one another. Indentation is the only way to figure out the level of nesting. They can get confusing, so they must be avoided unless necessary.We can use nested if statements for situations where we want to check for a **secondary condition** if the first condition executes as **`True`**. Syntax: Example 1:```pythonif conditon_outer: if condition_inner: statement of nested if else: statement of nested if else: statement ot outer ifelse: Outer else statement outside if block``` Example 2:```pythonif expression1: statement(s) if expression2: statement(s) elif expression3: statement(s) elif expression4: statement(s) else: statement(s)else: statement(s)```
# Example 1: a=10 if a>=20: # Condition FALSE print ("Condition is True") else: # Code will go to ELSE body if a>=15: # Condition FALSE print ("Checking second value") else: # Code will go to ELSE body print ("All Conditions are false") # Example 2: x = 10 y = 12 if x > y: print( "x>y") elif x < y: print( "x<y") if x==10: print ("x=10") else: print ("invalid") else: print ("x=y") # Example 3: num1 = 0 if (num1 != 0): # For zero condition is FALSE if(num1 > 0): print("num1 is a positive number") else: print("num1 is a negative number") else: # For zero condition is TRUE print("num1 is neither positive nor negative") # Example 4: '''In this program, we input a number check if the number is positive or negative or zero and display an appropriate message. This time we use nested if statement''' num = float(input("Enter a number: ")) if num >= 0: if num == 0: print("Zero") else: print("Positive number") else: print("Negative number") # Example 5: def number_arithmetic(num1, num2): if num1 >= num2: if num1 == num2: print(f'{num1} and {num2} are equal') else: print(f'{num1} is greater than {num2}') else: print(f'{num1} is smaller than {num2}') number_arithmetic(96, 66) # Output 96 is greater than 66 number_arithmetic(96, 96) # Output 56 and 56 are equal
96 is greater than 66 96 and 96 are equal
MIT
004_Python_Nested_if_statement.ipynb
chen181016/03_Python_Flow_Control
import torch import torch.nn as nn import torchvision.transforms.functional as TF
_____no_output_____
MIT
notebooks/Original_U-Net_PyTorch.ipynb
jimmiemunyi/fastai-experiments
The Original U-Net Architecture ![image.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAA5wAAAJmCAIAAAC/rhhNAAAgAElEQVR4Aey9Xask2Zmol//A5GDw7bB/gnb7zjfDSRn8xagv9oznMDAqms7hCKMpyoJE6GBCwxwUgmKodJ2yiGkOZYX7widGJfko58wUHepCqk73xxDqciPnxqAOpDZS9LGEwiNEO1DfLNP7zVpaHV8ZEbl27hWZT5J0R0ZGvGvF867IeGrtFSsmihcEIAABCEAAAhCAAARGTmAy8vp/Uv0sy4IgCMOw47EURRHHse/7nufFcVwURWnHLMs8z0vTtLR+58fri7yzaDaAAAQgAAEIQAACp0xg9FK72Wx834/juLvURlEUBEGapnmeh2EYx3GpBQRB4Pv+AKm9vsilGvIRAhCAAAQgAAEIQMAkMHqpTdO0KIo0TTtKbZZlvu/r3tn86mUSieM4iqIwDPtK7fVFNqvHMgQgAAEIQAACEIBAlcDopVYOqbvUJkmyWq1k0IJ01ppQtJgOkNrri2zWkGUIQAACEIAABCAAgSqBk5Pa+OolY3DDMPQ8L0kS4VIURRAEm81GKTVAaq8vcjVtrIEABCAAAQhAAAIQMAmcotT6vp9lmVBI09TzvDzPlVIy8EDWD5Paa4psJoxlCEAAAhCAAAQgAIEqgVOU2iiKTBDir2ma+r6/2WzSq1cQBOv1WruvuX3TsunEso2tyE0lsh4CEIAABCAAAQhAQAicnNTK/F9m+mUmhDiOvcqr481nEu36Ipu1ZRkCEIAABCAAAQhAoErg5KRWKRUEgUxPWxTFer02J0PQgAYMP7jWyLpiLEAAAhCAAAQgAAEIVAmMXmorvau7H5pQFEUURbJjEAS1YwyGSe31Ra5mjjUQgAAEIAABCEAAAprA6KVWHwkLEIAABCAAAQhAAAInS+BIpPYb3/3RN777o32y+P33fv7ojVTe7//8n/YJxb4QgAAEIAABCEAAAgcmcCRSe/vB09sPnu7D7qW7T27dW8v7q998e59QSqkvv/LmS3efyPvLr7y5ZzR2hwAEIAABCEAAAhBoJ4DUbvm8dPfJnYfP5L2/1N5+8FRH29O22/PHtxCAAAQgAAEIQAACSimkdtsMkFrOBwhAAAIQgAAEIDBeAkjtNndI7XgbMTWHAAQgAAEIQAACSO22DSC1nAwQgAAEIAABCEBgvASQ2m3ukNrxNmJqDgEIQAACEIAABJDabRtAajkZIAABCEAAAhCAwHgJILXb3CG1423E1BwCEIAABCAAAQggtds2gNRyMkAAAhCAAAQgAIHxEkBqt7lDasfbiKk5BCAAAQhAAAIQQGq3bQCp5WSAAAQgAAEIQAAC4yWA1G5zh9SOtxFTcwhAAAIQgAAEIIDUbtsAUsvJAAEIQAACEIAABMZLAKnd5g6pHW8jpuYQgAAEIAABCEAAqd22AaSWkwECEIAABCAAAQiMlwBSu80dUjveRkzNIQABCEAAAhCAAFK7bQNILScDBCAAAQhAAAIQGC8BpHabO6R2vI2YmkMAAhCAAAQgAAGkdtsGkFpOBghAAAIQgAAEIDBeAkjtNndI7XgbMTWHAAQgAAEIQAACSO22DSC1nAwQgAAEIAABCEBgvASQ2m3ukNrxNmJqDgEIQAACEIAABJDabRtAajkZIAABCEAAAhCAwHgJILXb3CG1423E1BwCEIAABCAAAQggtds2gNRyMkAAAhCAAAQgAIHxEkBqt7lDasfbiKk5BCAAAQhYIZBlWRAEYRia0bIsi6LI933P88IwTNNUvt1sNmEYyvooirIsM/dqXy6KIo5j2TeO46IoSttnWeZ5ni6r9C0fIVBLAKndYkFqa9sHKyEAAQhA4EQIbDYb3/fjODalNs9zWZmmaZZlYqJ5nqdp6vt+kiTZ1StJEt/38zzvyCqKoiAI0jTN8zwMwziOSzsGQeD7fnepvT5LbnL6UoX56AIBpHabBaTWheZIHSAAAQhA4KYIpGlaFEWapqbUJkmyXq/NKoVhuNls4qtXaX1HB82yzPd93TubX73MUHEcR1Fk9gqb39YuW7dkKaXJ6WvrwMobJ4DUblOA1N54W6QCEIAABCBw4wRKUlutTxAEWZalabpcLrWYFkXRvac2SZLVaqWUkjil/l2tvN2lVu8itbViyRKqyemrWFjjAgGkdpsFpNaF5kgdIAABCEDgZgm0S+3q6iU1TJJkuVyGVy/f97uPqZVeXhm8G4ah53lJkkjMoiiCINhsNkqp7lJr3ZJbUiBO37IBX90gAaR2Cx+pvcFWSNEQgAAEIOAIgSapLYoiiiLpYVVKiX2uVqvk6mV+tfNAZGCuluA0TT3Pk/5aGXggEbpLrXVLbjoE0+mbtmH9DRJAarfwkdobbIUUDQEIQAACjhColVq5Lcy8nUt3jupqy41f+mPLgmmuspn4q5Sy2WzSq1cQBOv1Wrtve0Czq3h/S66WVXL66gascYEAUrvNAlLrQnOkDhCAAAQgcLMEqlIrMxuU5LL2RjEZNrCz/jJxmLmZCHEcx17lZd61Zu5iLlu3ZDO4Uqrq9KUN+OgIAaR2mwik1pEWSTUgAAEIQOAGCZSkVm7/r05roHtVZSjCer3WQwi6VD4IApmetiiK9XptToagd+8+/MC6Jes6KKVqnd7cgGV3CCC121wgte40SmoCAQhAAAKHJ1DpJP3k2Qe1vacyDmGz2QRBIHvpu7s6Vlv+mq/3LXUDS5DuUquUsmvJ+iianF5vwIJTBJDabTqQWqfaJZWBAAQgAAEIdCdg3ZKl6Ban7143tjwYAaR2i9qu1H7ef+1F77G8bz94erB0UhAEIAABCEAAAhA4TQJI7TbvdqX2c1/57gsvvypvpPY0Ty2OGgIQgMAREPjZL34TJx+8c/nhO5cffvsHP370Rvo//rv/U97ha//X/gd4+dNfSfB3Lj/89Ue/3T+gRPj1R7/VYX/2i9/YCkscxwkgtdsE2ZXa//Yv//6zX/qOvJFax88BqgcBCEAAAk0ERA3l2/uP3r394Omdh8/kbeXq9tLdJ7fureX91W++3VSNvusfvZFe/FUsYa3U88++9trlT3/Vtxpsf2ACSO0WuF2ptX7aH7hZUBwEIAABCEBAKXUAqdWWbFdqb91bW5TvP/zK6p3LD2kSjhNAarcJsii1P/vFbz7vvza//+adh8/m99/8vP8af/tw/DSgehCAAAQgUEsAqRUsSG1t83BtJVK7zYhFqb3/6N0XXn71s1/6zp2Hzz77pe+88PKr9x+961riqQ8EIAABCEBgJwGkVhAhtTubigsbILXbLCC1LjRH6gABCEAAAk4RQGolHUitU82yqTJI7ZYMUtvURFgPAQhAAAInSwCpldQjtaM4BZDabZosSu23f/Dj2Z1vveg9vvPw2Z/4Tz73le8yunwUJwOVhAAEIACBEgGkVoAgtaWG4eZHpHabF4tSq5Ri9gM3mzu1ggAEIACBXgSQWsGF1PZqNje1MVK7JY/U3lQT
pFwIQAACEHCWAFIrqUFqnW2iZsWQ2i2Na5Jahh+YrY1lCEAAAhAYFwGkVvKF1I6i3SK12zRZlFqm9BpF06eSEIAABCCwkwBSK4iQ2p1NxYUNkNptFpBaF5ojdYAABCAAAacIILWSDqTWqWbZVBmkdksGqW1qIqyHAAQgAIGTJYDUSuqR2lGcAkjtNk0Wpdac0ovH5I7iNKCSEIAABCBQSwCpFSxIbW3zcG0lUrvNiEWpZUov11o59YEABCAAgWEEkFrhhtQOaz8H3gup3QJHag/c8igOAhCAAATcJ4DUSo6QWvfbqlIKqd2m6ZqkluEHozgNqCQEIAABCNQSQGoFC1Jb2zxcW4nUbjNiUWqZ0su1Vk59IAABCEBgGAGkVrghtcPaz4H3Qmq3wJHaA7c8ioMABCAAAfcJILWSI6TW/bbK8IPf5Qip/R0LliAAAQhAAAJXBJBaaQhI7ShOCHpqt2myKLXmlF48JncUpwGVhAAEIACBWgJIrWBBamubh2srkdptRixKLVN6udbKqQ8EIAABCAwjgNQKN6R2WPs58F5I7RY4UnvglkdxEIAABCDgPgGkVnKE1LrfVhlT+7scXZPUMqXX7xCzBAEIQAACYyOA1ErGkNpRtFx6ardpsii1TOk1iqZPJSEAAQhAYCcBpFYQIbU7m4oLGyC12ywgtS40R+oAAQhAAAJOEUBqJR1IrVPNsqkySO2WDFLb1ERYDwEIQAACJ0sAqZXUI7WjOAWQ2m2aLEqtOaUXY2pHcRpQSQhAAAIQqCWA1AoWpLa2ebi2EqndZsSi1DKll2utnPpAAAIQgMAwAkitcENqh7WfA++F1G6BI7UHbnkUBwEIQAAC7hNAaiVHSK37bZUpvX6Xo2uSWoYf/A4xSxCAAAQgMDYCSK1kDKkdRculp3abJotSy5Reo2j6VBICEIAABHYSQGoFEVK7s6m4sAFSu80CUutCc6QOEIAABCDgFAGkVtKB1DrVLJsqg9RuySC1TU2E9RCAAAQgcLIEkFpJPVI7ilMAqd2myaLUmlN6/Yn/5HNf+e47lx+OojVQSQhAAAIQgIBJAKkVGkit2SqcXUZqt6mxKLVM6eVsc6diEIAABCDQiwBSK7iQ2l7N5qY2Rmq35JHam2qClAsBCEAAAs4SQGolNUits03UrBhSu6VxTVLL8AOztbEMAQhAAALjIoDUSr6Q2lG0W6R2myaLUsuUXqNo+lQSAhCAAAR2EkBqBRFSu7OpuLABUrvNAlLrQnOkDhCAAAQg4BQBpFbSgdQ61SybKoPUbskgtU1NhPUQgAAEIHCyBJBaST1SO4pTAKndpsmi1DKl1yiaPpWEAAQgAIGdBJBaQYTU7mwqLmyA1G6zYFFqmdLLhZZNHSAAAQhAYH8CSK0wRGr3b0sHiIDUbiEjtQdobRQBAQhAAALjIoDUSr6Q2lG0W6R2m6Zrktr5/Tc/77/2s1/8ZhStgUpCAAIQgAAETAJIrdBAas1W4ewyUrtNjUWpNcfUfvZL33nh5VfvP3rX2RZAxSAAAQhAAAJNBJBaIYPUNrUQp9Yjtdt0WJRa5ql1qolTGQhAAAIQGEwAqRV0SO3gJnTIHZHaLW2k9pDNjrIgAAEIQGAUBJBaSRNSO4rm2kNqF4vFZNJj+0Me/+0HT28/eLpPiRal1hx+wGNy90kK+0IAAhCAwM0SQGqFP1J7s+2wY+k9JDWO48Vi0TGuxc26FOqU1DKll8XsEwoCEIAABG6QAFIr8JHaG2yE3YvuIbXdg1rcMkmSLt3DSK1F5oSCAAQgAAEICAGkVjggtaM4I3pIbZqmcRzLUaVpmiSJXo6vXnmem8dsbpMkSRzHaZqaGyilzJjmV7JxkiQy5kHimxuUlp2VWoYflDLFRwhAAAIQGBEBpFaShdSOotH2kFpzTO1isTg/P8/zfDabTZ6/ptOp7/v6sGWbJEnOz8+fbzKZzWam+5ox9Y5KqclkslgszODt/bVOSS2zH5ipZBkCEIAABMZLAKmV3CG1o2jDw6X27Ozs/Pz84uIiCII4joMgODs7m0wmujt2sVhMp9Pz8/P5fC5draKwFxcXGk271CZJMp/PJ5PJuHpqkVqdXxYgAAEIQGDUBJBaSR9SO4pmPFxqJ5PJfD43DzKO48lkojtrRVhL28hKU3xru2Clp1Yp1WS9ZrlyY5Y7sx8gtaXs8BECEIAABEZKAKmVxCG1o2jAe0mtdlN9qFpGtY/qobeyTZqmk8kkiiL52OSsOk7TBrpEWXBq+AFTepWyw0cIQAACEBgpAaRWEofUjqIB7yW11SPUMqqlds9txii1TOlVTTprIAABCEBgjASQWskaUjuK1ovUbtNk8eELSO0omj6VhAAEIACBnQSQWkGE1O5sKi5scGipleEH+nkKTR2xuse3aYMSO6eGH5hSO7//5uf91372i9+UKsxHCEAAAhCAgPsEkFrJEVLrflv9ZO6s7rU0/dJcNiNoGdXDD/TUtrKZPEyhfUyt3HAm4ttUkFmoazeKmWNqP/ul77zw8qv3H71bqjAfIQABCEAAAu4TQGolR0it+231EFI7m81MEBcXF5PJRE9V6/u+ed+YUirPc5nXdrxSy+wHZsZZhgAEIACB8RJAaiV3SO0o2vD19tROp9PZ1Wtx9ZKHKeixB/JEsclkMp1O5/O5bHN2diZz08pmYr3ybXWyBY3YqeEHSK3OCwsQgAAEIDBqAkitpA+pHUUzvl6plU7Z+Xw+nU5FXk2jFUBJkugnh52dnckGs9lMFvRDy6bTaWl2MJOvU1JrDj/gMblmmliGAAQgAIFxEUBqJV83KLVVcbLShK4prJW6DQ7SQ2r7ltFxOGzfsLXbOyW15o1idx4+2/OpELXHy0oIQAACEIDAAQggtQL5pqRW7kSynuhrCmu9nn0DIrVbYkzp1bfpsD0EIAABCBw9AaRWUryP1MZXr9ohlGmalu6nl+LiOE7TNEkS6R+UCPKVrJflJElkS/mo/zsgrN531AtI7TZ91yS1TOk16tODykMAAhA4cQJIrTSAYVIbBIEMv5xcvWazWUltm/6mLXNJ6cGZsrvUZLFYnJ+fJ0kid9XryPoWfD39VLXptoStbjzGNUjtNmsWpZYbxcZ4JlBnCEAAAhCoEkBqhckAqZU73WezWRRFcRyLv56fn5uQ26U2SRK5dd7sqV0sFtPp9Pz8fD6fy3oJcnFxoSMPCKv3HfXCNUqtpPAwdJwaU4vUHibplAIBCEAAAtdNAKkVwgOk9uzqZSZIpuEPgkCvbLfP2j5X2WU+n+sgejPdDTwgrBltvMvXKLWHhILUHpI2ZUEAAhCAwIkQQGol0X2lVu7E8n2/1E70LE+yfoB9yi6l+aDkca3tT7b65NkEk4lMetBUbqm2o/uI1G5TZnH4AVN6je40oMIQgAAEIFBLAKkVLH2lVjplqzeByeT9GnWTXLbY585ddMetLkUvtITV24x6Aandps+i1DKl16hPCSoPAQhAAAKaAFIrKJBa3SRcXthKbZP4u1x1s25ODT9Aas3UsAwBCEAAAuMlgNRK7ixKrTkctsm
+WrpUa3eR4Qf6eQq125zQ8IND3tR1Hee2s1LLlF7XkW5iQgACEIDAYQggtcK5r9SaBqkzlef5dDrV6tk0TkCGLjQNfhVhLQ1skCG87WNq28PqSo56geEH2/RZHH7A7AejPiWoPAQgAAEIaAJIraAYILXn5+dnZ2fV6WPNe7xk2i8to0qpPM9lAtp2qZ3NZjpHSqmLi4vJZKLLGhDWjDbe5a3Umg+f2PmwCpkXTc8cYR68xInjWJM1v9XL8gwMvU31eRgSp7YIHcRccKqnFqk1U8MyBCAAAQiMlwBSK7kbILVxHE+nU5nuYLFYiHeaYw+UUjJsYDqdzufzxdXr7OxM5qYVqRU9lW9FimSeWrnhTHaRZzSYHcADwo63iZo1rxlTqx9WYT7KQtIQRZH5bAyTYOnhFpPJZDabaW3VRfq+b0aQsHrsiFKq9PiNi4uLahAdTS8gtRoFCxCAAAQgAAFbBJBaITlAapVScRyLy04mk9JkXjpBSZJo3dLbzGYzUaw8z+Xb6XQqXbwy/CDP8/l8LkJVGtIgkfuG1fUZ9UK91J6dncnDKqKrl6RE/nGwWCziOPZ9X7rH5d8NeZ6fnZ1Np1Pf9+M4jqJIoJf+RSL/4Dg/P5fNRHDNf2FEUSQ2LI/f0BvsRNwktWmaJ0m2c3elVJfhB3Gc5nmxMxpTeu1ExAYQgAAEICAEPv7441/+8pfO0hggtXGcpmne8Yi6XHy7X8p1oY/eSG/dW995+OzOw2e3HzzV60sLcZrmxe7LulJqmNSWirPysekmMCvBxx6kXmrNrlM5QvnXgDkw2ZxYOE3Ti4sL81s9wsMEJP3w5hqxYV3c+fl5aZhI00xvZhCZbaC21U6n/mTizWZhFG1Ku5Q+7jyv4jidTLzp1F8sPjkLSruXPt5+8FTOpfbTqbQXHyEAAQhA4NQIXF5erlar119//YMPPvj4449dO/y+Upsk2WTiTSbefL7aea3s2KM0m4VyKY/jtCOfLlKbZNnE8yaeN1+t0nzHZR2p7Uj+ZjdrlNrSeNbZbFZ6YHHtnX3mwZT+MSEjPKpP1wiCQKRWNjCHS0s03RtvBi8tN/XUyqkl/z07WwZBUtpRf+wotTpg++mK1GqwLEAAAhCAQAsBkdrV1evx48eXl5dOqW1fqZUOIH2tvLiI2k1058VXKSVS2+VSrjl3kdo4TUVqtdomWeOfdpFazdblhUapLVW69AwM+Vb3sMpHmRdsPp/LxmdnZ5PJNr6MLJlMJqXeXDOOdMpKZ61EkP9Op9OLi4tSfUoftdSmaR7HqX7r80ovnJ0tfX9dHUVQe17pOHGcfv3rax1ELzSdrlpqnZrS65e8IAABCEDAMQI//OEPxWjN/7733nsfffRR6Up3Ix+7SK155a29Vrb8vbT24pvnhXn9/cxnAn3ZlYWmS7lG1CS1SZbFaSrvr6/XptTK8iwM47SmPxip1WxdXrAmtdLhOplMtJXaklo9XLqFo5baxSIuNf3ajzKKwFTb2vOqdt/qytLp6uzsB+YvJssQgAAEIOA4gSRJbny4bRep7Xjlrf17ae3Ft9TdW73sypqWAYFNUjsLw6rIVtecB0GQfOpPu+5I7dgfLNDicvt/ZUdqZeTA+fm5OVNBafhB0+hY2VfuP2vqyt15nH2lVp8hehRB7XmlN+uyoE9XpNbx6wTVgwAEIDAiAm+99VbW/GfxndfHPTewKLVyJS11stZefDtKrb4060u5Ptg9pVY092y5DJJE7iRzR2r1MbJQJWBHamuFVaY10EXKjWXmLGDyVWlMbXUDHaFlYbDUyikRBEnteaVPmO4LZ2dLpHZEVwuqCgEIQGAUBJ4+bbyFv+XiuP9X1qVWrqfTqS83cNdefPtKrcRcLGJ9vFakVtR26vtxmiK1mq3LCzal1rzHS083a/bdykxh5pokSWReBXFZmRfM3ECmR6jeXlZiuo/UzmZhkmS151V3l9Vn6WIRm1L7J/6Tz33lu+9cfliq8I18HMUPN5WEAAQgAIESgddff/3y8vJGLhzXJLW6b7X24jtAas/PA3MGT1tSO/X9+WqVFwVSeyPNr2+hdqRWHmdsPjZDJqOdTCbz+VzfHCadsnozmTdYRimI1IrjyjTC8pyM6lRitUc4TGrn85U+B2rPq+5SW/p7ir5RzKkpvUq/knyEAAQgAAHHCTx9+vSDDz6ovfAdZqVdqe14Q0svqZ3PV9UJFvaX2rPlchHHehZbpPaa2ltpqOqepdiRWqWU+UQxeQyYfn6x2c8aRZE8tUFuKZOvzFkUZMpbcdnJZHJxcWE+JbnpaLXUBkEym4X6XWultUPLa6VWx5nNwurdlxJ8NgurM4W5KbVv8YIABCAAAccIfO9736sVaxfuElNKdZFa88rbdK3Ut52UruO1F98kyczr7+/93terV/PaS7kO3iS1iziehaG8PxME1VvEJp5XvUvMqYcv6GN0YWHAkNHSLnbve9tK7Q2i0TeK7VMHLbWlIKXTQE4qc9IDvX3teaW/vZqS7JOHL5jv2n8dyi5aap2a0ss8HJYhAAEIQMAFAuY8tavVSqaqdWQ+r45Sa2KsdrLOZmG1J1XvsvPiW5qndjLxWi7lOmyT1OoNPrmsf3qe2onnXURR7XxeSK3JTS/LvVL6Y5eFAbt0Cau3OZzU5nkeBEHpmQ5KKRmToIco6Jr1WtgptbX9qWYRO88rfaK2/+tQKWU+JvezX/rOCy+/ev/Ru2ZZLEMAAhCAAASEgJba119//f3333fqyQt7Sq05xq8p3TsvvqbUlibQbIqplOortTsfKjZ4+EGapvHVq6W2sk1VkPQuSZLEcazvOIqvnkQs36Zpqv+gLZuV4kjppZU6clPRZlh9CLoC8vd5GTlQOrrajaW4JEmqu8j2uj6y0FLn9oodVGqn0+n5+bn21zzPfd+vPju3dGxdPjZJ7WwWNj0foRR253mVJFlp4Gwpgv5o3iiG1GosLEAAAhCAQJVAlmU3O2lXtUrmmi7DD8zt87w4O1t2eZ687LXz4quUms9XFxdRl4fu6pp0kdo0z8+DwBw4q3evLgyQ2iRJZML+yfPXbDYrRdZ31csmMnrT3EY06XmAT+5TKj3PdbFYnJ+fJ0kiU07JlrJZFEV6MKc5zlPitxctYfM8N8NOp1M9oNRcL0/aMoehSjVms5npwdVdlFKlMbWlWs1ms5KOt1fscFIrXq/56oWzszP9jwwzkb2Wm6S2e5Au51XHaEhtR1BsBgEIQAACjhPoK7V9D8fixdcsuovUmtvvXO4rtWmaSkee7/vS7zifz0tmGUXRZDKZzWZRFMVxLP5qiq/v+/oGJHMDM85isZCppebzeXT1uri4kA3ktnvZUW5n0oK4s2gd9uLiIgiCOI6DIBBHlyBJksgRydHJXFVivXEcR1Ektip6LXhLu8hKU2rleDUQ+er8/NzMTnvFDiq1Sqk8z+VQZXKDKIpMizfr3WvZKak1hx84NaVXL6RsDAEIQAACEEBqpQ30ldra+fvFfHSjkiew6o9Xd+/E5lOoqn/KFncsSa35UaJJv6H+w7j0Kk4mE93Pur
PoqpLq6ukgpo/KXf5miUop0WvzAM1dZL255uzqZW4vGIMg0CvbK3ZoqdXVsrvglNQqpfSNYk5N6WWXOdEgAAEIQODoCSC1kuK+UqsfONXUcyd3yZsT/EtBMu2pUko20AapW5p+ZJWsEcnTXbCycjablTo4zUELO4vWowJKYc0gehtdseqCKaxmbc0t9TZCrHq8GogZoaliSO2Wrd2/gCC1ZpNlGQIQgAAERkoAqZXE9ZXaq6HAn4w3kG7U6o3y0gcpPaYz4zWdTi8uLnS3aKnvUypjds1qKTQbmMQz15g+urPoFmFtL1rm55rP51IBGa5gVqNaW72mtm/76jbBT4LpIHp7vUYWpGJI7RYLUltqH3yEAAQgAAEIILXSBlJOJC4AACAASURBVAZIrVIqiqL5fK5vF5vP57rjtsksZ7OZzOTaJHmmnjbZ5zCp1UU3hW0vWrqQZRAwUrvXT4dTww+4UWyvXLIzBCAAAQg4QwCplVQMk1qdRunClEdKycoWZ23foDS7f23PZRepre0D1hWuDdsitVKr8/Nzbe21ZlwNq9c0AZnNZubdZnp7XVVZuLGe2lKFSh9Ltez4EantCIrNIAABCEAAAt0JILXCak+plSCLxWI6ncpyyU2rGdGjcktf1Y6pLW3TLrU7i671USmiafhBrZLKHF5m3arKZ64xg8teeZ7LHA46iLm9Xqlt+xDDD9ofidZUP7OuO5eR2p2I2AACEIAABCDQlwBSK8T6Sm2apmdnZ6XbnubzuZZapdTZ2dl0OjW7NmVyA72XzNVlbpAkicxsoM2qVqLapbZL0bVhtTsKE3MbkVrzvjc946xZf3OXapDz8/Ozs7Pq9ubEr9UIEudAPbU7H4nWVD+pZcf/Oiu1TOnVMYNsBgEIQAACDhJAaiUpfaVWKSVTw87nc5nJqzpPrRiq9ETKNqWpuKRTVm7/XywW4sRiTXtK7c6im9zM7EyVaWXlAN99912ZgEwO5OLi4vz8XG+ghzroNYvFQmYwMAuK47gUZDLZPm9Cnxrm9nqltu3r7ant8ki02vo1PbfNPABz2SmpZUovMzUsQwACEIDAeAkgtZK7AVKb57l5l9j5+bk2Ud0eZHpX/Tiqi4sLs1dSbjUTOdZPYdACJ0FqJWpnT61MGXZxcdFUdG3YUtH6eWPT6TS5eumqyqPR8jyXNbrvubRLdZxDHMcyu+1kMilN5tVyvLpi1yu1XR6JVgKn+6tlIozqI+N0UzAXkFqTBssQgAAEIAABKwSQWsE4QGqt8K8G6TIitrrXiay5Xqnd+Ui0kqTvfG5bU1acldr5/Tc/77/2s1/8pqnmrIcABCAAAQg4SwCpldQcXmrzPK/ObquUkjEJ+g/6zracG6nY9UptyVnlCEtds+bHnc9ta2LklNSaj8n97Je+88LLr95/9G5TzVkPAQhAAAIQcJYAUiupuRGpnU6n5+fn2l/zPPd9v/rsXGcbz+Er5pDUSo+6eeuc4KgdVFEi5ZTUMk9tKTt8hAAEIAABZwnkeZ6maZZlpRqKS9mSWiklff4qikKK2+fJRxJT3yyfpulms5HIj95Ib91b33n4rPvD6luiKaUOL7VKKT3XwWQy0YNfz87OSuNuS4k75Y8OSa3MB9HyyLiWPCG1LXD4CgIQgAAEIFBLYLPZeJ4XhuFyudQ9grKl53lKqf2ltiiK1WolpYTPX57nrVYrpdRgqU3T1Pf9MAx938+yLIqiIAj0x75S2x7tpqRWKZXneRRFMqXAYrGIokhLfG1CT3zlCKTWfG5bU7acklpz+AFTejWljPUQgAAEIHDjBIIgWK/XUo3VamV6rS2pXV29dNeslJXneRiGcRwPltowDKXDUv4oL4qslIrjOIqivlLbHu0GpfbGW8i4KuCc1JpnVHeUTkktU3p1TxxbQgACEIDADRLwPM/UTdNxbUmt53m1nYtpmi6Xy8FSK9UTdOYNVdLn2ldq26MhtTfYRHsV7ZDU7jNLBVLbK+tsDAEIQAACEFBKlaRWej1lfK1FqTW9WWPP89zzPCtSG4ahzOQvf6/3PG8fqa1GQ2p11hxfcEhquzy3rYkmUttEhvUQgAAEIACBJgJBEJTuOtpsNjJK1ZbUyjCDagVkWMJgqV0ul5vNRizW9339Z94kScIw7Cu17dGQ2mr63Fxz7VLb/ki00pxfO5/b1gTRKall9oOmNLEeAhCAAAScIpCmqed5elit1C1JEu/qZeVGsSzLgiCQG9Hiq9dqtVoul2EYFkUxWGqTJBGXFR8NgkDGBPu+n6ZpX6ltj4bUOtVoWypz7VK785Fo5jy1O5/b1nQkSG0TGdZDAAIQgAAEWghkWVbqrFVKZVkmfZ/7z36glCqKIk3T9XotUhvHsZ5BbLDUKqU2m40OVRSFxJfIfaW2PRpS29J+nPrq2qX2MEeL1B6GM6VAAAIQgMDxERDpfD6HbGoeoBWp1V5bLWIfqW2ZWXaA1LZEQ2rNJuHyMlK7zc4+51UpwebwA6b0KsHhIwQgAAEIuENATyIbBIGesNb3fT0gYX+pbS9i8MW3fWbZvlLbHm2Y1KZpqrvAkySJ41jfzSYNQPqtSyvlqzRN5dvqxBE7w7rTug5fE6R2y3zweVWbs9sPnsqDTLo/y6Q2DishAAEIQAAC10dA7taS+EmSyEM9ZYCpCNn+UttexOCLb/vMsn2ltj3aMKldLBbn5+dJksxms8nz13w+V0pFUaSfEDaZTBaLhU5xkiTn5+fPN//k/7PZzFRbHdbcrLSNjnZqC0jtNuODz6vaFoPU1mJhJQQgAAEIOEXA931TmIIgkDGpm80mCAIrN4q1FzH44ts+s2xfqW2PNlhqz87Ozs/P5/N5dPW6uLgQhZ1Op4vFIo5j3/fFTaW/Ns/zs7Oz6XQq98DJs8Qmk4mosLScxWIxnU4lrPTmyr1JFxcXTjWtG6kMUrvFPvi8qk2bltr5/Tc/77/2s1/8pnYzVkIAAhCAAARukIApc0opPUVrURS2pvRqL2LwxdcMq6vt1Dy14ppmL6xSSjpo9QRkSqkkSSaTie/7+l5581ullKiwbiQS1tRcPZFU7UgGveMpLCC12ywPPq+qrcQcU/vZL33nhZdfvf/o3epmrIEABCAAAQjcLAFzntosy3Sv6mazWS6XVnpq24sYfPFtn1m2b09te7TBPbWTyaQkmrPZ7Pz8vJT00giE0relSaLkox6tKxvL46tk9Ehp95P6iNRu0z34vKo2F6S2yoQ1EIAABCDgIAGZklb+iq0fYSDPFZNHG+w/pra9iMEX3/aZZftKbXu0faS2lPTZ1au0siS1cRwvFov5fC4bn52dTSa/s7WS4+pQpSB6/Ukt/A7TqA+bKb1GnT4qDwEIQAACN0UgyzKZ4VX3KeZ5LkZrpadWZr1tKmKw1LbPLNtXatujHVJqgyCQu8TOz8+R2r4nBVK7JbbPeVWC/u0f/Hh251sveo/vPHzGlF4lOHyEAAQgAAGXCchUskVRSCX376mtHqxZhMWLrxl2gNSW6mlGO5jUyiiC8/Nz8+69Utds6aNUW3Ysjd8tHdEpfERqt
1m2eF4ppfSNYkzpdQpnEccIAQhAYKQEZBLZMAzl5qTVahVcvXzfl2kQ9pfa9iIGX3zbw/aV2vZoB5PaOI4nk0npRjGZEUw3MJHa0jZytxljapHabTsZfF7pdmYuILUmDZYhAAEIQMBNAqvVKoqiNE1lziltRXEcy/L+UttexOCLb3vYvlLbHu3AUquzoJQKgkAmTNB9tyK1s9nMbFEyQ4LexvzqpJaR2m26B59Xtc1FSy1TetXyYSUEIAABCLhAwPd9PdJguVzqYbUyE4KVMbXtRQy++LaH7Su17dEOJrV5nk+n07Ozs8XV6+Li4vz83Pd9mapWemdlnloZbiubSVcuYw+UUkjt9ldl8HlV/VUyx9QypVeVD2sgAAEIQMARAuZsr3Ecy5ADPdurFaltL2Lwxbc9bF+pbY92MKmVaWv1o8IuLi7yq5eskblspac2z/P5fC6duPIoB0da1M1WA6nd8h98XlXzx5ReVSasgQAEIAABBwksl8vSjKdSyfV6HYahFaltL2Lwxbc9bF+pbY82TGqvKd21N4pdU1mjC4vUblM2+LyqphyprTJhDQQgAAEIOEhAz89q1i2OY8/zbN0o1l7E4Itve9i+UtseDak1m4fLy0jtNjuDz6tqdk2pZUqvKh/WQAACEICAOwTSNC111iZJom852v9GMXn6a1MR+1x8W2reV2rbK4nUutNc22uC1G757HNeVRHrG8WY0qsKhzUQgAAEIDAWAlaktuVg7V58dUEDpFbvW7vwh19ZvXP5Ye1XB17J8IMW4EjtFo7d8wqpbWlzfAUBCEAAAmMhgNRKptyRWnmI7ljaz4HridRugV+T1DKl14EbNMVBAAIQgIBFAkitwHRHai0m9/hCIbXbnFqUWqb0Or7zhCOCAAQgcJoEkFrJO1I7ivaP1G7TZFFqzRvFmKd2FKcBlYQABCBwygSiKNKPYBAORVHYeqKYBGwqYs+Lb1PYYWNqm6I5daPYKTfUnceO1G4R7XlemaCRWpMGyxCAAAQg4DiBMAx939ePE0vT1Pf9IAiszFMrx95UxJ4X36aww6S2KRpS63gD1tVDarco9jyvNFCllDn84EXv8ezOt779gx+bG7AMAQhAAAIQcIrAer32fT++evm+v16vpXoWhx/UFrH/xbc27DCpVUrVRkNqnWqrLZVBardw9j+vTMrMfmDSYBkCEIAABNwnkGWZ53m+7+uH5VrsqZXDrxZh5eJbDTtYapVS1WhIrfutV2qI1G4zZeW80llHajUKFiAAAQhAwH0CaZoul8soisIwXC6XeiiCxZ7a2iL2v/jWhh0stbXRkFr3G7DUEKndZmr/88pMOVJr0mAZAhCAAARcJhDHse/7+qFf+k/wFntqm4rY8+LbFHaY1DZFQ2pdbr1m3ZDaLY09zyuTqTmmltkPTDIsQwACEICAgwSCIDCHHCil8jwPw9Ci1DYVsefFtynsMKltiobUOthoa6vkutTmeZ6maelkU0rFcWwez+0HT28/eGquqS5LqPT5qzR9ScfzSoLoh2KnabrZbEqhmP2gCp81EIAABCAwRgIWhx/UHn7Hi2/tvi0rh0ltS0DmqW2B485XTkvtZrPxPE/G95Qs1vM8E2K71BZFsVqtJFT4/OV53mq10kG6nFcyy4lM+ZFlWRRFQRDojzoUUqtRsAABCEAAAqMmgNRK+pDaUTRjp6U2CAI9q8hqtTK9tpfUrq5epf5U+duKjtlFasMwlCFHeZ77vq+dOI5jmaRaUm5K7Z/4Tz73le++c/nhKFoDlYQABCAAAQiYBJBaoYHUmq3C2WWnpdbzPNNETcftJbWe5+kBA2Ym5CZHWdNFas1CgyDQN4dKD64ZmRvFTBosQwACEIDASAkgtZI4pHYUDXhMUiv9ozK+1vRLpVT78IOSHOvE5Hmu4/SV2jAMtdSacSQ4UqshswABCEAAAi4TaL93xZbUNt3W0uXi20Sv5S6XAWNqW6Jxo1hTClxb77TUBkGgZxgRcJvNRuaF1jIq69ulNgxDPczATIAMS5A1Xc6r5XK52WzktlB58orsmySJ3CWqg2upnd9/8/P+az/7xW/0VyxAAAIQgAAEHCGw896V/aW2/baWLhffWlbtd7n0ldr2aEhtbQocXOm01KZp6nmeHlYr+JIk8a5eJs12qc2yLAiC5XIpDwCM43i1Wi2XyzAM9fCGLudVkiTismK3QRDISF/zkdlKKXNMLVN6mWliGQIQgAAEnCJgjuurvXdlf6ltv62ly8W3llj7XS59pbY9GlJbmwIHVzottfK0ulJnraws9by2S61SqiiKNE3X67X22tI0YR3Pq81mE8ex7FsUhQQshUJqHWzoVAkCEIAABKoESsPzTMeVv4juL7Xtt7V0vPjW1lyvrN7l0ldqzT//VqMhtRq14wtOS20cx7ontZ1ju9R2iTPgvNIjhEp1Q2pLQPgIAQhAAAJuEihJbfXeFStSW3spl9tRBlx8haSpodW7XPaR2mo0pNbN1lutldNS63me7/syjLVadXNNu9R2idPlvJKBQTKRggxFCMNQBjaYsyuYTxR70Xs8u/Otb//gx2ZtWYYABCAAAQi4QGDnvSv7S237bS1dLr61oNrvcukrte3RkNraFDi40nWplVm3VqtV7b/zNNCdUrszTpfzKrp6FUWRZZnneXrUQWmeWpmN4c7DZ/Le+agzfRQsQAACEIAABA5JYOe9K/tLbfttLV0uvrVA2u9y6Su17dGQ2toUOLjSdamV4bDr9VoedqA9soRyp9TujNPlvNJ/pilNd5Blme/7ZpX07Ad3Hj5Dak0yLEMAAhCAgFMEsixruXdlf6ltv62ly8W3CVfLXS59pVYp1RINqW1KgWvrRyC1gqwoiiRJlsul7/vypFsTZRepbY/T5bzSo93TNDXn8JKO21J96Kk1gbAMAQhAAALOEpB7qdPnL7OeVqRWe+3zElJdRJeLr964tNAys+wAqW2JhtSWyDv7cTRSqwkWRSH/nNJrujx8wdxYlktxupxXURTpgRBBEOhxtOZ8t0opc0wtU3pVybMGAhCAAAQcIaAnkQ2CwPO8MAyl80hPprm/1LYX0eXiW8uqfWbZvlLbHg2prU2BgyvHJ7W1ELv31NburpTqcl4VRRFFkQyEWK1W0mccBIE53y3z1DYRZj0EIAABCLhGwOyUSZIkiiKllAwwlTEJ+0ttexFdLr610Npnlu0rte3RkNraFDi40mmpLU1G24KvXWq7xOl+XqVpmiRJ03y3SG1LmvgKAhCAAAScIuD7vv6ro1IqCAK5d2Wz2QRBoJTaX2rbi+h+8S1xM6f0qs4s21dq26MhtSX4zn50WmpL1PI832w2afq7sTh6g3ap1ZvJQm2cAedVbZyS1P6J/+RzX/nuO5cflurARwhAAAIQgMCNEzBlTimlp2gtikK+2l9q24sYcPEVaGZYXW15jr3neftIbTUaUnvjDbVjBZyWWvNmrJZ5YXeOqe0Sp8t51SWOcGf2g47tj80gAAEIQOAGCZjz1MpMPtJxu9lslsullZ7a9iK6XHxr+bTPLNtXatuj
IbW1KXBwpdNSq/8dJs8d0fN5rVYrGfejgbb31HaJ0+W86hJHqoTU6tSwAAEIQAACzhJIksTzPBlQ5/u+jNaT54rJk4/276ltL6LLxbeWXvvMsn2ltj0aUlubAgdXjkNqS/PCypMUTJodpbYlTpfzSkttSxypFVJrZodlCEAAAhBwlkCWZev1Oo5jPbpPBtdJhfeXWqVUSxFdLr5N6Fpmlu0rtcxT2wR5XOvHIbWleWGl49YE3VFqW+J0Oa+01LbEYUovMy8sQwACEIDAuAjIVLL6KZ5WpLZEwCyiy8W3tHvTRzPsAKkthTWj0VNbguPsR9elVlqVdM3qOzRLHaU7x9R6nrczTpfzqkuc0o1izFPrbNOnYhCAAAQgIJPIhmEoAw9Wq1Vw9fJ9X4b87S+17UV0ufjWpqk9bF+pbY+G1NamwMGVTkttFEXy8LAwDPVYn81m4/u+jPXRQNt7arvE6XJedYmD1OqksAABCEAAAo4TkHtU0jSNnr+kwnEcy70r+0ttexFdLr61DNvD9pXa9mhIbW0KHFzptNTW8krTVN8xpjdol1q9mblQijP4vCrFKUnti97j2Z1vffsHPzaLZhkCEIAABCDgAgHf9/VIg+VyqYfVykwIVmY/aC9i8MW3PWxfqW2PhtS60Fa71GF8Ult7VAOkthRn8HlViiMfuVGsFgsrIQABCEDAKQL6XhGlVBzHus9I37uyf09texGDL77tYftKbXs0pNapRttSGaR2C2fweVULF6mtxcJKCEAAAhBwisByuZTH4ZZqtV6vZWr2/aW2vYjBF9/2sH2ltj0aUltqHs5+RGq3qRl8XtWmFqmtxcJKCEAAAhBwioCen9WsVRzHnufZulGsvYjBF9/2sH2ltj0aUms2D5eXkdptdgafV9XsfvsHP57d+daL3uM7D58x+0GVD2sgAAEIQMAdAmmaljprkyTR0w3t31OrlGopYp+Lb0vY77/385fuPpH3l195swvtlmhIbReALmwzAqmNokgPYxdkRVH0eqKY7NUep/t51R5H3yj2wsuv/mf/3b994eVXX3j51fuP3nUh2dQBAhCAAAQg0IvAO5cf/s//sBG1/Vf/y9u3Hzy9dW8t79sPnvYKVbvx7QdPtX3+m7/f1G7jwsrlo2e//ui3LtSEOrQQGIHUynxe+pbMNE193w+CwDyqLjeKtcfpLrXtcZBaMy8sQwACEIDAqAn8+qPfitHKf3/0k/zRG6m837r8D6M+NCp/fARGILVKqfV6LfPUxnHs+/56vS5loovUtsfpLrXtcaRijKktJYiPEIAABCAAAQhA4FoJjENq5cnRnufpZ5yUoHSU2pY4vaS2JY5UDKktJYiPEIAABCAAAQhA4FoJjENq5TG58kAvc3Zojaaj1LbE6SW1LXGkSkitTg0LEIAABCAAAQhA4AAERiC1MuRA35uphyKYdLpIbXuc7lLbHkdqhdSa2WEZAhCAAAQgAAEIXDeBEUhtEAT6GSeCI89zmRRa0+kite1xukttexypElKrU8MCBCAAAQhAAAIQOACBEUhtFwpdpLY9TnepbY8j3yK1XSixDQQgAAEIQAACELBFAKndkkRqbTUp4kAAAhCAAAQgAIHDE0Bqt8yR2sM3PkqEAAQgAAEIQAACtgggtVuSSK2tJkUcCEAAAhCAAAQgcHgCJyS1eZ6nxqv06N2OUitB9EOx0zTdbDalUEopxtQevilTIgQgAAEIQAACp0zAdakViSzNfqCUiuPYTFv7jWJFUaxWK8/zQuPled5qtdJBukitPKFXHpObZVkURUEQ6I86FFJromAZAhCAAAQgAAEIHICA01K72WzERJfLZcliPc8z6bRL7erqVepPlXnBdNguUhuGoUyXm+e57/vaieM4jqKoVJ87D5/J+/aDp+ZXLEMAAhCAAAQgAAEIWCfgtNQGQbBer+WYV6uVFlClVC+p9TxPDxgwCcqDwWRNF6k1Cw2CIE1T2Vd6cM3IDD8wabAMAQhAAAIQgAAErpuA01LreZ7ZvWo6rumX8uf+lg7RUhzNNM9zHaev1IZhqKXWjCPBkVoNmQUIQAACEIAABCBwAAJjklr5o7+Mr9UyKozahx+EYWj28mqsMixBPnaR2uVyudlslFJSEx0zSZLqE84YfqA5swABCEAAAkdDYBEvLqKLozkcDuSYCDgttUEQyBhWTXyz2fi+n2VZL6nNsiwIAhmYG1+9VqvVcrkMw1D3BHeR2iRJfN+P41jsNggCGRTh+77utZWq0lOrU8YCBCAAAQgcDYG8yCfeZOJN4vRTt2sfzQFyIKMm4LTUpmnqeZ4eViugkyTxrl4m9/aeWqVUURRpmq7Xa5HaOI5LMyp0kVql1Gaz0fsWRSEBS6GY/cBMDcsQgAAEIHA0BBbxQqR2Fs6O5qA4kKMh4LTUKqWyLCt11spK/ad/ycROqd2ZsI5Sa8bRE9+aK3V9GH5QxcIaCEAAAhAYNYGpPxWpnXiTNN/eLT3qI6Lyx0TAaamN41gPD2iH3i61XeJ0kVqZ71YmUpChCGEYysCG0uwKDD9ozxffQgACEIDA6AgESaCNduJN5qv56A6BCh83Aael1vM83/fl3qz2NLRLbZc4XaQ2unoVRSGDevWoA+apbc8O30IAAhCAwBEQOFuemVI78SZ5kR/BcXEIR0PAdamVqWRXq1V7l+1Oqd0Zp4vU6qnBStMdZFnm+77ZJuipNWmwDAEIQAACYycQp3HJaCfeZBEvxn5c1P+YCLgutXKP13q9lid46c7RUg52Su3OOB2lVoYZpGlqzuFVOxsDY2pLOeIjBCAAAQiMl8AsnFWldupPx3tE1Pz4CIxAagV6URRJkiyXS9/3w6uXmYwuUtsep4vURlGk+4yDINDjaM35bqUUemrN7LAMAQhAAAKjJpBkSdVoZU2QBKM+NCp/TARGI7UaelEUMq+WXtPliWLmxrJcitNFaouiiKJI+oxXq5XodRAE5ny3EhyprQJnDQQgAAEIjJTAfDVvktqz5dlID4pqHx+B8UltbQ6699TW7q6U6iK1sm+apkmSNM13K9sgtU2cWQ8BCEAAAuMikOZpk9HKeh7EMK6EHnFtnZba0mS0LWlol9oucbpLbUs19FdIrUbBAgQgAAEIjJqAfuBCk9ryIIZR5/eYKu+01JZA53m+2WxKD6SVbdqltkucAVLbXh9uFCth5yMEIAABCIyRgPnAhSavHeNxUefjI+C01JozDOx82MHtB0+b0tMlThep7RJH6kBPbVMuWA8BCEAAAuMi4K/9Rbyo7a/947/9Y/lqXEdEbY+VgNNS63mecM/z3PM8PZ/XarWKoshMSXtPbZc4XaS2SxypFVJrZodlCEAAAhA4AgLVblpG0x5BWo/pEMYhtaWHHciTFMw0dJTalji9pLYljtQKqTWzwzIEIAABCBwBAaT2CJJ43IcwDqktPexAOm7NxHSU2pY4vaS2JY7UCqk1s8MyBCAAAQgcAQGk9giSeNyH4LrUps9fy+VSP+yg1FHaZZ7a52HSpjgdpXZnHGkuSO1xnzYcHQQgAIETJIDUnmD
Sx3XITkttFEXy8LAwDH3fl5m5NpuN7/ubzcYE3d5T2yVOF6ntEkdqhdSa2WEZAhCAAASOgABSewRJPO5DcFpqa9GnaarvGNMbtEut3sxcKMXpIrXm7nq5FEfWI7WaDwsQgAAEIHAcBJDa48jjER/F+KS2NhkDpLYUZ7DUluLIR6S2FgsrIQABCEBgvASQ2vHm7kRqjtRuE43UnkiL5zAhAAEIQGAYAaR2GDf2OhgBpHaL+s//+slLd7fv5aNneybg0RupdB7ffvD0G9/90Z7R2B0CEIAABCBw4wSQ2htPARVoJ3AkUvujn+Q/+knefqh8CwEIQAACEIDAYAJI7WB07HgYAkcitYeBRSkQgAAEIACBkyWA1J5s6sdy4EjtWDJFPSEAAQhAAAI3SQCpvUn6lN2BAFLbARKbQAACEIAABE6eAFJ78k3AdQBIresZon4QgAAEIAABFwggtS5kgTq0EEBqW+DwFQQgAAEIQAACWwJILU3BcQJIreUEJUEQzmbmOwkCy2UQDgIQgAAEIHBwAkjtwZFTYD8CSG0/Xju3jhcLb/KpEz9eLHbuxQYQgAAEIAABxwl86tp2damL09jxOlO9kyKA1FpON1JrGSjhIAABCEDADQJIrRt5oBaNBJDaRjTDvkBqh3FjWIgbpgAAIABJREFULwhAAAIQcJwAUut4gqgeUmu5DSC1loESDgIQgAAE3CCA1LqRB2rRSACpbUQz7Aukdhg39oIABCAAAccJILWOJ4jqIbWW2wBSaxko4SAAAQhAwA0CSK0beaAWjQSQ2kY0vb7YRJFM43Xv93+/NPvBvd///XA2iy4usiTpFZONIQABCEAAAu4QQGrdyQU1qSWA1NZi6b0yurgouWz149r3e8dlBwhAAAIQgIAbBJBaN/JALRoJILWNaHp9kQRB1WJLa/I07RWTjSEAAQhAAALuEEBq3ckFNaklgNTWYhmycnl2VrJY8+NqPh8SlH0gAAEIQAACbhBAat3IA7VoJIDUNqLp+8Xa902LLS3TTduXJ9tDAAIQgIBTBJBap9JBZaoEkNoqk4Frijz3p9OSy8rHcDYbGJTdIAABCEAAAm4QQGrdyAO1aCSA1DaiGfBFdT4vkdo05unYA3CyCwQgAAEIOEQAqXUoGVSljgBSW0dl6Lo8Tas9tcH5eXu8LMuCIAjD0Nwsy7Ioinzf9zwvDMP0+U1mm80mDENZH0VRlmXmXubyNYU1i2AZAhCAAAROhwBSezq5HumRIrWWE7eaz0temwRBSxmbzcb3/TiOTanN81xWpmmaZVkcx77v53mepqnv+0mSZFevJElkfTX+NYWtFsQaCEAAAhA4EQJI7YkkeryHidRazl0ax6bULs/O2gtI07QoijRNTalNkmS9Xps7hmG42Wziq1dpve7ENddfU1izCJYhAAEIQOCkCCC1J5XuMR4sUms/a+Fspr22vZtWl12SWr1eLwRBkGVZmqbL5bIoCllfFEVTT61scE1hda1YgAAEIACB0yGA1J5Orkd6pEit/cTpBzH402mR510KaLfP1dVL4iRJslwuw6uX7/stY2qVUtcUtssRsQ0EIAABCBwZAaT2yBJ6fIeD1F5LTuVBDPFi0TF6k30WRRFF0Wq1kjhFUQRBsFqtkquX+VVtQdcUtrYsVkIAAhCAwHETQGqPO79HcHRI7bUkcRNF3btpm7pU5baw2JgOLEkSLbhS7yAIasfUyre1Urt/2GtBRlAIQAACEHCbAFLrdn6onUJqnWgEVfuUmQ1KowtqbxTbbDZNx3BNYZuKYz0EIAABCBwxAaT2iJN7HIeG1DqRx5J9ypRe1S5Y6WQViy2KYr1ee56XNw/bvaawTiCjEhCAAAQgcFgCSO1heVNabwJIbW9kdnfwKq80TeM4rqz2ZBzCZrMJgkC+DYKgqZu2uruVsHaPnWgQgAAEIDAiAkjtiJJ1mlVFak8z7xw1BCAAAQhAoB8BpLYfL7Y+OAGk9uDIKRACEIAABCAwQgJI7QiTdlpVRmot5/v/+X//v0dvpPL+u7d/Yjk64SAAAQhAAAI3RACpvSHwFNuVAFLblVTH7R69kV78VXzr3vrWvfWffu21jnvVbvaHX1m9dPdJ6f2nX3vt8qe/qt2+y8rlo2df/ebb1XeXfdkGAhCAAAROmUCcxqV3XnR6wNApQ+PYD0kAqbVM+9Eb6a176zsPn915+Oylu0/2if6i91jimP+9dW/9zuWHg8O+dPeJGU2WX/QeDw7IjhCAAAQgAAEIQMAFAkit5SwgtZaBEg4CEIAABCAAAQh0IIDUdoDUZxOktg8ttoUABCAAAQhAAAJ2CCC1djjqKEitRsECBCAAAQhAAAIQOBgBpNYyaqTWMlDCQQACEBg/gSzLoijyfd/zvDAM9QMjoygyn5UThqEca1EUcRzL9nEcF0XRhUGWZUEQ6CCyS1PRm80mDEMpIoqi0lPZuxTHNhBwjQBSazkjSK1loISDAAQgMHIC8uTzOI7TNM2yTGxVnnBuCq55lFEUBUGQpmme52EYyhMlzQ2qy5vNxvf9OI5NqW0qWh66niRJdvVKksT3/ZaHrleLYw0EHCSA1FpOClJrGSjhIAABCIycQJIk6/XaPIgwDOUh57VSm2WZ7/u6dza/epm71y6naVoURZqmptQ2FR1fvcw4tTUxN2AZAu4TQGot5wiptQyUcBCAAASOjkAQBPLnflHJ9OqlO0qTJFmtVkqpLMuks7Y7gJLUVneUotM0XS6X2puLoqCntsqKNaMjgNRaThlSaxko4SAAAQgcF4HV1UuOyff95XIZhmEQBL7vJ0milJJuVBkdG4ah53myvguGdqk1i06SRIqWkbWMqe2Cl20cJ4DUWk4QUmsZKOEgAAEIHAuBoiiiKJJeWDmmOI61TaZp6nlenucy6La6vguGJqktFV0URRAEq9UquXqVatWlILaBgIMEkFrLSUFqLQMlHAQgAIGjICD3ZrXf8iVjbeM4jqLIPOjuA15rpbZatB7hoEuR+9L0RxYgMEYCSK3lrCG1loESDgIQgMD4Ccj0ArrzVQ4oTVO5XUwfn5ilzMylVyqluhtnVWpri669UaxUGbMCLENgFASQWstpQmotAyUcBCAAgZETkHm19Ny0+mhklgNRST3qQL4NgkCmpy2KYr1em5Mh6N1rF0pS21S09N1K0VKEjHyojclKCIyFAFJrOVNIrWWghIMABCAwcgJxHJtPWJBlGYew2WyCIJA15hMQZAisrNdTJbRjqBaRpmnHooMgoJu2HS/fjoIAUms5TUitZaCEgwAEIAABCEAAAh0IILUdIPXZZBRS+8VXklv31vL+4ivJi97jPofIthCAAAQgAAEIQMA5Akit5ZQ4LrX/4l//73cePrt1b/3Cy6/K+9a99UvLTz3qxjIRwkEAAhCAwBWBdy4/vP/o3er7ncsPv/HdH91+8LT6/tFPcuBBAAIdCSC1HUF13cxlqb386a/+7O73v/hKMr//5me/9B15z++/eefhs66Hx3YQgAAEIDCUwP1H79buev/Ru7cfPL3z8Fnpfeve+tEbae
0uXVb+6ddee9F7XH1/9Ztvd9m9tM3df/tJJavvty7/Q2lLPkLgpgggtZbJuyy1X/jr773w8qu37q3N380vfOOdF73HTT+1lukQDgIQgMAJE2j6pb0mqX3p7hPz114vD5Pa69DuE24LHPq1EEBqLWMdndTqoQiWQRAOAhCAAAQ+TQCp/TQPPkHAMgGk1jJQpNYyUMJBAAIQOBYCSO2xZJLjcJQAUms5MS5L7bd/8OM//B/+/gvfeEf/Ecq8acwyCMJBAAIQgMCnCSC1n+bBJwhYJoDUWgbqstQqpb74N/9oGu2dh89keq93Lj+0DIJwEIAABCDwaQJI7ad58AkClgkgtZaBjk5qxXEtUyAcBCAAAQhUCCC1FSSsgIBNAkitTZpKKZeltnb4AVJruQUQDgIQgEADAaS2AQyrIWCHAFJrh6OO4rLUMqWXThMLEIAABA5PAKk9PHNKPCkCSK3ldI9OapnSy3ILIBwEIACBBgJIbQMYVkPADgGk1g5HHQWp1ShYgAAEIAABkwBSa9JgGQLWCSC1lpG6LLW1Y2rpqbXcAggHAQhAoIEAUtsAhtUQsEMAqbXDUUdxWWqZ0kuniQUIQAAChyeA1B6eOSWeFAGk1nK6Rye1zH5guQUQDgIQgEADAaS2AQyrIWCHAFJrh6OO4rLUXv70V3929/tffCUpPX/hzsNnuv4sQAACEIDANRFAaq8JLGEhIASQWsstwWWpZUovy8kmHAQgAIE+BJDaPrTYFgK9CSC1vZG17zA6qeVGsfaE8i0EIAABWwSQWlskiQOBWgJIbS2W4SuR2uHs2BMCEIDAURNAao86vRzczRNAai3nwGWpZUovy8kmHAQgAIE+BJDaPrTYFgK9CSC1vZG17+Cy1DKlV3vu+BYCEIDAtRJAaq8VL8EhgNRabgOjk1qm9LLcAggHAQhAoIEAUtsAhtUQsEMAqbXDUUdxWWprhx8gtTp3LEAAAhC4VgJI7bXiJTgEkFrLbcBlqWVKL8vJJhwEIACBPgSQ2j602BYCvQkgtb2Rte8wOqllSq/2hPItBCAAAVsEkFpbJIkDgVoCSG0tluErkdrh7NgTAhCAwFETQGqPOr0c3M0TQGot58Blqa0dU0tPreUWQDgIQAACDQSQ2gYwrIaAHQJIrR2OOorLUsuUXjpNLEAAAhA4PAGk9vDMKfGkCCC1ltM9Oqll9gPLLYBwEIAABBoIILUNYFgNATsEkFo7HHUUl6X28qe/+rO73//iK4mIrPlfXX8WIAABCEDgmgggtdcElrAQEAJIreWW4LLUMqWX5WQTDgIQgEAfAkhtH1psC4HeBJDa3sjadxid1HKjWHtC+RYCEICALQJIrS2SxIFALQGkthbL8JVI7XB27AkBCEDgqAkgtUedXg7u5gkgtZZz4LLU1o6ppafWcgsgHAQgAIEGAkhtAxhWQ8AOAaTWDkcdxWWpZUovnSYWIAABCByeAFJ7eOaUeFIEkFrL6R6d1DKll+UWQDgIQAACDQSQ2gYwrIaAHQJIrR2OOorLUlv7RDGkVueOBQhAAALXSgCpvVa8BIcAUmu5DXSR2rOz5Xy+SpKsvewXvcfmVLKyfOve+p3LD0s7LhbxbBZG0aa0vvSRKb1KQPgIAQgcPYEsy15//fUPPvjAhSO1JbXn58FsFsZx2n5QL919Ur2I3Hn47KvffLu0o++vz86WQZCU1psfbz94Wo1269760Rs7qmEGYRkC10oAqbWMt4vUTiaevNtNtLvUzmahBJRfpTwvao+qVmq5UayWFSshAIHjIHB5ebm6ej1+/Pjy8vLjjz++weOyJbX6InJ+HrSYaHepNS8ivr+uvYggtTfYcii6IwGktiOorpv1klrTRKsFDJBaCTid+otFnKZ5KSZSWwLCRwhA4OgJaKkVtV2tVu+9994//dM/3ciBW5dafRFZLOKqiQ6Q2paLCFJ7I22GQnsRQGp74dq9ca3UyvCA2SyUt/xqlP4rJmr+KrVIbZJkOtpsFv7e7329FE0+zucr8+9TtWNq3eyp/eCDD97iBQEIQGBvAt/73ve0zpoLb731VpbtGAO2+xe/5xaDpTYIEvM3v/YHfzr15/OV2Z3RLrVmwC4XEaS2Z7bZ/AYIILWWoddKrf7LTu0vUWml/lVqkdo4Tkt7tXyczUL996kv/s0/lgZFffGVpHacrmUuPcNVO1fMqxHLEIAABKwQkOG2BxuTMFhqF4u45Ue+9NXFRSTdGe1SW9qr5aNcRJDantcxNr8BAkitZej7S638ssxm4X/+F/+uJKB3Hj4TAe0ltRJQhttWpVaKsExh73BIrZULNkEgAIEuBB4/fvz+++/v/bu1O8BhpFZfRP6LP39UvYjoG8VaLLb2q//oP/76f/2Ff18KyI1iu7POFgckgNRahm1LaicT7w/mNb9Hg6V2MvH+6J9/68/ufv+LrySlX6U7D59ZprB3OKS2y5WYbSAAAVsEnj59uvfv1u4Ah5TaycT7T/+rb1Z/7QdL7WTi/Se/f68UEKndnXW2OCABpNYybCtSKxN+2Rp+MJl4cods7Y1iX/jGOy96j5t+ai3T6RwOqbV1qSYOBCDQTuD1119///33DzMCoemX9v6jd9v/uN9r+MFk4skIBFvDDyYTbzYL/9mf/K8lo5U/HjKlV+crGxteOwGk1jLifaS2NGuBFak17xWrlVo3bxRDatsvw3wLAQjsT+Ctt9468Py11y21pXvFrEitvoi0a7flSynhIDCIAFI7CFvzTrVSmyRZHKf6XR2uVDu/bIvU5nmho8Vx+pnPBKWYJT+W+o5Iaj/66KNf8oIABCCwN4Ef/vCHVf29qVm9Bkttmubmb37pB38y8c7OltVZvdql1gxYexHRdy3LFQSpbb7y840rBJBay5moldpSGebvUcvzF1qkthTQnF3h7GzZNHX2iKb0Kh0gHyEAAQgMI2D+2Ueev/DRRx8NC7X/XoOltlS0eRFpef5Cu9SaMUsXkaofK6WQWpMYy24SQGot56W71O58Um5fqW3xY32Q1dkP3JzSS1eYBQhAAAL7EBCpdeRJuXaldueTcrtL7cVFpO++aKKN1DaRYb07BJBay7noIrVxnJrzYzfVoLvUJkmWJJ1mEa9KrQz8b6oD6yEAAQiMmsDHH398U88Pq3KzJbVJknW5iHSX2jTNd15EkNpqQlnjGgGk1nJGukhtxyK7S23HgJc//dVYpvTqeERsBgEIQGBEBGxJbcdD7i61XQIitV0osc3NEkBqLfN3WWprbxRzc0ovy1khHAQgAAEHCCC1DiSBKhwzAaTWcnZHJ7VuTullOSuEgwAEIOAAAaTWgSRQhWMmgNRazi5Saxko4SAAAQgcCwGk9lgyyXE4SgCptZwYl6WWKb0sJ5twEIAABPoQQGr70GJbCPQmgNT2Rta+g8tSq5Sqzn7AlF7tCeVbCEAAArYIILW2SBIHArUEkNpaLMNXjk5qmdJreLLZEwIQgEAfAkhtH1psC4HeBJDa3sjad3BZapnSqz13fAsBCEDgWgkgtdeKl+AQQGottwGXpZYpvSwnm3AQg
AAEnhPIsmy1Wq3X66IolFLZ1ev5l9v/I7UlIHyEgF0CSK1dnmp0UsuUXpZbAOEgAIGTJBBF0Xq9juM4CIKiKNKrV4mERant4tC9Hr6wMyAPXyhlk48OEkBqLScFqbUMlHAQgAAExkAgjmOpZpIkURRdt9R2ceheUrszIFI7hmZ46nVEai23AJellim9LCebcBCAAASeE4iiKEkS+bRarZbLZZqmz7/c/t9iT20Xh+4ltTsDIrWlbPLRQQJIreWkuCy1tVN68Zhcyy2AcBCAwEkSKIpis9noQ0+SJM9z/VEWLEptF4fuJbU7AyK1pWzy0UECSK3lpIxOapnSy3ILIBwEIHDCBPI8l4EH8l+5aUzzsCi1XRy6l9TuDIjU6jyy4CwBpNZyalyWWqb0spxswkEAAhB4TqAoitVq5XleaLw8z1utVs83URalViklAq37g9M03Ww2pkb3klpdSVnQdq7XI7UaBQvOEkBqLafGZamtndKLJ4pZbgGEgwAETpLA6uplOqV4ZxiGeriqRalN09T3/TAMfd/PsiyKoiAI9EfJQC+pFSkXRU6SRIIHQbBcLmUlUnuS7XpkB43UWk7Y6KSWKb0stwDCQQACJ0nA8zzdaWoCSNN0uVzKGotSG4ah3JeW57nv+7o/OI7jKIqkuF5SG129iqLIsszzvCzLJIgOiNSaaWXZTQJIreW8ILWWgRIOAhCAwBgIeJ5X6qaVWud57nmeLFuUWh1TKRUEgZ5pQXpwpbheUqvrnyRJGIYaeZZlvu8rpZBazYQFZwkgtZZT47LUMqWX5WQTDgIQgMBzAuYwg+frPvm/DEuQNdcktWEYaqk1Hbqv1EpPc5qmJakVgUZqzbSy7CYBpNZyXlyW2topvRhTa7kFEA4CEDhJAlmWyQjU+PlLZqsNw1D34FqU2uVyKTOIyfADPWzX7GftJbVRFK1WK6lqEAR6KIWWcqT2JNv1yA4aqbWcsNFJLVN6WW4BhIMABE6VgDwdVx6WK2arR6YKEotSK/dyxXEsdhsEwWq1iuPY933da9tLaouiiKJIhueuVit9o5iWcqT2VNv1mI4bqbWcLZellim9LCebcBCAAAQ+TUDPhHXd89QqpTabTRzH4s1FUYhMmxrdS2rlONI0TZLkeV/zNrh8hdR+OtV8cpEAUms5Ky5Lbe2UXjxRzHILIBwEIHCSBA4/T61SSvqG9eMeSuAHSG1LQKS2hJePDhJAanskJcuy1Wq1Xq9l1FF29Srtv1NquwSRmC96j2VsgPnfW/fW71x+WCpUPu6MXCu1TOlVC5OVEIAABHoROPA8tdqhgyCQJz4sl0vf99frta52L6ndGRCp1WBZcJYAUtsjNVEUyd93giDQ/5wt7b9TarsEkZh9pXZnZKS2lCw+QgACELBF4MDz1Or7t5RSSZLI3LQy0Fbmr1VK9ZLanQGRWltNhTjXRwCp7cHWvL00iiL5i09p/51S2yWIxOwrtTsjM6VXKVl8hAAEIGCLgJ7ntRTQnGPL4o1ivu/rCQpkqloZTbvZbIIgkDr0ktqdAZHaUmb56CABpLZHUqIo0v8Clrla9E2mOspOqe0SRKL1ldoukb/4N/9oDma48/AZY2p17liAAAQgMJjADc5Tq5TSU9UWRaGfy9BLavVeQqAaEKkd3DbY8WAEkNoeqIuikHkBZZ8kScx/KMvKnVLbJYiE6iu1XSJXpZYpvXq0ADaFAAQg0EDgwPPUBkGgO1nkoV9yPdpsNvqpvL2kdmdApLYh86x2iABSu1cysiwzn7yilNoptUopmfPFnHhFKqHHD8jHvlIre7VMKMOUXnslm50hAAEItBI48Dy1nufJ3Fu+78vlQx7EoDtfekltkiTtAZHa1uTzpRMEkNoeadDTpuiF9XrteZ45uHan1G42G32nasliS3/96Su1+t7V0Hh5nrdareQga28UY/hBjxbAphCAAASaCezssLA4plYplWWZ3LusB8Llea6Ntu+NYjsDIrXNmecbVwggtT0y4Xnecrk0jDHUc6no/tqdUhsEgZ5yRR4Ao2uwp9TunFCmVmqZ0kvzZwECEIDAYAJdOizsSm1TVXV3Sa+e2qZoSikJiNS2IOIrRwggtT0SIUOm9NOxlVJpmpZMdKfUlu6QNR23FKpvT+3OCWWQ2h7JZlMIQAACfQiYP+ZNHRaHkVp9KbEltRIQqe3THNj2Zgggtb25y8O15U88+0utDIGS8bX6l0jqNEBq5akQpUPSE8owpVeJDB8hAAEI2CLQpcMCqbVFmzgQqCWA1NZi2bFSumyjKJK/N5lb7+ypNe8wlR03m43v+1mW7Sm1XSaUqc5+8MVXkpanlJmHxjIEIAABCDQRKEltbYeFRak1B8KVlvWlpFdPbSmI+ZGe2qaks941AkjtwIwURSFdtvrnQwLtlFrp3NXDamUvue20FKpvT22XCWX++F/FL3qPS+87D58NpMBuEIAABCBwRaBLh4VFqQ2CII5jfdeyuaAvJS/dfXLr3rr6/uo3364mbWfAL7/y5kt3n5Tef/q11/7u7Z9Uo7EGAjdCAKndC3uapnpIvgTaKbVyh6meX1AXn2VZKVRfqVVK7ZxQ5p/defTCy6+W3v/iG+/oarAAAQhAAAIDCHTpsPj2D358/9G71fe3f/DjWmV86e6T77/389rKyNy0tUPOtNR+/72fP3ojrb7f//k/VWN2CVjdizUQcIoAUtsvHTJji37mQpqmm83G/FnpIrU7g0idBkjtzsif+5d/V+qmlY/9KLA1BCAAAQhUCGRZ1qXDorLfwBWbzUZP5mWG0FJrruyybD1gl0LZBgIWCSC1PWCmaer7fhiGMgQ2iqIgCPRHCbRTarsEkVB9pbZL5NohVi96j3tQYFMIQAACEIAABCDgHgGktkdOwjCUf4XLHQD6oQZxHEdRJIF2Sm2XIBKqr9R2iYzU9sg3m0IAAhCAAAQgMB4CSG2PXJl/0wmCQP/dR7pIJdBOqe0SREL1ldoukZHaHvlmUwhAAAIQgAAExkMAqe2RK9MawzDUUqsnglVK9ZLapiBSp32ktikyUtsj32wKAQhAAAIQgMB4CCC1PXK1XC7lmQsy/EBPVpAkSffH5HYJInXqK7VdIiO1PfLNphCAAAQgAAEIjIcAUtsjV0mS+L4fx7HoYxAE8ixE3/d1r+3OntouQaROfaW2S2Sktke+2RQCEIDATRDIi3wRL9I8vYnCKRMCIyaA1PZL3mazieNYnmpbFMV6vdYfJdBOqVVK7QwiofpKbZfISG2/fLM1BCAAgYMT8Nf+xJvMV/ODl0yBEBg3AaS2X/5kIliRWnNPPRShi9TuDCKRB0jtzshIrZk1liEAAQg4SOBseTbxJlN/mhe5g9WjShBwlgBS2yM1m83G87wwDJfLpbZY2V/fQ7ZTarsEkZh9pbZLZKS2R77ZFAIQgMDBCUSbaOJN5L2IFwcvnwIhMGICSG2P5AVBsF6vZQcZTat37i61XYJI2L5S2yUyUqtTxgIEIAABBwnMwpmW2rPlmYM1pEoQcJYAUtsjNZ7nmU/ENSWyu9R2CSJ16iu1
XSIjtT3yzaYQgAAEDksgyRJttLIQJMFhq0BpEBgxAaS2R/JK1igTe8n42sFSWxtE6rSn1NZGRmp75JtNIQABCByWwHw1L0ntLJwdtgqUBoERE0BqeyQvCAJ5TK7eZ7PZ+L6fZVl3qe0SROL3ldoukZFanTsWIAABCDhFIC/yktHKxziNnaonlYGAswSQ2h6pSdPU8zw9rFb2TJLEu3rJx503inUJIqH6Sm2XyEhtj3yzKQQgAIEDEljEi1qpZW6vAyaBosZNAKntl78sy0qdtUqpLMv0ZAg7pVa2bw8ideortV0iI7X98s3WEIAABA5FYOpPa6V24k14EMOhkkA54yaA1FrOXxep7Vjkf/Mv//2te+vS++Kv4ncuP+wYobrZl1958/aDp6X3n//1k+qWrIEABCAAgYMRCJKgyWgn3oS5vQ6WCAoaNQGk1nL6fvSTXCvjl195c5/oj95Ia9+//ui3+4RlXwhAAAIQcI3AeXDeIrU8iMG1fFEfNwkgtW7mhVpBAAIQgMCpEIjTuMVo5Svm9jqV1sBx7kEAqd0DHrtCAAIQgAAE9iZgPkWsyW79tb93OQSAwJETQGqPPMEcHgQgAAEIjIVAdQIE5qkdS+6opwsEkFoXsnDQOmRJksax+S7y/KA1oDAIQAACEKgjgNTWUWEdBLoSQGq7kjqa7cLZzJt86g9caczM3keTXg4EAhAYMQGkdsTJo+oOEEBqHUjCYauA1B6WN6VBAAIQ6EoAqe1Kiu0gUEcAqa2jctTrkNqjTi8HBwEIjJgAUjvi5FF1BwggtQ4k4bBVQGoPy5vSIAABCHQlgNR2JcV2EKgjgNTWUTnqdUjtUaeXg4MABEZMAKkdcfKougMEkFoHknDYKiC1h+VNaRCAAAS6EkBqu5JiOwjUEUBq66gc9Tqk9qjTy8FBAAI8a9aDAAAgAElEQVQjJoDUjjh5VN0BAkitA0k4SBWii4vSTF7Vj2ufJ9YcJBkUAgEIQKCOAFJbR4V1EOhKAKntSmrs28WLRdViS2uYsHbsWab+EIDAqAkgtaNOH5W/cQJI7Y2n4EAVKPK8pLClj+FsdqCqUAwEIAABCNQRQGrrqLAOAl0JILVdSR3Bdqv5vCSy5sdNFB3BMXIIEIAABMZLAKkdb+6ouQsEkFoXsnCgOuRpalqsubw8OztQJSgGAhCAAAQaCCC1DWBYDYFOBJDaTpiOZqPq1AeitkkQHM0xciAQgAAERkoAqR1p4qi2IwSQWkcScaBqbKLI7KCVZX86HVx8lmVRFPm+73leGIZpmkqoKIo84xWGoawviiKOY9k+juOiKHoV3bc4pdSeJfaqHhtDAAIQ2IcAUrsPPfaFAFJ7cm1geXZW8tp4sRhGIc9z3/fjOE7TNMsysdU8z5VSpuCawaMoCoIgTdM8z8MwjOPY/LZ9eUBxSql9SmyvD99CAAIQsEsAqbXLk2inRgCpPbWMqyQITKn1p9PiSkMHgEiSZL1emzuGYbjZbJqkNssy3/d172x+9TJ3b1/uW5xSas8S2+vDtxCAAATsEkBq7fIk2qkRQGpPLeOqyHN/OtVeu5rPLSIIgiDLMi216dVL+m6VUkmSrFYrcU3prN2z6PbirqPEPSvM7hCAAARaCCC1LXD4CgI7CSC1OxEd4Qbmgxjy56Ng9z/O1dVL4vi+v1wuwzAMgsD3/SRJlFLx1SsIgvDq5XmerB9W9M7irJc4rJ7sBQEIQKAjAaS2Iyg2g0AtAaS2FsuRr9Rze9nqpi2KIooi6YUVdnEcS5etUipNU8/z8jyXQbfV9X1xdyxOpNb3/f1L7FtDtocABCAwgABSOwAau0BAE0BqNYrTWpAHMWRXHah7HnmapnK7WEscGWsbx3H06Uc8NN1P1hKqe3EitfuX2FIZvoIABCBgkQBSaxEmoU6QAFJ7gknfHrIVo02SxOwKldBpmsrtYhquzHiQZVnw6QlxZb3ebOdCr+Jk8O6eJe6sEhtAAAK2CDRNBajPZT05oJTYNMff4Po0BWyqmPUZA5HawbljRwgopZBamsFwAjLHlp6bVgeSOQfEa/WoA/k2CAKZnrYoivV6bU6GoHdvWhhQnFJqnxKbasJ6CEDgOgg0/elms9nIn4NMqW2Z429Y3VoCNlXM+oyBSO2w3LEXBIQAUktLGE4gjmPjAQvbRZl6drPZBEEgq6Io0qNaZTisrNdzF3SswYDi5OELupelb4kdK8ZmEICAFQJN7pimaVEUaZqaUtsyx9+wyrQErK3YdcwYiNQOyx17QUAIILW0BAhAAAIQcIKAuGNpKkBds5LU6vV6wfq/WnXA2opZn6NQKYXU6myyAIEBBJDaAdDYBQIQgAAE7BOonQpQF9MuteYcf3qXfRbMgLUVsztHoVQVqd0nZewLAaT2tNrA7QdPX/QeV99//tdP9gERJ//37QdPu79/9JNPHqW7z+vf/P3mq998u+N7+ejZPmWxLwQgcBgCtVMB6qKbpLY6x5/eZdhCNWBtxWzNUTiskuwFAQhUCSC1VSbHvOb2g6d3Hj6rvm8/eLrPYX/1m29XYzatuXVv/eiNdJ/ilFIv3X3SFL+6/qW7eyn7nlVldwhAYBgB/dht2b1WarvM8der9C4BLc5R2KtubAwBCLQTQGrb+Rzbt0jtsWWU44HAsRBomgpQH19Vamvn+NPbD1ioDdhUsf3nKBxQQ3aBAARaCCC1LXCO8Cuk9giTyiFB4CgItEwFKMdXktqmOf4Gw2gK2FIxZgwcTJsdIXAdBJDa66Dqbkyk1t3cUDMInDyBpqkAt/MFGv9L07Rljr9hIFsCNlVsnzkKh1WSvSAAgRYCSG0LnCP8Cqk9wqRySBCAAAQgAAEI8ESxU2sDxyG1v/7ot3/0l/9w695a3xM2v//mrXtreVdX/tFf/sOpJZrjhQAEIAABCJwaAXpqTyvjxyG171x++MLLr77w8qvaXz/7pe/ImqaVp5VmjhYCYyBw/9G73d//+tv/R8cp/L76zbeZxW8M+aeOELBPAKm1z9TliMcqtS96jz/7pe/IW5uuXvkHf/G3LieFukHgNAncf/Ru9wP/7/+nH+hTe+fCnrP4/enXXqtO5t205g+/sup+FNUtv/zKmy/dfdLxveeE4tXSWQOBIyOA1B5ZQncczrFKbftFbs8r3A6mfA0BCAwi4KzUHnIa7Kbf5NrfNH7KBjU0djohAkjtCSVbKdX0Azquhy/87Be/+S8X/9uL3uPa3/3qSq4Ep9XKOdqREEBqW36Tq79jdx4+46dsJE2bat4YAaT2xtDfSMHHIbV9nygmw21vBDiFQgACTQSQWqS2qW2wHgLDCCC1w7iNdS+kdqyZo94QODoCSC1Se3SNmgO6YQJI7Q0n4MDFH4fUVqf0qv1TnV5JT+2BmxnFQaALAaQWqe3STtgGAt0JILXdWR3DlschtdUpvbS/1i78wV/87Rf++nvHkD+OAQJHRACpRWqPqDlzKE4QQGqdSMPBKnGaUsvdFQdrYBQEge4EkFqktntrYUsIdCGA1HahdDzbILXHk0uOBAIjJ4DUIrUjb8JU3zkCSK1
[figure: U-Net architecture diagram (base64-embedded image)]

Defining the double convolution block:
import torch.nn as nn


def conv_block(ni, nf):
    # Two unpadded ("valid") 3x3 convolutions, each followed by ReLU,
    # as in the original paper; each conv trims 1 px from every border.
    return nn.Sequential(
        nn.Conv2d(ni, nf, kernel_size=3, stride=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(nf, nf, kernel_size=3, stride=1),
        nn.ReLU(inplace=True),
    )
_____no_output_____
MIT
notebooks/Original_U-Net_PyTorch.ipynb
jimmiemunyi/fastai-experiments
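Because the convolutions are unpadded, each 3x3 conv trims one pixel from every border, so the block shrinks a feature map by 4 pixels in each spatial dimension. A quick shape check (a hypothetical snippet, not a cell from the original notebook) makes this concrete:

```python
import torch

block = conv_block(1, 64)
x = torch.randn(1, 1, 572, 572)  # the input size used in the U-Net paper
print(block(x).shape)  # torch.Size([1, 64, 568, 568]): each valid conv trims 2 px
```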
Implementing the original architecture:
import torch
import torchvision.transforms.functional as TF


class UNET(nn.Module):
    def __init__(self, in_channels=1, out_channels=1, features=[64, 128, 256, 512]):
        super(UNET, self).__init__()
        self.encoder = nn.ModuleList()
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)

        # create the contracting path (encoder + bottleneck)
        for feature in features:
            self.encoder.append(conv_block(in_channels, feature))
            in_channels = feature
        self.bottleneck = conv_block(features[-1], features[-1] * 2)

        # create the expansive path, reversed because we build it
        # from the deepest feature size back up to the first
        self.decoder = nn.ModuleList()
        for feature in reversed(features):
            self.decoder.append(
                nn.Sequential(
                    nn.ConvTranspose2d(feature * 2, feature, kernel_size=2, stride=2),
                    conv_block(feature * 2, feature),
                )
            )

        self.final_conv = nn.Conv2d(features[0], out_channels, kernel_size=1)

    def forward(self, x):
        activations = []

        # forward pass on the encoder, storing each block's output
        # for the skip connections
        for module in self.encoder:
            x = module(x)
            activations.append(x)
            x = self.pool(x)

        x = self.bottleneck(x)

        # reverse the order of activations so index 0 is the deepest skip
        activations.reverse()

        # forward pass on the decoder
        for idx in range(len(self.decoder)):
            # upscale first
            x = self.decoder[idx][0](x)
            # resize the skip activation to match (the paper center-crops
            # instead; resizing is a common simplification to the same size)
            activation = TF.resize(activations[idx], size=x.shape[2:])
            # concatenate along the channel dimension
            x = torch.cat([activation, x], dim=1)
            # double conv
            x = self.decoder[idx][1](x)

        return self.final_conv(x)
_____no_output_____
MIT
notebooks/Original_U-Net_PyTorch.ipynb
jimmiemunyi/fastai-experiments
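As a sanity check (hypothetical usage, not a cell from the original notebook), feeding the paper's 572x572 input through the model should yield the paper's 388x388 output, since the valid convolutions shrink the map at every level:

```python
model = UNET(in_channels=1, out_channels=1)
x = torch.randn(1, 1, 572, 572)
with torch.no_grad():
    out = model(x)
print(out.shape)  # torch.Size([1, 1, 388, 388]), matching the original paper
```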