Component Makeup

We can now examine the makeup of each PCA component based on **the weightings of the original features that are included in the component**. The following code shows the feature-level makeup of the first component. Note that the components are again ordered from smallest to largest, so to get the top component (component 1) we index the column at `N_COMPONENTS - 1`.
import seaborn as sns

def display_component(v, features_list, component_num, n_weights=10):
    # get index of component (last column - component_num)
    row_idx = N_COMPONENTS - component_num

    # get the list of weights from a column in v (one component per column)
    v_1_row = v.iloc[:, row_idx]
    v_1 = np.squeeze(v_1_row.values)

    # match weights to features in counties_scaled dataframe, using list comprehension
    comps = pd.DataFrame(list(zip(v_1, features_list)), columns=['weights', 'features'])

    # we'll want to sort by the largest n_weights
    # weights can be neg/pos and we'll sort by magnitude
    comps['abs_weights'] = comps['weights'].apply(lambda x: np.abs(x))
    sorted_weight_data = comps.sort_values('abs_weights', ascending=False).head(n_weights)

    # display using seaborn
    fig, ax = plt.subplots(figsize=(10, 6))
    ax = sns.barplot(data=sorted_weight_data, x="weights", y="features", palette="Blues_d", ax=ax)
    ax.set_title("PCA Component Makeup, Component #" + str(component_num))
    plt.show()

# display makeup of first component
num = 1
display_component(v, counties_scaled.columns.values, component_num=num, n_weights=10)
Deploying the PCA Model

We can now deploy this model and use it to make "predictions". Instead of seeing what happens with some test data, we'll actually want to pass our training data into the deployed endpoint to create principal components for each data point. Run the cell below to deploy/host this model on the instance type that we specify.
%%time
# this takes a little while, around 7 mins
pca_predictor = pca_SM.deploy(initial_instance_count=1,
                              instance_type='ml.t2.medium')
-----------------!CPU times: user 319 ms, sys: 14 ms, total: 333 ms Wall time: 8min 32s
We can pass the original, numpy dataset to the model and transform the data using the model we created. Then we can take the largest n components to reduce the dimensionality of our data.
# pass np train data to the PCA model
train_pca = pca_predictor.predict(train_data_np)

# check out the first item in the produced training features
data_idx = 0
print(train_pca[data_idx])
label { key: "projection" value { float32_tensor { values: 0.0002009272575378418 values: 0.0002455431967973709 values: -0.0005782842636108398 values: -0.0007815659046173096 values: -0.00041911262087523937 values: -0.0005133943632245064 values: -0.0011316537857055664 values: 0.0017268601804971695 values: -0.005361668765544891 values: -0.009066537022590637 values: -0.008141040802001953 values: -0.004735097289085388 values: -0.00716288760304451 values: 0.0003725700080394745 values: -0.01208949089050293 values: 0.02134685218334198 values: 0.0009293854236602783 values: 0.002417147159576416 values: -0.0034637749195098877 values: 0.01794189214706421 values: -0.01639425754547119 values: 0.06260128319263458 values: 0.06637358665466309 values: 0.002479255199432373 values: 0.10011336207389832 values: -0.1136140376329422 values: 0.02589476853609085 values: 0.04045158624649048 values: -0.01082391943782568 values: 0.1204797774553299 values: -0.0883558839559555 values: 0.16052711009979248 values: -0.06027412414550781 } } }
EXERCISE: Create a transformed DataFrame

For each of our data points, get the top n component values from the list of component data points returned by our predictor above, and put those into a new DataFrame. You should end up with a DataFrame that looks something like the following:

```
                      c_1       c_2       c_3       c_4       c_5   ...
Alabama-Autauga -0.060274  0.160527 -0.088356  0.120480 -0.010824   ...
Alabama-Baldwin -0.149684  0.185969 -0.145743 -0.023092 -0.068677   ...
Alabama-Barbour  0.506202  0.296662  0.146258  0.297829  0.093111   ...
...
```
# create dimensionality-reduced data
def create_transformed_df(train_pca, counties_scaled, n_top_components):
    '''Return a dataframe of data points with component features.
       The dataframe should be indexed by State-County and contain component values.
       :param train_pca: A list of pca training data, returned by a PCA model.
       :param counties_scaled: A dataframe of normalized, original features.
       :param n_top_components: An integer, the number of top components to use.
       :return: A dataframe, indexed by State-County, with n_top_component values as columns.
    '''
    # create new dataframe to add data to
    counties_transformed = pd.DataFrame()

    # for each of our new, transformed data points
    # append the component values to the dataframe
    for data in train_pca:
        # get component values for each data point
        components = data.label['projection'].float32_tensor.values
        counties_transformed = counties_transformed.append([list(components)])

    # index by county, just like counties_scaled
    counties_transformed.index = counties_scaled.index

    # keep only the top n components
    start_idx = N_COMPONENTS - n_top_components
    counties_transformed = counties_transformed.iloc[:, start_idx:]

    # reverse the column order so the top component comes first
    return counties_transformed.iloc[:, ::-1]
Now we can create a dataset where each county is described by the top n principal components that we analyzed earlier. Each of these components is a linear combination of the original feature space. We can interpret each of these components by analyzing the makeup of the component, shown previously.

Define the `top_n` components to use in this transformed data

Your code should return data, indexed by 'State-County', with as many columns as `top_n` components. You can also choose to add descriptive column names for this data; names that correspond to the component number or feature-level makeup.
## Specify top n
top_n = 7

# call your function and create a new dataframe
counties_transformed = create_transformed_df(train_pca, counties_scaled, n_top_components=top_n)

## add descriptive column names
PCA_list = ['c_1', 'c_2', 'c_3', 'c_4', 'c_5', 'c_6', 'c_7']
counties_transformed.columns = PCA_list

# print result
counties_transformed.head()
Delete the Endpoint!

Now that we've deployed the model and created our new, transformed training data, we no longer need the PCA endpoint. As a clean-up step, you should always delete your endpoints after you are done using them (and if you do not plan to deploy them to a website, for example).
# delete predictor endpoint
session.delete_endpoint(pca_predictor.endpoint)
---

Population Segmentation

Now, you'll use the unsupervised clustering algorithm k-means to segment counties using their PCA attributes, which are in the transformed DataFrame we just created. K-means is a clustering algorithm that identifies clusters of similar data points based on their component makeup. Since we have ~3000 counties and 34 attributes in the original dataset, the large feature space may have made it difficult to cluster the counties effectively. Instead, we have reduced the feature space to 7 PCA components, and we'll cluster on this transformed dataset.

EXERCISE: Define a k-means model

Your task will be to instantiate a k-means model. A `KMeans` estimator requires a number of parameters to be instantiated, which allow us to specify the type of training instance to use and the model hyperparameters. You can read about the required parameters in the [`KMeans` documentation](https://sagemaker.readthedocs.io/en/stable/kmeans.html); note that not all of the possible parameters are required.

Choosing a "Good" K

One method for choosing a "good" k is to choose based on empirical data. A bad k would be one so *high* that only one or two very close data points are near each center, and another bad k would be one so *low* that data points are really far away from the centers. You want to select a k such that data points in a single cluster are close together, but there are enough clusters to effectively separate the data. You can approximate this separation by measuring how close your data points are to each cluster center, i.e., the average centroid distance between cluster points and a centroid. After trying several values for k, the centroid distance typically reaches some "elbow": it stops decreasing at a sharp rate, and this indicates a good value of k. A plot of the average centroid distance for values of k between 5 and 12 (not reproduced here) shows an elbow around k = 8, where the distance stops decreasing sharply and levels off. This indicates that there is enough separation to distinguish the data points in each cluster, but also that you included enough clusters so that the data points aren't *extremely* far away from each cluster.
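Since the elbow plot itself is not reproduced here, the sketch below shows one way it could be regenerated locally. This is an assumption-laden substitute, not the original notebook's code: it uses scikit-learn's `KMeans` (rather than the SageMaker estimator trained next) on the same 7-component `counties_transformed` data, and uses per-point inertia as the average-distance proxy.

```python
# A minimal local sketch of the elbow check described above (assumes
# scikit-learn is available and counties_transformed holds the 7 components).
from sklearn.cluster import KMeans as SKKMeans

ks = range(5, 13)
avg_sq_dist = []
for k in ks:
    km = SKKMeans(n_clusters=k, random_state=0).fit(counties_transformed.values)
    # inertia_ is the total squared distance of points to their closest
    # centroid; dividing by the number of points gives an average
    avg_sq_dist.append(km.inertia_ / len(counties_transformed))

plt.plot(list(ks), avg_sq_dist, marker='o')
plt.xlabel('k')
plt.ylabel('average squared centroid distance')
plt.show()
```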
# define a KMeans estimator
kmeans = sagemaker.KMeans(role=role,
                          train_instance_count=1,
                          train_instance_type='ml.c4.xlarge',
                          output_path=output_path,
                          k=8)

print('Training artifacts will be uploaded to: {}'.format(output_path))
Training artifacts will be uploaded to: s3://sagemaker-eu-central-1-730357687813/counties/
EXERCISE: Create formatted k-means training data

Just as before, you should convert the `counties_transformed` df into a numpy array and then into a RecordSet. This is the required format for passing training data into a `KMeans` model.
# convert the transformed dataframe into record_set data
kmeans_train_data_np = counties_transformed.values.astype('float32')
kmeans_formatted_train_data = kmeans.record_set(kmeans_train_data_np)
EXERCISE: Train the k-means model

Pass in the formatted training data and train the k-means model.
%%time
kmeans.fit(kmeans_formatted_train_data)
2020-05-23 06:55:58 Starting - Starting the training job... 2020-05-23 06:56:00 Starting - Launching requested ML instances...... 2020-05-23 06:57:03 Starting - Preparing the instances for training...... 2020-05-23 06:58:26 Downloading - Downloading input data 2020-05-23 06:58:26 Training - Downloading the training image... 2020-05-23 06:58:58 Uploading - Uploading generated training model 2020-05-23 06:58:58 Completed - Training job completed Docker entrypoint called with argument(s): train Running default environment configuration script [05/23/2020 06:58:48 INFO 140047905527616] Reading default configuration from /opt/amazon/lib/python2.7/site-packages/algorithm/resources/default-input.json: {u'_enable_profiler': u'false', u'_tuning_objective_metric': u'', u'_num_gpus': u'auto', u'local_lloyd_num_trials': u'auto', u'_log_level': u'info', u'_kvstore': u'auto', u'local_lloyd_init_method': u'kmeans++', u'force_dense': u'true', u'epochs': u'1', u'init_method': u'random', u'local_lloyd_tol': u'0.0001', u'local_lloyd_max_iter': u'300', u'_disable_wait_to_read': u'false', u'extra_center_factor': u'auto', u'eval_metrics': u'["msd"]', u'_num_kv_servers': u'1', u'mini_batch_size': u'5000', u'half_life_time_size': u'0', u'_num_slices': u'1'} [05/23/2020 06:58:48 INFO 140047905527616] Reading provided configuration from /opt/ml/input/config/hyperparameters.json: {u'feature_dim': u'7', u'k': u'8', u'force_dense': u'True'} [05/23/2020 06:58:48 INFO 140047905527616] Final configuration: {u'_tuning_objective_metric': u'', u'extra_center_factor': u'auto', u'local_lloyd_init_method': u'kmeans++', u'force_dense': u'True', u'epochs': u'1', u'feature_dim': u'7', u'local_lloyd_tol': u'0.0001', u'_disable_wait_to_read': u'false', u'eval_metrics': u'["msd"]', u'_num_kv_servers': u'1', u'mini_batch_size': u'5000', u'_enable_profiler': u'false', u'_num_gpus': u'auto', u'local_lloyd_num_trials': u'auto', u'_log_level': u'info', u'init_method': u'random', u'half_life_time_size': u'0', u'local_lloyd_max_iter': u'300', u'_kvstore': u'auto', u'k': u'8', u'_num_slices': u'1'} [05/23/2020 06:58:48 WARNING 140047905527616] Loggers have already been setup. Process 1 is a worker. [05/23/2020 06:58:48 INFO 140047905527616] Using default worker. [05/23/2020 06:58:48 INFO 140047905527616] Loaded iterator creator application/x-recordio-protobuf for content type ('application/x-recordio-protobuf', '1.0') [05/23/2020 06:58:48 INFO 140047905527616] Create Store: local [05/23/2020 06:58:48 INFO 140047905527616] nvidia-smi took: 0.0252118110657 secs to identify 0 gpus [05/23/2020 06:58:48 INFO 140047905527616] Number of GPUs being used: 0 [05/23/2020 06:58:48 INFO 140047905527616] Setting up with params: {u'_tuning_objective_metric': u'', u'extra_center_factor': u'auto', u'local_lloyd_init_method': u'kmeans++', u'force_dense': u'True', u'epochs': u'1', u'feature_dim': u'7', u'local_lloyd_tol': u'0.0001', u'_disable_wait_to_read': u'false', u'eval_metrics': u'["msd"]', u'_num_kv_servers': u'1', u'mini_batch_size': u'5000', u'_enable_profiler': u'false', u'_num_gpus': u'auto', u'local_lloyd_num_trials': u'auto', u'_log_level': u'info', u'init_method': u'random', u'half_life_time_size': u'0', u'local_lloyd_max_iter': u'300', u'_kvstore': u'auto', u'k': u'8', u'_num_slices': u'1'} [05/23/2020 06:58:48 INFO 140047905527616] 'extra_center_factor' was set to 'auto', evaluated to 10. 
[05/23/2020 06:58:48 INFO 140047905527616] Number of GPUs being used: 0 [05/23/2020 06:58:48 INFO 140047905527616] number of center slices 1 [05/23/2020 06:58:48 WARNING 140047905527616] Batch size 5000 is bigger than the first batch data. Effective batch size used to initialize is 3218 #metrics {"Metrics": {"Max Batches Seen Between Resets": {"count": 1, "max": 1, "sum": 1.0, "min": 1}, "Number of Batches Since Last Reset": {"count": 1, "max": 1, "sum": 1.0, "min": 1}, "Number of Records Since Last Reset": {"count": 1, "max": 3218, "sum": 3218.0, "min": 3218}, "Total Batches Seen": {"count": 1, "max": 1, "sum": 1.0, "min": 1}, "Total Records Seen": {"count": 1, "max": 3218, "sum": 3218.0, "min": 3218}, "Max Records Seen Between Resets": {"count": 1, "max": 3218, "sum": 3218.0, "min": 3218}, "Reset Count": {"count": 1, "max": 0, "sum": 0.0, "min": 0}}, "EndTime": 1590217128.442506, "Dimensions": {"Host": "algo-1", "Meta": "init_train_data_iter", "Operation": "training", "Algorithm": "AWS/KMeansWebscale"}, "StartTime": 1590217128.442472}  [2020-05-23 06:58:48.442] [tensorio] [info] epoch_stats={"data_pipeline": "/opt/ml/input/data/train", "epoch": 0, "duration": 33, "num_examples": 1, "num_bytes": 167336} [2020-05-23 06:58:48.489] [tensorio] [info] epoch_stats={"data_pipeline": "/opt/ml/input/data/train", "epoch": 1, "duration": 46, "num_examples": 1, "num_bytes": 167336} [05/23/2020 06:58:48 INFO 140047905527616] processed a total of 3218 examples [05/23/2020 06:58:48 INFO 140047905527616] #progress_metric: host=algo-1, completed 100 % of epochs #metrics {"Metrics": {"Max Batches Seen Between Resets": {"count": 1, "max": 1, "sum": 1.0, "min": 1}, "Number of Batches Since Last Reset": {"count": 1, "max": 1, "sum": 1.0, "min": 1}, "Number of Records Since Last Reset": {"count": 1, "max": 3218, "sum": 3218.0, "min": 3218}, "Total Batches Seen": {"count": 1, "max": 2, "sum": 2.0, "min": 2}, "Total Records Seen": {"count": 1, "max": 6436, "sum": 6436.0, "min": 6436}, "Max Records Seen Between Resets": {"count": 1, "max": 3218, "sum": 3218.0, "min": 3218}, "Reset Count": {"count": 1, "max": 1, "sum": 1.0, "min": 1}}, "EndTime": 1590217128.490535, "Dimensions": {"Host": "algo-1", "Meta": "training_data_iter", "Operation": "training", "Algorithm": "AWS/KMeansWebscale", "epoch": 0}, "StartTime": 1590217128.442763}  [05/23/2020 06:58:48 INFO 140047905527616] #throughput_metric: host=algo-1, train throughput=67151.9347251 records/second [05/23/2020 06:58:48 WARNING 140047905527616] wait_for_all_workers will not sync workers since the kv store is not running distributed [05/23/2020 06:58:48 INFO 140047905527616] shrinking 80 centers into 8 [05/23/2020 06:58:48 INFO 140047905527616] local kmeans attempt #0. Current mean square distance 0.062246 [05/23/2020 06:58:48 INFO 140047905527616] local kmeans attempt #1. Current mean square distance 0.063014 [05/23/2020 06:58:48 INFO 140047905527616] local kmeans attempt #2. Current mean square distance 0.059803 [05/23/2020 06:58:48 INFO 140047905527616] local kmeans attempt #3. Current mean square distance 0.063063 [05/23/2020 06:58:48 INFO 140047905527616] local kmeans attempt #4. Current mean square distance 0.064876 [05/23/2020 06:58:48 INFO 140047905527616] local kmeans attempt #5. Current mean square distance 0.063535 [05/23/2020 06:58:48 INFO 140047905527616] local kmeans attempt #6. Current mean square distance 0.063639 [05/23/2020 06:58:48 INFO 140047905527616] local kmeans attempt #7. 
Current mean square distance 0.064357 [05/23/2020 06:58:48 INFO 140047905527616] local kmeans attempt #8. Current mean square distance 0.061033 [05/23/2020 06:58:48 INFO 140047905527616] local kmeans attempt #9. Current mean square distance 0.060658 [05/23/2020 06:58:48 INFO 140047905527616] finished shrinking process. Mean Square Distance = 0 [05/23/2020 06:58:48 INFO 140047905527616] #quality_metric: host=algo-1, train msd <loss>=0.0598029382527 [05/23/2020 06:58:48 INFO 140047905527616] compute all data-center distances: inner product took: 30.7809%, (0.017753 secs) [05/23/2020 06:58:48 INFO 140047905527616] collect from kv store took: 18.8244%, (0.010857 secs) [05/23/2020 06:58:48 INFO 140047905527616] splitting centers key-value pair took: 18.6784%, (0.010773 secs) [05/23/2020 06:58:48 INFO 140047905527616] batch data loading with context took: 7.2903%, (0.004205 secs) [05/23/2020 06:58:48 INFO 140047905527616] compute all data-center distances: point norm took: 7.0377%, (0.004059 secs) [05/23/2020 06:58:48 INFO 140047905527616] predict compute msd took: 6.0613%, (0.003496 secs) [05/23/2020 06:58:48 INFO 140047905527616] gradient: one_hot took: 5.6905%, (0.003282 secs) [05/23/2020 06:58:48 INFO 140047905527616] gradient: cluster size took: 2.3579%, (0.001360 secs) [05/23/2020 06:58:48 INFO 140047905527616] gradient: cluster center took: 1.6853%, (0.000972 secs) [05/23/2020 06:58:48 INFO 140047905527616] update state and report convergance took: 0.8408%, (0.000485 secs) [05/23/2020 06:58:48 INFO 140047905527616] update set-up time took: 0.3795%, (0.000219 secs) [05/23/2020 06:58:48 INFO 140047905527616] compute all data-center distances: center norm took: 0.3278%, (0.000189 secs) [05/23/2020 06:58:48 INFO 140047905527616] predict minus dist took: 0.0451%, (0.000026 secs) [05/23/2020 06:58:48 INFO 140047905527616] TOTAL took: 0.0576758384705 [05/23/2020 06:58:48 INFO 140047905527616] Number of GPUs being used: 0 #metrics {"Metrics": {"finalize.time": {"count": 1, "max": 331.773042678833, "sum": 331.773042678833, "min": 331.773042678833}, "initialize.time": {"count": 1, "max": 28.280019760131836, "sum": 28.280019760131836, "min": 28.280019760131836}, "model.serialize.time": {"count": 1, "max": 0.14591217041015625, "sum": 0.14591217041015625, "min": 0.14591217041015625}, "update.time": {"count": 1, "max": 47.55997657775879, "sum": 47.55997657775879, "min": 47.55997657775879}, "epochs": {"count": 1, "max": 1, "sum": 1.0, "min": 1}, "state.serialize.time": {"count": 1, "max": 1.5878677368164062, "sum": 1.5878677368164062, "min": 1.5878677368164062}, "_shrink.time": {"count": 1, "max": 329.76484298706055, "sum": 329.76484298706055, "min": 329.76484298706055}}, "EndTime": 1590217128.824555, "Dimensions": {"Host": "algo-1", "Operation": "training", "Algorithm": "AWS/KMeansWebscale"}, "StartTime": 1590217128.408428}  [05/23/2020 06:58:48 INFO 140047905527616] Test data is not provided. #metrics {"Metrics": {"totaltime": {"count": 1, "max": 479.54416275024414, "sum": 479.54416275024414, "min": 479.54416275024414}, "setuptime": {"count": 1, "max": 13.439178466796875, "sum": 13.439178466796875, "min": 13.439178466796875}}, "EndTime": 1590217128.824923, "Dimensions": {"Host": "algo-1", "Operation": "training", "Algorithm": "AWS/KMeansWebscale"}, "StartTime": 1590217128.824649} 
EXERCISE: Deploy the k-means model

Deploy the trained model to create a `kmeans_predictor`.
%%time
# deploy the model to create a predictor
kmeans_predictor = kmeans.deploy(initial_instance_count=1,
                                 instance_type='ml.t2.medium')
-----------------!CPU times: user 316 ms, sys: 14 ms, total: 330 ms Wall time: 8min 32s
EXERCISE: Pass in the training data and assign predicted cluster labels

After deploying the model, you can pass in the k-means training data, as a numpy array, and get resultant, predicted cluster labels for each data point.
# get the predicted clusters for all the kmeans training data
cluster_info = kmeans_predictor.predict(kmeans_train_data_np)
Exploring the resultant clusters

The resulting predictions should give you information about the cluster that each data point belongs to. You should be able to answer the **question**: which cluster does a given data point belong to?
# print cluster info for one data point
data_idx = 3

print('County is: ', counties_transformed.index[data_idx])
print()
print(cluster_info[data_idx])
County is: Alabama-Bibb label { key: "closest_cluster" value { float32_tensor { values: 3.0 } } } label { key: "distance_to_cluster" value { float32_tensor { values: 0.3843974173069 } } }
Visualize the distribution of data over clusters

Get the cluster labels for each of our data points (counties) and visualize the distribution of points over each cluster.
# get all cluster labels
cluster_labels = [c.label['closest_cluster'].float32_tensor.values[0] for c in cluster_info]

# count up the points in each cluster
cluster_df = pd.DataFrame(cluster_labels)[0].value_counts()
print(cluster_df)
3.0    907
6.0    842
0.0    386
7.0    375
1.0    368
5.0    167
2.0     87
4.0     86
Name: 0, dtype: int64
Now, you may be wondering: what does each of these clusters tell us about these data points? To improve explainability, we need to access the underlying model to get the cluster centers. These centers will help describe which features characterize each cluster.

Delete the Endpoint!

Now that you've deployed the k-means model and extracted the cluster labels for each data point, you no longer need the k-means endpoint.
# delete kmeans endpoint
session.delete_endpoint(kmeans_predictor.endpoint)
---

Model Attributes & Explainability

Explaining the result of the modeling is an important step in making use of our analysis. By combining PCA and k-means, and the information contained in the model attributes within a SageMaker trained model, you can learn about a population and remark on some patterns you've found, based on the data.

EXERCISE: Access the k-means model attributes

Extract the k-means model attributes from where they are saved as a TAR file in an S3 bucket. You'll need to access the model by the k-means training job name, and then unzip the file into `model_algo-1`. Then you can load that file using MXNet, as before.
# download and unzip the kmeans model file, saved as model_algo-1
kmeans_job_name = 'kmeans-2020-05-23-06-55-58-261'

model_key = os.path.join(prefix, kmeans_job_name, 'output/model.tar.gz')

# download the model file
boto3.resource('s3').Bucket(bucket_name).download_file(model_key, 'model.tar.gz')
os.system('tar -zxvf model.tar.gz')
os.system('unzip model_algo-1')

# get the trained kmeans params using mxnet
kmeans_model_params = mx.ndarray.load('model_algo-1')
print(kmeans_model_params)
[ [[ 0.35492653 0.23771921 0.07889839 0.2500726 0.09919675 -0.05618306 0.04399072] [-0.23379213 -0.3808242 0.07702101 0.08526881 0.0603863 -0.00519104 0.0597847 ] [ 1.3077838 -0.2294502 -0.17610097 -0.42974427 -0.11858643 0.11248738 0.15853602] [-0.02278126 0.07436099 0.12951738 -0.05602401 -0.04330579 0.05682565 -0.03048567] [ 0.5819005 -0.45450625 -0.03150757 0.04155013 -0.09733208 -0.02300905 -0.13401571] [ 0.25074974 -0.1768499 -0.10482205 -0.22392033 0.23187745 -0.19118813 -0.10258509] [-0.24812227 0.04720467 -0.02500745 -0.06317183 -0.03199761 -0.04560736 0.00395537] [-0.04086831 0.03606306 -0.3563783 0.10303619 -0.01080673 0.07729725 -0.01095549]] <NDArray 8x7 @cpu(0)>]
There is only one set of model parameters contained within the k-means model: the cluster centroid locations in PCA-transformed, component space.

* **centroids**: The location of the centers of each cluster in component space, identified by the k-means algorithm.
# get all the centroids
cluster_centroids = pd.DataFrame(kmeans_model_params[0].asnumpy())
cluster_centroids.columns = counties_transformed.columns

display(cluster_centroids)
Visualizing Centroids in Component Space

You can't visualize 7-dimensional centroids in space, but you can plot a heatmap of the centroids and their location in the transformed feature space. This gives you insight into what characteristics define each cluster. Often with unsupervised learning, results are hard to interpret. This is one way to make use of the results of PCA and clustering techniques together: since you were able to examine the makeup of each PCA component, you can understand what each centroid represents in terms of the PCA components.
# generate a heatmap in component space, using the seaborn library
plt.figure(figsize=(12, 9))
ax = sns.heatmap(cluster_centroids.T, cmap='YlGnBu')
ax.set_xlabel("Cluster")
plt.yticks(fontsize=16)
plt.xticks(fontsize=16)
ax.set_title("Attribute Value by Centroid")
plt.show()
If you've forgotten what each component corresponds to at an original-feature-level, that's okay! You can use the previously defined `display_component` function to see the feature-level makeup.
# what does each of these components mean again?
# let's use the display function from above
component_num = 5
display_component(v, counties_scaled.columns.values, component_num=component_num)
Natural Groupings

You can also map the cluster labels back to each individual county and examine which counties are naturally grouped together.
# add a 'labels' column to the dataframe
counties_transformed['labels'] = list(map(int, cluster_labels))

# sort by cluster label 0-7
sorted_counties = counties_transformed.sort_values('labels', ascending=True)

# view some pts in cluster 0
sorted_counties.head(20)
You can also examine one of the clusters in more detail, like cluster 1, for example. A quick glance at the location of the centroid in component space (the heatmap) tells us which component it scores highest on, such as the `c_6` attribute. You can now see which counties fit that description.
# get all counties with label == 1
cluster = counties_transformed[counties_transformed['labels'] == 1]
cluster.head()
Data Characteristics:

The actual concrete compressive strength (MPa) for a given mixture at a specific age (days) was determined in the laboratory. Data is in raw form (not scaled).

Summary Statistics:

Number of instances (observations): 1030
Number of attributes: 9
Attribute breakdown: 8 quantitative input variables, and 1 quantitative output variable
Missing attribute values: None

Variable Information:

Given are the variable name, variable type, the measurement unit, and a brief description. The concrete compressive strength is the regression target. The order of this listing corresponds to the order of numerals along the rows of the database.

Name -- Data Type -- Measurement -- Description
Cement (component 1) -- quantitative -- kg in a m3 mixture -- Input Variable
Blast Furnace Slag (component 2) -- quantitative -- kg in a m3 mixture -- Input Variable
Fly Ash (component 3) -- quantitative -- kg in a m3 mixture -- Input Variable
Water (component 4) -- quantitative -- kg in a m3 mixture -- Input Variable
Superplasticizer (component 5) -- quantitative -- kg in a m3 mixture -- Input Variable
Coarse Aggregate (component 6) -- quantitative -- kg in a m3 mixture -- Input Variable
Fine Aggregate (component 7) -- quantitative -- kg in a m3 mixture -- Input Variable
Age -- quantitative -- Day (1~365) -- Input Variable
Concrete compressive strength -- quantitative -- MPa -- Output Variable
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

import warnings
warnings.filterwarnings('ignore')

from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDRegressor, GammaRegressor, Lasso, ElasticNet, Ridge
from sklearn.linear_model import RANSACRegressor, HuberRegressor, BayesianRidge, LinearRegression
from sklearn.ensemble import RandomForestRegressor, BaggingRegressor, AdaBoostRegressor, GradientBoostingRegressor, ExtraTreesRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor                                       # decision tree regression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import Pipeline                                                # streaming pipelines
from sklearn.model_selection import learning_curve, validation_curve, GridSearchCV   # model evaluation
from sklearn.preprocessing import StandardScaler

data = pd.read_csv('/kaggle/input/concrete-compressive-strength/Concrete Compressive Strength.csv')
data
EXPLORATORY DATA ANALYSIS
data.columns
data.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 1030 entries, 0 to 1029 Data columns (total 9 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Cement (component 1)(kg in a m^3 mixture) 1030 non-null float64 1 Blast Furnace Slag (component 2)(kg in a m^3 mixture) 1030 non-null float64 2 Fly Ash (component 3)(kg in a m^3 mixture) 1030 non-null float64 3 Water (component 4)(kg in a m^3 mixture) 1030 non-null float64 4 Superplasticizer (component 5)(kg in a m^3 mixture) 1030 non-null float64 5 Coarse Aggregate (component 6)(kg in a m^3 mixture) 1030 non-null float64 6 Fine Aggregate (component 7)(kg in a m^3 mixture) 1030 non-null float64 7 Age (day) 1030 non-null int64 8 Concrete compressive strength(MPa, megapascals) 1030 non-null float64 dtypes: float64(8), int64(1) memory usage: 72.5 KB
All the variables are numeric.
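Note also that the raw column names are long, and the target column even carries a trailing space (it is indexed later as `'Concrete compressive strength(MPa, megapascals) '`). Below is an optional tidy-up sketch, applied to a copy only, since the rest of the notebook keeps the raw names; the derived short names are illustrative.

```python
# Optional sketch: derive short column names on a copy of the data.
# The original notebook keeps the raw names (including the trailing space
# in the target column), so nothing downstream depends on this cell.
tidy = data.copy()
tidy.columns = (tidy.columns
                    .str.strip()              # drop stray leading/trailing whitespace
                    .str.split('(').str[0]    # keep the text before the first '('
                    .str.strip()
                    .str.lower()
                    .str.replace(' ', '_'))
print(tidy.columns.tolist())
```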
data.describe()
data.isnull().sum()
No missing values are present.

UNIVARIATE ANALYSIS
col = data.columns.to_list()
col

data.hist(figsize=(15, 10), color='red')
plt.show()

i = 1
plt.figure(figsize=(15, 20))
for col in data.columns:
    plt.subplot(4, 3, i)
    sns.boxplot(x=data[col], data=data)
    i += 1
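To make the "some outliers" observation in the next note concrete, here is a quick count using the standard 1.5×IQR boxplot fence. This is a sketch added for illustration; it only counts points, nothing is removed.

```python
# Count outliers per column with the 1.5*IQR boxplot rule (sketch only;
# no rows are dropped).
q1 = data.quantile(0.25)
q3 = data.quantile(0.75)
iqr = q3 - q1
outliers = ((data < q1 - 1.5 * iqr) | (data > q3 + 1.5 * iqr)).sum()
print(outliers.sort_values(ascending=False))
```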
We found some outliers, but we did not remove them, to avoid losing data.

BIVARIATE ANALYSIS
i = 1
plt.figure(figsize=(18, 18))
for col in data.columns:
    plt.subplot(4, 3, i)
    sns.scatterplot(data=data, x='Concrete compressive strength(MPa, megapascals) ', y=col)
    i += 1
We can see that compressive strength is highly correlated with cement content.
plt.figure(figsize=(10, 10))
sns.heatmap(data.corr(), linewidths=1, cmap='PuBuGn_r', annot=True)

correlation = data.corr()['Concrete compressive strength(MPa, megapascals) '].sort_values()
correlation.plot(kind='barh', color='green')
We can see that cement, superplasticizer, and age are positively correlated with compressive strength, while water and fine aggregate are negatively correlated with it.

MODEL SELECTION
X = data.drop(columns='Concrete compressive strength(MPa, megapascals) ')
Y = data[['Concrete compressive strength(MPa, megapascals) ']]

sc = StandardScaler()
X_scaled = sc.fit_transform(X)
X_scaled = pd.DataFrame(X_scaled, columns=X.columns)

x_train, x_test, y_train, y_test = train_test_split(X_scaled, Y, test_size=.30, random_state=0)

lr = LinearRegression()
sgd = SGDRegressor()
lasso = Lasso()
ridge = Ridge()
rf = RandomForestRegressor()
dt = DecisionTreeRegressor()
gboost = GradientBoostingRegressor()
bagging = BaggingRegressor()
adboost = AdaBoostRegressor()
knn = KNeighborsRegressor()
etr = ExtraTreesRegressor()
gamma = GammaRegressor()

algo = [lr, sgd, lasso, ridge, rf, dt, gboost, bagging, adboost, knn, etr]
model = []
accuracy_test = []
accuracy_train = []

for i in range(len(algo)):
    algo[i].fit(x_train, y_train)
    accuracy_train.append(algo[i].score(x_train, y_train))
    accuracy_test.append(algo[i].score(x_test, y_test))
    model.append(algo[i])

mod = pd.DataFrame([model, accuracy_train, accuracy_test]).T
mod.columns = ['model', 'score_train', 'score_test']
mod
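The comparison above rests on a single train/test split. As a sanity check before committing to one model, a short cross-validation sketch is shown below; it is an addition for illustration, assuming the same `X_scaled` and `Y` from above, and relying on `cross_val_score` defaulting to R² for regressors.

```python
# 5-fold cross-validation check of the single-split ranking above (sketch).
from sklearn.model_selection import cross_val_score

cv_scores = cross_val_score(ExtraTreesRegressor(random_state=0),
                            X_scaled, Y.values.ravel(), cv=5)
print('ExtraTrees 5-fold R^2: %.3f +/- %.3f' % (cv_scores.mean(), cv_scores.std()))
```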
We can see that the Extra Trees regressor has the highest test score (R² ≈ 0.907), so we choose it for our final model.

MODEL BUILDING
etr1 = ExtraTreesRegressor()
rs = []
score = []

for i in range(1, 200, 1):
    x_train, x_test, y_train, y_test = train_test_split(X_scaled, Y, test_size=.30, random_state=i)
    etr1.fit(x_train, y_train)
    score.append(etr1.score(x_test, y_test))
    rs.append(i)

plt.figure(figsize=(20, 6))
plt.plot(rs, score)

for i in range(len(score)):
    print(rs[i], score[i])
1 0.89529318226024 2 0.9277744539369183 3 0.926825810368096 4 0.929277398220312 5 0.8946985733005189 6 0.9066382335271965 7 0.9375909152276649 8 0.8798177784082443 9 0.8792678508590264 10 0.9188761161352978 11 0.9248721043508471 12 0.9016606370091849 13 0.8790450510199522 14 0.90286206857159 15 0.9361845117635051 16 0.9103918559878086 17 0.9194389042700499 18 0.9155440974047644 19 0.9149623543026111 20 0.9152627650581631 21 0.9178825939342906 22 0.933442676595351 23 0.9038669999821688 24 0.9147860597553644 25 0.8974741270279977 26 0.9103415974014989 27 0.926171116031605 28 0.8901152376661319 29 0.9072214319234586 30 0.9069034544309591 31 0.8970305284171736 32 0.9049887830584175 33 0.9292951198961779 34 0.9173185581763424 35 0.8975881402027748 36 0.9307101720411162 37 0.9062267343439251 38 0.8926768812818899 39 0.9331845652934211 40 0.8956891147838116 41 0.9175997008124308 42 0.9004182578884321 43 0.8921783511284366 44 0.890816545901059 45 0.9033256046629572 46 0.91264162638476 47 0.9102845528486323 48 0.8926070994040652 49 0.8948750730859413 50 0.9250558398241144 51 0.8977749730713258 52 0.9141359524274064 53 0.9272097292568934 54 0.8940187101262826 55 0.9053256595779804 56 0.9102632255076534 57 0.9258405592676671 58 0.9091086234290273 59 0.9107175826425848 60 0.9083015118948643 61 0.9242459381919436 62 0.9226840828504406 63 0.8793673984988264 64 0.9064094380303714 65 0.9212710874280483 66 0.9086135993540179 67 0.8920255907491763 68 0.8997516006682192 69 0.9146011134592402 70 0.9037368695524626 71 0.9099123106690848 72 0.8968849213438918 73 0.8698487713052809 74 0.9251570458392945 75 0.911139105474144 76 0.9197288937003184 77 0.9420263760065384 78 0.8901469575408667 79 0.9174065090240028 80 0.9135348717280743 81 0.9193405053109891 82 0.9176744020331675 83 0.9157099858048742 84 0.9236440049375585 85 0.9096960662685826 86 0.8958943017704084 87 0.9141373473340262 88 0.9174506061218781 89 0.9202782740840457 90 0.9164562619726861 91 0.9278867464272998 92 0.9185593281447852 93 0.9158094189320314 94 0.91697911396183 95 0.9221607535310148 96 0.912905911582812 97 0.9154524971810701 98 0.8943985987646329 99 0.9280097640316576 100 0.9104633625466904 101 0.9203871816778284 102 0.9078549698666163 103 0.8904238060377717 104 0.9290634159998891 105 0.9131575698016983 106 0.9021645427912188 107 0.9002863065659155 108 0.9114210486507061 109 0.9235117999093678 110 0.9019974737508064 111 0.9052864492715343 112 0.9079408879989107 113 0.9390434617353796 114 0.9215598383792503 115 0.9052421284637482 116 0.9285260577433873 117 0.9059866804976253 118 0.9269265454594784 119 0.9172916857437821 120 0.8830374928260559 121 0.9170774634483768 122 0.9186296228191361 123 0.9127954527824342 124 0.8853452093122024 125 0.9058835642731625 126 0.9121821726491289 127 0.890905139533444 128 0.9158423632735686 129 0.9058979507644945 130 0.9167039256365345 131 0.9207861320443467 132 0.8867697837924595 133 0.911333405919124 134 0.9184891939657748 135 0.9128065337639947 136 0.8791450923209874 137 0.9235445611790237 138 0.9205362785073326 139 0.8989360768080421 140 0.9015958556449082 141 0.9247958900966756 142 0.9347606593729455 143 0.895182396741788 144 0.9108600968904917 145 0.9297227569104195 146 0.9326809494510843 147 0.905541363064909 148 0.9258237338234881 149 0.9337694736564791 150 0.9015384307195701 151 0.907376405740946 152 0.8998352192996377 153 0.906421221173074 154 0.9339890987006378 155 0.9023764046680294 156 0.9123423766384336 157 0.9124870458797895 158 0.9157593451133572 159 0.9103751538182557 160 0.9107960625548797 161 
0.9197751762663666 162 0.9145619096371216 163 0.9203736944507968 164 0.9371642586526574 165 0.91046858685322 166 0.9250595002595737 167 0.910351726028797 168 0.9240589568889332 169 0.9252028165883652 170 0.9136243609396435 171 0.9073694274118068 172 0.9291536890562709 173 0.9207721036337553 174 0.9124238739389205 175 0.8921820304512027 176 0.9074826252809058 177 0.9014783862886651 178 0.9250600168758528 179 0.922552052206061 180 0.9349903198994561 181 0.9078509819938434 182 0.9288272802056655 183 0.9326562927853923 184 0.8887649393337306 185 0.9226222618701407 186 0.9169734617452634 187 0.9404185989813758 188 0.9219341581072451 189 0.9281335914442755 190 0.9074182739756548 191 0.8974597180226369 192 0.8938602722379191 193 0.9166000756685708 194 0.9169163105807522 195 0.9283554253936381 196 0.9101342353728978 197 0.9106007206909548 198 0.8973415852731137 199 0.9072253222734327
We can see that at random_state=77 we get the best test score in the sweep above, R² ≈ 0.942.
x_train, x_test, y_train, y_test = train_test_split(X_scaled, Y, test_size=.30, random_state=77)

etr2 = ExtraTreesRegressor()
etr2.fit(x_train, y_train)

etr2.score(x_train, y_train)
etr2.score(x_test, y_test)

y_test_pred = etr2.predict(x_test)

y_test1 = y_test.copy()
y_test1['pred'] = y_test_pred
y_test1.corr()
We can see here that the correlation between the predictions and the actual values is about 0.97. Note that this is a Pearson correlation, not an accuracy; its square roughly recovers the R² reported above.
from sklearn.metrics import mean_squared_error, r2_score

mean_squared_error(y_test1['Concrete compressive strength(MPa, megapascals) '], y_test1['pred'])

rmse = np.sqrt(mean_squared_error(y_test1['Concrete compressive strength(MPa, megapascals) '], y_test1['pred']))
rmse
We can see that the root mean square error is only about 4.15 MPa, which is small relative to the spread of compressive strengths in the data, indicating a good fit.
r2_score(y_test1['Concrete compressive strength(MPa, megapascals) '], y_test1['pred'])

plt.barh(X.columns, etr2.feature_importances_)
Load data
adni.load(show_output=False)
Display MetaData
meta_df = adni.meta_to_df()
sprint.pd_cols(meta_df)
Display ImageFiles
files_df = adni.files_to_df()
sprint.pd_cols(files_df)

adni_df = adni.to_df()
sprint.pd_cols(adni_df)
Analysis Overview
fig, axes = splot.meta_settings(rows=3)

splot.histplot(
    adni_df,
    x='subject.researchGroup',
    hue='subject.subjectSex',
    ax=axes[0, 0],
    plot_kws={'stat': 'frequency'},
    legend_kws={'title': 'ResearchGroup'},
    setting_kws={'title': 'ResearchGroup distribution', 'xlabel': 'Disorder'}
)
splot.histplot(
    adni_df,
    x='subject.subjectIdentifier',
    ax=axes[0, 1],
    plot_kws={'stat': 'frequency'},
    legend_kws={'title': 'ResearchGroup'},
    setting_kws={'title': 'SubjectIdentifier distribution', 'xlabel': 'subjectIdentifier', 'rotation': 90}
)
splot.histplot(
    adni_df,
    x='subject.subjectSex',
    ax=axes[1, 0],
    plot_kws={'stat': 'frequency'},
    legend_kws={'title': 'ResearchGroup'},
    setting_kws={'title': 'SubjectSex distribution', 'xlabel': 'subjectSex'}
)
splot.histplot(
    adni_df,
    x='subject.study.subjectAge',
    hue='subject.subjectSex',
    discrete=False,
    ax=axes[1, 1],
    plot_kws={'element': 'poly', 'fill': False},
    legend_kws={'title': 'ResearchGroup'},
    setting_kws={'title': 'SubjectAge distribution'}
)
splot.histplot(
    adni_df,
    x='subject.study.series.dateAcquired',
    hue='subject.researchGroup',
    discrete=False,
    ax=axes[2, 0],
    plot_kws={},
    legend_kws={'title': 'ResearchGroup'},
    setting_kws={'title': 'DateAcquired distribution'}
)
splot.histplot(
    adni_df,
    x='subject.study.weightKg',
    hue='subject.subjectSex',
    discrete=False,
    ax=axes[2, 1],
    plot_kws={'element': 'poly', 'fill': False},
    legend_kws={'title': 'subjectSex'},
    setting_kws={'title': 'weightKg distribution'}
)
plt.show()
Data sizes
fig, axes = splot.meta_settings(rows=2, figsize=(15, 10))

splot.histplot(
    adni_df,
    discrete=False,
    x='subject.study.imagingProtocol.protocolTerm.protocol.Number_of_Slices',
    hue='subject.researchGroup',
    multiple='stack',
    ax=axes[0, 0],
    plot_kws={'stat': 'frequency'},
    legend_kws={'title': 'ResearchGroup'},
    setting_kws={'title': 'Number of Slices', 'xlabel': 'Slices', 'ylabel': 'Frequency'}
)
splot.histplot(
    adni_df,
    discrete=False,
    x='subject.study.imagingProtocol.protocolTerm.protocol.Number_of_Columns',
    hue='subject.researchGroup',
    multiple='stack',
    ax=axes[0, 1],
    plot_kws={'stat': 'frequency'},
    legend_kws={'title': 'ResearchGroup'},
    setting_kws={'title': 'Number of Columns', 'xlabel': 'Columns', 'ylabel': 'Frequency'}
)
splot.histplot(
    adni_df,
    discrete=False,
    x='subject.study.imagingProtocol.protocolTerm.protocol.Number_of_Rows',
    hue='subject.researchGroup',
    multiple='stack',
    ax=axes[1, 0],
    plot_kws={'stat': 'frequency'},
    legend_kws={'title': 'ResearchGroup'},
    setting_kws={'title': 'Number of Rows', 'xlabel': 'Rows', 'ylabel': 'Frequency'}
)
plt.show()
Scoring
fig, axes = splot.meta_settings(rows=3)

splot.histplot(
    adni_df,
    discrete=True,
    x='subject.visit.assessment.component.assessmentScore.FAQTOTAL',
    hue='subject.researchGroup',
    multiple='stack',
    ax=axes[0, 0],
    plot_kws={'stat': 'frequency'},
    legend_kws={'title': 'ResearchGroup'},
    setting_kws={'title': 'Functional Activities Questionnaire (FAQTOTAL)', 'xlabel': 'Score', 'ylabel': 'Frequency'}
)
splot.histplot(
    adni_df,
    discrete=True,
    x='subject.visit.assessment.component.assessmentScore.NPISCORE',
    hue='subject.researchGroup',
    multiple='stack',
    ax=axes[0, 1],
    legend_kws={'title': 'ResearchGroup'},
    setting_kws={'title': 'Neuropsychiatric Inventory (NPISCORE)', 'xlabel': 'Score', 'ylabel': 'Frequency'}
)
splot.histplot(
    adni_df,
    discrete=True,
    x='subject.visit.assessment.component.assessmentScore.CDGLOBAL',
    hue='subject.researchGroup',
    multiple='stack',
    ax=axes[1, 0],
    legend_kws={'title': 'ResearchGroup'},
    setting_kws={'title': 'Clinical Dementia Rating Scale (CDGLOBAL)', 'xlabel': 'Score', 'ylabel': 'Frequency'}
)
splot.histplot(
    adni_df,
    discrete=True,
    x='subject.visit.assessment.component.assessmentScore.GDTOTAL',
    hue='subject.researchGroup',
    multiple='stack',
    ax=axes[1, 1],
    legend_kws={'title': 'ResearchGroup'},
    setting_kws={'title': 'Geriatric Depression Scale (GDTOTAL)', 'xlabel': 'Score', 'ylabel': 'Frequency'}
)
splot.histplot(
    adni_df,
    discrete=True,
    x='subject.visit.assessment.component.assessmentScore.MMSCORE',
    hue='subject.researchGroup',
    multiple='stack',
    ax=axes[2, 0],
    legend_kws={'title': 'ResearchGroup'},
    setting_kws={'title': 'Mini-Mental State Examination (MMSCORE)', 'xlabel': 'Score', 'ylabel': 'Frequency'}
)
splot.histplot(
    adni_df,
    x='subject.visit.assessment.component.assessmentScore.MMSCORE',
    hue='subject.researchGroup',
    discrete=False,
    ax=axes[2, 1],
    plot_kws={'element': 'poly', 'fill': False},
    legend_kws={'title': 'ResearchGroup'},
    setting_kws={'title': 'MMSE Score per Condition'}
)
plt.show()
Visualise brain slices

Create image generators
SKIP_LAYERS = 10
LIMIT_LAYERS = 70

image_AD_generator = adni.load_images(
    files=adni.load_files(adni.path.category+'AD/', adni.filename_category, use_processed=True)
)
image_CN_generator = adni.load_images(
    files=adni.load_files(adni.path.category+'CN/', adni.filename_category, use_processed=True)
)
image_MCI_generator = adni.load_images(
    files=adni.load_files(adni.path.category+'MCI/', adni.filename_category, use_processed=True)
)

### Testing functions
from nilearn.plotting import view_img, plot_glass_brain, plot_anat, plot_epi

test_image = next(image_CN_generator)
test_image.shape

while True:
    test_image = next(image_AD_generator)
    plot_anat(test_image, draw_cross=False, display_mode='z', cut_coords=20, annotate=False)
    plt.show()
    break

images_AD_array = adni.to_array(list(image_AD_generator))
images_CN_array = adni.to_array(list(image_CN_generator))
images_MCI_array = adni.to_array(list(image_MCI_generator))

images_AD = next(images_AD_array)
images_CN = next(images_CN_array)
images_MCI = next(images_MCI_array)
Coronal plane (From top)
image_AD_slices = [images_AD[layer, :, :] for layer in range(0, images_AD.shape[0], SKIP_LAYERS)]
dplay.display_advanced_plot(image_AD_slices)
plt.suptitle("Coronal plane - AD")

image_CN_slices = [images_CN[layer, :, :] for layer in range(0, images_CN.shape[0], SKIP_LAYERS)]
dplay.display_advanced_plot(image_CN_slices)
plt.suptitle("Coronal plane - CN")

image_MCI_slices = [images_MCI[layer, :, :] for layer in range(0, images_MCI.shape[0], SKIP_LAYERS)]
dplay.display_advanced_plot(image_MCI_slices)
plt.suptitle("Coronal plane - MCI")
Sagittal plane (From front)
image_slices = [images_AD[:, layer, :] for layer in range(0, images_AD.shape[1], SKIP_LAYERS)]
dplay.display_advanced_plot(image_slices)
plt.suptitle("Sagittal plane")
Horizontal plane (from side)
image_slices = [images_AD[:, :, layer] for layer in range(0, images_AD.shape[2], SKIP_LAYERS)]
dplay.display_advanced_plot(image_slices)
plt.suptitle("Horizontal plane")
TensorFlow Tutorial

Welcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow:

- Initialize variables
- Start your own session
- Train algorithms
- Implement a Neural Network

Programming frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code.

1 - Exploring the TensorFlow Library

To start, you will import the library:
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict

%matplotlib inline
np.random.seed(1)
Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example. $$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$
y_hat = tf.constant(36, name='y_hat')            # Define y_hat constant. Set to 36.
y = tf.constant(39, name='y')                    # Define y. Set to 39

loss = tf.Variable((y - y_hat)**2, name='loss')  # Create a variable for the loss

init = tf.global_variables_initializer()         # When init is run later (session.run(init)),
                                                 # the loss variable will be initialized and ready to be computed
with tf.Session() as session:                    # Create a session and print the output
    session.run(init)                            # Initializes the variables
    print(session.run(loss))                     # Prints the loss
9
Writing and running programs in TensorFlow has the following steps:

1. Create Tensors (variables) that are not yet executed/evaluated.
2. Write operations between those Tensors.
3. Initialize your Tensors.
4. Create a Session.
5. Run the Session. This will run the operations you'd written above.

Therefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to run `init=tf.global_variables_initializer()`. That initialized the loss variable, and in the last line we were finally able to evaluate the value of `loss` and print its value.

Now let us look at an easy example. Run the cell below:
a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a, b)

print(c)
Tensor("Mul:0", shape=(), dtype=int32)
As expected, you will not see 20! You got back a tensor whose value has not been computed yet: it has an empty shape and is of type "int32". All you did was add the operation to the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it.
sess = tf.Session()
print(sess.run(c))
20
Great! To summarize, **remember to initialize your variables, create a session and run the operations inside the session**. Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later. To specify values for a placeholder, you can pass in values by using a "feed dictionary" (`feed_dict` variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session.
# Change the value of x in the feed_dict
x = tf.placeholder(tf.int64, name='x')
print(sess.run(2 * x, feed_dict={x: 3}))
sess.close()
6
When you first defined `x` you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you **feed data** to these placeholders when running the session. Here's what's happening: when you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph.

1.1 - Linear function

Let's start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and $b$ is a random vector.

**Exercise**: Compute $WX + b$ where $W$, $X$, and $b$ are drawn from a random normal distribution. $W$ is of shape (4, 3), $X$ is (3, 1) and $b$ is (4, 1). As an example, here is how you would define a constant $X$ that has shape (3, 1):

```python
X = tf.constant(np.random.randn(3,1), name = "X")
```

You might find the following functions helpful:
- tf.matmul(..., ...) to do a matrix multiplication
- tf.add(..., ...) to do an addition
- np.random.randn(...) to initialize randomly
# GRADED FUNCTION: linear_function

def linear_function():
    """
    Implements a linear function:
            Initializes W to be a random tensor of shape (4,3)
            Initializes X to be a random tensor of shape (3,1)
            Initializes b to be a random tensor of shape (4,1)
    Returns:
    result -- runs the session for Y = WX + b
    """

    np.random.seed(1)

    ### START CODE HERE ### (4 lines of code)
    X = tf.constant(np.random.randn(3, 1), name="X")
    W = tf.constant(np.random.randn(4, 3), name="W")
    b = tf.constant(np.random.randn(4, 1), name="b")
    Y = tf.matmul(W, X) + b
    ### END CODE HERE ###

    # Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate
    ### START CODE HERE ###
    sess = tf.Session()
    result = sess.run(Y)
    ### END CODE HERE ###

    # close the session
    sess.close()

    return result

print("result = " + str(linear_function()))
result = [[-2.15657382]
 [ 2.95891446]
 [-1.08926781]
 [-0.84538042]]
***Expected Output***:

result: [[-2.15657382] [ 2.95891446] [-1.08926781] [-0.84538042]]

1.2 - Computing the sigmoid

Great! You just implemented a linear function. TensorFlow offers a variety of commonly used neural network functions like `tf.sigmoid` and `tf.softmax`. For this exercise let's compute the sigmoid function of an input. You will do this exercise using a placeholder variable `x`. When running the session, you should use the feed dictionary to pass in the input `z`. In this exercise, you will have to (i) create a placeholder `x`, (ii) define the operations needed to compute the sigmoid using `tf.sigmoid`, and then (iii) run the session.

**Exercise**: Implement the sigmoid function below. You should use the following:

- `tf.placeholder(tf.float32, name = "...")`
- `tf.sigmoid(...)`
- `sess.run(..., feed_dict = {x: z})`

Note that there are two typical ways to create and use sessions in tensorflow:

**Method 1:**
```python
sess = tf.Session()
# Run the variables initialization (if needed), run the operations
result = sess.run(..., feed_dict = {...})
sess.close() # Close the session
```

**Method 2:**
```python
with tf.Session() as sess:
    # run the variables initialization (if needed), run the operations
    result = sess.run(..., feed_dict = {...})
    # This takes care of closing the session for you :)
```
# GRADED FUNCTION: sigmoid

def sigmoid(z):
    """
    Computes the sigmoid of z

    Arguments:
    z -- input value, scalar or vector

    Returns:
    results -- the sigmoid of z
    """

    ### START CODE HERE ### (approx. 4 lines of code)
    # Create a placeholder for x. Name it 'x'.
    x = tf.placeholder(dtype=tf.float32, name="x")

    # compute sigmoid(x)
    sigmoid = tf.sigmoid(x)

    # Create a session, and run it. Please use the method 2 explained above.
    # You should use a feed_dict to pass z's value to x.
    with tf.Session() as sess:
        # Run session and call the output "result"
        result = sess.run(sigmoid, feed_dict={x: z})

    ### END CODE HERE ###

    return result

print("sigmoid(0) = " + str(sigmoid(0)))
print("sigmoid(12) = " + str(sigmoid(12)))
sigmoid(0) = 0.5 sigmoid(12) = 0.999994
MIT
Week3/Tensorflow_Tutorial.ipynb
dhingratul/Practical_Aspect_of_Deep_Learning
*** Expected Output ***: **sigmoid(0)**0.5 **sigmoid(12)**0.999994 **To summarize, you now know how to**:1. Create placeholders2. Specify the computation graph corresponding to operations you want to compute3. Create the session4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values. 1.3 - Computing the CostYou can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{[2](i)}$ and $y^{(i)}$ for i=1...m: $$ J = - \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log a^{ [2] (i)} + (1-y^{(i)})\log (1-a^{ [2] (i)} )\large )\small\tag{2}$$you can do it in one line of code in tensorflow!**Exercise**: Implement the cross entropy loss. The function you will use is: - `tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)`Your code should input `z`, compute the sigmoid (to get `a`) and then compute the cross entropy cost $J$. All this can be done using one call to `tf.nn.sigmoid_cross_entropy_with_logits`, which computes$$- \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log \sigma(z^{[2](i)}) + (1-y^{(i)})\log (1-\sigma(z^{[2](i)}))\large )\small\tag{2}$$
# GRADED FUNCTION: cost

def cost(logits, labels):
    """
    Computes the cost using the sigmoid cross entropy

    Arguments:
    logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
    labels -- vector of labels y (1 or 0)

    Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels"
    in the TensorFlow documentation. So logits will feed into z, and labels into y.

    Returns:
    cost -- runs the session of the cost (formula (2))
    """

    ### START CODE HERE ###

    # Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
    z = tf.placeholder(dtype=tf.float32, name="logits")
    y = tf.placeholder(dtype=tf.float32, name="labels")

    # Use the loss function (approx. 1 line)
    cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=z, labels=y)

    # Create a session (approx. 1 line). See method 1 above.
    sess = tf.Session()

    # Run the session (approx. 1 line).
    cost = sess.run(cost, feed_dict={z: logits, y: labels})

    # Close the session (approx. 1 line). See method 1 above.
    sess.close()

    ### END CODE HERE ###

    return cost

logits = sigmoid(np.array([0.2,0.4,0.7,0.9]))
cost = cost(logits, np.array([0,0,1,1]))
print ("cost = " + str(cost))
cost = [ 1.00538719 1.03664088 0.41385433 0.39956614]
MIT
Week3/Tensorflow_Tutorial.ipynb
dhingratul/Practical_Aspect_of_Deep_Learning
** Expected Output** : **cost** [ 1.00538719 1.03664088 0.41385433 0.39956614] 1.4 - Using One Hot encodingsMany times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have the following y vector which you will need to convert as follows:This is called a "one hot" encoding, because in the converted representation exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code: - tf.one_hot(labels, depth, axis) **Exercise:** Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use `tf.one_hot()` to do this.
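For reference, the numpy version alluded to above ("a few lines of code") might look roughly like the following sketch. This is our illustration, not part of the graded exercise, and the helper name `one_hot_np` is made up; the exercise itself should use `tf.one_hot()`.

```python
import numpy as np

def one_hot_np(labels, C):
    # Build a (C, m) matrix of zeros, then set a single 1 per column:
    # row index = the label, column index = the example.
    m = len(labels)
    one_hot = np.zeros((C, m))
    one_hot[labels, np.arange(m)] = 1
    return one_hot

print(one_hot_np(np.array([1, 2, 3, 0, 2, 1]), C=4))
```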
# GRADED FUNCTION: one_hot_matrix def one_hot_matrix(labels, C): """ Creates a matrix where the i-th row corresponds to the ith class number and the jth column corresponds to the jth training example. So if example j had a label i. Then entry (i,j) will be 1. Arguments: labels -- vector containing the labels C -- number of classes, the depth of the one hot dimension Returns: one_hot -- one hot matrix """ ### START CODE HERE ### # Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line) C = tf.constant(C, dtype=tf.int32, name="C") # Use tf.one_hot, be careful with the axis (approx. 1 line) one_hot_matrix = tf.one_hot(labels, C, axis=0) # Create the session (approx. 1 line) sess = tf.Session() # Run the session (approx. 1 line) one_hot = sess.run(one_hot_matrix) # Close the session (approx. 1 line). See method 1 above. sess.close() ### END CODE HERE ### return one_hot labels = np.array([1,2,3,0,2,1]) one_hot = one_hot_matrix(labels, C = 4) print ("one_hot = " + str(one_hot))
one_hot = [[ 0. 0. 0. 1. 0. 0.] [ 1. 0. 0. 0. 0. 1.] [ 0. 1. 0. 0. 1. 0.] [ 0. 0. 1. 0. 0. 0.]]
MIT
Week3/Tensorflow_Tutorial.ipynb
dhingratul/Practical_Aspect_of_Deep_Learning
**Expected Output**: **one_hot** [[ 0. 0. 0. 1. 0. 0.] [ 1. 0. 0. 0. 0. 1.] [ 0. 1. 0. 0. 1. 0.] [ 0. 0. 1. 0. 0. 0.]] 1.5 - Initialize with zeros and onesNow you will learn how to initialize a vector of zeros and ones. The function you will be calling is `tf.ones()`. To initialize with zeros you could use tf.zeros() instead. These functions take in a shape and return an array of dimension shape full of zeros and ones respectively. **Exercise:** Implement the function below to take in a shape and return an array of ones of that shape. - tf.ones(shape)
# GRADED FUNCTION: ones def ones(shape): """ Creates an array of ones of dimension shape Arguments: shape -- shape of the array you want to create Returns: ones -- array containing only ones """ ### START CODE HERE ### # Create "ones" tensor using tf.ones(...). (approx. 1 line) ones = tf.ones(shape) # Create the session (approx. 1 line) sess = tf.Session() # Run the session to compute 'ones' (approx. 1 line) ones = sess.run(ones) # Close the session (approx. 1 line). See method 1 above. sess.close() ### END CODE HERE ### return ones print ("ones = " + str(ones([3])))
ones = [ 1. 1. 1.]
MIT
Week3/Tensorflow_Tutorial.ipynb
dhingratul/Practical_Aspect_of_Deep_Learning
**Expected Output:** **ones** [ 1. 1. 1.] 2 - Building your first neural network in tensorflowIn this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model:- Create the computation graph- Run the graphLet's delve into the problem you'd like to solve! 2.0 - Problem statement: SIGNS DatasetOne afternoon, with some friends we decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that would facilitate communications from a speech-impaired person to someone who doesn't understand sign language.- **Training set**: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number).- **Test set**: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number).Note that this is a subset of the SIGNS dataset. The complete dataset contains many more signs.Here are examples for each number, along with an explanation of how we represent the labels. These are the original pictures, before we lowered the image resolution to 64 by 64 pixels. **Figure 1**: SIGNS dataset Run the following code to load the dataset.
# Loading the dataset X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
_____no_output_____
MIT
Week3/Tensorflow_Tutorial.ipynb
dhingratul/Practical_Aspect_of_Deep_Learning
Change the index below and run the cell to visualize some examples in the dataset.
# Example of a picture index = 0 plt.imshow(X_train_orig[index]) print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
y = 5
MIT
Week3/Tensorflow_Tutorial.ipynb
dhingratul/Practical_Aspect_of_Deep_Learning
As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.
# Flatten the training and test images X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T # Normalize image vectors X_train = X_train_flatten/255. X_test = X_test_flatten/255. # Convert training and test labels to one hot matrices Y_train = convert_to_one_hot(Y_train_orig, 6) Y_test = convert_to_one_hot(Y_test_orig, 6) print ("number of training examples = " + str(X_train.shape[1])) print ("number of test examples = " + str(X_test.shape[1])) print ("X_train shape: " + str(X_train.shape)) print ("Y_train shape: " + str(Y_train.shape)) print ("X_test shape: " + str(X_test.shape)) print ("Y_test shape: " + str(Y_test.shape))
number of training examples = 1080 number of test examples = 120 X_train shape: (12288, 1080) Y_train shape: (6, 1080) X_test shape: (12288, 120) Y_test shape: (6, 120)
MIT
Week3/Tensorflow_Tutorial.ipynb
dhingratul/Practical_Aspect_of_Deep_Learning
**Note** that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing. **Your goal** is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one. **The model** is *LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX*. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to the case where there are more than two classes. 2.1 - Create placeholdersYour first task is to create placeholders for `X` and `Y`. This will allow you to later pass your training data in when you run your session. **Exercise:** Implement the function below to create the placeholders in tensorflow.
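A quick aside before the exercise: to see concretely that SOFTMAX generalizes SIGMOID, note that softmax over the two logits $(z, 0)$ assigns the first class probability $e^z / (e^z + 1) = \sigma(z)$. A small numpy check (our illustration, not part of the assignment):

```python
import numpy as np

def sigmoid_np(z):
    return 1 / (1 + np.exp(-z))

def softmax_np(z):
    e = np.exp(z - np.max(z))  # shift logits for numerical stability
    return e / e.sum()

z = 1.7
# softmax over the pair of logits (z, 0); its first entry should equal sigmoid(z)
print(softmax_np(np.array([z, 0.0]))[0])
print(sigmoid_np(z))
```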
# GRADED FUNCTION: create_placeholders

def create_placeholders(n_x, n_y):
    """
    Creates the placeholders for the tensorflow session.

    Arguments:
    n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288)
    n_y -- scalar, number of classes (from 0 to 5, so -> 6)

    Returns:
    X -- placeholder for the data input, of shape [n_x, None] and dtype "float"
    Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float"

    Tips:
    - You will use None because it lets us be flexible about the number of examples used
      for the placeholders. In fact, the number of examples during test/train is different.
    """

    ### START CODE HERE ### (approx. 2 lines)
    X = tf.placeholder(dtype=tf.float32, shape=[n_x, None], name="X")
    Y = tf.placeholder(dtype=tf.float32, shape=[n_y, None], name="Y")
    ### END CODE HERE ###

    return X, Y

X, Y = create_placeholders(12288, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
X = Tensor("X_3:0", shape=(12288, ?), dtype=float32) Y = Tensor("Y_2:0", shape=(6, ?), dtype=float32)
MIT
Week3/Tensorflow_Tutorial.ipynb
dhingratul/Practical_Aspect_of_Deep_Learning
**Expected Output**: **X** Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1) **Y** Tensor("Placeholder_2:0", shape=(6, ?), dtype=float32) (not necessarily Placeholder_2) 2.2 - Initializing the parametersYour second task is to initialize the parameters in tensorflow.**Exercise:** Implement the function below to initialize the parameters in tensorflow. You are going to use Xavier Initialization for weights and Zero Initialization for biases. The shapes are given below. As an example, to help you, for W1 and b1 you could use: ```pythonW1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())```Please use `seed = 1` to make sure your results match ours.
# GRADED FUNCTION: initialize_parameters def initialize_parameters(): """ Initializes parameters to build a neural network with tensorflow. The shapes are: W1 : [25, 12288] b1 : [25, 1] W2 : [12, 25] b2 : [12, 1] W3 : [6, 12] b3 : [6, 1] Returns: parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3 """ tf.set_random_seed(1) # so that your "random" numbers match ours ### START CODE HERE ### (approx. 6 lines of code) W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1)) b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer()) W2 = tf.get_variable("W2", [12,25], initializer = tf.contrib.layers.xavier_initializer(seed = 1)) b2 = tf.get_variable("b2", [12,1], initializer = tf.zeros_initializer()) W3 = tf.get_variable("W3", [6,12], initializer = tf.contrib.layers.xavier_initializer(seed = 1)) b3 = tf.get_variable("b3", [6,1], initializer = tf.zeros_initializer()) ### END CODE HERE ### parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2, "W3": W3, "b3": b3} return parameters tf.reset_default_graph() with tf.Session() as sess: parameters = initialize_parameters() print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"]))
W1 = <tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref> b1 = <tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref> W2 = <tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref> b2 = <tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref>
MIT
Week3/Tensorflow_Tutorial.ipynb
dhingratul/Practical_Aspect_of_Deep_Learning
**Expected Output**: **W1** **b1** **W2** **b2** As expected, the parameters haven't been evaluated yet. 2.3 - Forward propagation in tensorflow You will now implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and it will complete the forward pass. The functions you will be using are: - `tf.add(...,...)` to do an addition- `tf.matmul(...,...)` to do a matrix multiplication- `tf.nn.relu(...)` to apply the ReLU activation**Question:** Implement the forward pass of the neural network. We commented for you the numpy equivalents so that you can compare the tensorflow implementation to numpy. It is important to note that the forward propagation stops at `z3`. The reason is that in tensorflow the last linear layer output is given as input to the function computing the loss. Therefore, you don't need `a3`!
# GRADED FUNCTION: forward_propagation

def forward_propagation(X, parameters):
    """
    Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX

    Arguments:
    X -- input dataset placeholder, of shape (input size, number of examples)
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
                  the shapes are given in initialize_parameters

    Returns:
    Z3 -- the output of the last LINEAR unit
    """

    # Retrieve the parameters from the dictionary "parameters"
    W1 = parameters['W1']
    b1 = parameters['b1']
    W2 = parameters['W2']
    b2 = parameters['b2']
    W3 = parameters['W3']
    b3 = parameters['b3']

    ### START CODE HERE ### (approx. 5 lines)    # Numpy Equivalents:
    Z1 = tf.add(tf.matmul(W1, X), b1)            # Z1 = np.dot(W1, X) + b1
    A1 = tf.nn.relu(Z1)                          # A1 = relu(Z1)
    Z2 = tf.add(tf.matmul(W2, A1), b2)           # Z2 = np.dot(W2, A1) + b2
    A2 = tf.nn.relu(Z2)                          # A2 = relu(Z2)
    Z3 = tf.add(tf.matmul(W3, A2), b3)           # Z3 = np.dot(W3, A2) + b3
    ### END CODE HERE ###

    return Z3

tf.reset_default_graph()

with tf.Session() as sess:
    X, Y = create_placeholders(12288, 6)
    parameters = initialize_parameters()
    Z3 = forward_propagation(X, parameters)
    print("Z3 = " + str(Z3))
Z3 = Tensor("Add_2:0", shape=(6, ?), dtype=float32)
MIT
Week3/Tensorflow_Tutorial.ipynb
dhingratul/Practical_Aspect_of_Deep_Learning
**Expected Output**: **Z3** Tensor("Add_2:0", shape=(6, ?), dtype=float32) You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to backpropagation. 2.4 Compute costAs seen before, it is very easy to compute the cost using:```pythontf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...))```**Question**: Implement the cost function below. - It is important to know that the "`logits`" and "`labels`" inputs of `tf.nn.softmax_cross_entropy_with_logits` are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you.- Besides, `tf.reduce_mean` averages the per-example losses over the examples.
# GRADED FUNCTION: compute_cost def compute_cost(Z3, Y): """ Computes the cost Arguments: Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples) Y -- "true" labels vector placeholder, same shape as Z3 Returns: cost - Tensor of the cost function """ # to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...) logits = tf.transpose(Z3) labels = tf.transpose(Y) ### START CODE HERE ### (1 line of code) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = labels)) ### END CODE HERE ### return cost tf.reset_default_graph() with tf.Session() as sess: X, Y = create_placeholders(12288, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) cost = compute_cost(Z3, Y) print("cost = " + str(cost))
cost = Tensor("Mean:0", shape=(), dtype=float32)
MIT
Week3/Tensorflow_Tutorial.ipynb
dhingratul/Practical_Aspect_of_Deep_Learning
**Expected Output**: **cost** Tensor("Mean:0", shape=(), dtype=float32) 2.5 - Backward propagation & parameter updatesThis is where you become grateful to programming frameworks. All of the backpropagation and parameter updates are taken care of in one line of code. It is very easy to incorporate this line in the model.After you compute the cost function, you will create an "`optimizer`" object. You have to call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate.For instance, for gradient descent the optimizer would be:```pythonoptimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)```To make the optimization you would do:```python_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})```This computes the backpropagation by passing through the tensorflow graph in reverse order, from cost to inputs.**Note** When coding, we often use `_` as a "throwaway" variable to store values that we won't need to use later. Here, `_` takes on the evaluated value of `optimizer`, which we don't need (and `c` takes the value of the `cost` variable). 2.6 - Building the modelNow, you will bring it all together! **Exercise:** Implement the model. You will be calling the functions you had previously implemented.
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
          num_epochs = 1500, minibatch_size = 32, print_cost = True):
    """
    Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.

    Arguments:
    X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
    Y_train -- training labels, of shape (output size = 6, number of training examples = 1080)
    X_test -- test set, of shape (input size = 12288, number of test examples = 120)
    Y_test -- test labels, of shape (output size = 6, number of test examples = 120)
    learning_rate -- learning rate of the optimization
    num_epochs -- number of epochs of the optimization loop
    minibatch_size -- size of a minibatch
    print_cost -- True to print the cost every 100 epochs

    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """

    ops.reset_default_graph()    # to be able to rerun the model without overwriting tf variables
    tf.set_random_seed(1)        # to keep consistent results
    seed = 3                     # to keep consistent results
    (n_x, m) = X_train.shape     # (n_x: input size, m : number of examples in the train set)
    n_y = Y_train.shape[0]       # n_y : output size
    costs = []                   # To keep track of the cost

    # Create Placeholders of shape (n_x, n_y)
    ### START CODE HERE ### (1 line)
    X, Y = create_placeholders(n_x, n_y)
    ### END CODE HERE ###

    # Initialize parameters
    ### START CODE HERE ### (1 line)
    parameters = initialize_parameters()
    ### END CODE HERE ###

    # Forward propagation: Build the forward propagation in the tensorflow graph
    ### START CODE HERE ### (1 line)
    Z3 = forward_propagation(X, parameters)
    ### END CODE HERE ###

    # Cost function: Add cost function to tensorflow graph
    ### START CODE HERE ### (1 line)
    cost = compute_cost(Z3, Y)
    ### END CODE HERE ###

    # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
    ### START CODE HERE ### (1 line)
    optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
    ### END CODE HERE ###

    # Initialize all the variables
    init = tf.global_variables_initializer()

    # Start the session to compute the tensorflow graph
    with tf.Session() as sess:

        # Run the initialization
        sess.run(init)

        # Do the training loop
        for epoch in range(num_epochs):

            epoch_cost = 0.                            # Defines a cost related to an epoch
            num_minibatches = int(m / minibatch_size)  # number of minibatches of size minibatch_size in the train set
            seed = seed + 1
            minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)

            for minibatch in minibatches:

                # Select a minibatch
                (minibatch_X, minibatch_Y) = minibatch

                # IMPORTANT: The line that runs the graph on a minibatch.
                # Run the session to execute the "optimizer" and the "cost", the feed_dict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line) _ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y}) ### END CODE HERE ### epoch_cost += minibatch_cost / num_minibatches # Print the cost every epoch if print_cost == True and epoch % 100 == 0: print ("Cost after epoch %i: %f" % (epoch, epoch_cost)) if print_cost == True and epoch % 5 == 0: costs.append(epoch_cost) # plot the cost plt.plot(np.squeeze(costs)) plt.ylabel('cost') plt.xlabel('iterations (per tens)') plt.title("Learning rate =" + str(learning_rate)) plt.show() # lets save the parameters in a variable parameters = sess.run(parameters) print ("Parameters have been trained!") # Calculate the correct predictions correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y)) # Calculate accuracy on the test set accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train})) print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test})) return parameters
_____no_output_____
MIT
Week3/Tensorflow_Tutorial.ipynb
dhingratul/Practical_Aspect_of_Deep_Learning
Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes!
parameters = model(X_train, Y_train, X_test, Y_test)
Cost after epoch 0: 1.855702 Cost after epoch 100: 1.016458 Cost after epoch 200: 0.733102 Cost after epoch 300: 0.572940 Cost after epoch 400: 0.468774 Cost after epoch 500: 0.381021 Cost after epoch 600: 0.313822 Cost after epoch 700: 0.254158 Cost after epoch 800: 0.203829 Cost after epoch 900: 0.166421 Cost after epoch 1000: 0.141486 Cost after epoch 1100: 0.107580 Cost after epoch 1200: 0.086270 Cost after epoch 1300: 0.059371 Cost after epoch 1400: 0.052228
MIT
Week3/Tensorflow_Tutorial.ipynb
dhingratul/Practical_Aspect_of_Deep_Learning
**Expected Output**: **Train Accuracy** 0.999074 **Test Accuracy** 0.716667 Amazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy.**Insights**:- Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting (a sketch of the L2 option appears at the end of this section). - Think about the session as a block of code to train the model. Each time you run the session on a minibatch, it trains the parameters. In total you have run the session a large number of times (1500 epochs) until you obtained well-trained parameters. 2.7 - Test with your own image (optional / ungraded exercise)Congratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Write your image's name in the following code 4. Run the code and check if the algorithm is right!
import scipy from PIL import Image from scipy import ndimage ## START CODE HERE ## (PUT YOUR IMAGE NAME) my_image = "thumbs_up.jpg" ## END CODE HERE ## # We preprocess your image to fit your algorithm. fname = "images/" + my_image image = np.array(ndimage.imread(fname, flatten=False)) my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T my_image_prediction = predict(my_image, parameters) plt.imshow(image) print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction)))
Your algorithm predicts: y = 3
MIT
Week3/Tensorflow_Tutorial.ipynb
dhingratul/Practical_Aspect_of_Deep_Learning
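As a follow-up to the regularization insight above: one way to attack the train/test gap would be to add an L2 penalty on the weight matrices to the cost. A minimal sketch in the same TF1 style, assuming the `compute_cost` and `parameters` defined earlier; the penalty weight `beta` and the function name are our choices, not part of the graded assignment:

```python
def compute_cost_with_l2(Z3, Y, parameters, beta=0.01):
    """Cross-entropy cost plus an L2 penalty on the weight matrices."""
    base_cost = compute_cost(Z3, Y)  # the cost function defined above
    # tf.nn.l2_loss(W) computes sum(W ** 2) / 2
    l2 = (tf.nn.l2_loss(parameters["W1"])
          + tf.nn.l2_loss(parameters["W2"])
          + tf.nn.l2_loss(parameters["W3"]))
    return base_cost + beta * l2
```

Inside `model`, this would replace the `compute_cost(Z3, Y)` call, and `beta` would become one more hyperparameter to tune.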
Python I Python is an interpreted high-level general-purpose programming language. Its design philosophy emphasizes code readability with its use of significant indentation. Its language constructs as well as its object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects.* [Cheatsheet](https://perso.limsi.fr/pointal/_media/python:cours:mementopython3-english.pdf) The Python InterpreterExecution of Python programs is often performed by an **interpreter**, meaning that program statements are converted to machine executable code at **runtime** (i.e., when the program is actually run) as opposed to **compiled** into executable code before it is run by the end user. This is one of the primary ways we'll interact with Python, especially at first. We'll type some Python code and then hit the `Enter` key. This causes the code to be translated and executed. Interpretation allows great flexibility (interpreted programs can modify their source code at run time), but it's often the case that interpreted programs run much more slowly than their compiled counterparts. It's also often more difficult to find errors in interpreted programs. We can interact with the Python interpreter via a prompt, which looks something like the following: (base) C:\Users\nimda>python Python 3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 10:22:32) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> print("Hello World") # note: this will print something to the screen. Hello World >>>Above, the `print` function prints out a string representation of the argument. The `#` denotes a comment, and the interpreter skips anything on the line after it (that is, it won't try to interpret anything after the `#`). Of course, we can save Python code into a program file and execute it later, too. Jupyter NotebooksWe'll also interact with Python using Jupyter Notebooks (like this one). When we hit `Run` in the menu bar, we are performing an action analogous to hitting enter from a command prompt. The code in the active cell will be executed. Users should be aware that though we are interacting with a Web page, there is a Web server and Python environment running behind the scenes. This adds a layer of complexity, but the ability to mix well-formatted documentation and code makes using Jupyter Notebook worthwhile. Python Identifiers and Variables IdentifiersAn **identifier** in Python is a word (a string) used to identify a variable, function, class, etc. in a Python program. It can be thought of as a proper name. Identifiers start with a letter (A-Z, a-z) or an underscore `_`; this first character is followed by a sequence of letters, numbers, and underscores.Certain identifiers, such as `class` or `if`, are built-in keywords and cannot be redefined by users. VariablesAs in most programming languages, **variables** play a central role in Python. We need a way to store and refer to data in our programs, and variables are the primary way to do this. Specifically, we assign data values to variables using the `=` operator. After the assignment has been made, we may use the variable to access the data as many times as we like. In general, the righthand side of an assignment is evaluated first (e.g., 1+1 is evaluated to 2), and afterwards the result is stored in the variable specified on the left. That explains why the last line below results in a value of 6 being printed. 
On evaluation of the righthand side, the current value of `blue_fish` (3) is added to itself, and the resulting value is assigned to `blue_fish`, overwriting the 3.
one_fish = 1 two_fish = one_fish + 1 blue_fish = one_fish + two_fish print(one_fish) print(two_fish) print(blue_fish) blue_fish = blue_fish + blue_fish print(blue_fish)
1 2 3 6
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Dynamic TypingNote that no data type (e.g., integer, string) is specified in an assignment, even the first time a variable is used. In general, variables and types are *not* declared in Python before a value is assigned. Python is said to be a **dynamically typed** language. The below code is perfectly fine in Python, but assigning a number and then a string in another language such as Java would cause an error.
a = 1 print(a) a = "hello" print(a)
1 hello
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Data TypesAs in most programming languages, each data value in a Python program has a **data type** (even though we typically don't specify it). We'll discuss some of the datatypes here. For a given data value, we can get its type using the `type` function, which takes an argument. The below print expressions show several of the built-in data types (and how literal values are parsed by default).
print(type(1)) # an integer print(type(2.0)) # a float print(type("hi!")) # a string print(type(True)) # a boolean value print(type([1, 2, 3, 4, 5])) # a list (a mutable collection) print(type((1, 2, 3, 4, 5))) # a tuple (an immutable collection) print(type({"fname": "john", "lname": "doe"})) # a dictionary (a collection of key-value pairs)
<class 'int'> <class 'float'> <class 'str'> <class 'bool'> <class 'list'> <class 'tuple'> <class 'dict'>
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
NumbersThe basic numerical data types of python are:* `int` (integer values), * `float` (floating point numbers), and * `complex` (complex numbers).
x = 1
y = 1.0
z = 1 + 2j
w = 1e10
v = 1.0
u = 2j

print(type(x), ": ", x)
print(type(y), ": ", y)
print(type(z), ": ", z)
print(type(w), ": ", w)
print(type(v), ": ", v)
print(type(u), ": ", u)
<class 'int'> : 1 <class 'float'> : 1.0 <class 'complex'> : (1+2j) <class 'float'> : 10000000000.0 <class 'float'> : 1.0 <class 'complex'> : 2j
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
In general, a number written simply as an integer will, unsurprisingly, be interpreted in Python as an `int`.Numbers written using a `.` or scientific notation are interpreted as floats. Numbers written using `j` are interpreted as complex numbers.**NOTE**: Unlike some other languages, Python 3 does not have minimum or maximum integer values (Python 2 does, however).
1 + 3 - (3 - 2) # simple addition and subtraction 4 * 2.0 # multiplication of an int and a float (yields a float) 5 / 2 # floating point division print(5.6 // 2) # integer division print(type(5.6 // 2)) 5 % 2 # modulo operator (straightforwardly, the integer remainder of 5/2) 2 % -5 # (not so intuitive if negative numbers are involved) 2**4 # exponentiation
_____no_output_____
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Data Type of resultsWhen two numbers of different types are used in an arithmetic operation, the data type is usually what one would expect, but there are some cases where it's different from either operand. For instance, though 5 and 2 are both integers, the result of `5/2` is a `float`, and the result of `5.6//2` (integer division) is a float. StringsStrings in Python (datatype `str`) can be enclosed in single (`'`) or double (`"`) quotes. It doesn't matter which is used, but the opening and closing marks must be of the same type. The backslash `\` is used to escape quotes in a string as well as to indicate other escape characters (e.g., `\n` indicates a new line). Upon printing, the string is formatted appropriately.
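To make the point about result types concrete before moving on to the string examples, here is a quick illustrative check (ours):

```python
print(type(5 + 2.0))   # int + float -> <class 'float'>
print(type(5 / 2))     # true division of two ints -> <class 'float'>
print(type(5.6 // 2))  # floor division with a float operand -> <class 'float'>
print(type(4 // 2))    # floor division of two ints -> <class 'int'>
print(type(1 + 2j))    # int + complex -> <class 'complex'>
```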
print("This is a string") print('this is a string containing "quotes"') print('this is another string containing "quotes"') print("this is string\nhas two lines")
This is a string this is a string containing "quotes" this is another string containing "quotes" this is string has two lines
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
To prevent processing of escape characters, you can indicate a *raw* string by putting an `r` before the string.
print(r"this is string \n has only one line")
this is string \n has only one line
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Multiline StringsMultiline strings can be delineated using 3 quotes. If you do not wish to include a line end in the output, you can end the line with `\`.
print( """Line 1 Line 2 Line 3\ Line 3 continued""" )
Line 1 Line 2 Line 3Line 3 continued
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
String Concatenation Strings can be concatenated. You must be careful when trying to concatenate other types to a string, however. They must be converted to strings first using `str()`.
print("This" + " line contains " + str(4) + " components") print( "Here are some things converted to strings: " + str(2.3) + ", " + str(True) + ", " + str((1, 2)) )
This line contains 4 components Here are some things converted to strings: 2.3, True, (1, 2)
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
`print` can take an arbitrary number of arguments. Leveraging this eliminates the need to explicitly convert data values to strings (because we're no longer attempting to concatenate strings).
print("This", "line contains", 4, "components") print("Here are some things converted to strings:", 2.3, ",", True, ",", (1, 2))
This line contains 4 components Here are some things converted to strings: 2.3 , True , (1, 2)
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Note, however, that `print` will by default insert a space between elements. If you wish to change the separator between items (e.g., to `,`), add `sep=","` as an argument.
print("This", "line contains", 4, "components", sep="---") print( "Here are some things converted to strings:", 2.3, ",", True, ",", (1, 2), sep="---" )
This---line contains---4---components Here are some things converted to strings:---2.3---,---True---,---(1, 2)
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
You can also create a string from another string by *multiplying* it by a number.
word1 = "abba" word2 = 3 * word1 print(word2)
abbaabbaabba
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Also, if multiple **string literals** (as opposed to variables or string expressions) appear consecutively, they will be combined into one string.
a = "this " "is " "the " "way " "the " "world " "ends." print(a) print(type(a)) a = "this ", "is ", "the ", "way ", "the ", "world ", "ends." print(a) print(type(a))
this is the way the world ends. <class 'str'> ('this ', 'is ', 'the ', 'way ', 'the ', 'world ', 'ends.') <class 'tuple'>
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Substrings: Indexing and SlicingA character of a string can be extracted using an index (starting at 0), and a substring can be extracted using **slices**. Slices indicate a range of indexes. The notation is similar to that used for arrays in other languages.It also happens that indexing from the right (starting at -1) is possible.
string1 = "this is the way the world ends." print(string1[12]) # the substring at index 12 (1 character) print(string1[0:4]) # from the start of the string to index 4 (but 4 is excluded) print(string1[5:]) # from index 5 to the end of the string print(string1[:4]) # from the start of the string to index 4 (4 is excluded) print(string1[-1]) # The last character of the string print(string1[-5:-1]) # from index -5 to -1 (but excluding -1) print(string1[-5:]) # from index -5 to the end of the string
w this is the way the world ends. this . ends ends.
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
**NOTE**: Strings are **immutable**. We cannot reassign a character or sequence in a string as we might assign values to an array in some other programming languages. When the below code is executed, an exception (error) will be raised.
a = "abc" a[0] = "b" # this will raise an exception
_____no_output_____
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Splitting and Joining StringsIt's often the case that we want to split strings into multiple substrings, e.g., when reading a comma-delimited list of values. The `split` method of a string does just that. It returns a list object (lists are covered later). To combine strings using a delimiter (e.g., to create a comma-delimited list), we can use `join`.
text = "The quick brown fox jumped over the lazy dog" spl = text.split() # This returns a list of strings (lists are covered later) print(spl) joined = ",".join(spl) print(joined) # and this re-joins them, separating words with commas spl = joined.split(",") # and this re-splits them, again based on commas print(spl) joined = "-".join(spl) # and this re-joins them, separating words with dashes print(joined)
['The', 'quick', 'brown', 'fox', 'jumped', 'over', 'the', 'lazy', 'dog'] The,quick,brown,fox,jumped,over,the,lazy,dog ['The', 'quick', 'brown', 'fox', 'jumped', 'over', 'the', 'lazy', 'dog'] The-quick-brown-fox-jumped-over-the-lazy-dog
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Similarly, to split a multiline string into a list of lines (each a string), we can use `splitlines`.
lines = """one two three""" li = lines.splitlines() # Split the multiple line string print(li)
['one', 'two', 'three']
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
To join strings into multiple lines, we can again use `join`.
lines = ["one", "two", "three"]
data = "\n".join(lines)  # join the list of strings into one multiline string
print(data)
one two three
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Boolean Values, and NonePython has two Boolean values, `True` and `False`. The normal logical operations (`and`, `or`, `not`) are present.
print(True and False) print(True or False) print(not True)
False True False
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
There is also the value `None` (the only value of the `NoneType` data type). `None` is used to stand for the absence of a value. However, it can be used in place of False, as can zero numerical values (of any numerical type), empty sequences/collections (`[]`,`()`, `{}`, etc.). Other values are treated as `True`. Note that Boolean expressions are short-circuited. As soon as the interpreter knows enough to compute the appropriate Boolean value of the expression, it stops further evaluation. Also, the return value of the Boolean expression need not be a Boolean value, as indicated below. The value of the last item evaluated is returned.
print(1 and True) print(True and 66) print(True and "aa") print(False and "aa") print(True or {}) print(not []) print(True and ())
True 66 aa False True True ()
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
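Short-circuiting is not just an optimization; it also lets the left operand act as a guard for the right one. A small illustrative example (ours):

```python
x = 0
# Because x != 0 is False, Python never evaluates 1 / x,
# so no ZeroDivisionError is raised.
print(x != 0 and 1 / x > 0.5)  # False

x = 4
print(x != 0 and 1 / x > 0.5)  # False (1/4 = 0.25)
print(x != 0 and 1 / x > 0.1)  # True
```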
Boolean Comparisons There are 8 basic comparison operations in Python.| Symbol | Note | | --- | --- || `<` | less than | | `<=` | less than or equal to | | `>` | greater than | | `>=` | greater than or equal to | | `==` | equal to | | `!=` | not equal to | | `is` | identical to (for objects) | | `is not` | not identical to (for objects) | Regarding the first 6, these will work as expected for numerical values. Note, however, that they can be applied to other datatypes as well. Strings are compared on a character-by-character basis, based on a lexicographic ordering. Sequences such as lists are compared on an element by element basis.
print("abc" > "ac") print("a" < "1") print("A" < "a") print((1, 1, 2) < (1, 1, 3))
False False True True
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Note that `is` is true only if the two items compared are the *same* object, whereas `==` only checks for equality in a weaker sense. Below, the two tuples `x` and `y` have elements that evaluate as being equal, but the two tuples are nevertheless distinct in memory. As such, the first `print` statement should yield `True`, while the second should yield `False`.
x = (1, 1, 2) y = (1, 1, 2) print(x == y) print(x is y) x = "hello" y = x a = "hel" b = "lo" z = a + b w = x[:] print(x) print(y) print(z) print("x==y: ", x == y) print("x==z: ", x == z) print("x is y: ", x is y) print("x is z: ", x is z) print("x is w: ", x is w)
hello hello hello x==y: True x==z: True x is y: True x is z: False x is w: True
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Converting between TypesValues of certain data types can be converted to values of other datatypes (actually, a new value of the desired data type is produced). If the conversion cannot take place (because the datatypes are incompatible), an exception will be raised; an example of this failure appears after the next cell.
x = 1 s = str(x) # convert x to a string s_int = int(s) s_float = float(s) s_comp = complex(s) x_float = float(x) print(s) print(s_int) # convert to an integer print(s_float) # convert to a floating point number print(s_comp) # convert to a complext number print(x_float) # Let's check their IDs print(id(x)) print(id(s)) print(id(s_int)) print(id(s_float)) print(id(x_float)) print(id(int(x_float)))
1 1 1.0 (1+0j) 1.0 93926537898496 140538951529264 93926537898496 140538952028656 140538952028464 93926537898496
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
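To see the failure case mentioned above, converting a string that is not a valid number raises a `ValueError` (an illustrative example):

```python
try:
    n = int("twelve")  # not a valid integer literal
except ValueError as e:
    print("conversion failed:", e)
```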
The `id()` functionThe `id()` function can be used to identify an object in memory. It returns an integer value that is guaranteed to uniquely identify an object for the duration of its existence.
print("id(x): ", id(x)) print("id(y): ", id(y)) print("id(z): ", id(z)) print("id(w): ", id(w))
id(x): 140539018862384 id(y): 140539018862384 id(z): 140538951488880 id(w): 140539018862384
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Lists, Tuples, Sets, and Dictionaries Lists Many languages (e.g., Java) have what are often called **arrays**. In Python the objects most like them are called **lists**. Like arrays in other languages, Python lists are represented syntactically using `[...]` blocks. Their elements can be referenced via indexes, and just like arrays in other languages, Python lists are **mutable** objects. That is, it is possible to change the value of an individual cell in a list. In this way, Python lists are unlike Python strings (which are immutable).
a = [0, 1, 2, 3] # a list of integers print(a) a[0] = 3 # overwrite the first element of the list print(a) a[1:3] = [4, 5] # overwrite the last two elements of the list (using values from a new list) print(a)
[0, 1, 2, 3] [3, 1, 2, 3] [3, 4, 5, 3]
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Note that some operations on lists return other lists.
a = [1, 2, 3] b = [4, 5, 6] c = a + b print(a) print(b) print(c) print("-" * 25) c[0] = 10 b[0] = 40 print(a) print(b) print(c)
[1, 2, 3] [4, 5, 6] [1, 2, 3, 4, 5, 6] ------------------------- [1, 2, 3] [40, 5, 6] [10, 2, 3, 4, 5, 6]
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Above, `c` is a new list containing elements copied from `a` and `b`. Subsequent changes to `a` or `b` do not affect `c`, and changes to `c` do not affect `a` or `b`. The length of a list can be obtained using `len()`, and a single element can be added to a list using `append()`. Note the syntax used for each.
a = [] a.append(1) # add an element to the end of the list a.append(2) a.append([3, 4]) print(a) print("length of 'a': ", len(a))
[1, 2, [3, 4]] length of 'a': 3
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Some additional list operations are shown below. Pay careful attention to how `a` and `b` are related.
a = [10] a.extend([11, 12]) # append elements of one list to the end of another one b = a c = a.copy() # copy the elements of a to a new list, and then assign it to c b[0] = 20 c[0] = 30 print("a:", a) print("b:", b) print("c:", c) b.reverse() # reverse the elements of the list in place print("a reversed:", a) b.sort() print("a sorted:", a) a.clear() # empty the list print("b is ", b, " having length ", len(b)) list1 = ["a", "b", "d", "e"] list1.insert(2, "c") # insert element "c" at position 2, increasing the length by 1 print(list1) e = list1.pop() # remove the last element of the list print("popped: ", e, list1) list1 = ["d", "b", "b", "c", "d", "d", "a"] list1.sort() # sort the list print("new list, sorted:", list1) print("count of 'd': ", list1.count("d")) # count the number of times "d" occurs print("first index of 'd': ", list1.index("d")) # return the index of the first occurrence of "d" print(list1) del list1[2] # remove the element at index 2 print("element at index 2 removed:", list1) del list1[2:4] # remove the elements from index 2 to 4 print("elements at index 2-4 removed:", list1)
['a', 'b', 'c', 'd', 'e'] popped: e ['a', 'b', 'c', 'd'] new list, sorted: ['a', 'b', 'b', 'c', 'd', 'd', 'd'] count of 'd': 3 first index of 'd': 4 ['a', 'b', 'b', 'c', 'd', 'd', 'd'] ele at index 2 removed: ['a', 'b', 'c', 'd', 'd', 'd'] elements at index 2-4 removed: ['a', 'b', 'd', 'd']
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
TuplesThere also exists an immutable counterpart to a list, the **tuple**. Elements can also be referenced by index, but (as with Python strings) new values cannot be assigned; an example of the resulting error appears after the next cell. Unlike lists, tuples are created using either `(...)` or simply a comma-delimited sequence of 1 or more elements.
a = () # the empty tuple b = (1, 2) # a tuple of 2 elements c = 3, 4, 5 # another way of creating a tuple d = (6,) # a singleton tuple e = (7,) # another singleton tuple print(a) print(b) print(c) print(d) print(len(d)) print(e) print(b[1])
() (1, 2) (3, 4, 5) (6,) 1 (7,) 2
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
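As promised above, attempting to assign to a tuple element raises an exception; a quick illustration (ours):

```python
t = (1, 2, 3)
try:
    t[0] = 9  # tuples are immutable, so this raises a TypeError
except TypeError as e:
    print("cannot modify tuple:", e)
```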
As with lists, we can combine tuples to form new tuples
a = (1, 2, 3, 4) # Create python tuple b = "x", "y", "z" # Another way to create python tuple c = a[0:3] + b # Concatenate two python tuples print(c)
(1, 2, 3, 'x', 'y', 'z')
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
SetsSets, created using `{...}` or `set(...)` in Python, are unordered collections without duplicate elements. If the same element is added again, the set will not change.
a = {"a", "b", "c", "d"} # create a new set containing these elements b = set( "hello world" ) # create a set containing the distinct characters of 'hello world' print(a) print(b) print(a | b) # print the union of a and b print(a & b) # print the intersection of a and b print(a - b) # print elements of a not in b print(b - a) # print elements of b not in a print(b ^ a) # print elements in either but not both
{'a', 'd', 'c', 'b'} {'l', 'r', 'w', 'e', 'd', 'h', ' ', 'o'} {'l', 'b', 'c', 'r', 'w', 'e', 'd', 'h', ' ', 'a', 'o'} {'d'} {'a', 'c', 'b'} {'l', 'r', 'w', 'e', 'h', ' ', 'o'} {'l', 'b', 'c', 'r', 'w', 'e', 'h', ' ', 'a', 'o'}
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
As the output below shows, membership in a set is determined by equality (`==`), not identity: `d` is a distinct object from `a`, but because the two strings are equal, the set keeps only one of them.
a = "hello" b = "hel" c = "lo" d = b + c # Concatenate string s = {a, b, c, d} print("id(a):", a) print("id(d):", d) print(s)
a: hello d: hello {'lo', 'hel', 'hello'}
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
DictionariesDictionaries are collections of key-value pairs. A dictionary can be created using `d = {key1:value1, key2:value2, ...}` syntax, or else from 2-ary tuples using `dictionary()`. New key value pairs can be assigned, and old values referenced, using `d[key]`.
employee = {"last": "smth", "first": "joe"} # Create dictionary employee["middle"] = "william" # Add new key and value to the dictionary employee["last"] = "smith" addr = {} # an empty dictionary addr["number"] = 1234 addr["street"] = "Elm St" # Add new key and value to the dictionary addr["city"] = "Athens" # Add new key and value to the dictionary addr["state"] = "GA" # Add new key and value to the dictionary addr["zip"] = "30602" # Add new key and value to the dictionary employee["address"] = addr print(employee) keys = list(employee.keys()) # list the keys of 'employee' print("keys: " + str(sorted(keys))) print("last" in keys) # Print whether 'last' is in keys or not (prints True or False) print("lastt" in keys) # Print whether 'lastt' is in keys or not (prints True or False) employee2 = employee.copy() # create a shallow copy of the employee employee2["last"] = "jones" employee2["address"][ "street" ] = "beech" # reassign the street name of the employee's address print(employee) print(employee2)
{'last': 'smith', 'first': 'joe', 'middle': 'william', 'address': {'number': 1234, 'street': 'Elm St', 'city': 'Athens', 'state': 'GA', 'zip': '30602'}} keys: ['address', 'first', 'last', 'middle'] True False {'last': 'smith', 'first': 'joe', 'middle': 'william', 'address': {'number': 1234, 'street': 'beech', 'city': 'Athens', 'state': 'GA', 'zip': '30602'}} {'last': 'jones', 'first': 'joe', 'middle': 'william', 'address': {'number': 1234, 'street': 'beech', 'city': 'Athens', 'state': 'GA', 'zip': '30602'}}
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
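The shared `address` seen above happens because `copy()` makes a *shallow* copy: the nested dictionary is referenced, not duplicated. When fully independent copies are needed, the standard library's `copy.deepcopy` does this; a self-contained illustration (ours):

```python
import copy

d1 = {"name": "joe", "address": {"city": "Athens"}}
d2 = d1.copy()          # shallow: the nested dict is shared
d3 = copy.deepcopy(d1)  # deep: the nested dict is duplicated

d2["address"]["city"] = "Atlanta"
print(d1["address"]["city"])  # 'Atlanta' -- the shallow copy shares the nested dict

d3["address"]["city"] = "Austin"
print(d1["address"]["city"])  # still 'Atlanta' -- the deep copy does not
```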
Conversion Between Types
y = (1, 2, 3, 1, 1) # Create tuple z = list(y) # convert tuple to a list print(y) print(z) print(tuple(z)) # convert z to a tuple print(set(z)) # convert z to a set w = (("one", 1), ("two", 2), ("three", 3)) # Create special tuple to convert it to dictionary v = dict(w) # Convert the tuple to dictionary print(v) print(tuple(v)) # Convert the dictionary to tuple print(tuple(v.keys())) # Get the keys of the dictionary print(tuple(v.values())) # Get the values of the keys in the dictionary
(1, 2, 3, 1, 1) [1, 2, 3, 1, 1] (1, 2, 3, 1, 1) {1, 2, 3} {'one': 1, 'two': 2, 'three': 3} ('one', 'two', 'three') ('one', 'two', 'three') (1, 2, 3)
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Controlling the Flow of Program ExecutionAs in most programming languages, Python allows program execution to branch when certain conditions are met, and it also allows arbitrary execution loops. Without such features, Python would not be very useful (or Turing complete). If StatementsIn Python, *if-then-else* statements are specified using the keywords `if`, `elif` (else if), and `else` (else). The general form is given below: if condition1: do_something elif condition2: do_something_else ... elif condition_n: do_something_else else: if_all_else_fails_do_this The `elif` and `else` clauses are optional. There can be many `elif` clauses, but there can be only 1 `else` clause in the `if`-`elif`-`else` sequence.
x = 3

# Test whether the number is greater than 10
if x > 10:
    print("value " + str(x) + " is greater than 10")
# Test whether the number is greater than or equal to 7 and less than 10
elif x >= 7 and x < 10:
    print("value " + str(x) + " is in range [7,10)")
# Test whether the number is greater than or equal to 5 and less than 7
elif x >= 5 and x < 7:
    print("value " + str(x) + " is in range [5,7)")
# Otherwise, the number must be less than 5
else:
    print("value " + str(x) + " is less than 5")
value 3 is less than 5
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
While LoopsPython provides both `while` loops and `for` loops. The former are arguably lower-level but not as natural-looking to a human eye; a `for` loop version of the example is shown a bit further below. Below is a simple `while` loop. So long as the condition specified evaluates to a truthy value, the code in the body of the loop will be executed. As such, without the statement incrementing `i`, the loop would never halt. ```while condition: do_something```
string = "hello world" length = len(string)# get the length of the string i = 0 while i < length: print(string[i]) i = i + 1
h e l l o w o r l d
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
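For comparison, here is the same character-by-character iteration written with a `for` loop (an illustrative rewrite of the cell above):

```python
string = "hello world"
for ch in string:  # iterate directly over the characters; no index bookkeeping needed
    print(ch)
```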
Loops, including while loops, can contain break statements (which abort execution of the loop) and continue statements (which skip the rest of the body and proceed to the next cycle). *[Figure: image.png illustrating loop flow with break and continue; the embedded image data was corrupted and has been removed.]*
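A short example showing both statements in action (ours, standing in for the missing figure):

```python
i = 0
while i < 10:
    i = i + 1
    if i % 2 == 0:
        continue   # skip even numbers and proceed to the next cycle
    if i > 7:
        break      # abort the loop entirely once i exceeds 7
    print(i)       # prints 1, 3, 5, 7
```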
wBZJTX2QiQzAzAig6Wjp/cuN1c1lVzV1MFlK/Mqi6OhJvbhX/NTXHueIDbtvWR7Bjqz1TPpOuObuPYtw739L2jR/i6y+e/bVp0ls3j603dNQl/p5jNJTHp70TbDN5bwoi8SXHXXEcst93qPgj4TAQ79Hg03wzuGxKR3kG45q6i7Te9a0+/TivDGd8z/Us2gTnJBLcml9diKjBLLooQyuFaH0HbHUOIGciKHnDCGfYqn6dfouHH2gGnkWA7Xa514nzl4zGsO5Gyk8POhFblyJox75/8Rn3Y1tuIszDBSY7uPqa4Tida9kXTT6QCo0Z2Dvssf7ouZFt5M9YrbSTcsITH1KnL+8SAppFrPJdV6Lp7tf8az7c8lK7blXBxLDyCZafSutYOqS42Vdjumm88kMu8zhQuoa8+TpJuWYqsu0o2n0X3al6l66WsDdg8lJ7wFR/vLSFbxYKwtqyQmvLXw+dBA9HAxnTUkJ16RHxAuBCMbNbYb38bf42g79vEc4eR19IFAgmyLh+73TcZWZdRQBt+/9uHYGjly3mGSXNT/+1OktIlnRRveZw/2vaE9JkqyHWyz6M5N71yNa9/T+Rk1fWjdG4en0hOAo20VsRnX5vuMZYXgK99BMlKj1h5n23KcbctJtZ5P7JQPYWn56ZuJiW8jOe5SfOvvwbX3icLMF617C5mGRdiKE9e+Z4qnhw5SumkpkXk35i+ito1r7xMEXv8hzgMvEF54C7aik6lbQGjJ16hefnv+96SMbM1swou+QOC17yFnowBYmpf49PeRGnsxAJKRHrYLsy1rpMZcSGzmB7EVR2G7nAnj3fxH3Lv+MeQyExOvID71arBNAmt+inP/smFpqzB6jvpdQ7HzW0ic2QAS1PxqE9r+JNjHb/DIDOp0fPqU0pe5mTa+J/biXtk5rFNIDwktuTPfNdGj+sXb0LvW9Ly3Z3xhe82ymwqLzUxXA92n3obpbuiZZpgf7FNjOwm+cjdKqoPonE+Qaj6rsFhMsnIoyTaqXroTJXmAXHAykXk3YTqrehsja4W7U8lMF92JatGdBFd9s/AUk2o5l9jMa4suBrbiLAwISkbxKxmcB1cSeK233zs++Uri0/PBTsolcO17Jj+tNpyfophqPR/D00quajK54BT07g0EV31zwGmIw8GWNRKT/4P41CuLtmvdG6h5/taeNCodF/6mZ+GehRbZhmv347h3PwpAtmomhn8cueAkstWzQNbwv/5DHJ2vkQtMJrzwZkxXfe+5sgy8mx7Es+0RTGcNoTO+XRhbgPx369r7NP61PwMgMu9GUq3nFjfcMnqfCoq+xwx1T3wIOZsfwDU8LYQX3Voo3z60VsW2SgaElWQ7VSvvLJpK233q7fkppX14N92PZ9ufyi5aHIz2S+4vBF8l1UHdEx8+qnKEE8dRPREkT6sncVYD+u44wUd2IMeO7hfqWCQX1RUFASln4Xt8L8614fy7iEaI88CyQiDQIlvRu1YDEHz5W3Se+2OQZOR0qGjFcarlXAzfmKJybNVFrmoGhrcVJdVBcuwlxftlDcPbSjY4FVfyANna+RjeFvpz+KK3bPVMsjWzCq8QSI29qLBmoGz+w1Yrp1rPwbPlD4UpjN5tD4OsEp/yLmzNQ3L8pSTHX9pveZm6hViaD8Uc2bGS/CslHsC1+18kJr8zP6AsKSA7+qQxqFrxlfwF3d1ILjiFXHAK0TnX9Vturmoajs7XiM76MKa7sWifLavEpl+DZ9sjpFvOKQoCkP9uk+PfjHvXP0q6mLyb7id9aGXzYeMFzv3L8K/7dSEIQH5+/6EVyMUHLpd8Z4Z/POmmM/Bs/3NvWw79Xtgmrj1P4d384DEP4NMngMjp0LGVJZwQhvxEkB3rJfSBKaihDLU/3TAqL38z/RqdnzoFW5ORMibeZw7geq0LOVX+cXw42YqjMG1Ui2xHC2/K75BkUmMuwJZU5GwY54EXe/OobtKNp5VcrOVcLN9fbKTI1s7F8DQX7ZfMNM62l5ByCSxHkEzdgqI7+oHI2SiO9lWFfnrD20q2+pQjTAnspSTbcHS8UrI9F5hCctybyNXMKHrtgpLuQspGURP7cbS/jBbeWjQL5njJ+SdgBCejda0vmYdvaV7STaeTGnMhprelcFebn0IZR0m1o4W3onetRe9ag2RmsBxB0o2nUTzT2kILb0GLbMdWnKQbl5RM91WSBwv9/H2fCKpe+hp61xqy1acUFrHJ2ShqdAdqYn/J8Vial0zD4rKr2w8nGSmcbSuKuu1MTxPZ6lNQozvRIluPWMZgGN5WctUzwLbRO18fcDGfcHIYUiCwFYnOT87EdirU/GIjSnhkH/v7E31TK+mZVbhf68Tz7MER6QIShOFyeCBwtK8a5RYJQrEhdQ2l51RjVjnwPb531IKArUgosSx1P1gr3jwqCIIwDIb0RND10emYPo36uwf/pkJBqFTxadcULTzrS7KyBF/+LzFtUzghDPoVE2a1g1yjG/erb6SFUoIwcjI1c/rdZ8s66YbS11gIwmgYdNdQ6pQgSKDviI5kewThDSPw+g8wD5sA0MtCC2/rZ58gHF+DDgSZaUEwbbR94k8ACsJgqIn9ZWcCCcKJZvBdQ34dfV98RP8SmCAIgnD8Df411KqM2jEyL8ISBEEQRs+gA4GtgHQcFmwJgiAIx9fgnwgUGTl5/F8lIQiCIIyswT8RyNKIvsNHEARBGB2DfyKQQBLjxIIgCG84J/XfLBYEQRCOnQgEgiAIFU4EAkEQhAonAoEgCEKFE4FAEAShwolAIAiCUOFEIBAEQahwIhAIgiBUOBEIBEEQKpwIBIIgCBVOBAJBEIQKJwKBIAhChROBQBAEocKJQCAIglDhRCAQBEGocCIQCIIgVDgRCARBECqcCASCIAgVTgQCQRCECicCgSAIQoUTgUAQBKHCiUAgHHfppqUkx148KnXbqpPonE9gad5Rqf9kZisOInM+iaX7R7spwjATgUAokQtMIjLnUxje1hEpP9VyHqkxF45I2UdiaX6SYy/B1n2jUv/JzNK8pMZejOUIDilf9JSPkG4+c4RaJQwHEQiEAkv3E5v+XkJnfAtbdSIbyZGpSNZAUkam7CORpEM/jE79J7WjO3dq8gDRUz5KaMnXMD1Nw98s4Zipo90A4QQgKWTqFxFecBNKqpOGf74bbGu0WyW8Qbh3/B33jr8TmX8Tnef8ENfeJ/Gt+zWSmRntpgk9pMbmFnswCQ9+eQHB/92Jc01opNskHEemu5HuU2/D0v341/wcR9tyJMs4Yr5M/UKS499Mzj8JAC22E9fux5HTXcSnv4/qF/8f0PurlfNPpPvU2wB6umUkpGy0pFzJMgi+ejda98YB649PuYpMw0JMZy1yLobz4HI8Wx5GsnJl03cv/hK5wGSQZCxHEDkTAdssSaeFt1C16q6Bj73xNJLj3kzONw4APbQO965/oXetKUkbm/EBUi3nAlC16hsYvjEkJr0DW1LwbfwdzgMvYjmqiM76KNmq6ehdqwm8/qN+jwMgPvk/yDSdjukIIhsp9M7VeLf8ATnTXTZ9dM4n
SNcvRsKm+vlbUFKdxKdcRbrlTGzVjRLbg3fzQ+jdG8rmDy+8lWzVtN5zl42AVebcRbZRtfLOAc6chOmup3vRFzHdDfjX34Nr978HSC8cLyIQVChL8xKdfR2ZxtPwbH0E946/I+fig8obOv2b5HxjcR5cjnP/MgAyDYvINJyGrTgBm/rHri260NqqO38xAeLT3outuvGt+0VJ2ZJto0a2lm2L6aojtPTrWM4alMR+HB2voHe8Rq56Bqnms0DWqX7xSyjJgyV5c8GpWJoHy1FFZN4NBF7/IXK6qySdnEughTeXPW7T00T3ws9jeppx7l+Gs+0lTEc1mfqFZOoXoHdvygfAPsdt+MZgupsIL7gZOdONrXlw7/gblrOa5JgLCaz+CbHp70NN7Me98+9EZ34EORen5vmbkYxUUf3phiVE51yHZCRx7X8OLbSebO1sMnULMN2NuHf+Hd/G35e02/BPIOcfT2TeDQRfuZv41KsAcO15HDWxn1TreaTrF+PbfD+erf9b5txNwdK8WLqfyPyb8K/5KUqybUjn7nDppjOJzfwgciaEb8Pv0LvWDiqfMDJE11CFsVU3yXGXkJh4OVp4K9XP34wW2THo/NFZH8N01VLz/C2o8b2F7Y6OV7E2PUB4wecxAhNK8klGEkfHqwAkJ1yO1ZNnKCQjiXvnP9C7NxU9MTg6XsW76QG6zvgWsRkfIPjyt0ryHrpAme4GAPTQepTE/kHXbbpqCZ16B1p0G9UvfRU53XtD5N71T3KBSYQX3EzX6d+iesVthYu4GtuTv2jaFpbmo2rlneihdSDJ5HzjiMy9HveuR/GtvwfJzKBGdtB57o8w3Q2o0Z2FOjJ184ksvBnfht/i2v3vQreKo+NVfBvuJTXmQqIzPwRQEgzU6A6kXAKQiMy7Aee+ZfjW/wo5l8iXcXAFyQlvJTbjAzgOvlT0vebP3Zb8OXDW9py7DaixXYM+d+U4DzyH3rWa5Pi3EF70RRztq/BuuBcl3XlM5QpHRwwWV5Bs7Ry6zvgmqdbz8K/5JVUrvz6kIABgeFtQI9tLLhaQvyP0b+h53B+BMQY5l8Cz/S/9dhs52l8lF5w87PUCxGZ+CDkXJfjyt4uCwCFaZBs1z30Owz+GTMPismX4N9yTDwIAtoUa24uUS+DddH/hwi5no0i2haV6CvkszUt40RfwbnoA946/le1bd+15HP+6X5Ecfxm26u73OLTwFvxrf1oIAoe4d/wNOR0iU7/wiOdiuMjZKN7ND1LzzPVYmpfO836cn9E1QPuFkSECQYXINJxG98JbwYaqFV/BefCFoypHD60jWz+f1JgLMZ3VJfvV6E58G++j7/jAcLMcQQxPC7nAJHKBSRj+8RieZmzVOWJ1ZmtOwbvl4QHTyLk4zgMvkJj4trL7lfi+0m2ZbuQyYyW9s5vA9LRgSwqu3Y8OWL9r7xPIRpLEhCv6SWETeO17SGa27F7JMkZlNpeS7iLw6t04D7xEbNZHiE275ri3odKJrqEK4WhbQcO/rqb71NvoPP/nuLf9Bd/m+4d85+7d/AcM33gicz5VuFgpyXa0yPZ8v/nBF0ZwxpFEZN6NpFrOJh9o7KJ9SBJKqmNEarb0AN2LvzQiZR9JunEpyCrtlzwwqPTZmhmwZYQbNWwkDP94uk7/BpJtUvv0J8uOPwgjSwSCClO18i5ygYlE5n6a1NgL8a3/Na59zw6pjOAr38FyBDCd1WTqFmN4WzH8YwkvvBk5G8G37re49j057G0PLbmDbM1sgq/ejd65GslI9NzFyliah8Tk/yDdtHTY6z0k+Mq3UaPH1jd+tCQrR82yz5ad6VSS1kwfhxYdO8PTTGTeZzB8Y/Cv/RnO/S8gWeWfVoSRJQJBpbFNtPAWapbdRKZ+IdE5nyI19mL8a36OGt8z6DLkdAg5HUILby1sNjzNJKZeSXTOx3AeWDbgFMihytQvJlc1ndpnPl06PmFbyNkYUm6EFsCR7/axJbXs2MhIc3SsIjHpcuT+upFOQpG515NpOA3XnscJvvJtlJQYJB5NYoygQklWDufB5dQ+fR1qbA+hpV8vzAjqj+Ftpe3ND2O6GsruVxP78a37FbbiJFM3wKCjfeR1CoczXTXI6fDAF2J5EP3b9tGNXaixXSTHX3ZUeY+VkjgAkkJqzEWjUn+vYxv3sWWN5NhLaL/wN5juRqpWfg3fht+KIHACEIGgwsnZGP61P6d6+e0YvnF0nfV9MvWLsGWtJG0uOA1b0TH84/stz9ID+Rkx8d39plETB8qWXyAphamKhU2WAXI/D7CSTLZ6JomJl/df5qGkPd0mprOm3zSW5sVWXUXbvBvvw/CPIz71PQOWbzprhn3mjZLuwrPlj8Smv49szawB0+b8E8hVTR3W+g85NFtpqOcO8msRQqd/g8Skt+Pb/ADVL34JrXvTiLRTGDoRCAQA1NhOql/8Iv7VPyQy7wY6z/kRhqe5OFHPIHB4/k2km88ueoOnrbox/BPoXvwl1ORB1DKLug5xtK3E8LWSmHh5YaqgrehYup9szSmElnyVrrPuxu5z4dfCW7AcAWIz/hNLy0+ttGUtv0Bs9nWEln4dJdmRDyKeJkx3fdm65WwUvWsN4Z6FYYfqsHsWmyUmvZ3O839OaswFRfn00AY8Wx4mMfmdROZ+GssRLAQzW3VhOapITngLHRf8iuisjxbyWXoA090IkoTlrCl54rIlFdPTVHLxNJ01mK66wmff5gfRwpvoPvV2Uq3nY2m9L82zNA+mq5bwvJvoOuu7pFrPLy7L3VA4H6arPt+ePufWVt35Nsgqlu4vrLUoOXe5OHrnaiILbsrP0iocf8+5m3gFnef/jOTYNxXlC8+/idDSO9G6N1D31Mdx7X6sbPnC6BEri4VSkkKq6QwcHa8Ur/CVZNov+h2uPY+TmPQOsE2knlcN2JIMkoIeWk/18tuOOKiZaj2PyLwbwDKRbAskqacMGcfBlwi+8l8lYwyGfwKdZ3+/J4/Zk0dByXQTXHkXmfrFxKflV83Kme786uYybFml66zvYXhb8+XYdqFuOZfAs/WPeHb8vczsJ4lcYCLdp92BpXkLeZFkbElGsk28G+7Ds/NvhbzR2Z8gOe6SQglqdAe1z34GgMic60mNzb+FNfDq3bj2PYuleem46LfYsoYa203tM9f3qV4iMf6txGe8P1/foXMvK4CMmtiHf/WP0EPFr4pov/C3WM6q3mLMDLXP3pDvcgKSYy4mOuc6kPL3hXI2Qv2/31/+i5NkOs/6LoZvXOm5MxJ4tj6CZ/tfi85dqvkc9NAalDLrL4QTgwgEwlGxdD+Wowpb0YH8mIOUSwxp+qblCGDpwd4yjBRyJoKciw2QJ4ilB/J5bAvZSCKnOo9qYNr0NGGpbpBkJMtAMpL5/uojzcyRFAx3A7bmBqT84HkuiZwOHZdZL5bmw3JWYSuOfHPMLJKROK597X3PHZaBPNhzJ5yQxKwh4ajI2egxz2CRM5H8y9+GlCeMnAkfU72HKIkDHNXyKdtEHcLrKYabnIsNGCyPh6M+d8IJSYwRCIIgVDgRCARBECq
cCASCIAgVTgQCQRCECicCgSAIQoUTgUAQBKHCiUAgCIJQ4UQgEARBqHAiEAiCIFQ4EQgEQRAqnAgEgiAIFU4EAkEQhAonAoEgCEKFE4FAEAShwolAIAiCUOFEIBCOu+SEy4hNf++o1G1rHrrO/A6WIzAq9Z/MbNVF11l3D/g3i4WTkwgEQolM/UJCp32VXGDSyJRfO49szdwRKftILNVDLjil8LeShcGzVDe5wGTsnr8ZPVjhhbeSmPi2EWqVMBxEIBAKTE8T4fk3EV54C3p4I0qqfWQqkpTC38c97iTp0A+jU/9J7ejOnXPf06TGXkLXWd8jWzt7+JslHDPxpyoFLM1LuvU8YtOuRg9toOHR95T5w+2CcHScB5fjPLic2PT3El7wefTQOnzrf4OSPDjaTRN6iD9eX+FywWl0L/4SkpEk8PqP0ENrj5jHlnXSLWeRHP+WQveRGt2Je/djKKk2YjOupfaZT4Hd+6uVC06h68zvHLFsycpRteIO9K7y7bBVF7aiE3bWNpIAACAASURBVJ9yDZmGhZiuWuRsDEfbi/g2/B45W/5vIIfO+CbZqhlHrF8PbaD6hVv73W8rDlKt55EcfymGb1xPnnW4d/4Tx8HlSFauKH109idIjrsEgOrlXyYbnExi8jtBUvBt+A3unY9ieMcQmfNJctXT0TtXE3zlv/r9e9CW5iE+7RrSTWdgOYJIRgpH52q8m+5Hje8pG8DDC28h3XQGYFP35EeRszFiMz5IqvlMbNWNGt+Nd9ODONtWlM0fWvp1sjWzjnjutO5N1Dx/y4BpbNVFeOHnydTOxbv5QTzb/oxkZY9YtjCyRCCoUJYeILT0TgxvK4HXf4Rz/zNIljGovB0X/ApbdeLa+S88O/8KQKr1fJJjL8Fy1SEZSeofuxZsszeTpGDpXgDC8z+HrfmoeunLpYXbIBmJsm0xPM10nf19bFlHi2zBte9pnPufI1O/kMSEKzA9zdQ891nU2O7S49W8ICuYzjq6zrqbmudvKX9HapnIuXjZ4zb84wkt+Sq2rOPZ8Tdcex7HdNeTajmP1JjzUVJd1D71saK226oLS/PSee6PwbaQLAPf+l9juuqJT3sPvg33EZ/6bvTONfg2/JbuU7+MrejUPv0J5FyiqP7k2EuIzfoISnwvnh3/wNH+EunGpaRbziVbNRXXvmcJvPb90lOqujE8TXSddTf+NT8lPvUalFQnnq1/QIvuJDH5HSTHXIRnx9/xrb+nzLnzgKxiOarpPPv7VL/4pXzQGcK5O1wuMInwwluwFQeBNT/FcXDFoPIJI0N0DVUYS/MRn3Y16Zazce5/jqoVd6CkuwadPzLvBiQzQ81zn0POhAvbPdv+hHvnP4jMu4lszSmlGW0TOZO/W5csA7vP58FS0l1UrbwTJdmGkmwrbHfteRLXnifpPu0OYjM/SNWKr5TkPXSBshVn/nM2NqT6DW8zodO+gnP/Mnyb7kfKJfNtSrahd67Bu/kBwou/ROe5P6Vm2Y2F+iQjhWybYNtIVo7qF/8famwXSBK56hnEZrwf75aH8Wx5CMkyqH7hC3Se/3NMdwNyZHuh/lTreURnf5zgy9/C0b6qEGzcux7FvetRsrVzCS+4hcicTxFY/aOitktGEjkbAySisz6GZ9uf8W75A5KZAcC/+qfonWsJz78R5/7n0MKbDzt3+YBkS9pRnbtytMg2ap+9gXTjUiJzP40yaR+B136Amth3TOUKR0cMFleQ1NiL6Dr7uxi+MVS99DX8a346pCAAYDprUBIHioLAIZKZxbf+nvxd5QiMMUhmBr1zdVEQ6EsLbcTwtg57vQCxmR9Bje3Ev/aXhSDQl5LqpPqFL2JpHtIt55Qtw7/2F/kgAGDbyKku5EwYz7b/LVzYJTMN2NiKq5DPcgSIzvkkgdd+gPPg8rJPS3rn6/hX/5B0yzlYerDf43AeXI538wOFIFDYvn8ZSrKDbPXMI52KYSMZaVx7n6L2yY+jRXfQdfb3iM34AJaj//YLI0MEggqRrZpJfMqVYFn4Nt6L1r3xqMpxtK8iWzeP+NSryQUnl+xXUu249j4FDKrH8ShIGL7xZBoWkxpzYf6/lnPI1C8c0QtILjgFz7a/DtwyI4Vr3zMkx72p7H65TNCVc3EkozSw9M5uAsM7BpBwtr00YP3Og8uR010kJl7RTwob34bf9tsFKNnmqMzmknMxPNv+Bz20nuS4N5FqveC4t6HSia6hCqF3r6fuiQ8Tnf1xuhffjuPgCnybfo+c6R5SOZ7tf8V01ZGccCnxqVcCIKdDqPF9ODpeyd9ZpjpG4hCwNC+ReTeRqZ+HnI0iZyL5wVlJxtIDWI7AkI9n0HXrPrpPu31Eyj6STN0ibFml7U0PDip9Ljgy6z9Ggq04yNQvJDL306jxvdQ+/akhP6UKx04EggrjX/MzPFseJrzo87RfeA++Dffg2fGPIXXl+Nf9Gv+6X2OrblJNZ2L4x5MLTCI2/b3Epr0X38b78Gz/0/A2XJIILb0T09NCzQtfLPtEE59yFamxI3c3WfPczSX958eHhWRlqX/0PYMe0D/xSeQCE+le/CWQdapWfg29a91oN6piiUBQgZR0FzXP34rhbiR0+l0kx7+V4CvfGfJFTjKSuPf8u/DZ0jwkpvwH8WlX497595KplMci3bAU091A3eMfHPTMlOEkZ2MY3tZRCQTOthUkJr0NW/MgHeMg7YkitPTr5AIT8W38Pa69jyMZ6dFuUkUTYwSVyrZQE/upe/LjuHf9k+7FX6J78W0DDrbmqqbTdun/YHiay+6Xcwm8mx/CVvSeeev9OIp545bDj5yJDhgEbHkQ9zX20Y1daJEto/aaBCXZBpJCYuI7RqX+Xsc2AcBWHMRmvJ/2Sx5AToeoXfaZ/A2DCAKjTgSCCieZaTzb/kTNc7cg52J0nflfJCa9A1t1lqQ1PC3YskouOLXf8kx3E9gWeqj/x3w1sR9bdfW735Y1DP+Ew9qZxVb0fg5CITXmApKT+hsk7ZPUSAGQ61kMVo7pqsXS/UXbfOt/g+WqITr7ugHLN3xjSUx65xHbMRRyJox/7S9JTLx84AALZBpOJd1y9rDWX2hHz0wjY6Bz56wp+0K/dNOZdJ39PbLVswi8+h2Cr34HJSFWFp8oFK/Pf8dgEsbPbcK5IYzanhrpNgmjQDYSOA8uR+/eSGLSO0lOfDuOzleLVrgavnFkmpaSrV+Yz5ONFvab7kayNbOJzPsMWnQXnp1/p7+ZQ3I2TnLiW7F1L2riIHIujqX7MTzNZBtOJTr7E6TGXIBn5/8VypCNFMnxb8Hwj0eN7ULOxbA0H0ZgItFZHyUx4XK0yHYsRxAl2Y7paURN7C+pW7KyPRfrt6GHtyLnYkhWDtPThOEbQ2LSO4jO+SRKqhMtsq1PmyPI2RiJiVeQq56BmjyIZGaQrCymqxbTN5bkxMuJzLsBJLln5lR+4VS2+hQyTUtQUp3Yiobas5At03Aapm8MSqoTyUwjG0lsRSc56W0o6U5suTetFtmK4W0lMfkdIGs95z7fTWR4mnvOw3UkJr8dLbIVvc
8YSqZhMbnAZDINi1ETB/JTgNOdhfEG091AtnYe6cbTkK0skm2Wnc8vWbl8GyZejhbZVnruJr6N6NxPoWTCaOEthXzdi79EeswFuHf8jcDqH6MmDgzwmyiMBrGyWCgrUzsfPbypZGpj1xnfwrn/ORKT34mtebHl/CIjLAPZTONoW1l2dWtJ+Q2LiMy5vqeMfJeOZGaQcgmcB17Av+6XJXlMdyNdZ3wLW/Pk67VtJDONGt1F4PUfkK2dT3LCpUA+SFW/8IXylUsy3Yu+QLZmdn6BmSSBbSIZGdTEPtw7/oZr3zNls5qeZsLzb8LwjcFWHEBPXjODkg7h2fIwrn1PF9LHp76HdPOZhc9KfB9Vq+7q2fde0s1LAfBt+A2OtlXYqpuu078BsoqSOEDVyjuL6k+1nEN82jVYzipsOf+EJFk5JDON1r0J34Z7e9cq9Aid/o3iJxwrS9XKuwqzu9KNZxCfdlVh6qiUi1Pz/OfLnzsgvOhWMrXzypy7/bh3/r0QBA/J1C1AC28elbEdYXBEIBCOjiRR2rNoD20hWbkybIsB1yCU5BlinUVlyRS/SXMIZZXNazNy6yeKKi8z3/8YzsNRNeEYzp1wwhGzhoSjY9uAecRkw17GcNRbKOsYLlyjetGzi9/jNCpNEBf9NxIxWCwIglDhRCAQBEGocCIQCIIgVDgRCARBECqcCASCIAgVTgQCQRCECicCgSAIQoUTgUAQBKHCiUAgCIJQ4UQgEARBqHAiEAiCIFQ4EQgEQRAqnAgEgiAIFU4EAkEQhAonAoEgCEKFE4FAEAShwolAIAiCUOFEIBAEQahwIhAIgiBUOBEIBEEQKpwIBIIgCBVOBAJBEIQKp452A4QT1/RslmtiUTS7d9uvAgG2atroNeo4ctk2H4+EaTDMwrbtmsYvAoFRbJUgDD8RCIR+vTmZYH4mU7TNa1mj1Jrjr9kwuCCZLNpmS6PUGEEYQSIQCP36k8dLUpJoNE0Wp9Oj3Zzjbq+qcr/PT41pckEqiWbbR84kCCchEQiEfu3UNH4eCDI/k6nIQJCRJB70+QA4M50SgUB4wxKDxYIgCBVOPBEIQ9ZgmlwejzMvk8FpW+zUNJ5wu3nB6SqbfnIux43d3Tjt3vGFTkXh87V1ACzIZHhHPEaTYfCox8PDXl+/dZ+RSrE0nWZGNj928brDwWNuDxt0/YjtXphJMzuTYULOoNXIAbBZ13nM7eYVh3PQx3/IWxIJrkjEUQ57UvhWdTWbtSO3RxBOFCIQCENyfThMrWmg9NnWYJqclk5zQFX5SSDIqw5HUZ43JxKM67nw9tVoGnwhFGJSrnffB6JRzkumuLW2lqjc+8C6NJ3ig9EozYZRVMbFySQXJZPs0jR+6Q/w+mF1A8zPZLi1O4SnzEB3QyrFmakU2zSNm+rqGexQuAS8PxYtW+Zb4wnurhKBQDh5KF6f/47BJIyf24RzQxi1PTXSbRJOME2myfmp/OwZr20hA/tVlWUuFy+6XGQkiQbTJGBZLEmn2K5pHFB77zG2aRohRWG97mBaLosKKOQv4o1mfmrmS04n63WdSbkcAcvCbdusdObv0t+SSPDpcJhAz0V3re7gSbebLbqO27YJWhZBy+LsdIoNuk6bWnx/E7Aszk2lMMk/AazTHTzncvG604EENJom1ZbF4nSaf3s8lBsJeFc8jm7btKsqT7ndnJlKcU4qyaFJRHtUlb94vbzucPAXr5ekLHpdhZOHeCIQhuwJt5ufBYKkpN65lKem09wU7sZrWdwS7uaahkaMnv1disKfvF5U2+aKRByHbeOybbBtOhWF7wereK3nTt4GzkmlWKPnP9ebJh+KRlCxMYF7AgH+5vEW7tzvtW2uj4Q5P5mf1fPRaITP1taR6dO2TbrOVY1NZY/lj14fXw51sTidZnIuR8Ay6ZaVsmkBNNvmA9EI74zHC9tecLr4flUVSUnMLRVOTuK2RRiSTkXhp4cFAcjf0T/U07fvsSwuTSYGLMcGVjscXNvQWAgCAP8drOKdTc0868qPN3wsEkbv6YN/2enkL32CAEBOkvhhIEis5w58fC7HWanyT62qbVNjmowxDMYZOcYZOZoMg06l98KvHGFi0PRsthAEcpLEvX4/d1VXiyAgnNTEE4EwJA97faT7ueg95nbz4WgEyHf7/NXj7becdkXhizW1R6yvtc+YwA+DwbJpcpLE7TW1fK+jHYAl6RSPu91FaU5Np7mlO4RzGKeA3uP387cBjlEQThYiEAhDsk/t/1cmIcvsU1VaDIPxudLB4aPh6HPhvu/gwUHlqTpsAPfm7hBnplL03+FzdD4SieCzbB7w9T/LaThcE4uytGcdhwT0HcToDcn2YZ/LpSn/edBp7MPTlAbVI5Vz1G0pOuah1zuYbf0fc3F9R8xzFG059NkGTMCQJAwgK0kcVFWec7p40eUcsNvyWIhAIAxJlWX2u89h29T3DP4OFDCGIinJ5P9pDF7ff2TnppKFINChKPwoGGSbphOWZVy2TdA0uSoeK3mVRH+2axp/9ni5KdyNDFwdi1JnGvwgWDWkNg5FjWkxpiew2lLvZfBQuLOhzzYJpN5tpWmlwmsyrD77e9NLPfUUlw9gSXDoMmz3/ZnD65N60pa241AdVj5Zmfp78vfZX7b8Q/nKltF7jg4/jpL8ZcuQitpceizSYeUVp7EOS9O3nUXfE5Qch8eyUW0bv2Xhsy1acwZzMhnmZDJ8PJKf7PA3j4dlLvegZ7gNhggEwpBcHk+wwukq2yd+YbL3NQyPuj3DUt8qp4Ox8RwmcFd1DSucQ5vv/x+xeOFJ4K7qarb0md+fkiRQFFzW4LuLErLMk243UUXm1lC+q+miZJIa0+Ib1dUlYyfD4b+DQf67n24x4Y0vYFlMy2ZZkElzbirFzd3dvCcW4/vBKjYOYv3MYIjpo0K/gpbFKdkMU7I55vYs4KqxTMYYObZrOilJwpIkqkyTs9JpPh6NIAMxWeabVdVYkoRq20zO5ZiYy9FqmpyZSqGRf/Tdo2q0GEbhvwbTJKrI5PpcTLdoOuekUnhtm1PTadpVlaQsk5ElbCQ02yZgWdRaFq09XVKmJJHoGTw+P5Wktucp5VWHk05VxZAk6kyTedkMN4a7mZPtfbHeDk3Db1lEegaQZ+ayjMsZnJ1KoQLtqsoTbjf7VZWVTicXpJKoQJNpcFY6RZui0mwYJGW537EUQRiKjCSxT1VZ5XTyiNcHSMzNZrgkmSQuy2wZhmAgNTa3DOp26OCXFxD8350414SOuVLh5HBjuLv07ZuU7/M8JCnJfLG2tvCq6om5HP/dM4g7GA97ffzO7y/aNjWb5etdnfkpp4OwVndwa21+IHqsYfDdjvZ+B4nTksQGXS95y+rPAwHW6w5+cFjb1zgcfKHPIPdv2w4WAk1f//R4+HFA3MULI6PJMPhqqIsmw+Ahn4/7ff6y618GS0wfFfr1otNF9rC72h8Gg+ztp/9/je7gk/X1RX+voEtRWF1mtW85UVnmxTJdP5t1nQ81NPK0y10mV7Gtmsav+/y9gN2qym01tYSU0kG29brOR
xoa2H7Y6yC6FIXXHU46FIU9Rxjr+IPXV3KOgJLV1YIwnA6oKtfV1bNJ17kyFmNqNntM5YknAmHItJ75+EErv8o4LUlEZZluRRnisO7QSECVaeKzLDw9d/hWT/1JWSYpScT7WdHrsG0aTLPw9xQ6FYWuEW6vIIy0GtPkd20HSUoS19c30FbmhmcwxGCxMGS5niltg5vMOXxsIKQoZe/ujyQjSewepplMgnCi6FIUPltbx7c7O7gkkeDew7pVB0t0DQmCIJzEtug6B1WVKxLxwvu4hkoEAkEQhJOYBfzSH0C3bc5JDW49zOFEIBAEQTjJrXI6CSkK8w6b/TZYIhAIgiC8AbQrylHPHhKBQBAE4Q1gk64TtCwmHMV7vkQgEARBeANY17PCWAQCQRCEChXqeTPp0cwcEoFAEAThDSCi5C/nfhEIBEEQKlOk54lABAJBEIQKdegV6E5bBAJBEARhiEQgEARBqHAiEAiCIFQ4EQgEQRAqnAgEgiAIFU4EAkEQhAonAoEgCEKFE4FAEAShwolAIAiCUOFEIBAEQahwIhAIgiBUOBEIBEEQKpwIBIIgCBVOBALhuMsFJpKtnjkqdduyRqrlXGzFMSr1n8xsWSXVei626hztpgjDTAQCoYTlCJCY+DZMZ82IlJ+Y8m7iM94/ImUfieUIEpl/I5ajalTqP5nZup/IvBsxXfVDypeYcDm5wMQRapUwHEQgEPqQSLWeR/tFvyM15kKQ1RGpxZYd2JI2ImUfkSQX/18YNLtwuZCGlM9019N11ncJL/gcSEPLKxwfI/MvXTjp5ILTCM//DLbqpvbpT6EmDoBtjnazhDcA/7pf4dn2JyLzP0P7Rffi2fEXPFseGe1mCX2IQFDhLEcV0dkfJ1s1A/fOv+Pe9ShyNnrEfLngFDINp2J4mgFQkgdxdLyCnI2RGnMhvvW/AexCetPTRGzae/N5AxNA1ggvuLmkXMk28Wx9GDW2Z8D6Uy3nkauagqUHkIwkevcGnPueRbKMsunj067G8LRgK/n+7egpH0YyUiXp1MQ+vJseGPjYq6aTbjgV092QzxPbiaNtFVp0R2k7x15MpnYuAN7ND2K66kk3nwGSgmv3v9FD67E0H8nxl2L4xqJFt+PZ9ucBg3Cq5Wxy1adgaV4kM4MW2YFz31PIuXjZ9ImJl5MLTgNs/Gt/iZyNkGo5j2ztLGzFgZLqwLX3KdTY7vLnbupVGN4xhXGV2MxrkXLJMuduP95N95ctQ0l3Ub38DjJ184nO+gip5nPwbfgtjvaX+z1O4fgRgaBC2YqT5Lg3EZ96JY62VdQ8dzNKqm1QeSNzbyDdtAQtvA29ewMAucAkUmMuQjJS2Job34bfFV/MzCxqYn8+bXAytm0XPhc3zEIys2XrtXQ/4YWfJxecjJyNo0a3o0V3YnhbiE1/P8lxl1K18k7kTLgkr5zqREXC0jwAKKl25GysbLr+WI4qInOuI1u3AC20AS2yFUvzkmlcQmLKlTj3PUNg9Y+hz58KlDNhlGQbyYlXYDlryPnH42h/BUv30L3kqwRX3kVs1kfAtnC0v0xi0jvI1M6jauXXSs5DLjiVyNxPY2ke9O5NqPE9GL6xJCZeTnzKO/FtvA/XnidK2q2kuzGzEZLjL8W153ESk9+F6WlG71yNkuogWzOL5IS34l/9I1x7ny6TvwskBUt195y7DuRMZEjnDgDbxNG+ironVxGfejWRuZ/G0fEanm2PHDHwCyNLBIIKY8sa2br5RGf+J5KVJfjKd4Z0Vxab/j4yjadStfJO9K51RfssRxXdp96GqblL8inprsLdYi4wGUv393v32C9JQc7FCbz+I5z7l5Xs7jz3x8Smv4/A6z8s2efe/W8ATHcDyfFvwbP9ryjlAlE/LM1D96m3g5Wj5tnPoMb3Fu3PNJxKZO71hBd4CLx6N5KVA8DR9hJ652skx7+FXGASgde+j/PA8yCpdJ7z33Sf9mUcB1cQeP0HyLkEzr1P03XW3ZjuJtTYrkL5hm8soaVfx7XnMXwbf49kFN+Rx6a/j+isj2LLGu5djxbtc+5fhta9ieT4txBe9AW07i3UPPdZ5HR3b/4ZHyQ6+xM4Ol4rCaSu3Y/lz52zluSEy3Dv+EdR246Gd/MDuPY8Rmzmf9J1+rfxbvsfXLv+iZxLHFO5wtERI2YVxHIE6V7yFSJzr8e981Fqn/n0kB/Nc8Ep6J1rSoIAgJzpxrf+HrTItqK74uEiZ7oJrvpG2SAA4Ny3jGztnGGvFyA284NYqpOa528pCQKQv+DXPv0pMg2LyVXPKFuGd9Pv80EAwDbQQhvAMvCv+0XhAqik2pFss/DkAmCrLrrO+DauvU/hX/uLkiAA4Nt4H95NDxKf/j7sAQb55WyMqlVfLwoCAL4Nv8l3GTWfdcRzMVyUVAfBl79FzXOfJTn2Ijou+DXZqhkDtl8YGSIQVIhUy3m0X/gbbEmn7vEP4tnx56MqR43tIlM/n2z1KWX3611rqVpxB33HB94IMnUL8W38/YBp5GwER/tK4lOvLrtfC28p2aYmD6KU61LpM6vJdDdgKzrejfcNWL9nx18ASI67rJ8UNtXLbys7NgIgmTmQj/9sLjWxn7qnrsO192lCZ3yDyNxPH/c2VDoReiuEa99TaJHNRObeQOd5P8ez5WHcux89csbD+DY9iOGfQGjJV5GzUZRUB0o6hJLYj6PjVfSuNSPQ+l7xKVeRGnMBlu7DVl35jbaJZGbAtpGNkelasBxB4lOvIjHxioHTOasxndXDWne6YQlIMt2nffmIT1q2JJOtm9t/oD8BZ4KZ7ga6F92K6W6m6qU70btWj3aTKo4IBBVEje+jevltpBuXEp9+Dakx5+PbeN+QLt6SkaTqpa+Rq5qG4RtLNjAVy1VLLjiZxOR3okZ34l9/D3rn68Pe/sjc60m1no936yM42laixvfkB6dlDdPbSmLi28jWjNCKZUnG0fYSanJwA+rDyVacYFm49jyONIguNznddRxadexMZw3xqe8h3XQ67t2P4tl2B3K2dBBaGHkiEFQYyczg2vc0zrYVxCe/i+7F/w/XvqfxbPljfnbIIMvQO1ejd66m77BwtmY28alX0b3oVur//b5+p3IejWztXNIt51C14qs4Ol8tbo+VQ43uQEkcgBEKBFIuiRbZ3tvHfxzpXWtITLoc54Hny850OhklJl5BYuLb0cKbqV5+W35cSRg1YoygQklGCt/G+6h9+pMY3ma6zv4eyfGXYZeZ8XOIpftpe9NDWI7yXR961xoCr34PW3WTqxrogjz0gWTD04Sc6ioJAkVGcLWwmmwjOfbiESt/wLoTe0FSyNQvHpX6ex3juI+kkK2eQee5PyE5/jL8635B1aq7RBA4AYhAUOGUdCfVL95GcNW3iE95Nx3n/BhL95d9FUCmfjG26sLwNPVfoJz/lZL6WdwEoCQOMvBrCiTskkFLe8DXE9iqm8SE/gZJ+5RspPPpB5qZIiklQcWz5SGytXNIjb1owPJtWcXSvEdsx1AoyTZce58kMu+GwgK+futXnL1jJ8Ps0LqG0u+mb6LS
cwf5mU/di79E92lfwdH2InVPfgTngRdGpJ3C0IlAIACgh9ZR98SH8W57hPaLfkdo6TcwXXVl04YXfI5sVekUSUv3EZ5/E3I2hjbAPHO9cw25wATSTUtL9pmuOsLzbqTz3J8UXay1yE5MVz2J8W8tyROfciXtF9+L3M9smL7kbBQ1tpvuxbeVvWCmm86g89wfkRz35qLtzoPLce/8J5HZ1xGf+p6yZWcaFtNx4W/ouOBXR2zHUPlX/xQl2UbojG+SrZlVNk1s2vtov/hewvM+M+z1A8i5OGpsF+FFn8dWS58c041L6TznhyQmFH9HsZnX0n7RvWAZ1D92Lb4NA89+Eo4/qbG5ZVDPewe/vIDg/+7EuSY00m0SRpnpbiDVegGu3f9GSfdObbQcQTrO/QmOjpdJN56OkmovDJ6arlpMZw1Ksp2qlV9HSbUPWEd86pUkJr8bJXkAJdWJpXmwHFVYjirU8Bb86+9BC28uypNuPpPwgs+hxvejpNqxVHf+NQ+SjG/DvRjuRhJT3gXk1xzUP3Zt+eNz1RJaehe24kRJHkA2UpiuuvxMJEnFtf8ZvJseLHnVhi1rZJqWEpn1cSQri5JsRzaSmD3ttjUPrt2P4d3yUGFRVnT2J0iOu6RQhhrdQe2z+Qt1ZM71pMZeCEDg1btx7XsWS/PScdFvsWUNNbab2meu7z3/mpf41PeQGnsxciaUfx8UYLrqMZ3VyLk4nq0P4971r6J2t1/4Wyxn79tWJTND7bM35MdUgOSYi4nOua5wJy9n/z979x1nV13nf/x1hgVfGAAAIABJREFUyu11+iSZTJJJryQECKEYpKMIqNhw10UUFexlVXRV7Lrq+nMFdbHh6mJZRV11FVl6CS0B0kN6nX7n9nrK7487ucll7p3GTAa4n+fjwYOZe7/tnsyc9znf8z1nYjT/vfLTYS13Pf1rv4Lt8KKlnrftVAeeIw/i33lH2UXf5Lw34hzYhrN/S8U2xcT589EjPODx8I26sa1ckyAQ42IEZ1EIzi1OIwGqkUZLdY5pBVIhOAcj1IHlCADFG4z0xIGKN2wdr9OBEZyD5Qyg2AZaugdHZOu47kjNtZyB6W3GVnQUM4OW7sUZ2VL1ERfH2JqLXNOpmN4mQEUxc2iZXhwDO6o+72ciGb4Zg89ZCgOgFhJo6a6KN/lNllzL6cX7G8q23dbiMl4xZcYbBLJqSIyLHj+AHn9hjxlwxPdVfFDb8HX24ojvfUH9HuPqfmJc9RQzh7tr/YSMYTz01BH01JEp6x/A1f3klPYvJpZcIxBCiBonQSCEEDVOgkAIIWqcBIEQQtQ4CQIhhKhxEgRCCFHjJAiEEKLGSRAIIUSNkyAQQogaJ0EghBA1ToJACCFqnASBEELUOAkCIYSocRIEQghR4yQIhBCixkkQiJMu13I6mbbzpqRvW3ORWPJ2bIdvSvp/KbM1J4kl12E5A1M9FDHBJAjEEEZgFvGl7xrxD6WPV3rmRUP+JvDJYjlDpDquKv1lNTF6liNAquNKLFfdyIVPkFh8LbmWNZM0KjER5C+UiRLL4SM953JS896Is2fD5P3ZQdUByhT96CnKsS+mpv+XtPFtOzUfJbbyAziiuwhuuhUt0zvxQxMviASBAEUh17iS6Gk3oeaitPz1DWBbUz0q8TLh2/MHfHv+QHT1J+g9/z/wHryL4JYfys/Yi4gEQY0zPc1ET78J091IcNP3cXc9Mqpf0FzTSjLtF2ME5wCgJw7iPnwfWjZCcsGbqXvii4BdKm8EZxNd/Ylin+4GUFT6Xvn9oQ1bBUKbbsEx8Nyw/afmXk2ueSWWuwEln8DV8yS+PX9AsQoVy0dP/RhGaC62ogEQWfN5FNsYUk6P7SG88ZvDf/bm00i3X4QZaAfAEdmO59A9OCND/3h8cuE1ZKefC0Bo47cw/G2kO16DrWgEnrsDV9cTWK46EouvpVC3AEf/VoJbfoBiDR1b6bN3XEWudQ2WK4xiZHD2b8W353eouWjF8vGl15NvPhVsm7rHPoOW7Sc192qy09Zi6x605GH8u3+HI7qzYv3Yqo9QCM8vbbuBM/6l4vj02F7CG79Rddzhjd/A8E0nuvrj9Fz0nwS2/wTPoXurlhcnj9I6fYY9cjHo+typhO/cj3tzZLLHJE4Cy+EjseQdZKefi3ff/+Db+0fUfHxUdSNnfpFCaC6ung24ux4FIN+0ilzzamzFAYpC893Xgm2e0J+ffONyAFLz3oilewjs+NnQxm0bZ2RbxbGY7gYGzvwiprcZLd2Ns+9ZnP2bKYQXkJ12FqBQv/5f0DI9Q+rmG5ZhOQNYzjDx5e8huOU21NzQn2U1n8DZv6Xi5za9LURP/WeMQDvurvU4ezZiucLkm1aSazwFZ/8W6h//fNnnLoQ6MH1txFZ+AC3dg+kK4z3wNyx3mMz0dYSevYXE4rehZXrxHryLxKK3oeYGaHj0kyhGtqz/XPPpxFbciGLlcHc+jiO6g0L9EnKNp2C6m/Dt/QP+Xb8eMu5CeAGFUAfx5e8hvOFfSc5/E7bmxHP4frR0F9np55JvWoV/x3/i2/vHCttuKZYziOUIEl9xI4FtP644vaPmkzj7N1fcds+XaTuf5KJ/QEt14d/5c5yR7aOqJ4b356NHeMDj4Rt19WOqJ2cENcbWPWTaLiA173XoiYPUr/8UjuiuUdePL70e0z+DhkdvQk8cKL3u7lyP5fATXf1JjOCsIfXUQhJ353oAMjMvBtsqfT9aipnHfeQBnJGtZTtrd+d6Att/Rv853ySx5J8Ibxh6VHqsvOltAcDV+zRa6uio+7bc9UTW3IyeOETjhq+hZfpK7/n2/pFC3UKiqz5KZO2XqXv8ZhSzuBN3xPaiJw+D9V5Mdz11T30FZ98mUDQMfzuxVR/Cc/Buglt/hGJmcQzspG/dv2N6W9Hj+0t95BtXED3tE/h33oF3/19K12/cnesJAOlZl5FY/DZQFPzP/aps7I7oc4NnCwqxlR/G1fUYwa23oeYTAHiOPEBq7mtJLvpHXN1Poj9vuzj7i2c6prtxcNs9W/ZvPx6ew/fi6t1IquNKBk7/DO7OR/E/dwdaVg40p4KsGqohhbpFRM76CunZlxHYdjt1j988phAAMAIz0aO7K+4I1EKSwPaf4D7ywKTM/6qFBP5dv656xO7qfopCeMGE9wsQX/IOVCND3VNfKQuBYxwDO2l46CMUQh3kmldXbCOw7afFEACwTfT4ARQjQ2Dnz0vBoeaiKLaFpXtL9SyHn4HT/wXfrv/Gt+fOihfxvQf+SnDrT0h1XIWte6p+Dj2xn9CmW0ohcIxvz+9RsxFyLaeNuC0mipqLEtj+Mxof+ACmr4W+V36fTNsrhx2/mBwSBDUi23oWkTM/D7ZC/aOfwn30wXG14xzYQb55FZnpr6i4BNMR20tw64848frARLMcfkxPI4ZvOoZvOqa3FdPTiK05J63PfMOyIUfaz3fsrCc193UV39eTh4a8pmX7UXOxoYWV47+apm8atqLh3f/nYfv3HLobxciQmnNFlRI24Y3fqLoaTLGMKVnNpWX7qHvyy7i
PPkp8xftILL72pI+h1snUUI1wdz2K+2+PM3DGZ+m94Ef4dt+Jf9cvx9yO/7lfUgjOJnbqR8G2UGwTLd2DHt+H++hDuLsem4TRHxdb8X6ybedhqxplyxjtYvBo2clZmmi5woMXu4cPOFtRYfCi6kTJtp4Fqk7vRRWuqQzpXyffsBTGdqI3pQz/TCJnfxVb0Wh44ANDpqbE5JMgqCW2Sd0TXyRft5D4ihvJtF9AYPtPcR99ZAxt2NQ99XVMTxOmp5Fc06mYvunFVUHTz0HNRghu++m4zziGM3DGZ8g1rSL07K04+55BzceLq4QUFcsZIjnvanKtZ0x4v8cEN92Knjg4ae1Xp6BYBerXfxosc+TSRvokjOmFM33TiK14L4XQXALbf4rn8AOTd++KGJYEQa2xDZyRrTQ89GGyrWuJL7+RTNuFBLb+CD11ZJRtmGjpLrR0V9l8vRFoJzn/TcRWvBdX1/qqSznHI9d8GvmG5TQ+9JGyi6jF8ViouYEh894TSS0kUWwDR2z3pPVRjat3A6m5V6ClOke9suvFLr7s3WSnn4P7yEOEnv0uWrp7qodU0+QaQY1SzDyeIw/QeN+NqLkIkbO+SmLJdcU1/lUYvul0X/YbTE9zxff1xEGCW27D1t3km1ZV77zC+v2RmJ5G1OzA0BA4kTqKKRl7fNcu9MQhUrNePa66L5SW7gRFI9N2/pT0f9wLu+5jqw4yba+k94IfUwh1EN7wNYJbb5MQeBGQIKhxaj5G6NnvUvf4zeTrF9P/iv9HvmE5tjr0ZLFQtxhbc2EEhi4PPcZy+sG2h12aqac6K7ZfoqhDnmej2Gb1Hb2iUAjNJTXnNdXbPFZ0cHWO6QpXLWM7vNiau+w1/85fYIQ6SM5/w7DtW64w+YZlI45jLLRMH749d5JYci35+kXDljX8baWb/CbasWkba7htpw/ddgBGYDaRtV8iufAafHt+R8MjnygtSxVTT4JAAOCI76Xh4X8muPn7DJz+KfrWfRfTN6280ODRdPTUj5JtPbPsF97WXBi+GURP+xRaphc91Vm1L1fPBoxAO+nZl2FrrmJ9VcfWPRTC84is+Tx9r/hOWVjo0d2Y7noSC99WqoOiYzl8xJe+i/5zvjW4Br0YIpYzVLFvNR/HEdlO9LSbMD2NpQu7tubCcvhJz7mc3vN/SLr94rJ6zv4t+Hb/juSCtxBbcWPx6aXH6qrO4nOa2i+m58KfEl31kePbRfcWd5yKgu0MDFlpZQ+G3vNXPFmO8rKBHT/HEdvLwJovkJ1+TtkSS1tzYTkDxE75AH3rvjskEC1nqNSW5QwNjkctr++qw1Y0bN2D5aqy7QpJnJFtRE/9Z0xP09BtN/tV9J5/G+nZryqrF1v5IfrP/SaO+H6a7rke7/7/rdi+mDpyZ7EYwtZcZKedi6vnibI5aVt10HvhT/AcuZ/UnNegmDkUIzdYx4mtOXEOPEfd458b8aJfuv1i4stvRDGzKGYeVA1bdWCrTly9Gwg9/W3UQrKsTqFuIf1n/yuKkS22r2rYmgslnyC88ZvkG08hueBNAKi5geLdzVU+X/+538L0thbbsU1s1QWqjmJm8O35Pd69fxz6GAVFoRBeyMDp/4Ktu4vjtg1s1VF8kB7g2/krfPv+WLo+El9+I+lZl5Sa0OP7aHzwQ0BxBVSm/UIAQk9/C8+RB7Ecfnovuh1bdaAnDtL4wPuP969qpOZcSXLhNcUVW4NnN7bmKpZPdxPc/P3j9yoM6rnwdiz38TMsxczR+OAH0QbDOj3zYuIrbiiFg5qP0fz3t1XedqqD/nO+ielvK/Zftu2yePf+Ad+e35dtu0zbhTgjm2UK6CQY753FEgRiXCx3Paa7EVsvnhUoZh4lHx/T0j/T3YDlaSwd4SuFJFo2UvWZOcU6jVju+mK/toWaT6Clu8a12sQIzCo+W19RwSqg5pPo6aMjrsyxVR3TPxPL4S8+zdQyUAtJtHT3SVn1YrnCmJ6m0lmBYmaL22GYs7CJVmnbaenOYZ+RJCafPGJCnFRqNoL6Ah8HoGX70bL9Y6zTh5YdemfveIz3MQmKZaDH903IGMZDzUWHDcuT4YU+YkK8uMg1AiGEqHESBEIIUeMkCIQQosZJEAghRI2TIBBCiBonQSCEEDVOgkAIIWqcBIEQQtQ4CQIhhKhxEgRCCFHjJAiEEKLGSRAIIUSNkyAQQogaJ0EghBA1ToJACCFqnASBEELUOAkCIYSocRIEQghR4yQIhBCixo0+CCwbyyG5IYQQLzej3rMrlo3tk791L4QQLzejP8Q3bSyPBIEQQrzcjP6MwLSx/BIEQgjxcjP6IMib5Nt8kzkWIYQQU2DUh/iOrgzZRWHMsBMtmp/MMQkx6cw6F9j2uH+WY5fPIrO6AQDXrjh1d+yeyOEJcVKN+ozAvTUCQGGGnBWIlzZbV+h93xJ6P7iM/KzAVA9HiCk36jMC144YSt4iu6wO99aByRyTEJOqMMMHqgKAWe+CA4kxt+F/sBPfUz0AKDlrQscnxMk26iBQDBvX7hi5BWGMBhd6f24yxyXEpDGaPaWv7XHeGqPF8xCfoAEJMcXG9Gvge7gbW1dInj99ssYjhBDiJBvTelBHZxr3lgjZZfVkTonieVamiGqRrSnkZ/vJrG6iMMOHGXSgGBZ6bxb3s/249ifpf/sCPJsiBP/30JD6ll8nuzBMZlVD6ZqTFs3j3hHF83Q/ek9mSJ3Y62aTWV5f+j70p4NYbo3MKfWlI3zH4RS+9d24dsZQTLt8zA6VgTfPJd9Rfk0gfnk78cvby15TTAvPk30E7zpc9nrqzGaSF07H1sqPn9RkgeZvba66vbKLwkSvngNacTrK0ZnGf+9RUme3kJ/pB01Bi+TwbI7gfbIXNWWU6hame4m8bT62Syu95r+/E/8DncUxnd1C8pXTsQfbBlByJs3f2oxSGDplZbR4SK1pJj/Hjxl2gQ16bwbPpgjurQOyEKRGjfnGgOBfDpGfEyB++Syc+1LFU2RRM4o71A7yHcHy13WVwjQvhWlejs24p09vGhIEhRleIv+4ANtVvjM1w05SZzaTOrMZ/4NdxR2ddXxnXpjmLSsfe035zhug0OYj+oYOvE/0Evxreb+W30Fh5ugWOtiqilFhUURhlh9bHftcUn52oBQCUPwsA2+dV1bGrHeRXDeNzKoGGm/dhpK3SuO2nRrV5DqCZSFQlaoQu7ydzKqG8teV4lRZ4sIZJC6cQfDPh/Bu6B39hxMvC0rr9Bn2yMXKGc0e+m5YDKZN6H/249kkZwa1ov/diym0Fo/AlZyJ79EuvBv6sbw6mVMbyZzSgOU5vuNq/fzG0teFaR4i1y3C1os7LteuON7HenB0pkidO4306Y3YenFH694WJfzfe0t1bUcxaCJvX1B6Te/P4l3fjXtrlPzsAImLZwwuC4Xwb/fh3lb+c2k7VGxNIXNqI4mLZgAQuOswnmf6h3xOxbBQjOedVWgK9gnP24q8cyFGg3vEMwJUBcup0n/DEsygo9S+97FevE/2YrtUkutayS4tnvG4t0cJ/2bwsysKtkvF1lV6P7
QMW1PKzghOHFP/jUswA46KZwQD18wlNz9U+t7zbD+Bvx3GaPWQuKiNwvTBoLUh9OeDeDb2Vf884kXrz0eP8IDHwzfq6kcufIJx3Sqs92Ro/N42Im9fQOy1c8guqSNw9xG5gPwyl1rbUgoB184YoT/sR82aAKgpg8Bdh/E90kXPR1cMqWs7NQb+cX4pBEJ37sezOVJ6P/D3w/jWdxP5x/kYTW6yi0IYTW703iwASsFCzZnHy991GO9TfShGcWfn3hHFeSBB74eWYztVsotCQ4JAKVgoBcp2kErBKn2GkSimjWKeUHa0i4Usu9iHXQwWvTdL/e3PoaaPTwGF7zxAf52bwnQv2YUhLLdWqqNkTdBtqHDIVjamKod02UXhshCo//kunHuL523O/UkafriD5HnTSa5rBQWS61olCGrMuB8nqvdmafrOVtybI+QWhul/zxIG3jKXzCn1mGHnRI5RvBioCukzm4HiXebBvxysuANVkwbex4vLKj2bju/ojSZX6VlVvvXdZSFQqpsoEPzb4VJ/8ctmVh2OPpArhUCpfsZEMYuv2d4X7+NQlLxVFgJAMSyOXRtQFZjAJ/0e+3cDCPzfkVIInMh//1Ech1MAmEEn6TOah5QRL18v6LdFyZmE79xPYX0P6TObyS4MkVtQPPLQBnI4OtOoaRM1Y6CkjYoXr8TkcG+LomaMkQuOku1QS1M+jsNptEShatng3w4f36EPOjbtAeDeXH0q0bk3jpooYAUc5OcEsHV1yA5fjM2xs7hjZwzp1Y0Vy2nRHIXBx8hkTqnH+0TPyRieeBGYkMMmR2ea0O/3E9RVcguCZBeHMVq85NuLKyJsTS1OCaijuKglJoTzUGpCgwCAwX++F9ru81f0DHn/hAMG2yVB8IINXndBgcSFM0ZVxax3TeKAxIvNhJ4/K4aFe1sU97boRDYrXgxsuzQnboZG3knYmgKaUtyp26CesLrMDDgqLhEFsHx6aSekZoyypZRifLRIFqNp8AJ/3qLqxYQTKPnRXTcRLw8v3olU8aKi5Iv3CRRmeCnM8JJdXId7e+UpHtuh0vvBZVgejeDfD+N9vBf3zhiJS9qA4j0BDT/eiRYZurgg/urjy0J9j3RPymdxbx0g/qrq1x9e7E5clXVMYYYPy115mal78wDJ84tBEPj7Ybwb5EKwKKf5A8Gbp3oQ4qXBtTdeXIeuq+Tmh7BCTrBstEQexSo+0TOzsoHYle3F9xQFvTuLa28CNWti6wqFmf7iqp6lddi6ipoyUDMG2cVhYlfNLt3wpcXyhO/cDzZYbo3kuulkVjeWFiKYdW4KM33o/TnUtEG+3UfikpnFm8tUBdulYTS7ce1OoFjPOwK2bNJntRTLuTX03gxaooAZdJBbGCZ1bivxV7WTPH86asrA0ZkeXJ7aRmZFPdnlxf+MVm/x/gBVpdDmK72eXV6P5XfgHLz4mlnRQPKV04tj0xRsp4rR5gPLQu/LYXk04pfNLF4TcRancQotHrREoXSDl6IUL/rauooZcuA8mETNGOSW1BO/pI3UutbjS1tVBbPRjeNwGjVn4ujJkF3RgO3SyM0LYrR4iv82gyuyzKCDQpuf3MIQ2ZUNxX9jC/S+7GT/SIkJdk0iwQGHg0c9npELn2Bc9xGI2pVdFCZ+Rfuo/lqd40iaul/vQT12YVlRiF82g8ypTcPeBOU4kiJ05370wTOGfLu/7P6BE/ke7iZwzxH637mIwgzvkPebv7Fp6AodBaKvm0N2Wd2w41cMi9Cd+3FvjzLwD/PIzQ0OW/75jt1D0fW5U4ctk1nZSOzKoTfIubcMEP7dvtL3qbUtJC6aXkyFUfA+1lO6O9podBN9UwdGo3tUdfWeDI3f3z6qsuLF46TeRyBql3tHFL0nQ+rsVvKz/dgevTRVoZjFNe9aLId7WxTvE73lK8Vsm+D/Hsa9OUrq3BYKrV6swOANVlkTfSCH59l+vI+X39mq92Tw3985ZCemZgw8zxSnOYJ/O0T6tPKAUbNG5bluG8K/20cikSe7KFy8e9ehgmmjpQqoSQPn3jjurVEcXWkAfA92oaYM7FEueHAePL5EM/zbfWQXhsp24Ipllx7t7to5gPdxN5bPUdaG96ny7eBb341i2aTWNGEOnnGpWRO9N4N78wBGkxtrcNmsUrDwPXp8ak3vy9L4vW0kz2klu6weM+Q4/tgK00bNWygZAy1RwHkgOaRv8fImZwRCCPEyMd4zgom7a0UIIcRLkgSBEELUOAkCIYSocRIEQghR4yQIhBCixkkQCCFEjZMgEEKIGidBIIQQNU6CQAghapwEgRBC1DgJAiGEqHESBEIIUeMkCIQQosZJEAghRI2TIBBCiBonQSCEEDVOgkAIIWqcBIEQQtQ4CQIhhKhxEgRCCFHjJAiEEKLGSRAIIUSNkyAQQogaJ0EghBA1ToJACCFqnASBEELUOAkCIYSocRIEQghR4yQIhBCixkkQiJMuOe9q4svePSV9Ww4/vRf8CMtVNyX9v5TZupfeC3+C6Wma6qGICSZBIIbITjuL/nO+RaFu4aS0X6hbRCE0b1LaHont8GJ6mrB195T0/1Jm6R5MdwO27h1TvYE1nye58JpJGpWYCBIEosQItDNwxmeInfIB3EceQE8enpyOFA2UqfrRU573fzF649tmvj13km1ZQ995t5JtXTPBYxITQZ/qAYipZ7nCpNsvIdVxJa7eDbTc9Raw7akelniZcPY9S+ODHyQ197XEl99AZtZlBLbdjp7YP9VDE4OU1ukz5De+Zinkmk8ltvKDaOlegltuwxHdOWIty+Ej03YBmfYLMAKzAdCTh3AfeRA91Uli4TU03X9jWZgUwvPpP+ebI4/IKlD3+M04+7dU7ttVh+UMkJr7enKNK7Dc9aiFJM7ejQR2/BdauqtivcjZXyNft3jE/p2R7dQ/+smq71vOIJn2i8i0nY/hbwPAMbATz6F78XQ+iFJIl5WPL7+R9KxLAKh78kvk6xaRnv1qUDT8u36Fb8/vKYQXEF96PYXwPByR7YSf/je0TE/F/k1vC8n5byLXfBqWK4RiZHFGtuHb/d84B3aCbQ6pE139cbLTzgZsmu59F1gGyUX/RLb1DGzdg548im/P73EfuR/FKgzddmu/TL5h2YjbzjGwk4ZHPj5sGVt3E1v+XnLT1uLd9yd8u3+LWkiN2LYYnT8fPcIDHg/fqKsfUz0Jghpluerpe8W3sZwhwhu+hrvrsVHWVOi55BdgW/j2/BHfnt8CkJ51Cek5V2D421DzcZrvvrbiTglg4IzPYTmDNDz80TGN2fDPpO+8WwAbZ2QbnoN34zl8H9nWNaTmvYFCeD4ND34YR3xv1TZMbwu9599G0303oKWOjqn/QngekbVfQbEMfLt/i/fA/2J6W0m3X0x69qtQCmma774WxcqX1bM1Jz0X/Rxbc6Dmk4Q23YLhbSGx9J34n/sNqXmvw9X1OMEtP6D/7H/FcoVouuddqIVEWTupOVeSWHodjuhz+Hb/HnfXo2RnnEt65kXkG1fg6nqCuqe+Cgz9lTY9zfRecBvBrT8msfCt6
PEDBHb+J47YHpIL3kJqzmvwHryb4ObvV9927kZ6L/wxjQ98AD1xYEzbbkhb3lYGzvgspqeB0KZbcR95qOK4xdiMNwhkaqjGWM4QiSXvINt6Bt4Df8W3+3eoheSo60dXfQQ100fDozehGMePfr0H7sJz6B5iKz5AvnnVZAwdLd1J4/3vQ83HUfOx0uvursdxdz3OwOmfJrH07dSv/8yE920E2hlY83m8+/6Ef9dvUMwcAHp8P8Ett+F/7pcMnPHZ4o7y/vei5uND2lDzCRoe+SRauhMXCvnGFSQXvBH/jl/g33Mn2CYNj95E7wU/wvQ2ocaOB0F69mUkllxL/frP4IxsAdsqfvYjD+E+8hCF0HwG1nyO6KkfI7zxG1U+hUJ8yXX4d96Bb+/vUSwDgMC2n+Ls38zAaZ/GfeShYvuTTEt30fDgB8k3riS26kMk57+Z8Iavv+CAEeMjF4trSGrua+lb9x0sp5eGRz5BYPvPxhQCAJYrhJbpKwuBYxTLILj9p4Se+XbVs4EXQrEM9OShshA4kSO6B9PbOuH9AsSXXo9jYDuBHT8vhcCJ1HyiGECKSmbmhRXbCG7+Plq6c/A7GzUXR83249v3p+PbyyoANrbuKdWzXHXEl7yTuqe+hrN/UykETuSI7SL09LfJta7Bclc/GvQcvg//njtLIXCMq/sptHQ3hbqTt5pLsQq4ep6k6Z7rcXU/Rv853yS24n3Djl9MDgmCGlEIzSXTfjGKZeLd9+dxH3m5u9aTbzqF+NLryTeuGPK+mhvA1bPxhQ53GAqFukVk2s4nOf+NJOe/kVTHlWTazsP0NE5ar0aoA+++vww/MjOL5/C9VYOg4lmCkUExs5VaO953oB0FcPZvHrZ/V+8GtHQXqTlXVSlh49/1q6ohrdgWU7FLUIw0nkP34IjtJjdtLdnWM0/6GGqdTA3VCEdsD4333UBi0duIrfwQzv6tBHb+Ai3VOXLlE3j3/xXLGSbTfhHpOZcDoOaiaOlunP1b8Bx5YNJO7y1XHdFVHyVfvxgt04uW6SnuRBUd09uM6WmquLOdkL4dfgbW3DyPNk/uAAAgAElEQVQpbY8k17gKW9XpvvSXoyqvZiOTPKKJY+teMtPPJbHk7Tgj24vTarnoVA+r5kgQ1JjAjv/Et+9PxE55P73rbsG/+zf49vyxylFpZf7nfon/uV9i6x6yzWdgBtooBGaTab+YVMeV+Hf9Fv+u0e20RstWHUTWfgHLGaLhkU/giO0eUiY5/81k2i+Y0H5PGAENj3wSx8COSWq/OsUqoFgFmv/2loqrel6SFI18wzKiKz+EYuWpf/xzOAZGXrEmJocEQQ1ScwPUPfFFbIeX3nXfJT3nCsJPfhlnZNuY2lGMDJ6jD5zwgkZy/htIzb8a357fTuhOK9eyBsvVQNPdbxsyv30yqPkEhn/GlASBq/txkvOvxnb4UF4mR8t9534b09dKcNP38Rx9cFKuKYnRk2sENctGKaRouvc9BLbfTnT1J4ic9VWM4JyqNfKNy+l69Z2l9fNDmzTx7b4TW3WQnb6uajvPX145GpbTj5JPDBsCtjaK45oKF1pHwzGwk9Tc14+r7gulZXpAUUnOf9OU9H+Mwvi23TG25ia+4ka6L/s1joGdNN37bjxH7pMQeBGQIKhxipXHc/BuGh/6MHpsL/1nf43EkusqPk/GdDeDolEY5sYsIzATbBNXb/ULxlryCJaj+vNqbNVBIVz+nCPFzGHrriofQiPVcRXpjmoXSU8oamQAyIfnVy1jeluGPJQuuPVHWM4AsZUfHrb9QngeiSXXjTiOsVDzcULPfpf0rEvJzDx/2LKZGeeRnn3ZhPZ/jGIUV0sN95wo09tccdVPuv0S+s77LoZ3GvWPfY7Q5ltRcwOTMk4xdpo/EJyaK2DiRUUxMrh6N+Lqe4bMzItILngTrt6ny5ZqGoF2ctPWkm9eia37UAtJtFxk8L1Z5FrOIHbK+3BGtuM99Heq3SCkZQdId7wGy9uClulFzUWx3PUUgh1kZ7yC+LL3kJ1+Nr4Dfy21oRZSZGZdSr5+KXryMFpuAMsVJt+wjPjy95CdeQGOgZ1YnkYUK48RnIUjNvTGMsXKY3qbSc19HXrqKGo+hmLmKATnYITnk5r/JuLLrkdPdZbdmKYWkmiZftJzLifXvBo1N4BqZlHMLIZvOoW6haTmXU182bvQ8lHcRx8GIN+wnFzLGvJNK4th5vSXLqbnWtZgBGej2AaKkULNJ7A1J+m5V6GYOSxnAMdgWUdiH5arjtS8q7FcYVQjhZbtB6AQ6qDQsGzw8Q2X4O55quyzZ2ZeRL5hKfmG5aiFNIZ/Jnq6szR1Z/jbyc44l1zTKlAU0D04YnsqbLsCpqeR1NzXoqc6S9vOCM6hEJ5Pav4biC99N1q6t+wazsCam8m1noH/uV8R3PYTtGzf8D+MYtyuSSQ44HDwqMczcuETyJ3FoqJ8/VIcsT1DLiJHT7sJV/eTpOa+FtPTiK0Vn+KpmHnUQhz30UcJbPvxyO03riC+/Ibi0yy14pG+WkiiZgdwdz6K/7k7htQxPU1EzvwClru+2K9toRYSOAaeI7j1h+Trl5aOmNV8gvCGr1fuXFGIrXgfuZbTsRwBUNTiBdl8Ekd8D56Dd1e909r0thBf9m4K4QVYDj8oCoploBSS6KmjePf+D+6u9aXyqY4rybWcXvpeS3UR2nTL4HuvJdeyGgD/rt/g7NuErXuIrv4ktqqhpXsIPfvvZf1nW84kueDNmL7W0r0GiplFzSdw9m3Gt+e36MkjZXWiqz+B5Qwc//hmgeCmW0pBkms+jVTHFaUHAaqFFOGnvlp52wHxFTeSbT2zfNsVkjhie/Ecuht35/qy8vn65TgSe1HkURKTTh4xIYQQNW68QSDXCIQQosZJEAghRI2TIBBCiBonQSCEEDVOgkAIIWqcBIEQQtQ4CQIhhKhxEgRCCFHjJAiEEKLGSRAIIUSNkyAQQogaJ0EghBA1ToJACCFqnASBEELUOAkCIYSocRIEQghR4yQIREW26gCUSWk7seTtRFf/86S0PRLLGaTr1X/AdDdOSf8vZZbDT9flf8T0to6p3mT+LImJIUEghkjPfhW9F/6EfOPySWnf8M/E9LRMStsjsXVP8e/yao4p6f+l7NifFD32/9HqP/ffiJ3y/skYkpggEgSiJF+3mL513yW58BpCz3wHx8COSepJYeqPEKe6/5ei8W2z8IavYXpb6bnodtKzXzXBYxITQZ/qAYipZ/paSc57A7nWNXgO3EVgxy8A+VPWYmLoySPUr/8U2ennkFj4D2TaL8K/7T9x9T091UMTg+SP19cyRSU961UkF7wZZ98m/Lt+hZ44OGI109tCas5ryLWcXpov1jI9uHqeRkseIj3nNTTd9x6wrVKdQng+/ed8c+QhWQXqHr8ZZ/+Wiu8XQnMxfdPJtF1AITwXyxlEMTI4ojvxP/cbnJGtFetFzv4a+brFI/bvjGyn/tFPVn3fCMwkPftycs2rMT1NAOiJg7i6n8R74K9omd6y8vHlN5KedQkAoWe+Tb5xJdlp
Z4Gi4jl4N4FtPybXupbkgjdj+NvQE/sJbfp+1bOxfMMKUnOvIl+3CNvhQzHz6PH9eA/+FffRh1HM/JA60dUfJzvtbMCm6d53YTlDJBe+lXz9EmzNiZbpw334fvx77kQx0kO33dovk29YNuK2cwzspOGRjw9bxtacpOa+jlTHlbi7n8C/8w60dPeIbYvRGe8fr5cgqEG2qlMILyB62k0oZo7QM9/B2b95VHUtV5jeV/4APdWJd9+f8Ry+B4DstLVkZl5IrnElqpGm+e5rwTYrtjFwxuewnEEaHv7omMZt+NvoO+8W1OwArp4NuLsfw9X9FPmG5WRmXkhmxrnUP3Yzzv5NVdswvS30nn8bTffdgJY6Oqb+cy1nED31o+jJw3j3/QV358OY7kZyrWeSnv1qLIePxgc/OGTHZmtOei76ObbmRE8cIrj1PzDdjcRWfhDPoXvJTj8b78G78e36NdHVn6RQt4Cme9+Nmoue0IpCYtE/kZ59aTF0Dv4fzv5nyTWtItd6JpkZr8AR203dk19BMTJDP7enmd4LbsO3507S7Zfi6tmIb+/v0ZOHSc+6hNS8N+Lse5bwxm9U33buRnov/DGND3wAPXFgTNtuaFsNxFe8j3z9Ivy7/hvvgb9WHLcYm/EGgUwN1RjT00Rs5YcohBfg3/FzfPv+Z0z1YytuRE8couGR8lU/7s71uDvXE1/2HrLTz5rIIZfoycO0/vmqIa87+zfj7N+MrSgkFv8DDQ8Pf1Q6HoXwAqKrP05g+8/w7vvT8TGljqLvuRPfnjsZWHMzveffRvPd16LmBoa0oWX7qX/sX1DzcQByzavJtF9EcNP38R78GwDhDV+n96LbMd2NZUGQXPBmUnOvpPGB96MnD5ded/U+jav3aXy7fk3/Of/GwGmfov6xz1T5FAqpua8n9OwteA7dXXrVt/d/cEa203/ON8gdvh9Xz5MvZFONipbtp+6Jz2MEZhM58/MkF7yF+sc+g2NgJzItefLJxeIaEl9+A33n3YqW6abpnneMOQQAbM2NWkhWfT+w/ac0PPSxqmcDk0lPHsVy1U1K2/Fl1+PqXF8WAs8XfvIrqLkYqY4rKr4f3HRLKQQAFLOAlu7Ce8JO+Rhbd5e+Nt2NJOe/kYaH/7ksBE6kZSOEN3yNfMOyYZd3+nb/tnQWdyJHdBd6shMj0Fa17mTQE/tpuucdBLb/hMjaLxFZ+8UxL08VL5ycEdQI09NMITQXxczj7N9atkMaC8+he4if8n4GTv807qOP4Dlyf9n7iplDy/RMxJCrUMi2rKFQNx/TN6P4ipFGy/ZheKdNWq+GbwaB7T8ffmRWHu/Bv5NpW0dg+8+Gvl9h/l6xjBFD0wjOQbEt9NSRYcs5B3bgiO8j1XEVwS0/qFDCxnvwrrJrN89/fypWUymWgSO2By3djeFvoxCcg5buOunjqGUSBDVCy/TQ8PDHSM+6lOS8N5CZ8UoCO38x5iWiniMPgOYkNec1xFZ9mNiqD6MWUqjZAfTEfjyH78XVs2FSPoPhbyN66j9j+megx3bjiO5GLSSxVSf5ukUYwQ4Uc3LmmW2Hj8jaL05K2yPJ1y/FVnW6L/3VqMorw5yxvdhYrjCpua8lPetSPIfuwb/zjmHPOMXkkCCoMd4Df8Pd+QjJBW8hcuYXcR+5H/9zv0TLRkbdhufg3XgO3o2tucjXLcH0NmP6ppFrWkX0tJvwHLqH4OYfMJFzvZbDz8AZn0UxMjQ8+KGKUyTJ+W8m037BhPVZxraoX/9pnJFtk9P+MBQjhWIVaL7rmopnFS9FtuYmM+MVJBf+A3ryUHHaaxQr1sTkkGsENUjNJwhuuY2Wu96CEZhF33nfIzv9FaO6Y9T0toJS/LFRzByuvqfxHryLwPbbaXzwgwS2305m5oXFO3gnUK55NbbuofHBD1adJ59Maj6OEew46f0CuHo3YCsaljM0Jf1PKEXB9DTRt+67pOa/idCz36F+/aclBKaYBEEtswwaHvkEdU99lfjS6+g773vk66uvtc9OO5Pe8/8Dw99etYx331+wVZ1M2/lVyyhmdsxDtXUPSmHoGveyMqN5bETV+fHhOfs3k5r72nHVfaG0dDcoKolF105J/8cojG/bHWPrXiJrPk/vBT/Ec/g+mu69ftKmEcXYSBDUPBtn37M03XcD3n1/YuDMLzGw5mYsZ2BoSc0HQK5xRdXWCnULUSwD9/MuIp9ITx7Bcgarj0hzkWtaXfaaYmSwHd7K5VWd+PIbSM19XdU2j7dTDJNc06qqZYzALExvc9lrwc0/AEVhYM3Nw7afa1pJ5MwvjDiOsVALKeqe+CLZ6WtJzR26fPZEqXlXk1jy9gnt/xjFKAZ4rnFl1TJGYGbFVT+JRW+j54IfoeYTNN1zPf7n7hh3KIuJp/kDweF/skVNUCwD58AOPEcfJF+3hMTSd+DsewbthPXwpr+N7LSzyDeuKK5kMfPogzdl5RuWk551CYml78Dd+Qieow9X7UtLHSU959XkG5ai5aJo6W5M3zTyTaeS6riSxJJ3km9Yivfg3zl2nUHLDZCZeSHZGevQMr3o6S5MbwvZGecRX/Fe8g3LcfU8gxFox/RNp9C4DFfv0EcYKFYBW3OTmv8GFCuPlu1HNdLkGleSbzmdxOJ/IrnwGhzRPWU3TSlWHkd8P5n2i8m0XwhYqIUkaiFFIbyAXOtaEovfRmr+G3F3PVHqO9N2Puk5l1MIdWC56zH9M0o37+Va1lAIz8f0z0DLR9Gy/diak/Tcq7DcDZj+tlJZPdUJtklq/psGw7ZQmiLLNa0i2/ZK4stvINe8Gv+u35Td1BZf8g5y08/CCLRjuRvIN56CI7ardGaWr19KcsFbKITnY7nrMALtuHo3Vt52qoPU/KtRbKO07fKNp5BrOYPE4reRXPhW9PgBHPF9pXr9Z38d0z+D0LP/jm/vH1Ar3L0sJsY1iQQHHA4e9YxtalbuLBYVGYHZaOmjQy5OJhdeg2NgJ+k5VxR3boNH9oqRRk914jl4N94Dfx2x/UKog8Tit2OE5mA5imcfWqYXPXGwGCSHhq51t1xhoqs+hhGcjeUMoFgGaqYHd/eT+Pb8nkJwNvnBI33FyOB/7pdVeldIzb2KdPvFWJ5mbFVHMTJomV5cPRtxdT9e9aKw5aojNfcqsq1ri4+YUNTBJbO9OCLb8By+r6xudtrZFOoWlr5XcxF8e/4w+N45FOoWAOA+cj+O2F5szUlywTWgqKi5KL49d5b1n69bTGruaynULcRyhYtt5hNo6W7cnY/gPvogWqavrE5ywVvKr9lYRnGHPLiEuBBeOHgTYHHpqGJm8e+8o8q2g1THlWRmXYpZtu36cPUObrv+8sd8FIJz0FOd45oSFGMjj5gQQogaN94gkGsEQghR4yQIhBCixkkQCCFEjZMgEEKIGidBIIQQNU6CQAghapwEgRBC1DgJAiGEqHESBEIIUeMkCIQQosZJEAghRI2TIBBCiBonQSCEEDVOgkAIIWqcBIEQQtQ4CQIhhKhxEgRCCFHjJAiEEKLGSRAIIUSNkyAQQogaJ0EghBA1ToJ
ACCFqnASBEELUOAkCIYSocRIEQghR4yQIhBCixkkQCCFEjZMgEEKIGidBIIQQNU6CQAghapwEgRBC1DgJAiGEqHESBEIIUeMkCIQQosZJEAghRI2TIBBCiBonQSCEEDVOgkAIIWqcBIEQQtQ4CQIhhKhxEgRCCFHjJAiEEKLGSRAIIUSNkyAQQogaJ0EghBAvA9rg/01FGXNdCQIhhHgZCJkmADF17Lt1CQIhhHgZCFsWAHEJAiGEqE0SBEIIUeNmFQoAdGv6mOtKEAghxMvAKbkceUVhh9M55roSBC8y0VM/RvdlvybfeMpJ79sItNN96S+JnPUVbNVx0vt/KTP8bXRfcgf9Z30NWxv7L6IQL9Q002Cvw0FWVg1NIkWjEJpH33m30nferdgO3yT0oZJrOhVbc5Nuv2ji2x+BEZyDrXvJ1y/F8jSe9P5fyoxAO7bDR6F+Maa3ZcTylsNP37p/p2/dd8jXLQJFG7GOENW4bZsW02T7OM4GAMY+mVSDjEA7sZUfohDsgMG0NbzTcMR2T3xnWvFI3NYnIWhGkKtfWvralh3TmOQbVhz/Rhn518py12MEZgEQOeurOOJ7CT/5VbRs32QNUbyMfTISQbdtHnJ7xlVfgmAEA6f/C/nGFdiaq/Saq/cZ9OShKRzVZBn7KaU4ZmzbTsv04RjYQaFuEShq8Wzzld/H1f0EoWf+H4pVmKRxHue2bXyWRb8mof9S5rJtludzbHS5eG6cZwRK6/QZ9gSP6yXPcoZIdVxBes5rygJAj+8nsOM/cfVuBLvyZjP8M8k3nYLhbQVVRzHzaKkjOCPbycxYhzOyFVfPxrI6+bpFZKefDaqTdPvFoKhomV5cPU8NaV+P78d74G9DXrc1J4XwYgqhWViuemzdDbaFlu7G2b8JR2xf1c+bmvMaTP8M8vXLMAIzAXAffQi1kCwrp5gFPIfvRY9Xbsv0tpJrWoURLB7pqrkojugeXD1PVu3zGM/h+8g1rsByN6BmI7iPPozlDJJvWI7laUAppHF1P4lzYHvVz5FvWEEhPB/T2wS2XdyGvRvR4/srlk/PuhQjOLv0fXDLD8nXLSTfsAzLXQ+WgSO2F8/RB8EyKrcx+3KMQBv5+qUYgfbitut8BDUfH7Lt3EfuwxHbW/Z6ruV0Ekuuw/BNL72m5hN4Dv4N354/DPk3mAg+y+K1qSQdhQJfqG+Y8PbFyeO0bb7U38eSfJ6bGhrZ7HKNXKkCCYIT2LqHTNv5pOa+DvOEOXIt049vz+/w7v9L9cqKSuyUD5CZ8Yph53u1VCdN972n7LW+V/w/jOCcUY1RsQxa/vf1Za9lZ7yC2LIbsB3eypVsC8/hewlu+Q8UM1/2Vr5hKZG1XxlV3wDurvWEn/pa2WuWu47kvDeRab+w4kVmPXmYwLbbywKh6/I/Pn+QlB1V29bgNNzx1xSrQGDrj4YEYSG8kMSS68jXLSxN3Z3I1bOBwPafoScOlL3efckdZdd6XN1PkGs+DZTyS2dauou6J78ypH6+fjGRs8q3xXBc3U9Q9+SXK76Xmvd60rMuw/Q0lV5TsxH8z/0Sz9GHUYz0qPupxmdZXJpO8/pkgqBl8ZX6eh4d51SCeHG4KpnknfEYf/b5+EEoPO52NH8gePMEjuslSqEQmktk7ZfIzlhX2qGqhSTefX8hvPHrwx6JomhEzvw8udYzQVFRCym09FH0VGfxSFJ3l8JBsQr49pbvBLVMH5YzhJYbwHQ3gqKgFpI4YnvQMn1l/7mPPoyr79my+rmmleRaVqPYJmo+gZaLoCcPo+ZjoGjYugcj1IGCjbN/8/P67sVyBFBsE1t1FM8kAEdsD3q6u6xvPXkY35470TK9pfqWq47+c75JvukUUDQUM4eeOoKe6sRWNGzdjeUMkZ1+DoqVwzmws1jPEcB2+rCcwdK/QbGfHixXuBQCilXAEd8PqgNb95BvWoV3//+iWMVAyzWvJrL2S8XgVhTUXAw9eRAtG8HW3KA5MX3Tyc44F2fv02i5aGnstsOL5arDdgYAMP0zUMwMeqoTPXUUxcxjOfzYzgC51jPwHry7bMpGy/Rh6T4U28JWdWzdM7jt9qKnuypsuz+gZXoq/gg5I9vxHr4HW3dh+GeWPm+u5Qyy087GObANLR+jGJhj47csLkin+df+Plblcrhsm30OB7e9gB2HmFoO2+bV6RTvisc4out8ub4BaxyrhY6p7TMCRcH0tDBw+qdKF+4AsE1cPRuoe/IrjOYXz/S20Hv+bQA4ortoePhj5e97Woic9SVMTzNqNkLz/729ynhUui/7NbbqxNWzkbonPj/eT1ZiuevoueAngwGVpPmut1YtG1v+XjKzLgag8f73oicPD9+4otBz4e3FHTegp45Qt/6zZRc8E4uvIzX3Coo7dYOme65DzcUAyMw4j9iqDwPgjGyj/tGbgPKzhfCTX8Xd/RjpWZcSX34DYNN0z/VomV4Mfxt9591aKus+8hDhp79Z+t7WnPSt+y6mtxUohlvDQx8p+wjp9kuIr7ixNIa6J75YdvSdab+I2Ir3Fcey8Zu4jz5UcVPEl91AevalxW334IeqTp+NVuTML5BvWFZ2dqknD1P3xBfQ0j2M5ufSYdtckEnzvmh0yHtfravnEY+cDbwUOW2bL/b3sTSf54iu8+7mkVepjaSml4/Gl76bvvNuLQsBNRel+f+uGzyFH2VGWmbpS9PbSnba2rK3tUw3/h3/hWJk8B74+0QMfdTU7EDpwrbl8E9o24a/vRQCipknvOHrQ1a9BHbcjufwvQDYqk5sxXsrthXcdGvF193djz/vlePTRal5ryu9qqW6CD/z7fKSZp7GBz6AlimOqRDqIDN9XdXPU/fY54ZMwbg6HwW7+O9ruk/efHr9Y5+lfv2nUYxM6bVi8H2PxKJ/GLauAlyZSvLz7q6KIbDP4eAxCYGXHJdtc9NAhDu6Olmaz/NXn4/3NzVPSNs1vWoosP0nuHo3El/6ztLab8sVpu+8W/Hs/yuBnb8YVTtaLoLn0D1kZl6A5QwQXf1JFDOHmk+gGGkc0V04BnbQ8rc3T9pnybSdR3rOFZjeZizdX5orV4x02QXvie3z+E7Vu+9P6PEDQwvZFr49fyDTdgEAuZY1WM5QcdrqBEqVi7HDhXEh2DFYxKT+8c+Wdthl7Zo5vPv/QmLxPwEKmfaL8Bx9YNjP9WKQWHIdmbbzS9NNUJzGC275Ac7eZyrW0W2bK1MpXpVK0mIO3RbH/FcgSPV3xVRz2DZhyyJsmXQUCpyazdFmGLSaBi7bZrPTxa8DAZ4Z54XhSmo6CBQzj6v7CZp6niI980KSC9+K5QpjOfyk5r+BbNt5+Hb/Fs+he0tz0hXZFqFnv4tzYDvpWZcVby7SXJie4j+UEWgnM/MCUvNeT2jTLTj7NldvaxziK24k3X5J5aHpVS4gTwDDN7P0dWre60nNe/0wpY+zXEODYDyOnY2gaKWpuZGY3ok5gpoMtuYi03Y+yflvwDrh7EPNDhDY+V+4j9xfcVmpZtu8Kp3iimSKaWa1QD3u9GyWU3JZTrwUX/q/Xf5a8W
u79HVZ+SFl7Qp1y8tWa69Up6xNu2Jbzx/rsTbh+BRHWXm7fIyjaXPY9srK2hXfH1LeHkV7FP8t3c9bkWgB+x0O/s/r5QGPl+1O5ziuFA2vpoOgxLbwHvw77q7HSC54M9kZ67AcfkxPE/HlN5Ce/Wr8O/8Ld9djlavrbnJNq9Fj+2h46COlu3NNTyOmt4V8/RKMUAemt5XoqR+n6d53T8gqEIDkvDeWQkDNxfDt/T3ursfRUkcxPc0UwvOJL7sey1U3If2dSE8fJTeumhNzv4Kai43jc70475XItq4lueAtpaW3AGohgefQfQS2/bhqPYdtc208znmZNKHBp08OxwbOz6RLOxIbyr6m7GsFWxlajgp1jv9/cNeojLb88XpAWX/D/b+srzHUO/a5rOeNcWg7x39O7BPHV+WzVft+SDtVxnnsawuFtKoQU1Wiqka3rrHJ6SI5jieKjoUEwQnUfJzgltvw7/4tseU3kms+FRQNI9BO9LSbcER2EH7m22jpbk78J0zOewOpeVejmDma/u8dqIXEkLXzqXlXk1j0j1jOIOlZl+Db8/vKg6hyf0I12ZmvBIpTIA0PfRQte3xFj5bpQc0NoC5866h2mKqZHVPfnkP3kuq4CgD/c7/E/9yvxlT/hXLEdmEEZ6NYBRrvfTdatv+k9n+i8Qa76W0huvoTFEJzj7dl5nH2biS05Qeo2YFh6xcUhR+GQvw0GOTaeJwLMmkCwwSCAry3qZmjuvzqi+Nq+mJxNWo2Qt2TX6L1L69DS3WWXi/UL6L3/P8getoniksTBxn+4lGcrbmInfK+im0WQh2lr4e9a3RwrvzE9o9TSCy5ju5X/Y5UxxXFcseWpRqZshCA4hx637rvYpxw49bQE/LjHAM7jn+jDt1RGP624vNxzrsF092Alu5EzRV3VMkFbyE1d7ipoYqTCye8Xe1IvcLrg2V9+4r3ddiqg94Lf1K2Br96/8977cR+lUr9nfh+9W134vJiu9K2882g7xXfoe+872G6G7FVB5G1X6L3/NvKQkBPHaX5rrdS99RXRwyBsvYVhR+FQlzTOo3/CgQZ7tzgNamJv0lNvLTV9vLRUbBVnUJoPrFVHy57mFjDQx/BEdsDQHT1x8lOO7v0nmJkUPNxFDNXXEfv8Jfm6rVMP433vbtqGERXf4LstLMAGzUfLy21tJwBbN1XerKlluml6Z53klj0VlLz3ljqV8v0gvr/27l72DbKOI7j3+fO9+KcEzskjWXUig60MFFQQYIUgdKRpRMDA6wsSJWQEHuRmFqpQySQgBkxMKBKIJQBkSC1UNFCaFBDaVqRxA6Jk9SOHZ99bwx3eTHuCz6Ev1AAAAOvSURBVETpAPf/jPbZ53tsPT/f3f//ZAiNfkLDQYU+Wqfe1SCnQo/B79/DXO3uRwicEitjH25vo7WqqNAj0q245t7IsTURFq6ew16cJDTzrIx9kDRmRWjeJsproIL4olFoOHFNvG4R6RYqaDM8eZrqS+d7wi5Tv83w5OnuZrMoJD89TqQZSflobKsU1i2Ncuf4u9vbap0NNG8jbkhDxfvXTSLdJtJ0jNocQ1Nv4xafp3b8nZ5JW4UehcvvY61cYfXFs3iFI93PBx2KX73a870FfSOsnPzoH41d/qfzGLXfqb48vnPszTIDP49jrs30vPde9IchJ1ubvF6v91xz9pXirQMjLMhZgUhIQ9kDqChEd6v0/TFBplnGzz+O5jdx5r5AJZdSNG8Dt3QCe+kSkRE3SUVGjtDKxxNk0m1rL12kcPVsz/IDu1krV/BzhwhyB4l0m9DKx++TyYKmo/wWduUi+elxtM4GVvUXUDre4BNEuhVvb/YT6RZG/RaDP5yhXXx258Yqyb2Em5+j/e1yhuY1MFdncB8djScwsz85hlxSeaTINBbIzX6KXZ5CRQEqaJMtTxJaBfzcIaKMtX3s8efui1+rZVC+i710CbsyhVs6QWjlu/Zvrl3HrnxH4+hrO+MfuGTnJ4iMHO3ic9uPG+vXyS58Q6Yxj7l6DX/gMKE9lDSwDXSNW6SbxMt2rNJ3+0uMOzeIDAe39EJPRZXWrtE3P4HeXscbPIq/6986gO5WcW5d6PneNK+JWZ3GLY3GoXPXsVsk99tnZBe/RUUhbmkUzWswMPMxA79+QqZZvufv4t/qKMWsafK141DVdZ70PMxdNyz1CC7bdzvrFGkkZwT7LMo4tA8cI3BKhMlZQKZZxliffXCD1i6d4afwBg4TmvFkqbeWyTQqGLUbXbXlW+I1jp4msAqowMWozWEt/7inYwjNAu2RZwiyI0S6iea30Dcr6M1Kz1o5XZ+h/zG8whF8p7T9mN5eR3Ua6K0/Mdfu0529DzrDx/BzBwnsR4A4xLX2etJtvNCzRMTDEJr5ZOyKu8ZuKRm7mw99//diRxGvNJucajYYCgI8pXhzpMiyLDgnkCAQInXeqNcZa21yzbQ4N7j/1WTiv0eCQIiUOtVscMHJ3ffGskgHCQIhUkyxl2XsxP+NlI8KkWISAgIkCIQQIvUkCIQQIuUkCIQQIuUkCIQQIuUkCIQQIuUkCIQQIuUkCIQQIuUkCIQQIuUkCIQQIuUkCIQQIuUkCIQQIuX+ArxWkQk4973OAAAAAElFTkSuQmCC)
num = 0
while num < 5:
    num += 1  # num += 1 is the same as num = num + 1
    print('num = ', num)
    if num == 3:  # condition for exiting the loop
        break

num = 0
while num < 5:
    num += 1
    if num > 3:  # condition for skipping to the next iteration
        continue
    print('num = ', num)  # this statement is skipped whenever continue runs
num =  1
num =  2
num =  3
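A common companion idiom is an intentionally infinite loop that is terminated from inside with `break`. The sketch below walks a list until it reaches a sentinel value; the list contents and the `-1` sentinel are illustrative choices, not from the original notebook.

items = [4, 8, 15, -1, 16]
i = 0
while True:
    if items[i] == -1:
        break  # stop as soon as the sentinel value is reached
    print('item = ', items[i])
    i += 1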
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
While Loop with an `else` Block

The `else` block of a `while` loop runs only when the loop finishes because its condition became false; it is skipped when the loop exits via `break`.
num = 0
while num < 3:
    num += 1
    print('num = ', num)
else:
    print('else block executed')

a = ['A', 'B', 'C', 'D']
s = 'd'
i = 0
while i < len(a):
    if a[i] == s:
        # Processing for item found
        break
    i += 1
else:
    # Processing for item not found
    print(s, 'not found in list')
d not found in list
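For contrast, when the target is present the loop exits through `break` and the `else` block is skipped entirely. A minimal sketch reusing the list above, with the target changed to 'C' so the match succeeds:

a = ['A', 'B', 'C', 'D']
s = 'C'
i = 0
while i < len(a):
    if a[i] == s:
        print(s, 'found at index', i)
        break  # leaving via break skips the else block
    i += 1
else:
    print(s, 'not found in list')

Running this prints C found at index 2 and nothing else.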
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp
Below we identify the unique characters of a string: a `while` loop adds each character to a set, and we then print the set's members in sorted order. A sketch that counts the occurrences of each character follows the output.
# unique characters
raw_string = 'Hello'
result = set()
i = 0
length = len(raw_string)
while i < length:
    result.add(raw_string[i])
    i = i + 1
print(sorted(list(result)))
['H', 'e', 'l', 'o']
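To count how many times each character occurs, a dictionary can accumulate counts in the same `while`-loop style. This is a sketch using plain `dict.get` rather than a predefined `count` helper, which does not appear in this section:

# count occurrences of each character
raw_string = 'Hello'
counts = {}
i = 0
while i < len(raw_string):
    ch = raw_string[i]
    counts[ch] = counts.get(ch, 0) + 1  # .get returns 0 for characters not seen yet
    i += 1
print(counts)  # {'H': 1, 'e': 1, 'l': 2, 'o': 1}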
MIT
Week 01 - Introduction to Python/Python I.ipynb
TheAIDojo/Machine_Learning_Bootcamp